Kubelet reports 0 memory usage

Cluster information:

Kubernetes version: v1.21.1
Cloud being used: bare-metal (Vagrant)
Installation method: Vagrant + Ansible
Host OS: fedora/33-cloud-base, v33.20201019.0
CNI and version: cilium v1.10.0-rc2
CRI and version: Docker version 20.10.6, build 370c289

Hi, I installed a cluster on my machine.

I got it working with cilium + docker + metrics-server, and the command kubectl top nodes reported the percentage of memory usage. That was on ubuntu/bionic64.

Then I switched to:

config.vm.box = "fedora/33-cloud-base"
config.vm.box_version = "33.20201019.0"

Now I have the same cluster working with cilium. The main difference is that I had to add this:

echo 'net.ipv4.conf.lxc*.rp_filter = 0' > /etc/sysctl.d/99-override_cilium_rp_filter.conf
systemctl restart systemd-sysctl
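
To verify the override took effect (the lxc* keys only exist once cilium has created its interfaces), something like this should show the rp_filter keys at 0:

sysctl -a 2>/dev/null | grep 'lxc.*rp_filter'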

The problem is that kubectl top nodes now always reports 0% memory usage:

W0522 00:34:13.472110   93585 top_node.go:119] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master-1   348m         17%    0Mi             0%        
worker-1   48m          4%     0Mi             0%  

I used this command to install metrics-server:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.4/components.yaml
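
As a sanity check that metrics-server itself came up, the aggregated API it registers should report Available (v1beta1.metrics.k8s.io is the APIService that components.yaml creates, and k8s-app=metrics-server is the label used in that manifest):

kubectl get apiservice v1beta1.metrics.k8s.io
kubectl -n kube-system get pods -l k8s-app=metrics-server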

The problem is not with metrics-server itself, though. The problem is this:

The command:
kubectl get --raw /api/v1/nodes/worker-1/proxy/stats/summary | jq '.node.memory'

reports:

    {
      "time": "2021-05-21T22:27:22Z",
      "availableBytes": 4119363584,
      "usageBytes": 0,
      "workingSetBytes": 0,
      "rssBytes": 230178816,
      "pageFaults": 3686635,
      "majorPageFaults": 726
    }

That usageBytes of 0 is the problem.
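
The kubelet takes those node-level numbers from the root cgroup (via cAdvisor), so the same counters can be read directly on the node. A minimal check, run as root on worker-1; which path exists depends on whether the host runs cgroup v1 or v2:

# cgroup v1: usage of the root memory controller
cat /sys/fs/cgroup/memory/memory.usage_in_bytes

# cgroup v2: root-level memory accounting lives in memory.stat
head /sys/fs/cgroup/memory.stat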

The kubelet has its certs signed through bootstrapping, and all CSRs are approved.
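
For reference, this is how I check them (the serving certs use the kubernetes.io/kubelet-serving signer):

kubectl get csr | grep kubelet-serving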

The cluster is bootstrapped with kubeadm:

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.5.11
  bindPort: 6443
certificateKey: "e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  serviceSubnet: 10.96.0.0/16
  podSubnet: 10.1.1.0/24 # podCIDR
controlPlaneEndpoint: "192.168.5.30:6443"
kubernetesVersion: "v1.21.1"
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
serverTLSBootstrap: true
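
For completeness, a config like the above is applied the usual way, assuming it is saved as kubeadm-config.yaml (--upload-certs pairs with the certificateKey above):

kubeadm init --config kubeadm-config.yaml --upload-certs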

There is an haproxy load balancing the masters (right now only one master is in use, but it works with multiple masters), with this configuration:

# Frontend configuration
frontend kubernetes
    bind 192.168.5.30:6443
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes
# Backend configuration
backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server master-1 192.168.5.11:6443 check fall 3 rise 2
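
A quick end-to-end check of the balancer (anonymous access to /version is allowed by default in kubeadm clusters):

curl -k https://192.168.5.30:6443/version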

This option was added to the kube-apiserver:
--enable-aggregator-routing=true
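
In a kubeadm cluster that means editing the static pod manifest, which the kubelet picks up automatically; roughly:

# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-aggregator-routing=true
    # (remaining flags unchanged)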

The host OS info (/etc/os-release):

NAME=Fedora
VERSION="33 (Cloud Edition)"
ID=fedora
VERSION_ID=33
VERSION_CODENAME=""
PLATFORM_ID="platform:f33"
PRETTY_NAME="Fedora 33 (Cloud Edition)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:33"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f33/system-administrators-guide/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=33
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=33
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Cloud Edition"
VARIANT_ID=cloud

I cannot find any clue to this problem. Does anyone know what is happening?

Update: the problem was with Fedora. On RHEL 8 this error does not happen.
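
If anyone hits the same thing: my unverified guess is the cgroup hierarchy, since Fedora boots with cgroup v2 by default since Fedora 31, while RHEL 8 still defaults to cgroup v1, and this kubelet + Docker combination may not account memory correctly under v2. Which hierarchy a node runs can be checked with:

stat -fc %T /sys/fs/cgroup   # cgroup2fs = v2, tmpfs = v1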