Discrepancy in current memory usage metrics between the kubectl top command and the metrics exposed by the kubelet (scraped by Prometheus)

There is a discrepancy in the current memory usage metrics reported for my pods.
I am using the following two sources to view memory usage for a pod (the exact command and query are sketched below):

  1. kubectl top po <pod_name>
  2. The container_memory_usage_bytes metric exposed by cAdvisor and scraped by Prometheus.
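
For reference, this is roughly what I am comparing. The namespace and pod name are placeholders, my actual Prometheus query may differ slightly, and depending on the Kubernetes version the cAdvisor label may be pod or pod_name:

```
# Source 1: current usage reported by kubectl top
kubectl top po <pod_name> -n <namespace>

# Source 2: PromQL selector for the cAdvisor metric scraped by Prometheus
container_memory_usage_bytes{namespace="<namespace>", pod="<pod_name>"}
```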

For example, for a pod with a memory request of 16 GB and a memory limit of 26 GB, kubectl top is returning 7547Mi (roughly 7.9 GB) of current memory usage, while container_memory_usage_bytes is showing about 42 GB.

As per the Kubernetes monitoring documentation, the two values should be the same, since cAdvisor is the source of the metrics data for both kubectl top and the kubelet.

Can someone from the community help me understand the difference between the kubectl top command and the metrics exposed by cAdvisor?
