Data discrepancy in current memory usage metrics between the kubectl top command and the metrics exposed by the kubelet (scraped by Prometheus)

There is a discrepancy in the current memory usage metrics for pods.
I am using the following sources to view the memory usage for a pod (example queries below):

  1. kubectl top po <pod_name>
  2. The container_memory_usage_bytes metric exposed by cAdvisor and scraped by Prometheus.
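
For reference, these are roughly the commands/queries I use to read the two values; the namespace and pod name are placeholders, and the container!="" filter is there so the pod-level cgroup series is not double counted:

```
# Value shown by kubectl top (served by the Metrics API / metrics-server)
kubectl top po <pod_name> -n <namespace>

# Roughly the PromQL used for the cAdvisor metric
sum(container_memory_usage_bytes{namespace="<namespace>", pod="<pod_name>", container!=""})
```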

For example, for a pod with a memory request of 16 GB and a memory limit of 26 GB, the kubectl command returns 7547Mi of current usage, while container_memory_usage_bytes shows 42 GB.

As per the Kubernetes monitoring docs, the two should be the same, since cAdvisor is the source of the metrics data for both kubectl top and the kubelet.
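
If it helps, kubectl top reads from the Metrics API (metrics-server), which aggregates the kubelet data, so you can also query those sources directly to see the raw numbers. The namespace, pod, and node names below are placeholders, and jq is only used for readability:

```
# Raw value behind kubectl top, served by metrics-server through the Metrics API
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/<namespace>/pods/<pod_name>" | jq .

# What the kubelet itself reports (summary API); memory.workingSetBytes is the
# number kubectl top is based on, memory.usageBytes is the cAdvisor-style total
kubectl get --raw "/api/v1/nodes/<node_name>/proxy/stats/summary" \
  | jq '.pods[] | select(.podRef.name == "<pod_name>") | .memory'
```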

Can someone from the community help me understand the difference between the top command and the metrics exposed by cAdvisor?


I’m facing the same issue.

Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.2

If anyone has resolved it, kindly let me know.

Using the command below:
helm install prometheus prometheus-community/kube-prometheus-stack
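
In case anyone wants to reproduce the comparison with this chart: the sketch below assumes the Prometheus Operator's default prometheus-operated service exists in the release namespace; the service name may differ in your setup.

```
# Port-forward to the Prometheus created by kube-prometheus-stack
kubectl port-forward svc/prometheus-operated 9090

# Then compare the two metrics for the pod at http://localhost:9090, e.g.
#   container_memory_usage_bytes{pod="<pod_name>", container!=""}
#   container_memory_working_set_bytes{pod="<pod_name>", container!=""}
```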


kubectl top displays the Working Set Size (WSS) memory for a pod, which is container_memory_working_set_bytes. container_memory_usage_bytes is the total memory usage of a container, which includes application-allocated (anonymous) memory and the page cache; the working set is that total minus inactive file-backed pages.
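
If you want to verify that relationship on a running pod, you can read the cgroup memory files inside the container. This is a minimal sketch assuming the container has a shell and the node uses cgroup v2, with the v1 equivalents noted in the comments:

```
# cgroup v2: working set ~= memory.current - inactive_file (from memory.stat)
kubectl exec -it <pod_name> -- sh -c \
  'cat /sys/fs/cgroup/memory.current; grep -w inactive_file /sys/fs/cgroup/memory.stat'

# cgroup v1 equivalents:
#   /sys/fs/cgroup/memory/memory.usage_in_bytes
#   /sys/fs/cgroup/memory/memory.stat   (total_inactive_file)
```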

Why container_memory_usage_bytes is higher than the limit is a bit strange, but it depends; on Linux it may come down to whether the node uses cgroup v1 or v2, I assume.
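
To check which cgroup version a node is on, the filesystem type of /sys/fs/cgroup tells you; run this on the node (or in a pod with the host filesystem mounted):

```
# cgroup2fs -> cgroup v2
# tmpfs     -> cgroup v1
stat -fc %T /sys/fs/cgroup/
```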