I am seeing a discrepancy in the current memory usage metrics reported for a pod.
I am using the following two sources to view the memory usage for a pod:

- `kubectl top po <pod_name>`
- the `container_memory_usage_bytes` metric exposed by cAdvisor and scraped by Prometheus.
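Roughly, this is how I am looking at each source (the pod name, namespace, and Prometheus address below are placeholders):

```bash
# Source 1: kubectl top (served by the Metrics Server)
kubectl top po <pod_name> -n <namespace>

# Source 2: the cAdvisor metric, queried from Prometheus over its HTTP API
curl -s 'http://<prometheus>:9090/api/v1/query' \
  --data-urlencode 'query=container_memory_usage_bytes{pod="<pod_name>"}'
```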
For example, for a pod with a memory request of 16 GB and a memory limit of 26 GB, the kubectl command is returning 7547Mi, while `container_memory_usage_bytes` is showing 42 GB of memory usage.
As per the Kubernetes monitoring documentation, these should be the same, since cAdvisor is the source of the metrics data for both kubectl top and the kubelet.
Can someone from the community help me understand the difference between the top command and the metrics exposed by cAdvisor?
`kubectl top` displays the working set size (WSS) memory for a pod, which is `container_memory_working_set_bytes`. The total memory usage of a container, `container_memory_usage_bytes`, includes application-allocated (anon) memory plus the page cache, whereas the working set is that total minus the inactive file-backed (cache) bytes.
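A minimal sketch of that relationship, assuming cgroup v2 and a shell inside the container (paths and field names are the cgroup v2 ones; this is illustrative rather than exactly how cAdvisor reads them):

```bash
# Total usage: roughly what container_memory_usage_bytes reports
usage=$(cat /sys/fs/cgroup/memory.current)

# Inactive file-backed (page cache) bytes
inactive_file=$(awk '/^inactive_file / {print $2}' /sys/fs/cgroup/memory.stat)

# Working set: what container_memory_working_set_bytes / kubectl top shows
echo "working_set_bytes=$((usage - inactive_file))"
```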
Why `container_memory_usage_bytes` is higher than the limit is a bit strange, but it depends; on Linux, I assume it matters whether the node is running cgroup v1 or v2.
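If it helps, a quick way to check which cgroup version the node is using, and the v1 counterparts of the files above (again just a sketch):

```bash
# cgroup2fs -> cgroup v2 (unified hierarchy), tmpfs -> cgroup v1
stat -fc %T /sys/fs/cgroup/

# On cgroup v1 the equivalent files live under the memory controller:
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
awk '/^total_inactive_file / {print $2}' /sys/fs/cgroup/memory/memory.stat
```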