I am seeing a discrepancy in the current memory usage metrics for my pods.
I am using the following two sources to view the memory usage for a pod:
- kubectl top pod <pod_name>
- the container_memory_usage_bytes metric exposed by cAdvisor and scraped by Prometheus
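For reference, these are the two queries I am comparing (pod name, namespace, and the label filters are placeholders, not my exact setup):

```
# Memory usage as reported by metrics-server
kubectl top pod <pod_name> -n <namespace>

# Raw cAdvisor metric in Prometheus (PromQL)
container_memory_usage_bytes{pod="<pod_name>", container!="", image!=""}
```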
For example, for a pod with a memory request of 16 GB and a memory limit of 26 GB, the value returned by the kubectl top command does not match container_memory_usage_bytes, which is showing 42 GB of memory usage.
As per the Kubernetes monitoring documentation, these should be the same, since the source of the metrics data for both kubectl top and the kubelet is cAdvisor.
Can someone from the community help me understand the difference between the top command and the metrics exposed by cAdvisor?