I am seeing a discrepancy between the current memory usage metrics reported for a pod.
I am using the following two sources to view the memory usage of a pod:

- `kubectl top pod <pod_name>`
- the `container_memory_usage_bytes` metric exposed by cAdvisor and scraped by Prometheus

For example, for a pod with a memory request of 16 GB and a memory limit of 26 GB, `kubectl top` is returning 7547Mi, while `container_memory_usage_bytes` is showing 42 GB of memory usage.
As per the Kubernetes monitoring documentation, the two values should be the same, since cAdvisor (via the kubelet) is the source of the metrics data for both `kubectl top` and Prometheus.
Can someone from the community help me understand the difference between the `kubectl top` command and the metrics exposed by cAdvisor?
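For reference, this is roughly how I am comparing the two readings. The pod name, namespace, and Prometheus address are placeholders from my setup; I am also including `container_memory_working_set_bytes`, which, as far as I understand, is closer to what the metrics pipeline behind `kubectl top` reports:

```
# Reading from the resource metrics pipeline (metrics-server -> kubelet -> cAdvisor)
kubectl top pod <pod_name> -n <namespace>

# Readings from the cAdvisor metrics scraped by Prometheus, via the Prometheus HTTP API
# (address and label names match my setup and may differ in others)
curl -s 'http://<prometheus-host>:9090/api/v1/query' \
  --data-urlencode 'query=sum(container_memory_usage_bytes{namespace="<namespace>", pod="<pod_name>", container!="", container!="POD"})'

curl -s 'http://<prometheus-host>:9090/api/v1/query' \
  --data-urlencode 'query=sum(container_memory_working_set_bytes{namespace="<namespace>", pod="<pod_name>", container!="", container!="POD"})'
```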
I’m facing the same issue.
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.2
If anyone has resolved it, kindly let me know.
I installed the stack using the command below:
`helm install prometheus prometheus-community/kube-prometheus-stack`
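In case it helps anyone reproduce this, this is roughly how I reach the Prometheus instance created by the install above; `prometheus-operated` is the service the Prometheus Operator manages in my setup, and the namespace may differ in others:

```
# Expose the Prometheus created by the chart (adjust namespace if the release lives elsewhere)
kubectl port-forward svc/prometheus-operated 9090:9090

# Then open http://localhost:9090 and compare container_memory_usage_bytes for a pod
# against what `kubectl top pod <pod_name>` reports for the same pod
```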
Discussion link (GitHub issue, kind/bug, opened 30 Oct 2020, closed 4 Dec 2020):
**What happened?**
I installed the latest version of kube-prometheus with kube-state-metrics 1.9.7 on my self-managed Kubernetes cluster.
**Did you expect to see something different?**
I expected the metrics service to report pod memory consumption correctly.
I verified the metrics with:
`$ kubectl top pod my-app`
and on the corresponding worker node with:
`$ docker stats`
The comparison shows that the reported memory usage of a pod is roughly double what it should be.
**How to reproduce it (as minimally and precisely as possible)**:
You can compare the metric data with `kubectl top` and `docker stats`
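Roughly, the comparison looks like this; the pod name is a placeholder, and the container name filter for `docker stats` depends on the runtime's naming scheme:

```
# Reading from the metrics pipeline
kubectl top pod my-app

# On the worker node running the pod, the per-container numbers from the runtime
# (Kubernetes-managed containers are usually prefixed with k8s_ under the Docker runtime)
docker stats --no-stream $(docker ps --format '{{.Names}}' | grep 'k8s_.*my-app')
```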
After I uninstalled the kube-prometheus stack and installed the [metrics-server](https://github.com/kubernetes-sigs/metrics-server) instead, all memory usage was displayed correctly and Kubernetes scheduling behaved as expected again.
Does anybody know how this can happen and what I can do about this issue?
See also discussions here:
https://stackoverflow.com/questions/64582065/why-is-openjdk-docker-container-ignoring-memory-limits-in-kubernetes
https://stackoverflow.com/questions/64440319/why-java-container-in-kubernetes-takes-more-memory-as-limits
**Environment**
Debian Buster
Kubernetes 1.19.3
```
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:48:36Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
```
* Manifests: kube-state-metrics 1.9.7