K8s eats my RAM?!

Cluster information:

Kubernetes version: 1.21.6
Cloud being used: Google Cloud
Installation method: GKE
Host OS: linux
CNI and version: -
CRI and version: -

Dear community,
I’m asking for some help and knowledge.

I’ve deployed my own Golang app into a k8s cluster and performed some tests.

In the k9s dashboard (and in every other one) I see that 269 MB of memory is used.

Then I checked the pod’s container processes, and it turned out that no process consumes that amount of memory. But if I apply a low memory limit, let’s say 700 MB for the container, my application gets OOM-killed.
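(Here’s roughly how I summed the per-process resident memory inside the container - just a minimal sketch that walks /proc and adds up VmRSS; the program and its output format are my own, not anything the dashboards use:)

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

// Sums VmRSS (resident set size) over every process visible in /proc.
// Inside the container this covers all processes in its PID namespace.
func main() {
	dirs, err := filepath.Glob("/proc/[0-9]*")
	if err != nil {
		panic(err)
	}
	var totalKB int64
	for _, d := range dirs {
		f, err := os.Open(filepath.Join(d, "status"))
		if err != nil {
			continue // process may have exited in the meantime
		}
		s := bufio.NewScanner(f)
		for s.Scan() {
			line := s.Text()
			if strings.HasPrefix(line, "VmRSS:") {
				fields := strings.Fields(line) // e.g. "VmRSS:  1234 kB"
				if len(fields) >= 2 {
					kb, _ := strconv.ParseInt(fields[1], 10, 64)
					totalKB += kb
				}
				break
			}
		}
		f.Close()
	}
	fmt.Printf("sum of VmRSS across processes: %d MiB\n", totalKB/1024)
}
```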

I’ve double-checked the RES field in top while the application is under load - it stays the same. Internally, my app calls some shell scripts. I also used the pprof profiler, but it shows me the same picture as top inside the container (about 18 MB of RAM, not 300 MB).
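For comparison, here’s a minimal sketch of dumping the Go runtime’s own counters via runtime.ReadMemStats - as far as I understand, this is roughly the view pprof gives you, and it won’t include the page cache or whatever the child shell scripts use:

```go
package main

import (
	"fmt"
	"runtime"
)

// Prints the Go runtime's own view of memory. This is roughly what a
// heap profile reflects; it does not cover page cache or child processes.
func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("HeapAlloc = %d MiB (live heap objects)\n", m.HeapAlloc/1024/1024)
	fmt.Printf("HeapSys   = %d MiB (heap reserved from the OS)\n", m.HeapSys/1024/1024)
	fmt.Printf("Sys       = %d MiB (total reserved from the OS)\n", m.Sys/1024/1024)
	fmt.Printf("NumGC     = %d\n", m.NumGC)
}
```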

This confuses me a lot. I can’t understand how to measure memory consumption properly. Who ate my RAM? =)

And what is that memory metric being shown to me in k9s (and any other dashboard)? Is it one of the /sys/fs/cgroup/memory metrics?
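From what I’ve read, the dashboards report the cgroup “working set” (usage minus inactive file cache), which is also what the kubelet looks at for eviction - but I may be wrong, hence the question. Here’s a rough sketch of reading that straight from the cgroup v1 files inside the container (paths and key names are the v1 ones; on cgroup v2 it would be memory.current and the inactive_file key in memory.stat):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// Reads the container's cgroup v1 memory files and derives the
// "working set" that cAdvisor / metrics-server expose:
//   working_set = usage_in_bytes - total_inactive_file
func main() {
	usage := readInt("/sys/fs/cgroup/memory/memory.usage_in_bytes")

	stat, err := os.ReadFile("/sys/fs/cgroup/memory/memory.stat")
	if err != nil {
		panic(err)
	}
	var inactiveFile int64
	for _, line := range strings.Split(string(stat), "\n") {
		if strings.HasPrefix(line, "total_inactive_file ") {
			inactiveFile, _ = strconv.ParseInt(strings.Fields(line)[1], 10, 64)
		}
	}

	fmt.Printf("usage:         %d MiB\n", usage/1024/1024)
	fmt.Printf("inactive file: %d MiB\n", inactiveFile/1024/1024)
	fmt.Printf("working set:   %d MiB\n", (usage-inactiveFile)/1024/1024)
}

func readInt(path string) int64 {
	b, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	v, _ := strconv.ParseInt(strings.TrimSpace(string(b)), 10, 64)
	return v
}
```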

P.S. I’ve checked k9s, Lens, and kubectl top pod - all of them show roughly the same amount of used memory (~300 MB).

Thank you,
Pasha

Please disregard,
the issue was in the Go app (for curious people, here’s a link).

But I’m still curious about the main metric shown in the “Memory” column of dashboards like the K8s dashboard or k9s - what exactly is it, and how is it calculated?