TL;DR: Does a Memory-backed volume in a container spec get counted against the container's memory resource limit?
Cloud being used: bare metal
Installation method: yum
Host OS: RedHat 7.8
CNI and version: Calico 3.14.1
CRI and version: Docker 1.13.1-161.git64e9980.el7_8.x86_64 (RedHat)
We’re using a server with 240G of memory, and wanted to give 20G to the Docker application, 200G to a memory-backed filesystem, and leave another 20G for the OS itself to do whatever it needs to do.
Our logic was to first put a resource limit of 20G on the container. We then mount a volume of type Memory. The application running in the container tracks that volume as scratch space and makes sure its usage does not exceed 200G.
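For reference, the relevant parts of our pod spec look roughly like this (a sketch, not the exact manifest: names, image, and mount path are illustrative; in Kubernetes a memory-backed volume is an `emptyDir` with `medium: Memory`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: candela
spec:
  containers:
  - name: candela
    image: candela:latest          # placeholder image name
    resources:
      limits:
        memory: 20Gi               # limit intended for the application only
    volumeMounts:
    - name: scratch
      mountPath: /scratch          # illustrative mount path
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # tmpfs-backed scratch space; app caps itself at 200G
```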
As users started using this application heavily, we saw a lot of restarts. Looking at the logs our application generated, the application could see the scratch space’s disk utilization grow to 34G, but it then still got killed:
Jul 3 07:47:56 k8s-app-prd1-1 kernel: Killed process 35599 (candela), UID 0, total-vm:1488716kB, anon-rss:928300kB, file-rss:20796kB, shmem-rss:0kB
Jul 3 07:55:32 k8s-app-prd1-1 kernel: Task in /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfc70681_74a0_45fb_9bd3_4d1d068c5834.slice/docker-a0a1680602859549c74ab233faebf1428000e92ef7c4e762df7eb14698961a71.scope killed as a result of limit of /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfc70681_74a0_45fb_9bd3_4d1d068c5834.slice
The restarts occurred at a rapid pace, and the application itself never came close to 20G of memory.
So we started to wonder whether the Memory-backed volume is in fact being counted against the limit. But then we shouldn’t have been able to see 34G of usage in the logs, since the pod’s memory plus the scratch space would be far higher than the 20G limit.
Right now I’m at a loss as to what’s really happening. Without the limits set, the application runs stable and healthy. We just want to put a limit on the application itself, so that if there’s ever a bug, the OS remains stable and Kubernetes can kill the pod.
Should we account for Memory volumes when setting memory limits? I.e., should I set the limit to 220G instead of 20G? If that’s the case, then I have the problem that the application itself could potentially grow past 20G, leaving less than 200G of scratch space, so the application would incorrectly think it has 200G available.
I was reading about this in the Kubernetes documentation, and could have sworn I previously read that Memory volumes are not counted, but I can’t find it anywhere anymore.
Could anyone confirm?