I’m running a container with a JVM process with a 30 GB heap on a K8s pod, and it’s the only container on the pod.
resources:
  limits:
    cpu: "2"
    memory: 35000Mi
  requests:
    cpu: "2"
    memory: 35000Mi
If I “kubectl exec bash” into the container and accidentally run a command that uses 10 GB of memory, the Linux OOM killer is triggered and kills the JVM process, since it’s using the most memory.
What I would like to happen instead is for it to kill the process spawned from the bash shell.
I’ve managed to get close to a solution by making sure the QoS class for the pod is Guaranteed (as described here: Configure Out Of Resource Handling - Kubernetes). This results in the container’s pid 1 JVM process getting an oom_score_adj value of -998. However, the “kubectl exec bash” process also gets an oom_score_adj of -998, meaning the JVM process will still be killed.
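This is easy to confirm from inside the container (a quick check; the -998 value is what I observe for a Guaranteed-QoS pod, and may differ in other setups):

```shell
# pid 1 is the JVM; $$ is the exec'd shell.
cat /proc/1/oom_score_adj    # -998 for a Guaranteed-QoS pod
cat /proc/$$/oom_score_adj   # also -998: kubectl exec sessions inherit the same adjustment
```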
I was able to work around this too, but in a fairly clunky fashion. By running:
echo 1000 > /proc/self/oom_score_adj
as the first command in the bash shell, the container’s pid 1 JVM process becomes less likely to be killed by the Linux OOM killer than any process spawned from the shell.
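One way to make this less error-prone is to fold the adjustment into the exec invocation itself, so nobody has to remember to run it first. A sketch (pod name is a placeholder; raising your own oom_score_adj needs no extra privileges, only lowering it does):

```shell
# Raise the shell's OOM score before it becomes interactive, so any
# command spawned from it is preferred by the OOM killer over pid 1.
kubectl exec -it <pod-name> -- sh -c \
  'echo 1000 > /proc/self/oom_score_adj; exec bash'
```

Since `exec bash` replaces the `sh` process rather than forking, the interactive shell keeps the oom_score_adj of 1000, and all of its children inherit it.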
Is this the recommended approach or is there a simpler way to address this issue?
Cluster information:
Kubernetes version: v1.12.13
Cloud being used: AWS
Installation method: Manual
Host OS: CentOS