Hi, we have some Kubernetes Jobs that sometimes get killed for running out of their requested memory. We'd like to correctly update the job status in the database when this happens, and are considering using something like a shutdown hook to catch it. However, we haven't found an easy way to tell from the shutdown hook whether the job pod exited normally or was OOM-killed (or any other way to get notified when a job is evicted in general). Any suggestions? Thanks!
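(For reference, the kind of detection we're after, sketched with the official Kubernetes Python client, would be reading the container's termination reason after the pod finishes and updating the database from an external watcher rather than from inside the dying process. This is only a rough sketch: the job name, namespace, and the reliance on the `job-name` label the Job controller adds to its pods are all assumptions, not our actual setup.)

```python
# Minimal sketch: check whether a Job's pod was OOM-killed by reading the
# container termination reason from the Kubernetes API.
# Assumptions: the Job controller's "job-name" pod label, placeholder names.
from kubernetes import client, config

def job_pod_oom_killed(job_name: str, namespace: str = "default") -> bool:
    config.load_incluster_config()  # or config.load_kube_config() outside the cluster
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod(namespace, label_selector=f"job-name={job_name}")
    for pod in pods.items:
        for cs in pod.status.container_statuses or []:
            # The kubelet reports reason "OOMKilled" (exit code 137) when the
            # container's main process is killed by the cgroup OOM killer.
            for state in (cs.state.terminated, cs.last_state.terminated):
                if state and state.reason == "OOMKilled":
                    return True
    return False

if __name__ == "__main__":
    print(job_pod_oom_killed("my-job"))
```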
Hi, I have a similar issue that may interest you. I run many php-fpm child processes, and sometimes a child gets OOM-killed, but the pod is not restarted; the OOM event somehow isn't passed on to Docker. As far as I can tell, the pod is only recognized as OOM-killed when the main process itself exits.
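One workaround I'm considering, since only the main process being killed shows up as OOMKilled on the pod, is to watch the kernel log for child-process kills. A rough sketch, assuming `dmesg` is readable from wherever this runs (e.g. a node-level DaemonSet) and that "php-fpm" is just my process name:

```python
# Minimal sketch: find kernel OOM-killer messages for a given process name.
# Child-process OOM kills appear in the kernel log even when the container
# keeps running, because only a kill of the container's main process is
# surfaced as an OOMKilled pod status.
import re
import subprocess

def recent_oom_kills(process_name: str = "php-fpm") -> list[str]:
    # dmesg output contains lines like:
    #   "Memory cgroup out of memory: Killed process 1234 (php-fpm) ..."
    out = subprocess.run(["dmesg"], capture_output=True, text=True, check=True).stdout
    pattern = re.compile(rf"Killed process \d+ \({re.escape(process_name)}\)")
    return [line for line in out.splitlines() if pattern.search(line)]

if __name__ == "__main__":
    for line in recent_oom_kills():
        print(line)
```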