I have a Deployment that keeps a fixed number of pods running. These pods use an emptyDir volume for their working files, but that data is not needed after a failure. When a pod fails and restarts (OOM or similar), the container restarts in place on the same node, so it still sees the same data in its emptyDir. That causes a number of problems for me: the application behaves differently when it finds existing data (it attempts a repair, which is not wanted here), and I also have other systems trying to reconnect to the same pod.
If there is a way to tell the pod to be deleted on failure instead of restarted on failure, the Deployment will spin up a fresh pod, and my issues with pods trying to repair from partial data (and some other headaches) just go away. So I'm hoping there is a way to have pods delete on failure.
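For reference, here is a minimal sketch of the kind of Deployment I mean (the names, image, and mount path are placeholders, not my real config):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: worker          # placeholder
          image: my-app:latest  # placeholder image
          volumeMounts:
            - name: workdir
              mountPath: /var/work   # scratch data, not needed after a failure
      volumes:
        - name: workdir
          emptyDir: {}
      # Deployments only allow restartPolicy: Always in the pod template,
      # so on OOM the container restarts in place and the emptyDir survives.
```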
Cluster information:
Kubernetes version: 1.14
Cloud being used: AWS
Thanks!