I’ve created a CronJob in Kubernetes, with the Job’s backoffLimit defaulting to 6 and the pod’s restartPolicy set to Never; the pods are deliberately configured to fail. As I understand it, for a podSpec with restartPolicy: Never, the Job controller will create up to backoffLimit pods and then mark the Job as Failed, so I expected there to be 6 pods in the Error state.
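For reference, a minimal CronJob matching this setup might look like the following (the name, schedule, image, and failing command are placeholders, not my actual manifest; the apiVersion may differ depending on cluster version):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: failing-cronjob        # placeholder name
spec:
  schedule: "*/5 * * * *"      # placeholder schedule
  jobTemplate:
    spec:
      backoffLimit: 6          # the default value
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: fail
            image: busybox     # placeholder image
            command: ["sh", "-c", "exit 1"]  # deliberately fail
```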
This is the actual Job’s status:
status:
  conditions:
  - lastProbeTime: 2019-02-20T05:11:58Z
    lastTransitionTime: 2019-02-20T05:11:58Z
    message: Job has reached the specified backoff limit
    reason: BackoffLimitExceeded
    status: "True"
    type: Failed
  failed: 5
Why were there only 5 failed pods instead of 6? Or is my understanding of backoffLimit incorrect?