Suppose I have a Deployment that runs a single pod. If a replacement pod is created for some reason (for example, a node restart or an OOM kill in the container), how can I be sure that the old pod is completely dead and that the process its container was running is definitely not running anymore?

From the Deployment docs I can see that if .spec.strategy.type==Recreate, then "all existing Pods are killed before new ones are created". How does Kubernetes confirm that the old pod is indeed "killed"? Is there any case where the process in the old pod is still running while the new replacement pod starts the process again? If so, can Kubernetes somehow prevent this from happening?

Here's a sample deployment with one pod:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    field.cattle.io/creatorId: u-hxddpeaoep
  creationTimestamp: "2020-03-18T18:27:46Z"
  generation: 1
  labels:
    cattle.io/creator: norman
    workload.user.cattle.io/workloadselector: deployment-repacking-test-one-at-a-time-test
  name: one-at-a-time-test
  namespace: repacking-test
  resourceVersion: "162550248"
  selfLink: /apis/apps/v1beta2/namespaces/repacking-test/deployments/one-at-a-time-test
  uid: d41cb627-b84f-4d7f-aef6-dd4c48efd79e
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      workload.user.cattle.io/workloadselector: deployment-repacking-test-one-at-a-time-test
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        cattle.io/timestamp: "2020-03-18T18:27:45Z"
      creationTimestamp: null
      labels:
        workload.user.cattle.io/workloadselector: deployment-repacking-test-one-at-a-time-test
    spec:
      containers:
      - image: ubuntu:xenial
        imagePullPolicy: Always
        name: one-at-a-time-test
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities: {}
          privileged: false
          readOnlyRootFilesystem: false
          runAsNonRoot: false
        stdin: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        tty: true
      dnsConfig: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2020-03-18T18:27:46Z"
    lastUpdateTime: "2020-03-18T18:27:49Z"
    message: ReplicaSet "one-at-a-time-test-559bdcc84b" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2020-03-18T18:42:28Z"
    lastUpdateTime: "2020-03-18T18:42:28Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
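For what it's worth, this is roughly how I've been watching a replacement happen; the pod name is just a placeholder for whatever pod the ReplicaSet created, and I'm not claiming these commands prove the old process is gone:

# watch the old pod go Terminating and the new one come up
kubectl -n repacking-test get pods -o wide -w

# look at the termination-related events (e.g. Killing) for the old pod
kubectl -n repacking-test describe pod <old-pod-name>

# recent events in the namespace, oldest first
kubectl -n repacking-test get events --sort-by=.metadata.creationTimestamp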
Cluster information:
Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.7", GitCommit:"8fca2ec50a6133511b771a11559e24191b1aa2b4", GitTreeState:"clean", BuildDate:"2019-09-18T14:47:22Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:07:57Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Cloud being used: bare-metal
Installation method:
Host OS: Ubuntu 16.04
CNI and version:
CRI and version:
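For completeness: the only related knob I know of in the pod spec is terminationGracePeriodSeconds (set to 30 above). If I understand the docs correctly, raising it only gives the process more time between SIGTERM and SIGKILL; something like the following (not applied to my cluster, just an illustration) would lengthen that window, but it doesn't by itself guarantee the old process is gone:

# allow 60s between SIGTERM and SIGKILL instead of the current 30s
kubectl -n repacking-test patch deployment one-at-a-time-test \
  -p '{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":60}}}}'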