Hi Kubernetes community,
I’m running into an issue when deploying a Deployment with hostPort enabled. I couldn’t find an existing report of the same problem after searching for a while, so I’m posting here :) The environment can be simplified to:
- Kubernetes 1.11.6-gke.2
- Google Kubernetes Engine
- A regular cluster with 3 nodes
- The Deployment has 3 replicas and host port 80 enabled
- Each node gets 1 pod deployed
kubectl apply -f <file>
is used to trigger the deployment. The manifest file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      name: helloworld
  template:
    metadata:
      labels:
        name: helloworld
    spec:
      containers:
      - name: helloworld
        image: helloworld:v1
        ports:
        - containerPort: 80
          hostPort: 80
The first deployment always works, since there are no pods in the cluster yet. However, subsequent deployments (with a different image) can’t finish: one new pod shows up with “Pending” status and stays stuck there until I delete it manually.
$ kubectl get pods -n helloworld
…
helloworld-5879f57677-cvbnv 1/1 Running 0 2h
helloworld-5879f57677-cxrbl 1/1 Running 0 2h
helloworld-5879f57677-glsc9 1/1 Running 0 2h
helloworld-5c786bf9fc-29mcc 0/1 Pending 0 1h
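For what it’s worth, I don’t set a rollout strategy in the manifest, so if I read the docs correctly the Deployment falls back to RollingUpdate with maxSurge: 25% and maxUnavailable: 25%. With 3 replicas, maxSurge rounds up to 1 extra pod and maxUnavailable rounds down to 0, which would match the single surge pod stuck in Pending above. The effective strategy would be:

```yaml
# Effective defaults (not written in my manifest), assuming I
# understand the Deployment docs correctly:
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 25%        # 25% of 3 replicas rounds up to 1 surge pod
    maxUnavailable: 25%  # 25% of 3 replicas rounds down to 0
```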
kubectl describe
on this new pod shows something like:
Type Reason Age From Message
Warning FailedScheduling 2s (x108 over 5m) default-scheduler 0/3 nodes are available: 3 node(s) didn’t have free ports for the requested pod ports.
It appears that the default scheduler doesn’t terminate any running pod before creating the new one, which explains why scheduling fails: there is no free port. What puzzles me even more is that the problem disappeared after I switched from a Deployment to a DaemonSet, which has the same effect of placing 1 pod per node. Subsequent rollouts of the DaemonSet simply go through, for example:
$ kubectl get pods -n helloworld --watch
NAME READY STATUS RESTARTS AGE
helloworld-6h4b4 1/1 Running 0 37s
helloworld-9zvrr 1/1 Running 0 37s
helloworld-hx84p 1/1 Running 0 37s
helloworld-hx84p 1/1 Terminating 0 1m
helloworld-hx84p 0/1 Terminating 0 1m
helloworld-hx84p 0/1 Terminating 0 1m
helloworld-hx84p 0/1 Terminating 0 1m
helloworld-9hslf 0/1 Pending 0 0s
helloworld-9hslf 0/1 ContainerCreating 0 0s
helloworld-9hslf 1/1 Running 0 2s
helloworld-9zvrr 1/1 Terminating 0 1m
helloworld-9zvrr 0/1 Terminating 0 1m
helloworld-9zvrr 0/1 Terminating 0 1m
helloworld-9zvrr 0/1 Terminating 0 1m
helloworld-4f4cc 0/1 Pending 0 0s
helloworld-4f4cc 0/1 ContainerCreating 0 0s
helloworld-4f4cc 1/1 Running 0 2s
helloworld-6h4b4 1/1 Terminating 0 1m
helloworld-6h4b4 0/1 Terminating 0 1m
helloworld-6h4b4 0/1 Terminating 0 1m
helloworld-6h4b4 0/1 Terminating 0 1m
helloworld-6h4b4 0/1 Terminating 0 1m
helloworld-lbqbd 0/1 Pending 0 0s
helloworld-lbqbd 0/1 ContainerCreating 0 0s
helloworld-lbqbd 1/1 Running 0 2s
Note that it terminates “helloworld-hx84p” before creating “helloworld-9hslf”.
My questions are:
- Why does a Deployment not work for this scenario (the number of pods with a host port equals the number of nodes, and each node gets 1 pod), while a DaemonSet does?
- Is the difference between the DaemonSet controller and the default scheduler related to this issue? (See the DaemonSet page in the Kubernetes docs.)
- If the difference is caused by the DaemonSet controller vs. the default scheduler, how should this scenario be handled with the default scheduler?
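For context, one workaround I’m considering (untested) is disallowing surge pods, so the rollout would have to delete an old pod before creating a new one, something like:

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 0        # never create an extra pod during the rollout
    maxUnavailable: 1  # allow taking one old pod down first instead
```

Would that be the recommended approach with the default scheduler, or is there a better way?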
Any pointers will be of great help to me and my team! Thanks!