Kubernetes Master Worker Node issue

Hi everyone,
I have a problem. I created a new Kubernetes cluster with k3s: one master node and one worker node, and both nodes show Ready in the kubectl get nodes output. I created 5 nginx pods; 2 pods landed on the master node and 3 pods on the worker node. My problem is that when I turn off my worker node, all the pods move to the master node, but after I turn the worker node back on and its status is Ready again, no pods move back from the master node to the worker node. How can I fix this issue?

Hi farhad8463:

This is not an issue :wink:

A given Pod (as defined by a UID) is never “rescheduled” to a different node; instead, that Pod can be replaced by a new, near-identical Pod, with even the same name if desired, but with a different UID.
(From Pod Lifecycle | Kubernetes)

You requested 5 pods, and they are placed 2 on the “master-worker” node and 3 on the “worker-only” node. When you power off the “worker-only” node, Kubernetes sees that the desired state is no longer satisfied and requests the creation of 3 new pods.
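As a reference, a request for 5 nginx pods is usually expressed as a Deployment with replicas: 5, roughly like the sketch below (the name, labels and image tag are placeholders, not taken from your cluster):

```yaml
# Minimal sketch of a Deployment asking for 5 nginx replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx              # placeholder name
spec:
  replicas: 5              # desired state: 5 pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25  # placeholder tag
```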

The Scheduler checks the available nodes (just one, the “master-worker” node), so it schedules the 3 new pods there.
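You can see this yourself from the NODE column; while the worker node is powered off, all 5 pods should be listed on the master node:

```shell
# Show which node each pod is running on
kubectl get pods -o wide
```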

Kubernetes sees that you requested 5 pods and there are now 5 pods again, so Kubernetes is happy :wink:

The important thing to consider is that the Scheduler only looks at the available/schedulable nodes at the moment new pods need to be scheduled.

As you requested 5 pods and there are 5 pods running (all on the same node, but that’s OK), Kubernetes is happy, so bringing the “worker-only” node back into the cluster changes nothing for the existing pods; Kubernetes will not “balance” the number of pods on each node or anything like that.

Only when new pods need to be created will the Scheduler take the availability of the “worker-only” node into account. For example, you can terminate one pod on the “master-worker” node: the current state of the Deployment (4 pods) then no longer matches the desired state (5 pods), so a new pod is requested.

The Scheduler checks the available nodes. As the default policy is to spread pods across as many nodes as possible, preferring nodes with less load, and so on, the “worker-only” node will score better and will be chosen, and the new pod will be created there. The cluster will then have 4 pods on the “master-worker” node and 1 on the “worker-only” node.
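For example (the pod name below is a placeholder; pick any pod currently running on the master node):

```shell
# Delete one pod running on the "master-worker" node; the Deployment
# notices it is at 4/5 replicas and requests a replacement pod
kubectl delete pod nginx-7bf8c77b5b-abcde

# Watch where the replacement pod gets scheduled
kubectl get pods -o wide --watch
```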

There are many ways to influence how the Scheduler chooses among the available nodes when placing a new pod, so please take a look at Kubernetes Scheduler | Kubernetes.
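For instance, if you want new pods spread across nodes whenever possible, a topology spread constraint on the pod template is one option. This is only a sketch; the labels must match the ones used by your own Deployment:

```yaml
# Sketch: ask the Scheduler to spread the nginx pods across nodes
# (goes under the Deployment's pod template spec; labels are placeholders)
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: nginx
```

Note that, like everything described above, this only affects pods at scheduling time; existing pods are not moved.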

Best regards,

Xavi


Thanks for your response and the complete explanation.