How does the scheduler decide which pod to run on which node?

Hello there,

I have this topology: 1 master and two workers (all running on my laptop in VirtualBox).
Let's call them: 1) master, 2) worker1, 3) worker2.
They all have exactly the same resources (2 vCPU, 4 GB RAM).

I followed Exposing an External IP Address to Access an Application in a Cluster | Kubernetes to run a Deployment with replicas on different nodes, but when I set the replica count to 2, both pods run on worker1, while I expected them to be distributed across both nodes (1 on worker1 and 1 on worker2). I also tried deleting the Service and the Deployment and recreating them, but the result was the same.

I also increased the replica count to 20, and I saw 12 pods on worker1 and 8 on worker2.

  1. Why does this happen? I know the scheduler distributes pods across nodes based on free resources, but every time worker1 ends up with more running pods than worker2.
  2. When both worker nodes are up, pods run on the two workers, but none of them run on the master node. Why is that? Do pods only run on the master if the workers are turned off?

If you want to “enforce” the node distribution, you can use PodAntiAffinity:
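A minimal sketch of what that could look like (the Deployment name, labels, and image below are placeholders loosely based on the tutorial you followed; adjust them to match your own manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world            # placeholder name, use your own Deployment's name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world       # the anti-affinity rule below matches on this label
    spec:
      affinity:
        podAntiAffinity:
          # A "required" rule is hard: two pods carrying the same app=hello-world
          # label can never be scheduled onto the same node (same hostname).
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - hello-world
              topologyKey: kubernetes.io/hostname
      containers:
        - name: hello-world
          image: gcr.io/google-samples/node-hello:1.0   # placeholder image, use your own
```

With this rule and `topologyKey: kubernetes.io/hostname`, no two pods of the Deployment can share a node, so 2 replicas will be spread 1/1 across worker1 and worker2; any replicas beyond the number of schedulable nodes will stay Pending. If you only want a soft preference rather than a hard requirement, use `preferredDuringSchedulingIgnoredDuringExecution` instead.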

And regarding the default scheduler, you can find more information here:
