Kubernetes Algorithm

#1

We would like to understand the algorithm or logic behind pods being shuffled between different worker nodes. Is there a known threshold at which a worker node (say, one at 90% usage) moves a pod off itself to another worker node that has comparatively more free capacity? In our setup (3 masters with 30 worker nodes), we have noticed that a few nodes frequently hit 90% usage and above, while many other worker nodes are still below 40%. On what criteria is a pod moved from one node to another, and is there any way to set a custom threshold, say, so that a worker node is not loaded beyond 75%? Please provide suggestions at the earliest, as we get hiccups now and then in our production environment.
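For background on the placement side of this question: by default the kube-scheduler only chooses a node when a pod is *created*, and it scores nodes on declared resource requests, not live utilization. The following is a deliberately simplified, illustrative sketch (not the scheduler's actual code) of the "least allocated" scoring idea, where nodes with more unrequested capacity score higher; all node names and numbers are hypothetical:

```python
def least_allocated_score(capacity: dict, requested: dict, pod_request: dict) -> float:
    """Simplified sketch of a 'least allocated' node score:
    nodes with more free *requested* capacity score higher.
    The real scheduler also runs filter plugins, weights, taints, etc."""
    scores = []
    for res, cap in capacity.items():
        used = requested.get(res, 0) + pod_request.get(res, 0)
        free = max(cap - used, 0)
        scores.append(free * 100 / cap)  # percent of capacity still free
    return sum(scores) / len(scores)

# Two hypothetical nodes: one heavily requested, one lightly requested.
pod = {"cpu": 200, "memory": 1e9}  # 200m CPU, 1 GB memory
busy = least_allocated_score({"cpu": 4000, "memory": 16e9},
                             {"cpu": 3600, "memory": 14e9}, pod)
quiet = least_allocated_score({"cpu": 4000, "memory": 16e9},
                              {"cpu": 1200, "memory": 5e9}, pod)
print(busy, quiet)  # the lightly requested node scores higher
```

Note the key point: if pods under-declare their requests, the scheduler can still consider a node "free" even while its actual usage is at 90%.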


#2

Pods are not usually moved unless it's needed (for example, when the node crashes). There are some components that can evict pods, e.g. for autoscaling or to make room for higher-priority pods, etc. But you don’t get any of that unless you configure it.
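One such optional component is the descheduler (the kubernetes-sigs/descheduler project), whose `LowNodeUtilization` strategy evicts pods from over-utilized nodes so the scheduler can replace them on under-utilized ones. As a rough, illustrative sketch (a v1alpha1-style policy; thresholds here just mirror the 40%/75% numbers from the question, and utilization is measured on *requests*, not live usage):

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        # Nodes below all of these are considered under-utilized.
        thresholds:
          "cpu": 40
          "memory": 40
        # Nodes above any of these are candidates for eviction.
        targetThresholds:
          "cpu": 75
          "memory": 75
```

Check the descheduler documentation for the policy API version and exact field names that match your release before using anything like this.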

Nevertheless, I’m not sure you need any of that.

If your pods request (as `requests` in the pod spec) everything they need to work correctly, it shouldn’t be an issue that a server is at 80%, right? I guess in your case the pod requests were maybe not adjusted correctly? Or are you intensively using some other resource (like disk IO) that is shared and not possible to specify in requests?
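For reference, requests and limits are declared per container in the pod spec; the scheduler places pods by comparing the sum of requests against each node's allocatable capacity. A minimal example (names, image, and values are purely illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app        # hypothetical name
spec:
  containers:
    - name: app
      image: example/app:1.0   # hypothetical image
      resources:
        requests:          # used by the scheduler for placement
          cpu: "500m"
          memory: "512Mi"
        limits:            # hard caps enforced at runtime
          cpu: "1"
          memory: "1Gi"
```

If requests are much lower than real usage, the scheduler will happily keep packing pods onto nodes that are already busy in practice.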

Do you know what happens exactly when you have these “hiccups”?
