I am running an on-prem Kubernetes cluster with 30 nodes: 3 master nodes and the rest worker nodes.
Kubernetes version: 1.29.7
CNI and version: 3.27.4
CRI and version: 1.7.12
Recently, I added 2 new master nodes to the cluster. As per the default behavior of Kubernetes, new master nodes are tainted to prevent workloads from being scheduled on them unless explicitly tolerated.
However, after adding the new master nodes, I noticed that my Longhorn pods, which run as a DaemonSet, were scheduled on these master nodes. None of my other pods were scheduled there, only the DaemonSet pods.
This behavior seems unexpected since the default taint should prevent pods from being scheduled on master nodes.
Is this an expected behavior in the latest version of Kubernetes?
Could this be a bug, or is there some additional configuration required to enforce the default taints?
Any insights or guidance on this issue would be appreciated.
I believe that's expected behavior and not a bug.
- Please check the taints on your master nodes:

```bash
kubectl describe node | grep Taints
```
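If you want the taints listed per node in a single view, a `custom-columns` query works too (a minimal sketch; the column names are arbitrary):

```bash
# List each node with the keys and effects of its taints
kubectl get nodes -o custom-columns='NODE:.metadata.name,TAINT-KEY:.spec.taints[*].key,EFFECT:.spec.taints[*].effect'
```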
- Look for tolerations and affinity in your DaemonSet spec:

```bash
kubectl get daemonset <daemonset-name> -n <namespace> -o yaml
```
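For example, assuming a default Longhorn install (DaemonSet `longhorn-manager` in the `longhorn-system` namespace; adjust the names to your deployment), you can pull out just the tolerations:

```bash
# Print only the pod tolerations defined in the DaemonSet's pod template
kubectl get daemonset longhorn-manager -n longhorn-system \
  -o jsonpath='{.spec.template.spec.tolerations}'
```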
- Master (control-plane) nodes automatically get a `NoSchedule` taint. On Kubernetes 1.29 the key is `node-role.kubernetes.io/control-plane`; older clusters used `node-role.kubernetes.io/master`:

```yaml
key: node-role.kubernetes.io/control-plane
effect: NoSchedule
```

DaemonSets such as Longhorn's are commonly shipped with a toleration for this taint (sometimes a blanket `operator: Exists` toleration with no key), because they are meant for cluster-wide operations that need to run on every node.
- Look for the `tolerations` field in the DaemonSet spec and remove or modify the entry that tolerates the control-plane taint:

```yaml
tolerations:
  - key: "node-role.kubernetes.io/control-plane"   # "node-role.kubernetes.io/master" on older clusters
    effect: "NoSchedule"
    operator: "Exists"                             # Remove this entry to disallow scheduling on master nodes
```
- You can also add a node affinity rule so the pods are only scheduled onto nodes that do not carry the control-plane label:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-role.kubernetes.io/control-plane
              operator: DoesNotExist
```
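To confirm which nodes this affinity rule will exclude, check which nodes carry the control-plane role label (on 1.29, kubeadm labels control-plane nodes with `node-role.kubernetes.io/control-plane`):

```bash
# List only the nodes that carry the control-plane role label
kubectl get nodes -l node-role.kubernetes.io/control-plane
```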