Kubernetes version: v1.25.9
Cloud being used: bare-metal
Installation method: RKE 1
Host OS: Ubuntu 18.04-20.04
CNI and version: Calico
CRI and version: Docker 19.x to 20.x
Yesterday we changed every node's DNS servers to point at our new servers.
It seems we then had issues with the installed application (cnvrg, https://cnvrg.io/): resolving some URLs failed.
We tried to “restart” CoreDNS by deleting the running pods and letting new pods take their place.
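For reference, the restart we performed was roughly the following (the `k8s-app=kube-dns` label is the usual CoreDNS selector, but your manifest may differ; `kubectl rollout restart` is the more conventional approach):

```shell
# Delete the running CoreDNS pods; the Deployment's ReplicaSet
# recreates them immediately on whichever nodes the scheduler picks.
kubectl -n kube-system delete pods -l k8s-app=kube-dns

# Alternatively, a rolling restart avoids a window with no DNS pods at all:
kubectl -n kube-system rollout restart deployment coredns

# Afterwards, check where the new pods landed:
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
```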
But while doing this, we noticed CoreDNS scheduled some of the new pods on nodes that were cordoned, i.e. carrying the unschedulable taint.
How can this happen? What did we do wrong with our cluster? Does CoreDNS have some privileged toleration?
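For context: cordoning sets `spec.unschedulable` on the node and, on recent Kubernetes versions, also adds the `node.kubernetes.io/unschedulable:NoSchedule` taint. A pod can still be scheduled there if its spec tolerates that taint. The snippet below is illustrative only (the actual tolerations depend on how RKE templated the CoreDNS manifest; verify with `kubectl -n kube-system get deployment coredns -o yaml`):

```yaml
tolerations:
  # A blanket toleration like this matches EVERY taint,
  # including node.kubernetes.io/unschedulable:NoSchedule,
  # so the scheduler may place the pod on a cordoned node.
  - operator: Exists
  # Narrower tolerations commonly seen on CoreDNS only cover these:
  - key: CriticalAddonsOnly
    operator: Exists
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule
```

If your CoreDNS Deployment carries a blanket `operator: Exists` toleration, that would explain pods landing on cordoned nodes without anything being misconfigured on your side.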