CoreDNS pods are scheduled on cordoned nodes

Cluster information:

Kubernetes version: v1.25.9
Cloud being used: bare-metal
Installation method: RKE 1
Host OS: Ubuntu 18.04-20.04
CNI and version: Calico
CRI and version: Docker 19.x-20.x (varies by node)

Yesterday we changed the DNS servers on all nodes to point at our new servers.
It seems we then had issues with an installed application (cnvrg): resolving some URLs failed.
We tried to “restart” CoreDNS by deleting the running pods so that new pods would take their place.
While doing this, we noticed that some new CoreDNS pods were scheduled onto nodes that were cordoned (i.e. carrying the unschedulable taint).
How is that possible? What did we do wrong with our cluster? Does CoreDNS have some kind of privileged scheduling?
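For context on how this can happen: cordoning a node adds the `node.kubernetes.io/unschedulable:NoSchedule` taint, and a pod can still be scheduled there if its spec tolerates that taint. A blanket toleration like the sketch below would have that effect; this is illustrative, and the actual CoreDNS Deployment manifest in an RKE cluster may differ:

```yaml
# Pod spec excerpt (illustrative). A toleration with `operator: Exists`
# and no key matches EVERY taint, including the cordon taint
# node.kubernetes.io/unschedulable:NoSchedule, so the scheduler may
# still place the pod on a cordoned node.
tolerations:
- operator: Exists
```

If the CoreDNS Deployment carries such a toleration, scheduling onto cordoned nodes is expected behavior rather than a cluster misconfiguration.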

Can you explain? Based on the article, CoreDNS won’t schedule pods on nodes that are unreachable or not-ready, but there seems to be no option to prevent it from running on a cordoned node.
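For anyone wanting to verify this, comparing the node's taints against the CoreDNS pod tolerations should show the match. On a cordoned node, the relevant fields of `kubectl get node <name> -o yaml` look roughly like this (field names are from the Node API; the excerpt is illustrative):

```yaml
# Node object excerpt after `kubectl cordon <name>` (illustrative)
spec:
  unschedulable: true
  taints:
  - key: node.kubernetes.io/unschedulable
    effect: NoSchedule
```

If the CoreDNS pods tolerate this taint (explicitly or via a blanket `operator: Exists` toleration), they can legitimately land on the cordoned node.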