DNS fails on worker node but works fine on master node


Cluster information:

Kubernetes version:
kubeadm version: &version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.1", GitCommit:"8f94681cd294aa8cfd3407b8191f6c70214973a4", GitTreeState:"clean", BuildDate:"2023-01-18T15:56:50Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}
Installation method:
Host OS: ubuntu 10.3.0
CNI and version:
CRI and version: containerd v1.6.19 1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f

I used dnsutils.yaml to test DNS. Everything is OK on the master node, but it fails on the worker node.
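(The dnsutils.yaml referred to here is presumably the one from the Kubernetes "Debugging DNS Resolution" guide; a sketch of that manifest and the test commands follows. The image tag and the nodeName hint are assumptions, not from the post.)

apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  containers:
  - name: dnsutils
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command: ["sleep", "infinity"]
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
  # set spec.nodeName here to pin the pod to a specific node when comparing master vs worker

# apply the manifest and run the same lookup from the pod
kubectl apply -f dnsutils.yaml
kubectl exec -it dnsutils -- nslookup kubernetes.default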

/ # dig

; <<>> DiG 9.11.6-P1 <<>>
;; global options: +cmd
;; connection timed out; no servers could be reached

/ # cat /etc/resolv.conf 
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5

## The service IP is right, and I didn't find any errors in the kube-proxy log at --v=5.
kube-system   kube-dns                      ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP                          7d7h    k8s-app=kube-dns
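(A couple of standard checks that would back that up, not run in the original post: confirm the kube-dns Service has endpoints and that the CoreDNS pods are actually running.)

kubectl -n kube-system get svc kube-dns
kubectl -n kube-system get endpoints kube-dns
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide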

It looks like the host 10.96.0.10 is unreachable, but I have no idea how to solve the problem.
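(One way to narrow that down, an added suggestion rather than something from the post: query a CoreDNS pod IP directly from the dnsutils pod on the worker. If the pod IP answers but the Service IP 10.96.0.10 does not, kube-proxy is the suspect; if neither answers from the worker, it points at the pod network itself. The pod IP placeholder below is hypothetical and comes from the -o wide output above.)

kubectl exec -it dnsutils -- dig @<coredns-pod-ip> kubernetes.default.svc.cluster.local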

What CNI are you using?

I use Calico, and I found this error message: "Could not resolve CalicoNetwork IPPool and kubeadm configuration: IPPool 10.224.0.0/16 is not within the platform's configured pod network CIDR(s) [10.244.0.0/16]"
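(That mismatch corresponds to the ipPools cidr in the Tigera operator's Installation resource; a sketch of what the corrected pool would look like, assuming the operator-based custom-resources.yaml. The non-cidr fields below are the operator defaults and may not match the actual setup.)

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16    # must sit within kubeadm's --pod-network-cidr (was 10.224.0.0/16)
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()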

The DNS pods will not come up correctly until you have a functional CNI network.

Thanks, I fixed it. The IPPool configuration was wrong.