One node cannot resolve domain names

Cluster information:

Kubernetes version: 1.18
Cloud being used:
Installation method:
Host OS: linux
CNI and version: flannel
CRI and version:

I have a Kubernetes cluster with one master and three nodes:

k8smaster, k8snode1, k8snode2, k8snode3

I ran the following YAML as a test (the pods use hostNetwork with dnsPolicy ClusterFirstWithHostNet, so each one exercises its own node's DNS path):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-0
  namespace: default
spec:
  containers:
  - image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox-0
  restartPolicy: Always
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  nodeName: k8snode1
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox-1
  namespace: default
spec:
  containers:
  - image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox-1
  restartPolicy: Always
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  nodeName: k8snode2
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox-2
  namespace: default
spec:
  containers:
  - image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox-2
  restartPolicy: Always
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  nodeName: k8snode3

[root@k8smaster wxms]# kubectl get pod -o wide | grep busy
busybox-0   1/1     Running   53   2d5h    192.168.0.117   k8snode1   <none>    <none>
busybox-1   1/1     Running   53   2d5h    192.168.0.128   k8snode2   <none>    <none>
busybox-2   1/1     Running   53   2d5h    192.168.0.73    k8snode3   <none>    <none>

node1

[root@k8smaster wxms]# kubectl exec -it busybox-0 -- sh
/ #
/ # hostname
K8snode1
/ #
/ # ping www.baidu.com
ping: bad address 'www.baidu.com'
/ #
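
A quick way to narrow this down (a sketch, using busybox nslookup's optional server argument and the IPs already shown in this post):

# inside busybox-0 on k8snode1:
nslookup kubernetes.default.svc.cluster.local 10.96.0.10    # via the kube-dns Service VIP
nslookup kubernetes.default.svc.cluster.local 10.244.0.16   # direct to a CoreDNS pod

If the first query times out while the second answers, the fault is in this node's path to the Service VIP rather than in CoreDNS itself, which matches the resolv.conf workaround further down.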

node2

[root@k8smaster ~]#  kubectl exec -it busybox-1 -- sh
/ # hostname
k8snode2
/ # ping www.baidu.com
PING www.baidu.com (180.101.49.12): 56 data bytes
64 bytes from 180.101.49.12: seq=0 ttl=47 time=14.850 ms
64 bytes from 180.101.49.12: seq=1 ttl=47 time=14.731 ms
64 bytes from 180.101.49.12: seq=2 ttl=47 time=14.708 ms
^C
--- www.baidu.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 14.708/14.763/14.850 ms
/ #

node3

[root@k8smaster ~]#  kubectl exec -it busybox-2 -- sh
/ # hostname
k8snode3
/ #
/ # ping www.baidu.com
PING www.baidu.com (180.101.49.12): 56 data bytes
64 bytes from 180.101.49.12: seq=0 ttl=47 time=17.010 ms
64 bytes from 180.101.49.12: seq=1 ttl=47 time=14.680 ms
64 bytes from 180.101.49.12: seq=2 ttl=47 time=14.414 ms
64 bytes from 180.101.49.12: seq=3 ttl=47 time=14.408 ms
64 bytes from 180.101.49.12: seq=4 ttl=47 time=14.502 ms
64 bytes from 180.101.49.12: seq=5 ttl=47 time=14.427 ms
^C
--- www.baidu.com ping statistics ---
6 packets transmitted, 6 packets received, 0% packet loss
round-trip min/avg/max = 14.408/14.906/17.010 ms
/ #

coredns

[root@k8smaster ~]# kubectl get po -n kube-system -o wide | grep dns
coredns-5ffc8cf9c9-56lh2   1/1     Running   0  25d    10.244.0.16     k8smaster   <none>           <none>
coredns-5ffc8cf9c9-8jrtf   1/1     Running   0  25d    10.244.3.46     k8snode3    <none>           <none>

If I change the nameserver in /etc/resolv.conf to a CoreDNS pod IP, it works:

#nameserver 10.96.0.10
nameserver 10.244.0.16
search default.svc.cluster.local svc.cluster.local cluster.local openstacklocal
options ndots:5

Then on node1:

/ # ping www.baidu.com
PING www.baidu.com (180.101.49.11): 56 data bytes
64 bytes from 180.101.49.11: seq=0 ttl=47 time=13.709 ms
64 bytes from 180.101.49.11: seq=1 ttl=47 time=11.078 ms
^C
--- www.baidu.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 11.078/12.393/13.709 ms
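
Since a CoreDNS pod IP answers but the Service VIP 10.96.0.10 does not, the NAT rules kube-proxy programs for the kube-dns Service appear to be broken on k8snode1 only. A hedged check on that node (assuming kube-proxy runs in iptables mode; the second line is the IPVS-mode equivalent):

# on k8snode1's host shell:
iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.10
ipvsadm -Ln | grep -A2 10.96.0.10

Running the same commands on k8snode2, where DNS works, should show what is missing on k8snode1.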

kube-dns logs

[root@k8smaster ~]# kubectl logs --namespace=kube-system -l k8s-app=kube-dns
[ERROR] plugin/errors: 2 143.3.244.10.in-addr.arpa. PTR: read udp 10.244.0.16:59179->114.114.114.114:53: i/o timeout
[ERROR] plugin/errors: 2 143.3.244.10.in-addr.arpa. PTR: read udp 10.244.0.16:56471->114.114.114.114:53: i/o timeout
[ERROR] plugin/errors: 2 144.3.244.10.in-addr.arpa. PTR: read udp 10.244.0.16:36941->114.114.114.114:53: i/o timeout
[ERROR] plugin/errors: 2 144.3.244.10.in-addr.arpa. PTR: read udp 10.244.0.16:59206->114.114.114.114:53: i/o timeout
[ERROR] plugin/errors: 2 145.3.244.10.in-addr.arpa. PTR: read udp 10.244.0.16:37024->114.114.114.114:53: i/o timeout
[ERROR] plugin/errors: 2 145.3.244.10.in-addr.arpa. PTR: read udp 10.244.0.16:52478->114.114.114.114:53: i/o timeout
[ERROR] plugin/errors: 2 148.3.244.10.in-addr.arpa. PTR: read udp 10.244.0.16:46214->114.114.114.114:53: i/o timeout
[ERROR] plugin/errors: 2 148.3.244.10.in-addr.arpa. PTR: read udp 10.244.0.16:55425->114.114.114.114:53: i/o timeout
[ERROR] plugin/errors: 2 158.3.244.10.in-addr.arpa. PTR: read udp 10.244.0.16:46974->114.114.114.114:53: i/o timeout
[ERROR] plugin/errors: 2 158.3.244.10.in-addr.arpa. PTR: read udp 10.244.0.16:49570->114.114.114.114:53: i/o timeout

[INFO] 10.244.2.132:58592 - 2530 "A IN seata-server.default.svc.cluster.local.openstacklocal. udp 71 false 512" NXDOMAIN qr,aa,rd,ra 146 0.000128288s
[INFO] 10.244.1.83:37946 - 34309 "A IN seata-server.default.svc.cluster.local.default.svc.cluster.local. udp 82 false 512" NXDOMAIN qr,aa,rd 175 0.000155371s
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
[INFO] Reloading complete
[ERROR] plugin/errors: 2 138.3.244.10.in-addr.arpa. PTR: read udp 10.244.3.46:49514->114.114.114.114:53: i/o timeout
[ERROR] plugin/errors: 2 138.3.244.10.in-addr.arpa. PTR: read udp 10.244.3.46:45862->114.114.114.114:53: i/o timeout
[ERROR] plugin/errors: 2 140.3.244.10.in-addr.arpa. PTR: read udp 10.244.3.46:57703->114.114.114.114:53: i/o timeout
[ERROR] plugin/errors: 2 140.3.244.10.in-addr.arpa. PTR: read udp 10.244.3.46:35707->114.114.114.114:53: i/o timeout
[ERROR] plugin/errors: 2 159.3.244.10.in-addr.arpa. PTR: read udp 10.244.3.46:34596->114.114.114.114:53: i/o timeout
[ERROR] plugin/errors: 2 159.3.244.10.in-addr.arpa. PTR: read udp 10.244.3.46:49975->114.114.114.114:53: i/o timeout
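
The PTR errors above are reverse lookups for pod IPs (10.244.x.x) that CoreDNS forwards to the upstream resolver 114.114.114.114, where they time out; they indicate an upstream reachability problem, separate from the per-node Service issue. The NXDOMAIN INFO lines are just ndots:5 plus the search list expanding names before the final absolute query, and are expected. The configured upstream can be confirmed from the Corefile:

kubectl -n kube-system get configmap coredns -o yaml | grep -A2 forward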
[root@k8smaster ~]# kubectl get ep kube-dns --namespace=kube-system
NAME       ENDPOINTS                                                    AGE
kube-dns   10.244.0.16:53,10.244.3.46:53,10.244.0.16:9153 + 3 more...   392d
[root@k8smaster ~]# kubectl get svc --namespace=kube-system
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
kube-dns         ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   392d
metrics-server   ClusterIP   10.108.23.241   <none>        443/TCP                  26d

kube-proxy

[root@k8smaster ~]# kubectl get pods -n kube-system -o wide | grep kube-proxy
kube-proxy-2j28w  1/1     Running   0   2d5h   192.168.0.56    k8smaster   <none>           <none>
kube-proxy-bxzkt  1/1     Running   0   2d5h   192.168.0.117   k8snode1    <none>           <none>
kube-proxy-lnmjt  1/1     Running   0   2d5h   192.168.0.128   k8snode2    <none>           <none>
kube-proxy-th9pp  1/1     Running   0   2d5h   192.168.0.73    k8snode3    <none>           <none>
[root@k8smaster ~]#
[root@k8smaster ~]# kubectl logs kube-proxy-bxzkt --tail=5  -n kube-system
W0520 08:28:44.004213 1 iptables.go:562] Could not check for iptables canary mangle/KUBE-PROXY-CANARY: exit status 4
W0520 08:29:14.004129 1 iptables.go:562] Could not check for iptables canary mangle/KUBE-PROXY-CANARY: exit status 4
W0520 08:29:44.004042 1 iptables.go:562] Could not check for iptables canary mangle/KUBE-PROXY-CANARY: exit status 4
[root@k8smaster ~]#
[root@k8smaster ~]# kubectl logs kube-proxy-lnmjt --tail=5  -n kube-system
I0519 07:53:16.612070 1 shared_informer.go:230] Caches are synced for endpoints config
W0520 08:26:21.522852 1 iptables.go:562] Could not check for iptables canary mangle/KUBE-PROXY-CANARY: exit status 4
W0520 08:26:51.522669 1 iptables.go:562] Could not check for iptables canary mangle/KUBE-PROXY-CANARY: exit status 4
W0520 08:27:21.522677 1 iptables.go:562] Could not check for iptables canary mangle/KUBE-PROXY-CANARY: exit status 4
[root@k8smaster ~]#
[root@k8smaster ~]# kubectl logs kube-proxy-th9pp --tail=5  -n kube-system
W0520 08:24:59.419474       1 iptables.go:562] Could not check for iptables canary mangle/KUBE-PROXY-CANARY: exit status 4
W0520 08:25:29.408271       1 iptables.go:562] Could not check for iptables canary mangle/KUBE-PROXY-CANARY: exit status 4
W0520 08:25:59.409644       1 iptables.go:562] Could not check for iptables canary mangle/KUBE-PROXY-CANARY: exit status 4
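
The canary warning (iptables exiting with status 4) appears on all three nodes, so by itself it does not single out k8snode1, but it can indicate an iptables problem worth ruling out: a legacy-vs-nf_tables backend mismatch between the host and the kube-proxy image is a known cause of broken Service NAT on clusters of this era. A hedged pair of checks:

# on k8snode1: note whether the version string says "legacy" or "nf_tables"
iptables --version

# recreating the kube-proxy pod makes the DaemonSet rebuild it and rewrite
# its rules, a low-risk way to re-test the Service path:
kubectl delete pod kube-proxy-bxzkt -n kube-system

If the VIP still fails after that, comparing kernel modules (lsmod | grep -e ip_tables -e iptable_nat) and the flannel interface state between k8snode1 and k8snode2 would be the next step.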