CoreDNS service pingable, but DNS queries are not reaching the service

Cluster information:

Kubernetes version: 1.16.3
Installation method: bare-metal
Host OS: Gentoo Linux
CNI and version: kube-proxy 1.16.3
CRI and version: docker v1.18.3

Hey there again,

After configuring the master node and joining a worker node (a Debian Buster VM), I wanted to install Helm. The installation fails with a DNS timeout error, so I started debugging DNS as described in [1]. I found that I can ping the CoreDNS service from a pod, but DNS queries never get through. I suspect a missing iptables rule. Can anyone give me a hint?

[1] Debugging DNS Resolution - Kubernetes
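
Since I suspect the iptables rules, my plan is to dump the NAT rules for the service IP directly on the node. Assuming kube-proxy runs in its default iptables mode, there should be KUBE-SERVICES entries for the cluster IP 10.96.0.10 (just a sketch of the check, output omitted):

iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.10
iptables-save -t nat | grep 10.96.0.10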

master ~ # kubectl apply -f https://k8s.io/examples/admin/dns/busybox.yaml
pod/busybox created
master ~ # kubectl exec -ti busybox -- nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10

nslookup: can't resolve 'kubernetes.default'
command terminated with exit code 1
master ~ # kubectl exec -ti busybox -- ping -c1 10.96.0.10
PING 10.96.0.10 (10.96.0.10): 56 data bytes
64 bytes from 10.96.0.10: seq=0 ttl=64 time=0.117 ms

--- 10.96.0.10 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.117/0.117/0.117 ms
master ~ # kubectl exec -ti busybox -- telnet 10.96.0.10 53
telnet: can't connect to remote host (10.96.0.10): No route to host
command terminated with exit code 1
master ~ # kubectl exec -ti busybox -- traceroute 10.96.0.10
traceroute to 10.96.0.10 (10.96.0.10), 30 hops max, 46 byte packets
 1  10.96.0.10 (10.96.0.10)  0.007 ms  0.035 ms  0.010 ms
master ~ # kubectl get pods -A
NAMESPACE             NAME                              READY   STATUS    RESTARTS   AGE
default               busybox                           1/1     Running   0          5m17s
gitlab-managed-apps   install-helm                      0/1     Error     0          7m6s
kube-system           coredns-5644d7b6d9-pqgzl          1/1     Running   1          20h
kube-system           coredns-5644d7b6d9-xpgg8          1/1     Running   1          20h
kube-system           etcd-master                       1/1     Running   1          20h
kube-system           kube-apiserver-master             1/1     Running   1          20h
kube-system           kube-controller-manager-master    1/1     Running   1          20h
kube-system           kube-proxy-92tz5                  1/1     Running   1          20h
kube-system           kube-proxy-jcv2h                  1/1     Running   2          20h
kube-system           kube-router-swmkq                 1/1     Running   1          17h
kube-system           kube-router-z4km2                 1/1     Running   1          17h
kube-system           kube-scheduler-master             1/1     Running   1          20h
master ~ # kubectl describe pods kube-proxy-92tz5 -n kube-system
Name:                 kube-proxy-92tz5
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 master/1.2.3.4
Start Time:           Thu, 05 Dec 2019 13:44:36 +0100
Labels:               controller-revision-hash=56c95f6b7b
                      k8s-app=kube-proxy
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
IP:                   1.2.3.4
IPs:
  IP:           1.2.3.4
Controlled By:  DaemonSet/kube-proxy
Containers:
  kube-proxy:
    Container ID:  docker://9bf29f35c08252ff47c0203e92094fdfcf60bc1aac0a81faf267cf17d689c997
    Image:         k8s.gcr.io/kube-proxy:v1.16.3
    Image ID:      docker-pullable://k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
      --hostname-override=$(NODE_NAME)
    State:          Running
      Started:      Fri, 06 Dec 2019 09:34:20 +0100
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 05 Dec 2019 13:44:39 +0100
      Finished:     Fri, 06 Dec 2019 09:30:31 +0100
    Ready:          True
    Restart Count:  1
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-proxy-token-2wwld (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-proxy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  kube-proxy-token-2wwld:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-proxy-token-2wwld
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     CriticalAddonsOnly
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/network-unavailable:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type    Reason          Age   From           Message
  ----    ------          ----  ----           -------
  Normal  SandboxChanged  38m   kubelet, master  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          38m   kubelet, master  Container image "k8s.gcr.io/kube-proxy:v1.16.3" already present on machine
  Normal  Created         38m   kubelet, master  Created container kube-proxy
  Normal  Started         38m   kubelet, master  Started container kube-proxy
master ~ # kubectl describe pods kube-proxy-jcv2h -n kube-system                
Name:                 kube-proxy-jcv2h
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 debian-vm/192.168.122.76
Start Time:           Thu, 05 Dec 2019 13:48:37 +0100
Labels:               controller-revision-hash=56c95f6b7b
                      k8s-app=kube-proxy
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
IP:                   192.168.122.76
IPs:
  IP:           192.168.122.76
Controlled By:  DaemonSet/kube-proxy
Containers:
  kube-proxy:
    Container ID:  docker://68d97c857a7e1048b9bb2ed252666b5627d9b648c5806b6735108bfb97fc4a67
    Image:         k8s.gcr.io/kube-proxy:v1.16.3
    Image ID:      docker-pullable://k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
      --hostname-override=$(NODE_NAME)
    State:          Running
      Started:      Fri, 06 Dec 2019 09:51:44 +0100
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 05 Dec 2019 14:17:08 +0100
      Finished:     Fri, 06 Dec 2019 09:29:41 +0100
    Ready:          True
    Restart Count:  2
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-proxy-token-2wwld (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-proxy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  kube-proxy-token-2wwld:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-proxy-token-2wwld
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     CriticalAddonsOnly
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/network-unavailable:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type    Reason          Age   From                Message
  ----    ------          ----  ----                -------
  Normal  SandboxChanged  21m   kubelet, debian-vm  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          21m   kubelet, debian-vm  Container image "k8s.gcr.io/kube-proxy:v1.16.3" already present on machine
  Normal  Created         21m   kubelet, debian-vm  Created container kube-proxy
  Normal  Started         21m   kubelet, debian-vm  Started container kube-proxy
master ~ # for p in $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name); do kubectl logs --namespace=kube-system $p; done
.:53
2019-12-06T08:34:29.658Z [INFO] plugin/reload: Running configuration MD5 = ed017d072d4dd28f5c79c00674bf5857
2019-12-06T08:34:29.658Z [INFO] CoreDNS-1.6.2
2019-12-06T08:34:29.658Z [INFO] linux/amd64, go1.12.8, 795a3eb
CoreDNS-1.6.2
linux/amd64, go1.12.8, 795a3eb
2019-12-06T08:34:29.782Z [INFO] 127.0.0.1:43673 - 29415 "HINFO IN 1634064924605486062.6862218265656759711. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025162608s
.:53
2019-12-06T08:34:29.658Z [INFO] plugin/reload: Running configuration MD5 = ed017d072d4dd28f5c79c00674bf5857
2019-12-06T08:34:29.658Z [INFO] CoreDNS-1.6.2
2019-12-06T08:34:29.658Z [INFO] linux/amd64, go1.12.8, 795a3eb
CoreDNS-1.6.2
linux/amd64, go1.12.8, 795a3eb
2019-12-06T08:34:29.789Z [INFO] 127.0.0.1:44707 - 290 "HINFO IN 1539767525161333548.3248652758480068173. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031925168s
master ~ #
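
One more thing I plan to check is which proxy mode kube-proxy is actually using and whether it logged any errors while writing its rules. A rough sketch (pod name taken from the listing above, output omitted):

kubectl -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'
kubectl -n kube-system logs kube-proxy-92tz5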

Hi again,

One (more) thing is strange here: the CoreDNS endpoints point to Docker IP addresses (172.17.0.x), and the node name is "master" for both pods. Might that be the source of the problem? And can this be configured? (I have sketched after the endpoints output below how I would check this.)

master ~ # kubectl get ep kube-dns --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: "2019-12-07T16:09:02Z"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: KubeDNS
  name: kube-dns
  namespace: kube-system
  resourceVersion: "349631"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-dns
  uid: c09058e0-94e5-4150-8640-7c1ec4b60194
subsets:
- addresses:
  - ip: 172.17.0.2
    nodeName: master
    targetRef:
      kind: Pod
      name: coredns-5644d7b6d9-55ff9
      namespace: kube-system
      resourceVersion: "349518"
      uid: 17bd6500-0643-4707-a284-d56edfbdea76
  - ip: 172.17.0.3
    nodeName: master
    targetRef:
      kind: Pod
      name: coredns-5644d7b6d9-g4qjf
      namespace: kube-system
      resourceVersion: "349527"
      uid: dcca863b-78bd-4a15-ad47-1a704f2a6882
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
master ~ #
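
To verify whether those 172.17.0.x addresses really come from the Docker bridge rather than from the pod CIDR that kube-router should hand out, I would compare the per-node pod CIDRs with the actual pod IPs. A rough sketch (output omitted):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
kubectl -n kube-system get pods -o wide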