CoreDNS and problems resolving hostnames

I have two Kubernetes (v1.18.3) pods running via Rancher:

#1 - busybox
#2 - dnsutils

From pod #1:

/ # cat /etc/resolv.conf 
nameserver 10.43.0.10
search testspace.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

and then

/ # nslookup kubernetes.default
Server:    10.43.0.10
Address 1: 10.43.0.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'kubernetes.default'
/ # nslookup kubernetes.default
Server:    10.43.0.10
Address 1: 10.43.0.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'kubernetes.default'
/ # nslookup kubernetes.default
Server:    10.43.0.10
Address 1: 10.43.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.43.0.1 kubernetes.default.svc.cluster.local

So it sometimes works, but mostly it doesn't.

Then from pod #2:

nameserver 10.43.0.10
search testspace.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

and then:

/ # nslookup kubernetes.default
;; connection timed out; no servers could be reached

/ # nslookup kubernetes.default
;; connection timed out; no servers could be reached

/ # nslookup kubernetes.default
Server:         10.43.0.10
Address:        10.43.0.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.43.0.1
;; connection timed out; no servers could be reached

So it mostly doesn't work.

The same problem occurs when I try to reach any external hostname.

I also tried to troubleshoot based on the article from here.
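
A condensed version of those checks (assuming the kube-dns service name shown in the nslookup output above and the usual k8s-app=kube-dns pod label; adjust if your cluster names them differently):

# is the service there and does it have endpoints?
kubectl -n kube-system get svc kube-dns
kubectl -n kube-system get endpoints kube-dns

# are the CoreDNS pods running, and do queries show up in their logs?
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50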

The CoreDNS ConfigMap:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  Corefile: |
    .:53 {
        log
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . "/etc/resolv.conf"
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"Corefile":".:53 {\n    errors\n    health {\n      lameduck 5s\n    }\n    ready\n    kubernetes cluster.local in-addr.arpa ip6.arpa {\n      pods insecure\n      fallthrough in-addr.arpa ip6.arpa\n    }\n    prometheus :9153\n    forward . \"/etc/resolv.conf\"\n    cache 30\n    loop\n    reload\n    loadbalance\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"coredns","namespace":"kube-system"}}
  creationTimestamp: "2020-08-07T19:28:25Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:Corefile: {}
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
    manager: kubectl
    operation: Update
    time: "2020-08-24T19:22:17Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "4118524"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: 1f3615b0-9349-4bc5-990b-7fed31879fa2
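
One more data point that may help narrow this down: querying a CoreDNS pod IP directly skips the 10.43.0.10 service VIP, so it separates "CoreDNS itself is broken" from "the service/kube-proxy path to it is broken". This is only a sketch; the k8s-app=kube-dns label and the dnsutils pod name/namespace are taken from above and may differ in your cluster:

# find a CoreDNS pod IP
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide

# query that pod IP directly instead of the service VIP (replace <coredns-pod-ip>)
kubectl -n testspace exec -it dnsutils -- nslookup kubernetes.default.svc.cluster.local <coredns-pod-ip>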

Any thoughts on that?

Did you ever find a resolution for this? I have a new clean cluster and see the same behavior. Anything I try to nslookup comes back with the error “;; connection timed out; no servers could be reached”.

Still looking for a solution. I have set up two other clusters which have the same problem.

I am getting the same issue when resolving services from a pod. CoreDNS and the flannel CNI are deployed in my cluster. The pod CIDR is 10.244.0.0/16 and the CoreDNS service IP is 10.96.0.10.
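
A rough way to tell whether anything is answering on that service IP at all (this assumes a pod with dig available, e.g. a dnsutils pod like the one mentioned at the top of the thread):

# single query with a short timeout against the CoreDNS service IP
kubectl exec -it dnsutils -- dig @10.96.0.10 kubernetes.default.svc.cluster.local +time=2 +tries=1

# and check that kube-proxy, flannel and CoreDNS pods are all healthy
kubectl -n kube-system get pods -o wide | grep -E 'coredns|flannel|kube-proxy'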

Hi,
did you find a solution?
I have a very similar issue.

Same issue on minikube: CoreDNS reports a cluster IP of 10.96.0.10, yet nslookup results in “;; connection timed out; no servers could be reached”.
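
One thing worth ruling out here (just a sanity check; use whatever test pod you have running in place of busybox):

# the nameserver in the pod's resolv.conf should match the kube-dns cluster IP
kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'
kubectl exec -it busybox -- cat /etc/resolv.conf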

Did anyone find a solution yet?


Hi,

I had a similar issue with k3s: the worker node wasn't able to ping the CoreDNS service or pod. I ended up resolving it by moving from Fedora 34 to Ubuntu 20.04; the problem seemed similar to this.

I think the issue was that the Fedora 34 image I was running had neither iptables nor nftables installed.
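
In case it helps anyone else hitting this, these are the generic node-level checks I'd start with (run directly on the worker node, not in a pod):

# is an iptables/nftables userspace tool present at all?
iptables --version

# is the br_netfilter module loaded, and is bridged traffic visible to iptables?
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward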

Hope it helps