DNS not resolving .cluster.local

Cluster information:

Kubernetes version: v1.23.1
Cloud being used: bare-metal
Installation method: kubeadm
Host OS: Ubuntu 20.04
CNI and version: Flannel
CRI and version: cri-o v1.23.1

Hi Everybody,

I am having some strange DNS issues with a fresh cluster setup.
I am not able to resolve services by their full FQDN, e.g. postgres.namespace.svc.cluster.local.
Resolution works just fine when I query a shorter form, such as postgres.namespace or postgres.namespace.svc.

Below are some example queries and their results:


nslookup kube-dns.kube-system.svc

Server:         10.96.0.10
Address:        10.96.0.10#53

Name:   kube-dns.kube-system.svc.cluster.local
Address: 10.96.0.10

nslookup kube-dns.kube-system.svc.cluster.local
Server:         10.96.0.10
Address:        10.96.0.10#53

** server can't find kube-dns.kube-system.svc.cluster.local.localdomain.lan: SERVFAIL

command terminated with exit code 1

As you can see in the second DNS response, localdomain.lan was appended;
localdomain.lan is my network's local search domain.
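
For reference, I have not pasted the exact file, but a pod's /etc/resolv.conf on a default kubeadm setup (ClusterFirst DNS policy) should look roughly like this, with my network's search domain at the end:

search namespace.svc.cluster.local svc.cluster.local cluster.local localdomain.lan
nameserver 10.96.0.10
options ndots:5

With ndots:5, a name with fewer than five dots (which includes the full FQDN above) is expanded against the search list before being tried as an absolute name, so the localdomain.lan suffix ends up in the query.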

Any ideas why this is happening?
Is this expected behaviour?
Thanks in advance!

Could you please paste the output of the following command here:

kubectl get cm coredns -n kube-system -o yaml

Sure, here is the output:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2022-02-21T20:08:05Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "280"
  uid: e23c4048-8064-4e06-98ed-c7f36089c956

@Theog75 any ideas?

@cneumaier I have had the same problem since I upgraded my k8s cluster from 1.24 to 1.27.
Did you find a solution?
Thanks for any feedback.

I solved this problem by editing the coredns ConfigMap:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local cluster.local.localdomain in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2024-03-07T07:28:57Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "39298"
  uid: dc855c34-fe5c-4c5d-b103-64a49db86769

I added "cluster.local.localdomain" to the zone list of the kubernetes plugin, and the plugin now accepts queries for that domain.
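
Roughly, the change can be applied and checked like this (standard kubeadm component names assumed; the busybox pod is just a throwaway test pod):

kubectl -n kube-system edit configmap coredns

Because the Corefile already includes the reload plugin, CoreDNS picks up the edited ConfigMap on its own after a short delay; a restart also works:

kubectl -n kube-system rollout restart deployment coredns

Then verify from inside the cluster:

kubectl run -it --rm dnstest --image=busybox:1.36 --restart=Never -- nslookup kube-dns.kube-system.svc.cluster.local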