CoreDNS resolution between pods not working

Hello, I am fairly new to Kubernetes and have never used CoreDNS before, so apologies in advance if any of my questions seem dumb.

I am running k3s v1.28.5+k3s1 with coredns-1.29.0 installed using Helm.

I am just trying to achieve basic DNS functionality between the pods, and to forward public DNS requests to Google.

Currently, clients can indeed reach external/public URLs; however, internal DNS resolution does not seem to be working.

Here is the servers section of the CoreDNS Helm values.yaml file.
The full values.yaml config file can be found here.

servers:
- zones:
  - zone: .
  port: 53
  # If serviceType is nodePort you can specify nodePort here
  # nodePort: 30053
  # hostPort: 53
  plugins:
  # Allows public DNS resolution
  - name: forward
    parameters: . 8.8.8.8 9.9.9.9
  #
  - name: errors
  # Serves a /health endpoint on :8080, required for livenessProbe
  - name: health
    configBlock: |-
      lameduck 5s
  # Serves a /ready endpoint on :8181, required for readinessProbe
  - name: ready
  # Required to query kubernetes API for data
  - name: kubernetes
    parameters: intranet.local in-addr.arpa ip6.arpa
    configBlock: |-
      pods verified 
      fallthrough in-addr.arpa ip6.arpa
      ttl 30
  # Serves a /metrics endpoint on :9153, required for serviceMonitor
  - name: prometheus
    parameters: 0.0.0.0:9153
  - name: forward
    parameters: . /etc/resolv.conf
  - name: cache
    parameters: 30
  - name: loop
  - name: reload
  - name: loadbalance
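For readers who think in Corefile terms, the servers list above should render to roughly the following server block (a sketch only; the exact output depends on the chart's template):

```
.:53 {
    forward . 8.8.8.8 9.9.9.9
    errors
    health {
        lameduck 5s
    }
    ready
    kubernetes intranet.local in-addr.arpa ip6.arpa {
        pods verified
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }
    prometheus 0.0.0.0:9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
```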

A simple nginx Deployment that uses CoreDNS as its DNS server:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: httpd-test
  template:
    metadata:
      labels:
        app: httpd-test
    spec:
      dnsPolicy: "None"  # Set to "None" to use custom DNS settings
      dnsConfig:
        nameservers:
        - 10.43.122.198 # Replace with the IP of the CoreDNS service
        searches:
        - httpd-test.default.svc.intranet.local  # replace with yours
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: httpd-test
spec:
  selector:
    app: httpd-test
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
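For what it's worth, hard-coding the CoreDNS service IP with dnsPolicy: None is fragile, since the ClusterIP changes if the Service is ever recreated. With a cluster-default DNS in place (for example the stock k3s CoreDNS, which by default sits at 10.43.0.10), the usual approach is to rely on the default policy and let the kubelet populate each pod's /etc/resolv.conf. A minimal sketch, assuming such a default DNS is configured:

```yaml
# Sketch: with a cluster-default DNS configured, no dnsPolicy/dnsConfig is
# needed; ClusterFirst is the default, and the kubelet injects the cluster
# DNS server and search domains into each pod's /etc/resolv.conf.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: httpd-test
  template:
    metadata:
      labels:
        app: httpd-test
    spec:
      dnsPolicy: ClusterFirst  # the default; shown here only for emphasis
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```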

Internal DNS resolution does not work:

admin@store0:~$ sudo kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
coredns-55c557f5f9-p2c55      1/1     Running   0          4h35m
httpd-test-67cf9c55b8-8mhzp   1/1     Running   0          4h19m
httpd-test-67cf9c55b8-2g4dx   1/1     Running   0          4h19m
httpd-test-67cf9c55b8-2dbzp   1/1     Running   0          4h19m
admin@store0:~$ sudo kubectl exec -it httpd-test-67cf9c55b8-8mhzp -- bash
root@httpd-test-67cf9c55b8-8mhzp:/# curl httpd-test-67cf9c55b8-2g4dx
curl: (6) Could not resolve host: httpd-test-67cf9c55b8-2g4dx
root@httpd-test-67cf9c55b8-8mhzp:/# curl httpd-test-67cf9c55b8-2g4dx.httpd-test.default.svc.intranet.local
curl: (6) Could not resolve host: httpd-test-67cf9c55b8-2g4dx.httpd-test.default.svc.intranet.local
root@httpd-test-67cf9c55b8-8mhzp:/# cat /etc/resolv.conf 
search httpd-test.default.svc.intranet.local
nameserver 10.43.122.198
root@httpd-test-67cf9c55b8-8mhzp:/#
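Side note that may explain the failures above: plain Deployment pods do not get DNS records under their pod name; only Services do. With the kubernetes plugin's "pods verified" mode, a pod is resolvable only via its IP written with dashes, under &lt;namespace&gt;.pod.&lt;cluster-domain&gt;. A quick sketch of how such a name is formed, using a made-up pod IP:

```shell
# Sketch: with `pods verified`, a pod's DNS name is its IP with the dots
# replaced by dashes, under <namespace>.pod.<cluster-domain>.
# The IP below is a hypothetical example.
POD_IP="10.42.3.106"
POD_DNS="$(echo "$POD_IP" | tr '.' '-').default.pod.intranet.local"
echo "$POD_DNS"   # 10-42-3-106.default.pod.intranet.local

# Inside a pod, one would then test with e.g.:
#   curl http://10-42-3-106.default.pod.intranet.local
```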

Any help will be greatly appreciated.

Thanks

Hi,
CoreDNS is installed out of the box. Have you installed it on your own using Helm?

Hello, yes, I installed it using Helm. Should I use the built-in one instead?

PS: I didn’t know CoreDNS was part of k3s.

I have never used k3s, but I assume CoreDNS should be installed. Check the kube-system namespace.

Yes, you are right: CoreDNS came installed with k3s, alongside Traefik and some other components. However, because I was having issues due to my inexperience with Kubernetes, I fully removed both the CoreDNS and the Traefik that came with the installation.

So yes, I am only dealing with the CoreDNS that I installed using Helm and the values.yaml shown above.

To recap, I seem to be unable to reach the pods either by their short name or by their FQDN; however, I can reach the application's service domain name.

On that note, when I query the application's service domain name “httpd-test.default.svc.intranet.local”, is the HTTP request automatically balanced between the 3 replicas, or is it always hitting the same pod? I have no ingress controller set up yet. In fact, that is my very next project.
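(For reference: DNS for a ClusterIP Service returns the single virtual service IP; the spreading across replicas is then done by kube-proxy, which forwards each new connection to one of the Service's ready endpoints, roughly at random in the default iptables mode. Below is a self-contained conceptual sketch of that per-connection selection, using made-up pod IPs; this is not kube-proxy code.)

```shell
# Conceptual sketch only: simulate picking one ready endpoint per new
# connection, the way the default iptables proxy mode does (roughly random).
ENDPOINTS="10.42.3.101 10.42.3.102 10.42.3.103"   # hypothetical pod IPs

pick_endpoint() {
  # shellcheck disable=SC2086
  set -- $ENDPOINTS
  # Pick a random index 0..n-1 and shift the positional parameters to it.
  shift "$(( $(od -An -N2 -tu2 /dev/urandom) % $# ))"
  echo "$1"
}

for conn in 1 2 3 4 5; do
  echo "connection $conn -> $(pick_endpoint)"
done
```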

Thanks

CoreDNS logs:

admin@store0:~$ sudo kubectl logs coredns-54d89b7cdf-22gff
.:53
[INFO] plugin/reload: Running configuration SHA512 = 181fe63395f004da0a416279dc2b7b674db68669fde812ef372547d4c09940afec6296b58a0ed48a1a6cc45bf93b3eef70a07f242262bc9e7b036fb375054238
CoreDNS-1.11.1
linux/arm64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:52697 - 38195 "HINFO IN 6184499733950150015.6526856049820179902. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018919952s
[INFO] 10.42.3.106:53831 - 9285 "A IN httpd-test-759d5f565d-tkm8l.intranet.local. udp 60 false 512" NXDOMAIN qr,aa,rd 156 0.000474862s
[INFO] 10.42.3.106:48914 - 10799 "A IN httpd-test-759d5f565d-tkm8l. udp 45 false 512" NXDOMAIN qr,rd,ra 120 0.035961234s
[INFO] 10.42.3.106:42866 - 36342 "A IN httpd-test.default.svc.intranet.local. udp 55 false 512" NOERROR qr,aa,rd 108 0.000458678s
admin@store0:~$

Sorry, but I do not get it. Why do you want to install CoreDNS on your own? You can use the CoreDNS that comes with k3s. It works “out of the box”.

As I mentioned before, I removed the CoreDNS that came with k3s due to my own inexperience. However, I have to tell you that the default configuration of the built-in k3s CoreDNS had the same “issue”. It is likely a configuration problem I need to figure out.

As this seems to be a CoreDNS-specific question/issue, I figure I am probably best off raising a ticket with them instead.

Thanks for the help so far.