When I change the DNS configuration to point at my internal DNS server (for example 192.168.10.10), resolution works for the internal addresses on my network, the public ones, and the svc cluster addresses.
The problem appears when I increase the number of cluster nodes: resolution no longer works.
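For context, this is roughly the change I mean, as a sketch: it assumes the standard coredns ConfigMap that the dns addon creates in kube-system, and 192.168.10.10 is just my internal server, adjust for your setup:

```
# Edit the CoreDNS config (assumes the dns addon's ConfigMap is named "coredns" in kube-system)
microk8s kubectl -n kube-system edit configmap/coredns

# ...then in the Corefile, point the upstream at the internal DNS server, e.g.:
#   forward . 192.168.10.10
```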
I’m having the same issue with microk8s. Everything was working perfectly with a single node. After adding another node, the pods that get scheduled there can’t be accessed.
If I run “microk8s kubectl get pods --all-namespaces” I can see that the pods are there, but “microk8s kubectl exec -it name-of-pod bash” doesn’t work. It seems to be a DNS issue, as “ping name-of-pod” can’t resolve either.
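In case it helps anyone debugging the same thing, these are the checks I ran (a sketch; the k8s-app=kube-dns label and the busybox image tag are assumptions that may differ in your cluster):

```
# Check that the CoreDNS pod and the kube-dns service are up
microk8s kubectl -n kube-system get pods -l k8s-app=kube-dns
microk8s kubectl -n kube-system get svc kube-dns

# Test cluster DNS from a throwaway pod instead of pinging a pod by name
microk8s kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local
```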
In the dashboard I can see everything up and running, but the apps are broken.
Kind of sad that multi-node isn’t fully working (or maybe I’m making a mistake…)
But after looking at the configmap again, the last applied configuration section still shows the old one. I have tried rollout restart, deleting pods, and rescaling deployments, but none of them change the last applied configuration. The only way I found to change the configmap is by disabling dns and enabling it again. Please help.
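For completeness, this is the cycle I mean (note it recreates the coredns deployment, so it is disruptive):

```
microk8s disable dns
microk8s enable dns
```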
Edit: Solved by re-applying the coredns configmap yaml: “kubectl apply -f {coredns.yaml}”
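After re-applying, I verified the new Corefile was actually picked up with something like:

```
# Confirm the ConfigMap now carries the updated Corefile
microk8s kubectl -n kube-system get configmap coredns -o yaml
```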