Kubernetes Cluster on nodes with multiple NICs

I have a four-node Kubernetes cluster: one controller and three workers. The following shows how they are configured, along with the versions.

NAME             STATUS    ROLES     AGE       VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8s-ctrl-1       Ready     master    1h        v1.11.2   192.168.191.100   <none>        Ubuntu 18.04.1 LTS   4.15.0-1021-aws     docker://18.6.1
turtle-host-01   Ready     <none>    1h        v1.11.2   192.168.191.53    <none>        Ubuntu 18.04.1 LTS   4.15.0-29-generic   docker://18.6.1
turtle-host-02   Ready     <none>    1h        v1.11.2   192.168.191.2     <none>        Ubuntu 18.04.1 LTS   4.15.0-34-generic   docker://18.6.1
turtle-host-03   Ready     <none>    1h        v1.11.2   192.168.191.3     <none>        Ubuntu 18.04.1 LTS   4.15.0-33-generic   docker://18.6.1

Each of the nodes has two network interfaces, for argument's sake eth0 and eth1. eth1 is the network that I want the cluster to work on. I set up the controller using kubeadm init and passed --apiserver-advertise-address=192.168.191.100. The worker nodes were then joined using this address.
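Roughly, the init and join looked like the following (illustrative only; the token and cert hash are placeholders, and the pod CIDR flag just matches the range mentioned further down):

# on the controller, advertise the API server on the eth1 address
sudo kubeadm init --apiserver-advertise-address=192.168.191.100 --pod-network-cidr=10.32.0.0/16

# on each worker, join via that same address
sudo kubeadm join 192.168.191.100:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>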

Finally, on each node I modified the kubelet service so that --node-ip is set, which gives the layout shown above.
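Concretely, that was done via KUBELET_EXTRA_ARGS, e.g. in /etc/default/kubelet for a kubeadm-installed kubelet on Ubuntu (a sketch; each node uses its own eth1 address):

# /etc/default/kubelet -- use the node's own eth1 address here
KUBELET_EXTRA_ARGS=--node-ip=192.168.191.100

# pick up the change
sudo systemctl daemon-reload
sudo systemctl restart kubelet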

The cluster appears to be working correctly and I can create pods, deployments, etc. However, the issue I have is that none of the pods are able to use the kube-dns service for DNS resolution.

This is not a problem with resolution itself; rather, the pods cannot connect to the DNS service at all to perform the resolution. For example, if I run a busybox container and exec into it to perform an nslookup, I get the following:

/ # nslookup www.google.co.uk
nslookup: read: Connection refused
nslookup: write to '10.96.0.10': Connection refused
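For reference, that test was run along these lines (the busybox tag is just an example):

kubectl run busybox --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl exec -it busybox -- nslookup www.google.co.uk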

I have a feeling that this is down to not using the default network, and because of that I suspect some iptables rules are not correct. That being said, these are just guesses.
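A quick way to check that suspicion on a worker would be something like this (assuming kube-proxy is running in its default iptables mode):

# NAT rules kube-proxy programs for services; look for the DNS ClusterIP
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.10

# confirm which proxy mode kube-proxy actually chose
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=200 | grep -i proxier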

I have tried both the Flannel overlay and now Weave Net. The pod CIDR range is 10.32.0.0/16 and the service CIDR is left at the kubeadm default (10.96.0.0/12).
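To double-check the ranges, something like the following should show the service CIDR the API server was started with and the ClusterIP that DNS is exposed on (the component label is the one kubeadm puts on its static pods):

# the service CIDR configured on the API server
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep service-cluster-ip-range

# the DNS service's ClusterIP, which should sit inside that range
kubectl -n kube-system get svc kube-dns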

I have noticed that with Kubernetes 1.11 there are now coredns pods rather than a single kube-dns pod.
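The CoreDNS pods still carry the old k8s-app=kube-dns label, so (assuming the standard kubeadm deployment) they can be inspected with:

kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
kubectl -n kube-system logs -l k8s-app=kube-dns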

I hope that this is a good place to ask this question. I am sure I am missing something small but vital, so if anyone has any ideas they would be most welcome.


I apologise for posting a technical question on here. I have now asked it on SO.

Hey no worries! Can you post a link so we can close the loop, perhaps get some more eyes on your question? Thanks!

It’s all good 🙂 Other folk do post technical questions on here as well.

You may just wind up getting more eyes on it over there from a troubleshooting perspective.

As requested, here is the link to my SO post: https://stackoverflow.com/questions/52280629/how-does-dns-resolution-work-on-kubernetes-with-multiple-networks

Thanks for the replies :-).
