Kubernetes Pods do not have internet access

Hi all,

I am setting up a Kubernetes cluster locally with 1 master and 2 nodes. Everything is working fine except that the pods have no internet or local network access. Below are my Kubernetes details.

Could you please help me resolve this issue? CoreDNS is also restarting automatically.

Below are the required details:

Kubernetes version : 1.13.4
Docker version : 1.13.1
OS : CentOS Linux release 7.6.1810
Network : Flannel

Please help me fix this issue; I have been struggling with it for the past 2 days.


What do the CoreDNS logs say?

Below are the CoreDNS logs:

[INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
[ERROR] plugin/errors: 2 6762108197227624157.2425755665350263251. HINFO: unreachable backend: read udp 192.168.1.5:43183->103.8.46.5:53: i/o timeout
[ERROR] plugin/errors: 2 6762108197227624157.2425755665350263251. HINFO: unreachable backend: read udp 192.168.1.5:33599->103.8.46.5:53: i/o timeout
E0305 13:42:20.942323 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:311: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0305 13:42:20.942326 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:318: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0305 13:42:20.942411 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:313: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/errors: 2 6762108197227624157.2425755665350263251. HINFO: unreachable backend: read udp 192.168.1.5:53401->103.8.44.5:53: i/o timeout
[ERROR] plugin/errors: 2 6762108197227624157.2425755665350263251. HINFO: unreachable backend: read udp 192.168.1.5:38159->103.8.44.5:53: i/o timeout
[ERROR] plugin/errors: 2 6762108197227624157.2425755665350263251. HINFO: unreachable backend: read udp 192.168.1.5:33022->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 6762108197227624157.2425755665350263251. HINFO: unreachable backend: read udp 192.168.1.5:41361->103.8.44.5:53: i/o timeout

You might want to double check to make sure all the required ports are opened.

https://kubernetes.io/docs/setup/independent/install-kubeadm/#check-required-ports
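A quick way to probe those ports from another machine is with `nc`; the hostname and port list below are just examples for a kubeadm control-plane node, taken from that page:

```shell
# From a worker node, probe the control-plane ports (hostname is an example)
nc -zv k8smaster 6443    # Kubernetes API server
nc -zv k8smaster 2379    # etcd client API
nc -zv k8smaster 10250   # kubelet API
```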

This is my testing environment and all the ports are open. Below is my iptables output:

[root@k8smaster ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL all -- anywhere anywhere

Chain FORWARD (policy ACCEPT)
target prot opt source destination
KUBE-FORWARD all -- anywhere anywhere /* kubernetes forwarding rules */
DOCKER-ISOLATION all -- anywhere anywhere
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- 10.244.0.0/16 anywhere
ACCEPT all -- anywhere 10.244.0.0/16

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL all -- anywhere anywhere

Chain DOCKER (1 references)
target prot opt source destination

Chain DOCKER-ISOLATION (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere

Chain KUBE-EXTERNAL-SERVICES (1 references)
target prot opt source destination
REJECT tcp -- anywhere anywhere /* kube-system/kubernetes-dashboard: has no endpoints */ ADDRTYPE match dst-type LOCAL tcp dpt:31187 reject-with icmp-port-unreachable

Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-FORWARD (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT all -- 192.168.0.0/16 anywhere /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere 192.168.0.0/16 /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-SERVICES (1 references)
target prot opt source destination
REJECT tcp -- anywhere 10.96.0.10 /* kube-system/kube-dns:dns-tcp has no endpoints */ tcp dpt:domain reject-with icmp-port-unreachable
REJECT udp -- anywhere 10.96.0.10 /* kube-system/kube-dns:dns has no endpoints */ udp dpt:domain reject-with icmp-port-unreachable
REJECT tcp -- anywhere 10.103.29.149 /* kube-system/kubernetes-dashboard: has no endpoints */ tcp dpt:https reject-with icmp-port-unreachable
[root@k8smaster ~]#

How about your firewall, is that disabled? I've been caught out by that a few times.

Yes, the firewall service is disabled on the K8s master and the other nodes.

The only other thing I can think of, which I have tried in the past when DNS went down, is to restart the service.

You may want to check here first for something that could help you along: Debugging DNS Resolution - Kubernetes
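The steps on that page boil down to something like the sketch below (the pod name and image are examples from the docs; these commands need a live cluster):

```shell
# Run a throwaway pod with DNS tooling
kubectl run dnsutils --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 \
  --restart=Never -- sleep infinity

# Check whether cluster DNS resolves at all
kubectl exec -it dnsutils -- nslookup kubernetes.default

# Inspect the pod's resolver config, then the CoreDNS pods and their logs
kubectl exec -it dnsutils -- cat /etc/resolv.conf
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl logs -n kube-system -l k8s-app=kube-dns
```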

Thanks, I will check it.

Hi, I am having a similar issue.
In my case, some pods can connect to the internet and some cannot.
Is there any reason for this? @macintoshprime

It seems odd that it would be split like that, but I would check whether any network policies are deployed in the namespaces whose pods can't connect, and also confirm that the nodes themselves can reach the internet.

Yes, I deployed a network policy in that namespace. It is an egress policy that allows all pods, and all the nodes can connect to the internet. @macintoshprime
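For reference, an allow-all egress policy of that sort usually looks like the sketch below (the metadata names are placeholders). It is worth comparing against what is actually deployed: once any policy with `Egress` in `policyTypes` selects a pod, only the traffic its rules list is allowed, so a rule that merely permits pod-to-pod traffic still blocks the internet.

```
# Hypothetical allow-all egress policy; name and namespace are placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
  namespace: my-namespace
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - {}                 # empty rule = allow all egress, including external IPs
```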

Hi @karthikjagadeeswaran,

did you solve it?

Kind regards,
Simone

Dear All,

I solved the issue. My problem was related to Calico's default configuration, which assigns the subnet 192.168.0.0/16 to pods by default.

I’ve followed this thread: Calico + KIND pods unable to communicate externally · Issue #2962 · projectcalico/calico · GitHub
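For anyone hitting the same thing: the usual fix is to give Calico a pod CIDR that does not overlap the host LAN (here the LAN was 192.168.x.x) and matches the cluster's pod-network CIDR. A sketch of the relevant excerpt from the Calico manifest, with an example value:

```
# calico.yaml excerpt; CALICO_IPV4POOL_CIDR must not overlap the node/LAN
# network and should match kubeadm's --pod-network-cidr (value is an example).
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
```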

Hope this can help,
Simone

Same issue here. With Flannel there was no internet on the pods. It's not about DNS: I couldn't even reach 8.8.8.8 from a pod (even from the dnsutils utility pod).

Since it was a brand new cluster, I replaced Flannel with Calico and everything was sorted. Strange.