Kubernetes Pods do not have internet access


#1

Hi all,

I am setting up a Kubernetes cluster locally with 1 master and 2 nodes. Everything is working fine except that pods have no access to the internet or the local network.

Also, CoreDNS keeps restarting on its own.

Could you please help me resolve this issue? Here are the details:

Kubernetes version : 1.13.4
Docker version : 1.13.1
OS : CentOS Linux release 7.6.1810
Network : Flannel

Please help me fix this; I have been struggling with it for the past 2 days.


#2

What do the CoreDNS logs say?
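If you're not sure how to grab them, something along these lines should work (assuming the standard `k8s-app=kube-dns` label that kubeadm puts on the CoreDNS pods):

```shell
# Show the CoreDNS pods and their restart counts
kubectl -n kube-system get pods -l k8s-app=kube-dns

# Dump the logs from all CoreDNS pods
kubectl -n kube-system logs -l k8s-app=kube-dns

# If a pod is crash-looping, the previous container's logs are often more useful
kubectl -n kube-system logs <coredns-pod-name> --previous
```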


#3

Below are the CoreDNS logs:

[INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
[ERROR] plugin/errors: 2 6762108197227624157.2425755665350263251. HINFO: unreachable backend: read udp 192.168.1.5:43183->103.8.46.5:53: i/o timeout
[ERROR] plugin/errors: 2 6762108197227624157.2425755665350263251. HINFO: unreachable backend: read udp 192.168.1.5:33599->103.8.46.5:53: i/o timeout
E0305 13:42:20.942323 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:311: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0305 13:42:20.942326 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:318: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0305 13:42:20.942411 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:313: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/errors: 2 6762108197227624157.2425755665350263251. HINFO: unreachable backend: read udp 192.168.1.5:53401->103.8.44.5:53: i/o timeout
[ERROR] plugin/errors: 2 6762108197227624157.2425755665350263251. HINFO: unreachable backend: read udp 192.168.1.5:38159->103.8.44.5:53: i/o timeout
[ERROR] plugin/errors: 2 6762108197227624157.2425755665350263251. HINFO: unreachable backend: read udp 192.168.1.5:33022->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 6762108197227624157.2425755665350263251. HINFO: unreachable backend: read udp 192.168.1.5:41361->103.8.44.5:53: i/o timeout


#4

You might want to double-check that all the required ports are open.

https://kubernetes.io/docs/setup/independent/install-kubeadm/#check-required-ports
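As a quick sanity check, you could probe the control-plane ports from a worker node with `nc`; the port list below is from that kubeadm page (substitute your master's actual IP for the placeholder):

```shell
MASTER_IP=192.168.1.10   # placeholder: your master node's address

# Control-plane ports from the kubeadm "check required ports" list
# (6443 API server, 2379-2380 etcd, 10250 kubelet,
#  10251 scheduler, 10252 controller-manager)
for port in 6443 2379 2380 10250 10251 10252; do
  nc -z -w 2 "$MASTER_IP" "$port" \
    && echo "port $port reachable" \
    || echo "port $port NOT reachable"
done

# Flannel's default VXLAN backend also needs UDP 8472 between nodes
nc -z -u -w 2 "$MASTER_IP" 8472
```

Note that `nc -z -u` is a weak test for UDP (no response doesn't always mean closed), but it catches outright REJECT rules.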


#5

This is my test environment and all the ports are open. Below is my iptables output:

[root@k8smaster ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL all -- anywhere anywhere

Chain FORWARD (policy ACCEPT)
target prot opt source destination
KUBE-FORWARD all -- anywhere anywhere /* kubernetes forwarding rules */
DOCKER-ISOLATION all -- anywhere anywhere
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- 10.244.0.0/16 anywhere
ACCEPT all -- anywhere 10.244.0.0/16

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL all -- anywhere anywhere

Chain DOCKER (1 references)
target prot opt source destination

Chain DOCKER-ISOLATION (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere

Chain KUBE-EXTERNAL-SERVICES (1 references)
target prot opt source destination
REJECT tcp -- anywhere anywhere /* kube-system/kubernetes-dashboard: has no endpoints */ ADDRTYPE match dst-type LOCAL tcp dpt:31187 reject-with icmp-port-unreachable

Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-FORWARD (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT all -- 192.168.0.0/16 anywhere /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere 192.168.0.0/16 /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-SERVICES (1 references)
target prot opt source destination
REJECT tcp -- anywhere 10.96.0.10 /* kube-system/kube-dns:dns-tcp has no endpoints */ tcp dpt:domain reject-with icmp-port-unreachable
REJECT udp -- anywhere 10.96.0.10 /* kube-system/kube-dns:dns has no endpoints */ udp dpt:domain reject-with icmp-port-unreachable
REJECT tcp -- anywhere 10.103.29.149 /* kube-system/kubernetes-dashboard: has no endpoints */ tcp dpt:https reject-with icmp-port-unreachable
[root@k8smaster ~]#


#6

What about your firewall, is that disabled? I've been caught out by that a few times.


#7

Yes, the firewall service is disabled on the K8s master and the other nodes.


#8

The only other thing I can think of, which has worked for me in the past when DNS went down, is to restart the service.

You may want to check here first for something that could help you along: Debugging DNS Resolution - Kubernetes
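For what it's worth, the first checks on that page boil down to something like this (the busybox image and pod names are just examples; note `kubectl rollout restart` doesn't exist in kubectl 1.13, so deleting the pods is the usual way to bounce CoreDNS):

```shell
# Test cluster DNS from inside a throwaway pod
kubectl run dnstest --image=busybox:1.28 --restart=Never --rm -it -- \
  nslookup kubernetes.default

# Check what resolv.conf pods are actually getting
kubectl run dnstest2 --image=busybox:1.28 --restart=Never --rm -it -- \
  cat /etc/resolv.conf

# "Restart" CoreDNS by deleting its pods; the Deployment recreates them
kubectl -n kube-system delete pod -l k8s-app=kube-dns
```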


#9

Thanks, I will check it.