I am setting up a Kubernetes cluster locally with 1 master and 2 nodes. Everything is working fine except that the pods have no internet/local network access. Below are my iptables details:
Chain KUBE-EXTERNAL-SERVICES (1 references)
target prot opt source destination
REJECT tcp -- anywhere anywhere /* kube-system/kubernetes-dashboard: has no endpoints */ ADDRTYPE match dst-type LOCAL tcp dpt:31187 reject-with icmp-port-unreachable
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
Chain KUBE-FORWARD (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT all -- 192.168.0.0/16 anywhere /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere 192.168.0.0/16 /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED
Chain KUBE-SERVICES (1 references)
target prot opt source destination
REJECT tcp -- anywhere 10.96.0.10 /* kube-system/kube-dns:dns-tcp has no endpoints */ tcp dpt:domain reject-with icmp-port-unreachable
REJECT udp -- anywhere 10.96.0.10 /* kube-system/kube-dns:dns has no endpoints */ udp dpt:domain reject-with icmp-port-unreachable
REJECT tcp -- anywhere 10.103.29.149 /* kube-system/kubernetes-dashboard: has no endpoints */ tcp dpt:https reject-with icmp-port-unreachable
[root@k8smaster ~]#
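The `has no endpoints` rejects for `kube-dns` in the output above usually mean the DNS pods are not running (or not Ready), so the Service has nothing behind it. A rough way to check, assuming the commands are run on the master with a working kubeconfig:

```shell
# Are the coredns/kube-dns pods actually Running and Ready?
kubectl get pods -n kube-system -o wide

# If the ENDPOINTS column shows <none>, no pod currently backs the DNS Service,
# which matches the REJECT rules iptables installed above.
kubectl get endpoints kube-dns -n kube-system

# Compare the Service selector with the DNS pods' labels and look for events.
kubectl describe service kube-dns -n kube-system
```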
It seems odd that it would be split like that, but I would check whether any network policies are deployed in the namespaces whose pods can't connect, and also confirm that the nodes themselves can reach the internet.
Yes, I deployed a network policy in that namespace. It's an egress policy that allows all pods, and all the nodes are connecting to the internet. @macintoshprime
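For reference, an allow-all egress policy of the kind described would look roughly like the sketch below (the `name` and `namespace` values are placeholders, not taken from the cluster above). Note that even with such a policy in place, pod DNS lookups would still fail while the `kube-dns` Service has no endpoints.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress      # hypothetical name
  namespace: my-namespace     # replace with the affected namespace
spec:
  podSelector: {}             # empty selector: applies to all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - {}                        # empty rule: allows all egress traffic
```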