I installed a Kubernetes cluster with one master and two worker nodes. When I created the first pod, I realized that the cluster networking did not work properly: I could not access the pod from outside the cluster. Both Kubernetes and Docker make changes to my iptables rules. I googled the problem and found that Docker changes the default policy of the iptables FORWARD chain to DROP, which breaks the cluster networking. I fixed the issue by adding the option --iptables=false to the Docker daemon command. That helped; now I can access any exposed pod from outside the cluster.
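For reference, this is roughly how I pass the option persistently (one possible way; a systemd drop-in overriding ExecStart with dockerd --iptables=false should work as well):

/etc/docker/daemon.json:
{
  "iptables": false
}

Then restart the daemon:
systemctl restart docker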
Before installing the Kubernetes cluster, I added some iptables rules to open the ports Kubernetes would need.
These are the rules I added:
# master node
-A INPUT -p tcp --dport 6443 -j ACCEPT
-A INPUT -p tcp --dport 2379:2380 -j ACCEPT
-A INPUT -p tcp --dport 10250:10252 -j ACCEPT
-A INPUT -p tcp --dport 10255 -j ACCEPT
# udp 8285 and 8472 flannel requirements
-A INPUT -p udp --dport 8285 -j ACCEPT
-A INPUT -p udp --dport 8472 -j ACCEPT
#Commented out this one:
#-A FORWARD -j REJECT --reject-with icmp-host-prohibited
# worker nodes
-A INPUT -p tcp --dport 10250 -j ACCEPT
-A INPUT -p tcp --dport 30000:32767 -j ACCEPT
# udp 8285 and 8472 flannel requirements
-A INPUT -p udp --dport 8285 -j ACCEPT
-A INPUT -p udp --dport 8472 -j ACCEPT
#Commented out this one:
#-A FORWARD -j REJECT --reject-with icmp-host-prohibited
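For completeness, the same ports could also be opened through firewalld instead of raw iptables rules (a sketch for the master node only, assuming firewalld is running, which is the CentOS 8 default):

firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250-10252/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --permanent --add-port=8285/udp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --reload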
Now I see that Kubernetes controls the entire host firewall, and I am not sure whether what I did was necessary.
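By that I mean chains such as KUBE-SERVICES and KUBE-NODEPORTS that kube-proxy maintains on every node. They can be inspected like this (just the inspection commands; the exact chains vary by version and proxy mode):

# service/NAT chains added by kube-proxy
iptables -t nat -L -n | grep KUBE
# the FORWARD chain that kube-proxy and flannel hook into
iptables -t filter -L FORWARD -n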
The question is: how do I set up the firewall correctly for a Kubernetes cluster? Do I need to do it manually, or can the cluster manage it itself?
Right now, all exposed pods are accessible from any node (including the master node). Is that how it's supposed to be?
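To illustrate what I mean by "exposed", here is a hypothetical NodePort example (names made up); the allocated node port answers on every node's IP, the master's included:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=NodePort --port=80
kubectl get svc nginx          # NodePort allocated from the 30000-32767 range
# reachable from outside the cluster on any node:
curl http://<any-node-ip>:<allocated-node-port>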
Cluster information:
Kubernetes version: 1.16.3
Cloud being used: bare-metal
Installation method: from kube/docker repos
Host OS: CentOS 8
CNI and version: Flannel, latest
CRI and version: Docker 18.06.2-ce