Why is firewalld showing errors in a Kubernetes cluster?

#1

I have configured a Kubernetes cluster and deployed the Flannel network. The CoreDNS pod is running, but firewalld is showing some errors.
DNS debugging works while firewalld is stopped, but it doesn't work when firewalld is enabled.

#2

@Bidhan The screenshots didn't get attached. Is it possible to copy and paste the errors so we can see them?

#3

I have updated the image. Please review it.

#4

Have you tried setting /proc/sys/net/bridge/bridge-nf-call-iptables to 1?

Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 so that bridged IPv4 traffic is passed to iptables' chains.
This is a requirement for some CNI plugins to work; for more information, please see here.
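For reference, a minimal sketch of setting this both immediately and persistently (the `/etc/sysctl.d/k8s.conf` file name is just a convention; any `*.conf` file in that directory works):

```shell
# Apply immediately (takes effect without a reboot):
sysctl net.bridge.bridge-nf-call-iptables=1

# Persist across reboots by dropping the setting into sysctl.d:
echo "net.bridge.bridge-nf-call-iptables = 1" > /etc/sysctl.d/k8s.conf
sysctl --system
```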

Full reference here, Creating a single master cluster with kubeadm - Kubernetes

#5

I have already set it to 1.

#6

awesome :slight_smile: What errors do you get from CoreDNS when firewalld is running? There might be a clue in there as to what is going on. Is DNS just not resolving when firewalld is on, or is it something else?

I'm assuming all the required ports are open as well.
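For a kubeadm-style control-plane node, opening the required ports with firewalld looks roughly like this. This is a sketch: the TCP ports come from the kubeadm docs, and the UDP port assumes Flannel's default VXLAN backend; adjust if your setup differs.

```shell
# Control-plane node ports (per the kubeadm documentation):
firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd server client API
firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
firewall-cmd --permanent --add-port=8472/udp        # Flannel VXLAN overlay (default backend)
firewall-cmd --reload
```

Worker nodes additionally need the NodePort range 30000-32767/tcp open.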

#7

I have opened all the required ports. When I stop the firewalld service, this command resolves the domain:
$ kubectl exec -ti busybox -- nslookup kubernetes.default
But after I restart the firewalld service, it no longer resolves, even though the status of the CoreDNS pod is Running.
This is the result of the command $ iptables-save
From some research on this topic, I found in an article that running firewalld and iptables together can cause conflicts, but we can't stop the firewalld service due to security requirements.
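One common workaround for the firewalld/CNI conflict is to enable masquerading and put the CNI interfaces into the trusted zone, so firewalld does not drop inter-pod DNS traffic. This is a sketch assuming Flannel's default interface names (`cni0` and `flannel.1`); verify yours with `ip link` first.

```shell
# Allow firewalld to masquerade pod traffic leaving the node:
firewall-cmd --permanent --add-masquerade

# Trust the CNI bridge and the Flannel VXLAN interface
# (default names for a flannel deployment; adjust to your setup):
firewall-cmd --permanent --zone=trusted --add-interface=cni0
firewall-cmd --permanent --zone=trusted --add-interface=flannel.1

firewall-cmd --reload
```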

#8

Strange. We have firewalld enabled and we haven't come across those issues yet. What are the logs saying in CoreDNS? They might have some hint as to what is causing the conflict.

#9

These are the logs of CoreDNS and the error message while starting the firewalld service.
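For anyone else following along, the commands to pull these logs are roughly (the `k8s-app=kube-dns` label is what the default kubeadm CoreDNS deployment uses):

```shell
# CoreDNS pod logs:
kubectl -n kube-system logs -l k8s-app=kube-dns

# firewalld service errors on the node:
journalctl -u firewalld --no-pager --since "1 hour ago"
```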

#10

Assuming this is a multi-node cluster. Can you confirm 10250 is open on all the nodes?
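A quick way to confirm, assuming you can reach the node from another machine (the IP here is the one mentioned later in this thread; substitute your own node IPs):

```shell
# From another node, check that the kubelet port is reachable:
nc -zv 192.168.1.104 10250

# Or, on the node itself, confirm the kubelet is listening:
ss -tlnp | grep 10250
```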

#11

Yes all nodes have this port open.

#12

I’m not sure what exactly it could be. I keep coming back to something on the server 192.168.1.104 not being right.

What method did you use to install k8s (kubeadm, kubespray, the hard way)? Might be something in that causing an issue.

#13

I viewed this article for installation

#14

Hello sir. Are there any issues with the installation steps in that article?

#15

The instructions looked fine. I haven't been able to get my mind off of what is going on with that one server. I would inspect that one in particular to see if it is giving errors in the kubelet and other components.
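A rough checklist for inspecting that node (run these on the suspect server itself):

```shell
# Is the kubelet healthy?
systemctl status kubelet

# Recent kubelet errors:
journalctl -u kubelet --no-pager --since "1 hour ago" | tail -n 50

# Are any pods on that node failing or restarting?
kubectl get pods -A -o wide --field-selector spec.nodeName=$(hostname)
```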

Outside of that, try a fresh install; sometimes typos happen.