Why is firewalld showing errors in a Kubernetes cluster?

I have configured a Kubernetes cluster and deployed the Flannel network. The CoreDNS pod is running, but firewalld is showing some errors.
DNS debugging works when firewalld is stopped, but it does not work when firewalld is enabled.

@Bidhan The screenshots didn't get attached. Is it possible to copy and paste the errors so we can see them?

I have updated the image. Please review it.

Have you tried setting /proc/sys/net/bridge/bridge-nf-call-iptables to 1?

Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables' chains.
This is a requirement for some CNI plugins to work; for more information, please see here.
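If it helps, a minimal sketch of how this setting is usually applied and made persistent; the br_netfilter module is required for the sysctl to exist, and the /etc/sysctl.d/k8s.conf file name is just my choice:

$ sudo modprobe br_netfilter
$ sudo sysctl net.bridge.bridge-nf-call-iptables=1
$ echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
$ sudo sysctl --system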

Full reference here: Creating a single master cluster with kubeadm - Kubernetes

I have already set it to 1.

Awesome :slight_smile: What errors do you get from CoreDNS when firewalld is running? There might be a clue in there as to what is going on. Is DNS just not resolving when firewalld is on, or something else?

I'm assuming all the required ports are open as well.
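For reference, a rough sketch of opening the commonly required control-plane ports with firewall-cmd; the exact list depends on your Kubernetes version and CNI, and 8472/udp is the Flannel VXLAN port:

$ sudo firewall-cmd --permanent --add-port=6443/tcp
$ sudo firewall-cmd --permanent --add-port=2379-2380/tcp
$ sudo firewall-cmd --permanent --add-port=10250-10252/tcp
$ sudo firewall-cmd --permanent --add-port=8472/udp
$ sudo firewall-cmd --reload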

I have opened all required ports. Whenever I stop the firewalld service, this command resolves the domain:
$ kubectl exec -ti busybox -- nslookup kubernetes.default
But after I restart the firewalld service, it no longer resolves, even though the CoreDNS pod is still running.
This is the result of the command $ iptables-save
While researching this topic, I found an article saying that running firewalld and iptables together can cause conflicts, but we can't stop the firewalld service due to security requirements.
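To help narrow down the conflict, this is roughly what I compared; the KUBE and CNI chain names are the usual kube-proxy/CNI ones and may differ on other setups:

$ sudo iptables-save | grep -E 'KUBE|CNI' | head -n 20
$ sudo firewall-cmd --get-active-zones
$ sudo firewall-cmd --list-all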

Strange. We have firewalld enabled and we haven't come across those issues yet. What are the logs saying in CoreDNS? They might have some hint as to what is causing the conflict.
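In case it helps, a sketch of how I would pull the CoreDNS logs; the k8s-app=kube-dns label is the default one kubeadm applies, so adjust if yours differ:

$ kubectl -n kube-system get pods -l k8s-app=kube-dns
$ kubectl -n kube-system logs -l k8s-app=kube-dns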

These are the CoreDNS logs and the error message while starting the firewalld service.

Assuming this is a multi-node cluster, can you confirm 10250 is open on all the nodes?
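A quick, hedged way to check from another node (the <node-ip> is just a placeholder):

$ nc -zv <node-ip> 10250
$ sudo firewall-cmd --list-ports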

Yes, all nodes have this port open.

I’m not sure what exactly it could be. I keep coming back to something on the server 192.168.1.104 not being right.

What method did you use to install k8s (kubeadm, kubespray, the hard way)? Might be something in that causing an issue.

I followed this article for the installation.

Hello sir. Are there any issues with this article's installation steps?

The instructions looked fine. I haven't been able to get my head off of what is going on with that one server. I would inspect that one in particular to see if it is giving errors in the kubelet and other components.

Outside of that, try a fresh install; sometimes typos happen.
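If you do go that route, a rough sketch of tearing a kubeadm node down before reinstalling; the CNI path and interface names below are the usual Flannel ones, so double-check them on your setup:

$ sudo kubeadm reset -f
$ sudo rm -rf /etc/cni/net.d
$ sudo ip link delete cni0
$ sudo ip link delete flannel.1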

Hello.

Did you ever figure this one out? I am having the same issue.

Thank you.

Hi. I was stuck like you because of the firewalld conflict.
I tried many things to solve it cleanly, but I couldn't.

So instead I suggest you disable firewalld:
"systemctl stop firewalld"

Then the traffic between the different CIDRs will be healthy.

But you should be aware that some minor issues can occur.
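For what it's worth, stopping the service only lasts until the next reboot; disabling it persistently would look like this, with the security trade-off already mentioned above:

$ sudo systemctl disable --now firewalld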

Hello @cafealternativo, try allowing the cni0 interface in firewalld, or follow this article for opening the ports:
https://docs.oracle.com/en/operating-systems/olcne/1.1/start/ports.html
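A rough sketch of what that usually looks like; adding flannel.1 and masquerade as well is an assumption on my part, so adjust to your setup:

$ sudo firewall-cmd --permanent --zone=trusted --add-interface=cni0
$ sudo firewall-cmd --permanent --zone=trusted --add-interface=flannel.1
$ sudo firewall-cmd --permanent --add-masquerade
$ sudo firewall-cmd --reload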

Hi @Jpkim, Thank you