Iptables rules not added after node restart

I have a problem that I don't know how to fix. I have an on-premises Kubernetes cluster with 3 nodes (1 control plane, 2 workers). The problem is that whenever I reboot a node, I lose the iptables rules on that node and they are not re-added. As a result, that node can't call the kube-apiserver via the default kubernetes service IP (10.96.0.1 in my case). Restarting the kubelet service and the kube-proxy pod on that node does not help. How can I resolve this?
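For reference, a quick way to confirm the rules are gone after a reboot (this assumes kube-proxy runs in iptables mode, which writes a KUBE-SERVICES chain into the nat table):

# List the Service NAT rules kube-proxy normally installs; on the broken node
# this chain is missing or empty
iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1

# Count everything kube-proxy has written, for comparison with a healthy node
iptables-save | grep -c "KUBE-"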

Try flushing the iptables rules:

#!/bin/bash
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT

# Flush All Iptables Chains/Firewall rules #
iptables -F

# Delete all Iptables Chains #
iptables -X

# Flush all counters too #
iptables -Z

# Flush and delete all nat, mangle and raw table rules #
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -t raw -F
iptables -t raw -X
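After flushing, kube-proxy has to write its rules again. A rough way to force that (assuming a kubeadm-style cluster where kube-proxy runs as a DaemonSet in kube-system; the pod name below is a placeholder) is to recreate the pod on that node:

# Find the kube-proxy pod that is running on the rebooted node
kubectl -n kube-system get pods -o wide | grep kube-proxy

# Delete it so the DaemonSet recreates it and it rewrites the iptables rules
kubectl -n kube-system delete pod <kube-proxy-pod-on-that-node>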

Sadly, this did not work for me.

If you are saying that kube-proxy is not installing iptables rules, then:

a) check that it is in iptables mode and not IPVS mode or the new nftables mode
b) look at the logs (commands for both checks are sketched below)
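Roughly along these lines, assuming a kubeadm-style install where kube-proxy reads its config from the kube-proxy ConfigMap and the pod name is a placeholder:

# (a) check the configured proxy mode; an empty mode means the iptables default
kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"

# (b) logs from the kube-proxy pod running on the broken node
kubectl -n kube-system logs <kube-proxy-pod-on-that-node>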

Logs from the kube-proxy pod on that node:

As far as I know, kube-proxy is the one responsible for adding the rules to iptables? But it is trying to call the kube-apiserver via this very service (I guess to ask which services and endpoints it should add to the iptables rules?), and that is unreachable because there are no iptables rules on that node yet.
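To illustrate, this is the kind of comparison that shows the difference on the broken node (192.168.1.10:6443 stands in for my control plane's address and the default apiserver port):

# via the service IP: should time out here, since nothing translates 10.96.0.1
curl -k --connect-timeout 5 https://10.96.0.1:443/version

# directly against the apiserver: should answer if the network path itself is fine
curl -k --connect-timeout 5 https://192.168.1.10:6443/version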

The error is right there. You have specified neither a kubeconfig nor a master URL, so kube-proxy has no way to reach the apiserver. It doesn’t know who to talk to!

Well, on a healthy node I see the same message, but that node can talk to the kube-apiserver via the default kubernetes service:

IIRC the in-cluster config just looks for env vars or tries to talk to the service IP (which is circular here). If the iptables rules exist, it might actually WORK, right up until those rules stop existing.

Regardless, this is obviously not a viable config.
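You can see the circular part by checking the env vars that the in-cluster config falls back to; inside the pod they just point straight back at the service IP (the pod name is a placeholder):

# both variables resolve to the very service IP that needs the missing rules
kubectl -n kube-system exec <kube-proxy-pod> -- env | grep KUBERNETES_SERVICE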

I added a kubeconfig file in which I set the master URL and the path to the CA cert. Everything works fine now. I just don't understand why it was working right after cluster creation (those were the default settings of the kube-proxy DaemonSet: no kubeconfig file, no master address).
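In case someone else hits this, the kubeconfig I pointed kube-proxy at looks roughly like the sketch below. The server address, the output file name, and the credential paths are placeholders for my setup (I reused the service account CA cert and token that are mounted into the kube-proxy pod), and how you wire the file into kube-proxy depends on how it is deployed (with kubeadm it lives in the kube-proxy ConfigMap):

cat <<'EOF' > kube-proxy.kubeconfig
apiVersion: v1
kind: Config
clusters:
- name: default
  cluster:
    # talk to the apiserver by its real address, not the 10.96.0.1 service IP
    server: https://192.168.1.10:6443
    certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
contexts:
- name: default
  context:
    cluster: default
    user: default
current-context: default
users:
- name: default
  user:
    tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
EOF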