Kube-proxy can't talk to localhost in cluster

We have installed five machines, three servers and two agents, with the UiPath suite package.
The cluster is up and running and seems to be working, BUT we get errors in the kube-proxy logs, and if we enable the firewall (nftables), traffic stops working.
The kubeconfig points to localhost, but we also added tls-san and advertise-address pointing to the servers' IPs.
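
For context, this is roughly the shape of what we added in the server config (a sketch; the hostname and IP below are placeholders, not our real values):

# /etc/rancher/rke2/config.yaml (sketch with placeholder values)
tls-san:
  - sr004379.example.internal   # placeholder FQDN, not our real domain
  - 10.0.0.11                   # placeholder server IP
advertise-address: 10.0.0.11    # placeholder server IP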

We get the following error:
"Failed to retrieve node info" err="Get \"https://127.0.0.1:6443/api/v1/nodes/sr004379.*.*\": dial tcp 127.0.0.1:6443: connect: connection refused"
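
Two quick checks on an affected server help narrow this down (a sketch using standard tools, not specific to our setup):

# Is anything listening on 6443 on this node?
ss -tlnp | grep 6443

# "connection refused" on a plain connect means either nothing is listening
# or the firewall is actively rejecting; nftables reject rules can produce
# "refused" rather than a timeout.
curl -k https://127.0.0.1:6443/healthz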

Cluster information:

Kubernetes version:
Client Version: v1.31.4+rke2r1
Kustomize Version: v5.4.2
Server Version: v1.31.4+rke2r1

Cloud being used: none
Installation method: UiPath suite (RKE2)
Host OS: RHEL 9.5
CNI and version: Cilium (version unknown)
CRI and version: containerd 1.7.23-k3s2

Hi, can we deep dive?

Hi

of course

What do you need to see?

We have 3 servers with kube-proxy on each node and 2 agent nodes, also with kube-proxy.
The agents don't show this behavior, but the servers do.

kube-proxy manages Service routing via iptables or IPVS. It doesn’t touch localhost behavior directly.

So if behavior involving localhost differs, it usually means:

  1. The software running inside the Pod or node is referencing localhost in a way that only works when certain assumptions are true (e.g. that a service is running on the same machine).
  2. There’s a difference in Pod placement, container config, or host networking setup across your nodes (one way to compare is sketched below).
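
In iptables mode, kube-proxy writes its Service rules into well-known NAT chains, so you can diff what it programmed on a server versus an agent (a sketch; run as root on each node):

# kube-proxy (iptables mode) programs Service rules into these chains.
# If they're missing or stale on the servers but populated on the agents,
# kube-proxy on the servers isn't syncing with the API server.
iptables -t nat -L KUBE-SERVICES -n | head
iptables -t nat -L KUBE-NODEPORTS -n | head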

Well, I tried to change iptables from iptables-nft to iptables-legacy, but after that kube-proxy doesn't add all the firewall rules automatically.
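
It's worth confirming which backend the node is actually using, because rules written through one backend are invisible to the other (a sketch; note that RHEL 9 ships only the nft-backed iptables, as far as I know):

# Prints "(nf_tables)" or "(legacy)" depending on the active backend
iptables --version

# Compare what each layer sees; rules created via the other backend
# won't appear in iptables-save output
iptables-save | wc -l
nft list ruleset | head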

Should the service be in the same namespace as kube-proxy?

I have kube-proxy and assume it talks to the kube-apiserver?

Because we have proxy-mode = iptables.

Should we use iptables-legacy for the firewall rules?
iptables-nft was set first, but we tried iptables-legacy instead, and then kube-proxy didn't create the rules again. We think that's because it tries to talk to localhost and fails.
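
One way to test that theory is to check where kube-proxy's kubeconfig points and hit the same endpoint with the same credentials (a sketch, using the kubeconfig path from the pod args below):

# Which API server URL is kube-proxy configured to use?
grep server: /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig

# Call the API the same way kube-proxy does; "connection refused" here
# reproduces exactly what kube-proxy is seeing
kubectl --kubeconfig /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig get --raw /healthz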

spec:
  containers:
  - args:
    - --cluster-cidr=10.42.0.0/16
    - --conntrack-max-per-core=0
    - --conntrack-tcp-timeout-close-wait=0s
    - --conntrack-tcp-timeout-established=0s
    - --healthz-bind-address=127.0.0.1
    - --hostname-override=x
    - --kubeconfig=/var/lib/rancher/rke2/agent/kubeproxy.kubeconfig
    - --proxy-mode=iptables
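
Note that --healthz-bind-address=127.0.0.1 (default port 10256) makes kube-proxy's own health endpoint localhost-only too, which gives another quick probe (a sketch):

# kube-proxy's healthz; with this config it's only reachable from the node itself
curl http://127.0.0.1:10256/healthz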