Kube-Proxy can't find its own node IP

Hi,
I'm currently running a Kubernetes cluster with 3 master nodes and 3 worker nodes, installed with kubeadm. Each master node is responsible for managing one worker node. On one of my worker nodes, an error recently popped up that I hadn't seen before:

I0226 10:03:50.904111 1066929 round_trippers.go:510] HTTP Trace: Dial to tcp:127.0.0.1:6443 succeed
I0226 10:03:50.918311 1066929 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-system/pods/kube-proxy-ssss2 200 OK in 15 milliseconds
I0226 10:03:50.918395 1066929 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 8 ms ServerProcessing 4 ms Duration 15 ms
I0226 10:03:50.918447 1066929 round_trippers.go:577] Response Headers:
I0226 10:03:50.918478 1066929 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:03:50 GMT
I0226 10:03:50.918529 1066929 round_trippers.go:580]     Audit-Id: a7c16d6f-3cb0-48a8-aa73-e2e4fbffde64
I0226 10:03:50.918559 1066929 round_trippers.go:580]     Cache-Control: no-cache, private
I0226 10:03:50.918579 1066929 round_trippers.go:580]     Content-Type: application/json
I0226 10:03:50.918639 1066929 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1cafbc8f-a46e-4e74-9804-03b72b0beb1c
I0226 10:03:50.918661 1066929 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 447a939f-c32c-455a-bc49-1bc69b1544e4
I0226 10:03:50.918906 1066929 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ssss2","generateName":"kube-proxy-","namespace":"kube-system","uid":"be28ae46-b701-4466-ade5-01032008209d","resourceVersion":"117095929","creationTimestamp":"2024-02-26T04:16:43Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"15ecadb4-5d6c-4fb4-8f51-36dbb9c1a5a4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-26T04:16:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},

I0226 10:03:50.947158 1066929 round_trippers.go:580]     Audit-Id: fda7a277-2512-4d58-a1a8-7bc48e4b1752
I0226 10:03:50.947183 1066929 round_trippers.go:580]     Cache-Control: no-cache, private
I0226 10:03:50.947202 1066929 round_trippers.go:580]     Content-Type: text/plain
I0226 10:03:50.947222 1066929 round_trippers.go:580]     Date: Mon, 26 Feb 2024 10:03:50 GMT
E0226 04:16:43.854878       1 node.go:152] Failed to retrieve node info: Get "https://127.0.0.1:6443/api/v1/nodes/k8s-worker-03": dial tcp 127.0.0.1:6443: connect: connection refused
E0226 04:16:45.010666       1 node.go:152] Failed to retrieve node info: Get "https://127.0.0.1:6443/api/v1/nodes/k8s-worker-03": dial tcp 127.0.0.1:6443: connect: connection refused
E0226 04:16:47.300738       1 node.go:152] Failed to retrieve node info: Get "https://127.0.0.1:6443/api/v1/nodes/k8s-worker-03": dial tcp 127.0.0.1:6443: connect: connection refused
E0226 04:16:51.658415       1 node.go:152] Failed to retrieve node info: Get "https://127.0.0.1:6443/api/v1/nodes/k8s-worker-03": dial tcp 127.0.0.1:6443: connect: connection refused
E0226 04:17:01.247069       1 node.go:152] Failed to retrieve node info: Get "https://127.0.0.1:6443/api/v1/nodes/k8s-worker-03": dial tcp 127.0.0.1:6443: connect: connection refused
E0226 04:17:19.267198       1 node.go:152] Failed to retrieve node info: Get "https://127.0.0.1:6443/api/v1/nodes/k8s-worker-03": dial tcp 127.0.0.1:6443: connect: connection refused
I0226 04:17:19.267293       1 server.go:820] "Can't determine this node's IP, assuming 127.0.0.1; if this is incorrect, please set the --bind-address flag"
I0226 04:17:19.267352       1 server_others.go:109] "Detected node IP" address="127.0.0.1"
I0226 04:17:19.304643       1 server_others.go:248] "Using ipvs Proxier"

The kube-proxy pods on the master nodes work fine, but the ones on the worker nodes don't. I've tried resetting the worker node and rejoining it to its master node, but nothing seems to help: the kube-proxy pod still can't determine what its node IP address should be.

The GET to https://127.0.0.1:6443/api/v1/namespaces/kube-system/pods/kube-proxy-ssss2 succeeds, but the call to https://127.0.0.1:6443/api/v1/nodes/k8s-worker-03 fails with connect: connection refused.

I think this may be why the pod can't find its node, but I'm not sure why it's happening. Any help or insight would be greatly appreciated.
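For anyone debugging the same symptom, a quick sanity check is whether anything on the worker is actually listening on 127.0.0.1:6443. On a worker node that address normally only answers if a local load balancer (e.g. haproxy or kube-vip) forwards to the control plane; the paths below assume a standard kubeadm layout, so adjust for your setup:

```shell
# Check whether any process is listening on port 6443 on this worker.
# On a plain kubeadm worker there usually is none, so kube-proxy's
# "connection refused" against 127.0.0.1:6443 is expected.
ss -tlnp | grep 6443

# Compare with the API endpoint kubelet was joined against; kube-proxy's
# kubeconfig should normally point at the same control-plane address.
grep server: /etc/kubernetes/kubelet.conf
```

If `ss` shows nothing on 6443 while kubelet.conf points at a remote control-plane address, kube-proxy is simply dialing the wrong endpoint.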

Cluster information:

Kubernetes version: v1.26.1
Cloud being used: bare metal
Installation method: kubeadm
Host OS: Debian 11
CNI and version: Calico v3.24.5
CRI and version: Containerd v1.6.16


Update: we were able to get this working by changing the kube-proxy ConfigMap so its kubeconfig points at the API server directly, and that fixed it. If anyone else hits this issue, make sure your kube-proxy is pointing at the right server.
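For reference, the change amounts to fixing the `server:` field in the kubeconfig embedded in the kube-proxy ConfigMap, then restarting the DaemonSet so the pods pick it up. A sketch, assuming a standard kubeadm install (substitute your actual control-plane address or load-balancer VIP for the placeholder):

```shell
# Show the apiserver URL kube-proxy is currently configured to use.
kubectl -n kube-system get configmap kube-proxy -o yaml | grep server:

# Edit the ConfigMap so the server line reads something like
#   server: https://<control-plane-address>:6443
# instead of https://127.0.0.1:6443 (placeholder address; use yours).
kubectl -n kube-system edit configmap kube-proxy

# Restart the kube-proxy pods so they reload the corrected kubeconfig.
kubectl -n kube-system rollout restart daemonset kube-proxy
```

After the rollout, the "Detected node IP" line in the kube-proxy logs should show the worker's real address instead of 127.0.0.1.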