internalTrafficPolicy: Local and no endpoints on the node

According to the documentation, if a Service (e.g., someservice) has internalTrafficPolicy: Local set, requests should be routed only to endpoints available on the local node, with no fallback to other nodes.
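
For context, here is a minimal sketch of such a Service; the name, selector, and ports are placeholders rather than our exact manifest:

```bash
# Minimal sketch of a Service using internalTrafficPolicy: Local.
# someservice / app=someapp / the ports are placeholders, not our real objects.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: someservice
spec:
  selector:
    app: someapp                 # assumed label; matching pods run on every existing node
  ports:
    - port: 80
      targetPort: 8080
  internalTrafficPolicy: Local   # route only to endpoints on the client's own node
EOF
```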

In my case, each node has two workloads that match the Service selector. When traffic is sent to someservice, kube-proxy balances the load between these two local endpoints as expected.

However, after adding a new node and sending traffic to it, we observed unexpected behavior. No workloads matching the selector for someservice were deployed on the new node.

Kubernetes version: v1.30.2

Docs: Service Internal Traffic Policy | Kubernetes

For pods on nodes with no endpoints for a given Service, the Service behaves as if it has zero endpoints (for Pods on this node) even if the service does have endpoints on other nodes.

We also tested the Service with nc. On the new node the port was not open, which confirms that no local endpoints were available, yet traffic still went through successfully.
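
Roughly what we ran from the new node; the ClusterIP and port below are placeholders for someservice's actual values:

```bash
# Run on the new node. 10.96.123.45:80 stands in for someservice's ClusterIP:port.
# With internalTrafficPolicy: Local and no local endpoints, we expected this to fail.
nc -zv -w 3 10.96.123.45 80
```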

Expected behavior: The service should not send traffic at all since no local endpoints exist.

Actual behavior: The service still balances traffic across all available workloads and their endpoints on other nodes, despite the internalTrafficPolicy: Local setting.

Question: Why do the Service and kube-proxy behave this way? Is this the expected behavior, or a misconfiguration on our side?
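
In case it is useful, these are the kinds of checks that should confirm the live policy and which nodes actually have endpoints; the Service name is a placeholder:

```bash
# Is the policy really set on the live object?
kubectl get svc someservice -o jsonpath='{.spec.internalTrafficPolicy}{"\n"}'

# Which nodes have ready endpoints for the Service?
kubectl get endpointslices -l kubernetes.io/service-name=someservice \
  -o custom-columns='SLICE:.metadata.name,NODES:.endpoints[*].nodeName,READY:.endpoints[*].conditions.ready'

# On the new node: did kube-proxy program any rules for the Service?
# (iptables mode shown; with IPVS mode, inspect `ipvsadm -Ln` instead.)
sudo iptables-save | grep someservice
```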

Cluster information:

Kubernetes version: 1.30.2
Cloud being used: bare-metal
Installation method: Kubespray
Host OS: AlmaLinux
CNI and version: Calico 3.27.3
CRI and version: containerd 1.7.16

Where is the traffic in question originating from?

Traffic flows from NGINX Ingress to the Service.
There are workloads on the ingress nodes that match the service’s selectors.
From the existing ingress nodes, traffic is routed to the local workloads.

A new ingress node was added without a local workload, yet traffic arriving on it is still distributed to the workloads on the other ingress nodes.

Not quite sure I follow.

External → nginx on node A → service (iTP=Local, has on-node endpoints) – this works, right?

External → nginx on node B → service (iTP=Local, no on-node endpoints) – this shouldn’t work.

I am not an expert in ingress-nginx, but IIRC nginx doesn’t actually use the Service - it just looks at the endpoints and routes to pods directly (maybe there’s a config for that?). Perhaps it does not implement iTP=Local properly?
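
For example, something along these lines might show which backends nginx actually proxies to, and whether it goes through the Service at all; the namespace, deployment name, and Ingress name are guesses, and the annotation should be checked against your ingress-nginx version's docs:

```bash
# Dump the backends the controller currently load-balances to. If these are pod
# IPs rather than the ClusterIP, the Service (and its internalTrafficPolicy) is
# being bypassed entirely. Paths and object names may differ by version.
kubectl -n ingress-nginx exec deploy/ingress-nginx-controller -- /dbg backends all

# This annotation reportedly makes ingress-nginx proxy to the ClusterIP instead,
# so kube-proxy (and internalTrafficPolicy) would then apply:
kubectl annotate ingress someingress \
  nginx.ingress.kubernetes.io/service-upstream="true"
```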

But you say you used nc to connect to the service IP from node B and it worked? That doesn’t seem right. This is not the right forum for debugging - if this is all correct:

  1. please simplify the repro case, so we can try it out exactly as you specified it, and not our interpretation of your description
  2. open a bug on GitHub (kubernetes/kubernetes) with ALL the details, as simply as possible
  3. include the full YAML of the minimized repro (a skeleton is sketched below)
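
Something like this skeleton would do, assuming the pods are pinned away from one node; all names, labels, and the image are placeholders:

```bash
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: someapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: someapp
  template:
    metadata:
      labels:
        app: someapp
    spec:
      nodeSelector:
        kubernetes.io/hostname: node-a   # keep every pod off the new node
      containers:
        - name: web
          image: nginx:1.25              # placeholder backend
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: someservice
spec:
  selector:
    app: someapp
  internalTrafficPolicy: Local
  ports:
    - port: 80
      targetPort: 80
EOF
# Then, from a node with no matching pod, try the ClusterIP and record what happens.
```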