NetworkPolicy blocking traffic due to SNAT when using F5 LoadBalancer with NodePort (externalTrafficPolicy: Cluster)

Hello everyone,

I am facing a networking issue in my Kubernetes cluster involving an external F5 Load Balancer, NodePort services, and Network Policies. Here is my current setup:

The Infrastructure:

  • Ingress: External F5 Load Balancer distributes traffic to all worker nodes via NodePort.

  • Service Configuration: The services run with `externalTrafficPolicy: Cluster` (the default).

  • Network Policy (NetPol): I have a default deny-all policy with an allow-list for specific client IPs (arriving via the F5) and the internal Pod CIDR (10.78.0.0/16); a sketch of the policy follows this list.
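
For reference, a minimal sketch of what that policy looks like (the policy name, namespace, and client CIDR are placeholders for my real values):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-known-sources      # placeholder name
  namespace: my-app              # placeholder namespace
spec:
  podSelector: {}                # applies to every Pod in the namespace
  policyTypes:
    - Ingress                    # with only the rules below, all other
                                 # ingress is denied by default
  ingress:
    - from:
        - ipBlock:
            cidr: 203.0.113.0/24 # placeholder: allowed client IPs behind the F5
        - ipBlock:
            cidr: 10.78.0.0/16   # internal Pod CIDR
```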

The Problem: Since the service uses `externalTrafficPolicy: Cluster`, when F5 sends a request to a Node that does not host the target Pod, Kubernetes forwards the traffic to the correct Node. During this process, Kubernetes applies SNAT, changing the Source IP to the Node’s internal IP.

Consequently, the NetworkPolicy on the destination Pod blocks the request because the Source IP (now the Node’s IP) is not in the allowed list.

Constraints:

  1. Cannot use `externalTrafficPolicy: Local`: I do not have administrative access to the F5 configuration. If I switch to Local, the F5 continues to send traffic to nodes without Pods, resulting in dropped connections because the F5 health checks are not aware of pod placement.

  2. Security concerns: I want to avoid whitelisting the entire Node subnet in the NetworkPolicy, as this would allow any traffic originating from or passing through a node to bypass the restrictions, effectively negating the source IP filtering (the rule I am trying to avoid is sketched after this list).
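
To make the concern concrete, the workaround I want to avoid would be one extra entry in the allow-list above (the node subnet here is an assumption for illustration):

```yaml
# the entry I do NOT want to add: it would admit anything SNAT'd by a node
- ipBlock:
    cidr: 10.79.0.0/16   # assumed node subnet
```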

My Question: Is there a standard pattern or workaround to preserve the client IP (or make NetPol work effectively) in this scenario without modifying the F5 configuration and without whitelisting all Node IPs?

Any advice would be appreciated. Thanks!

Hi,

According to Create an External Load Balancer | Kubernetes, in order to preserve the client IP you need to use `externalTrafficPolicy: Local`. Otherwise you will never see the real IP of the client.
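
For example, something like this (the Service name and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                    # placeholder
spec:
  type: NodePort
  externalTrafficPolicy: Local    # kube-proxy no longer SNATs, so the Pod
                                  # sees the real client IP
  selector:
    app: my-app                   # placeholder
  ports:
    - port: 80
      targetPort: 8080            # assumed container port
      nodePort: 30080             # assumed NodePort
```

Keep in mind that with `Local`, nodes that host no endpoint of the Service do not answer on the NodePort, which is why the load balancer needs per-node health checks.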

> The Problem: Since the service uses `externalTrafficPolicy: Cluster`, when F5 sends a request to a Node that does not host the target Pod, Kubernetes forwards the traffic to the correct Node. During this process, Kubernetes applies SNAT, changing the Source IP to the Node’s internal IP.

SNAT is always in place with `Cluster` policy; it does not occur only when the Node does not host the target Pod. kube-proxy masquerades NodePort traffic even when the endpoint is local, so the Source IP is rewritten no matter which Node receives the request.

Thanks for the clarification regarding SNAT behavior. You are absolutely right: I cannot rely on the Source IP in Cluster mode.

However, I am in a tight spot because I cannot modify the F5 configuration (no access to set up health checks for pod locality). If I switch to `externalTrafficPolicy: Local`, the F5 continues sending traffic to nodes without Pods, causing connection failures.

Given that I am forced to stay on Cluster mode and migrating 100+ services to an Ingress Controller is not feasible right now, do you have any other recommendations on how to achieve IP filtering in this scenario?

I am open to any alternative approaches that would allow me to filter traffic without changing the Service type.