Hello everyone,
I am facing a networking issue in my Kubernetes cluster involving an external F5 Load Balancer, NodePort services, and Network Policies. Here is my current setup:
The Infrastructure:
- Ingress: An external F5 Load Balancer distributes traffic to all worker nodes via NodePort.
- Service Configuration: The services run with `externalTrafficPolicy: Cluster` (the default).
- Network Policy (NetPol): I have a default deny-all policy with an allow-list for specific client IPs (arriving via the F5) and the internal Pod CIDR (`10.78.0.0/16`); see the sketch below.
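For concreteness, here is a minimal sketch of the allow-list policy I mean (the policy name, namespace, and client CIDR are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-f5-clients   # placeholder name
  namespace: my-app        # placeholder namespace
spec:
  podSelector: {}          # selects every Pod in the namespace
  policyTypes:
    - Ingress              # isolates the Pods for ingress: anything not matched below is denied
  ingress:
    - from:
        - ipBlock:
            cidr: 203.0.113.0/24   # placeholder for the allowed client IPs arriving via the F5
        - ipBlock:
            cidr: 10.78.0.0/16     # the internal Pod CIDR from my setup
```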
The Problem: Since the service uses `externalTrafficPolicy: Cluster`, when the F5 sends a request to a Node that does not host the target Pod, Kubernetes forwards the traffic to the correct Node. During this hop, kube-proxy applies SNAT, changing the Source IP to the forwarding Node's internal IP.
Consequently, the NetworkPolicy on the destination Pod blocks the request because the Source IP (now the Node’s IP) is not in the allowed list.
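For completeness, the Service is configured roughly like this (name, selector, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app   # placeholder name
spec:
  type: NodePort
  externalTrafficPolicy: Cluster   # the default: every Node accepts traffic, but cross-node hops are SNATed
  selector:
    app: my-app                    # placeholder selector
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080              # placeholder NodePort the F5 targets
```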
Constraints:
- Cannot use `externalTrafficPolicy: Local`: I do not have administrative access to the F5 configuration. If I switch to `Local`, the F5 keeps sending traffic to Nodes that host no Pods, and those connections are dropped because the F5 health checks are not aware of Pod placement.
- Security concerns: I want to avoid whitelisting the entire Node subnet in the NetworkPolicy, as this would allow any traffic originating from or passing through a Node to bypass the restrictions, effectively negating the source IP filtering (see the snippet after this list).
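To make that second constraint concrete, the workaround I want to avoid is an extra `ipBlock` in the policy above (the Node subnet shown is a placeholder):

```yaml
# Added under spec.ingress[0].from of the NetworkPolicy above:
- ipBlock:
    cidr: 192.168.10.0/24   # placeholder Node subnet; would also admit any SNATed or node-local traffic
```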
My Question: Is there a standard pattern or workaround to preserve the client IP (or make NetPol work effectively) in this scenario without modifying the F5 configuration and without whitelisting all Node IPs?
Any advice would be appreciated. Thanks!