Confusion around Service external load balancers

Hi,

I am a little confused about external load balancers for a deployed Service. For example, if I have a Deployment of 4 Pods across 4 Nodes and configure a Service of type LoadBalancer, an external IP is assigned so that traffic can be sent to the Service from outside the cluster. From what I understand, when the traffic hits the external IP, internally the traffic is sent to the internal ClusterIP, which then ‘load distributes’ the session to a Pod that is a member of the Service. So this is ‘load distribution’ of sorts.
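
To make it concrete, the Service I have in mind looks roughly like the sketch below; the name, label, and port numbers are just placeholders for this example:

```yaml
# Minimal sketch of a Service of type LoadBalancer.
# "my-app" and the port numbers are placeholders, not my real config.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer          # asks the cloud/infra for an external IP
  selector:
    app: my-app               # matches the 4 Pods of the Deployment
  ports:
    - port: 80                # port exposed on the external IP / ClusterIP
      targetPort: 8080        # port the Pods actually listen on
```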

If, internally, whenever traffic hits the ExternalIP it’s forwarded to the ClusterIP, and the Pod the traffic is sent to is chosen by a distribution algorithm, then I don’t see how an external load balancer (e.g. an F5) would be of any use for a Service in a single cluster, since Kubernetes will always choose the end Pod the traffic is forwarded to. Am I missing something fairly fundamental here?

Also, I note the same behavior if I set the Service to NodePort: when I send traffic from outside the cluster to NodeIP:NodePort, the traffic does not necessarily terminate on that node even if it has a running Pod for that Service; the same ‘load distribution’ happens and the traffic terminates on a Pod chosen by the Kubernetes internal load-distribution algorithm.
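
The NodePort variant I am describing is roughly this (again, the names and the nodePort value are placeholders):

```yaml
# Sketch of the same Service exposed as a NodePort instead.
# Traffic sent to <any NodeIP>:30080 is accepted on every node.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080         # must be in the cluster's NodePort range (default 30000-32767)
```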

I am trying to understand where an external load balancer would have any real effect if, regardless of whether I send traffic to NodeIP:NodePort or ExternalIP:Port, the Kubernetes internal load distribution will always make the final choice of destination. I can see an external LB having a purpose if I have two separate Deployments in one cluster, or two separate Deployments in separate clusters.

Any sanity around this is much appreciated.

Cluster information:

Kubernetes version: 1.20
Cloud being used: ACI
Installation method: kubeadm
Host OS: CentOS7
CNI and version: ACI 5.2
CRI and version:

Thanks,

Shaun

> From what I understand, when the traffic hits the external IP, internally the traffic is sent to the internal ClusterIP, which then ‘load distributes’ the session to a Pod that is a member of the Service.

Close - it’s not literally sent to the ClusterIP, but it is processed as if it were.

> I don’t see how an external load balancer (e.g. an F5) would be of any use for a Service in a single cluster, since Kubernetes will always choose the end Pod the traffic is forwarded to. Am I missing something fairly fundamental here?

First, remember that most of the components are pluggable and swappable. That said, the out-of-the-box components do behave as you say. At layer 4, the default implementations apply a second level of load-distribution within the cluster.
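
To illustrate what that second level distributes across: the Service’s endpoints are just the individual Pod IPs, something like the sketch below (the addresses are made up, not taken from a real cluster):

```yaml
# Illustrative Endpoints object for the Service above; the Pod IPs are invented.
apiVersion: v1
kind: Endpoints
metadata:
  name: my-app                # matches the Service name
subsets:
  - addresses:
      - ip: 10.244.1.5        # Pod on node 1
      - ip: 10.244.2.7        # Pod on node 2
      - ip: 10.244.3.9        # Pod on node 3
      - ip: 10.244.4.2        # Pod on node 4
    ports:
      - port: 8080            # the Service's targetPort
```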

> I am trying to understand where an external load balancer would have any real effect

It’s more commonly used at layer 7 (e.g. Ingress), where an LB can talk directly to a pod IP, rather than to a VIP. Service LB implementations CAN do this with a proxy, but I don’t know if any actually do.
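
As a rough example of the layer-7 case, an Ingress like the sketch below is what an ingress controller or external LB implements (the hostname and backend names are placeholders); the controller can watch the endpoints and route straight to the Pod IPs behind the named Service rather than going through the VIP:

```yaml
# Sketch of a layer-7 Ingress; hostname and backend are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app      # the controller resolves this to Pod IPs
                port:
                  number: 80
```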

That’s clearer now.

Would I be correct in thinking that this second level (or any load distribution) is implemented/handled within the CNI, and that this behavior is therefore dependent on the CNI implementation?

Thanks.

The second level is USUALLY implemented by kube-proxy. Some network implementations don’t use kube-proxy, so they have some other answer (whether functionally the same or not).
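
As a sketch of where that behavior is configured: kube-proxy’s mode (and, in ipvs mode, the scheduling algorithm) is set in its configuration, roughly like this (not your cluster’s actual config):

```yaml
# Sketch of a kube-proxy configuration.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"            # or "iptables" (the common default)
ipvs:
  scheduler: "rr"       # round-robin; ipvs mode lets you pick the algorithm
```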

There’s been some conflation of the term CNI to include that, but I think it’s a misuse of the term. CNI is a very narrow API for setting up a Pod’s network interface(s).
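
For a sense of how narrow CNI is: a CNI config is just a small JSON file handed to a plugin that wires up the Pod’s interface(s). A generic example with the standard bridge plugin (not your ACI config) might look like:

```json
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16"
  }
}
```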