Practical differences between HostPort and NodePort-Local

Assuming I need the original source IP of clients outside the cluster to be seen by the Pod application, and that I will tolerate the application only being reachable on the Nodes where the Pods are scheduled, what are the fundamental differences between the two approaches?

For clarity, the first approach is a Deployment where the Pod template contains a container with a HostPort, e.g. TCP port 32222. There is no Service in this scenario.
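A minimal sketch of this first approach (the names and image are illustrative, not from the original post):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-hostport   # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: example/myapp:latest   # placeholder image
        ports:
        - containerPort: 8080         # assumed application port
          hostPort: 32222             # bound directly on the Node
          protocol: TCP
```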

The second approach would be a very similar Deployment, but the container does not have a HostPort. Instead there is a Service which selects the Pods from this Deployment and has type: NodePort, externalTrafficPolicy: Local, and nodePort: 32222.
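The second approach might look like this (again with illustrative names; the Deployment is the same as before minus the hostPort field):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp   # illustrative name
spec:
  type: NodePort
  externalTrafficPolicy: Local   # preserve the client source IP
  selector:
    app: myapp                   # matches the Deployment's Pod labels
  ports:
  - port: 8080        # cluster-internal Service port (assumed)
    targetPort: 8080  # the Pod's containerPort (assumed)
    nodePort: 32222
    protocol: TCP
```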

So far, the only difference I can think of is that the HostPort solution would limit the Pod replica count to be at most the number of available Nodes. I.e. two replicas of this Pod could not share a single Node, but the NodePort Service solution would allow multiple replicas of the Pod on one Node.

What other differences have I missed?

Is there another approach that doesn’t assume any particular Kubernetes cloud provider, and doesn’t assume a particular protocol (e.g. HTTP X-Forwarded-For, PROXY protocol)?

jstangroome wrote on July 16:

> Assuming I need the original source IP of clients outside the cluster to be seen by the Pod application, and that I will tolerate the application only being reachable on the Nodes where the Pods are scheduled, what are the fundamental differences between the two approaches?
>
> For clarity, the first approach is a Deployment where the Pod template contains a container with a HostPort, e.g. TCP port 32222. There is no Service in this scenario.
>
> The second approach would be a very similar Deployment, but the container does not have a HostPort. Instead there is a Service which selects the Pods from this Deployment and has type: NodePort, externalTrafficPolicy: Local, and nodePort: 32222.
>
> So far, the only difference I can think of is that the HostPort solution would limit the Pod replica count to be at most the number of available Nodes. I.e. two replicas of this Pod could not share a single Node, but the NodePort Service solution would allow multiple replicas of the Pod on one Node.
>
> What other differences have I missed?

Off the top of my head: permissions. HostPort might be blocked by a Pod Security Policy, for example.
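For instance, a PodSecurityPolicy (deprecated since Kubernetes 1.21 and removed in 1.25, but current when this thread was written) can restrict which host ports Pods may bind; a sketch with an illustrative name and range:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-hostports   # illustrative name
spec:
  # The following four fields are required by the PSP schema
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Only hostPorts in this range are allowed; a Pod asking for
  # hostPort 32222 would be rejected unless it falls inside it
  hostPorts:
  - min: 32000   # example range, an assumption
    max: 32767
```

A NodePort Service is not subject to this check, since the port is opened by kube-proxy on the Node, not requested by the Pod spec.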

> Is there another approach that doesn’t assume any particular Kubernetes cloud provider, and doesn’t assume a particular protocol (e.g. HTTP X-Forwarded-For, PROXY protocol)?

Another approach to what exactly? If you mean load balancing in a bare-metal environment, yes. See MetalLB 🙂

If you hit the Pod from an external network, the two approaches are almost the same from the user’s perspective, apart from the implementation details (iptables rules, etc.).

However, a NodePort Service also has a cluster IP, which means Pods inside the same cluster can reach the application easily. HostPort has no cluster IP, since it is a property of Pods rather than Services, so you have to work out yourself how to reach the Pods from inside the cluster.

So, if you run a service that must be reachable from both the internal and external network, a NodePort Service is probably the better solution. If you run a DaemonSet, the Pod is on every single Node, there is no Service and therefore no externalTrafficPolicy to choose, so HostPort is conceptually simpler.
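The DaemonSet case mentioned above could be sketched like this (illustrative names and image):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myapp-ds   # illustrative name
spec:
  selector:
    matchLabels:
      app: myapp-ds
  template:
    metadata:
      labels:
        app: myapp-ds
    spec:
      containers:
      - name: myapp
        image: example/myapp:latest   # placeholder image
        ports:
        - containerPort: 8080   # assumed application port
          hostPort: 32222       # one Pod per Node, so no port conflicts
          protocol: TCP
```

Because a DaemonSet schedules exactly one Pod per (eligible) Node, the hostPort conflict that limits Deployment replicas never arises here.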