Advice for exposing Kubernetes pods to TCP and UDP traffic

I’m writing a Kubernetes operator that deploys individual, dedicated game servers inside the same cluster. What would be the best method for exposing these servers outside of the cluster?


  • Servers must be individually addressable from outside the cluster. Dedicated game servers are isolated from each other and cannot be load balanced.
  • Support for both TCP and UDP. These are the primary protocols the games will communicate over.
  • Manageable programmatically. My custom resource represents an individual server. Associating a server with something like an Ingress is risky, as a single Ingress resource also contains the rules for other servers.
  • Scale to the upper limits of Kubernetes.
  • Native to Kubernetes. This could be hosted anywhere, with no reliance on custom resources or tooling outside of my own project.


I initially thought of an Ingress. However, Ingress listeners and rules are not separate resources from the Ingress itself, so modifying those rules programmatically for each individual server could be very dangerous. The same applies to the listeners in the new Gateway API.

I looked at NodePorts, but since NodePorts are scoped cluster-wide, the theoretical limit per cluster is a single port range (30000–32767 by default) - far below 65535.

The closest I’ve gotten so far is ClusterIP Services with externalIPs. From what I’m reading, this seems to work in a similar way to a NodePort, but the scope is per externalIP. This means I could distribute the servers across a number of externalIPs, scaling much further than NodePorts.

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 49152
  externalIPs:
    - 198.51.100.32

I’m not sure if I’m misinterpreting the way that ClusterIPs and externalIPs work.


So I can tell you how we do it in Agones, which is built to do this very thing!

I’ll copy paste from our FAQ:

How is traffic routed from the allocated Port to the GameServer container?
Traffic is routed to the GameServer Container utilising the hostPort field on a Pod’s Container specification.

This opens a port on the host Node and routes traffic to the container via iptables or ipvs, depending on host provider and/or network overlay.
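A minimal sketch of what that looks like in a Pod spec (the image name and port numbers here are illustrative, not Agones’ actual defaults):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: game-server
spec:
  containers:
    - name: game-server
      image: example/game-server:latest  # hypothetical image
      ports:
        - containerPort: 7654   # port the server process listens on
          hostPort: 7777        # port opened on the Node itself
          protocol: UDP
```

Clients then connect directly to the Node’s IP on port 7777 - no Service in between.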

In worst case scenarios this routing can add an extra 0.5ms latency to UDP packets, but that is extremely rare.

We do this, since you really can’t use a LoadBalancer to route UDP packets to a specific game server instance, and you don’t want or need the hop (most of the time) – it’s much better to go directly to the node.

As part of the Agones project we allow you to retrieve the IP and port(s) that are exposed through information on our GameServer CRD - which is backed by the Node IP, and the port(s) assigned to the Pod through Agones’ port management system.
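For example, once a GameServer is allocated, its status surfaces the connection details (field names per the Agones GameServer CRD; the address and port values here are illustrative):

```yaml
status:
  state: Allocated
  address: 203.0.113.10   # the Node's externally reachable IP
  ports:
    - name: default
      port: 7777          # hostPort assigned by Agones' port management
```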

Also, creating an external Service per Game Server is just not going to scale - and could be very expensive!

It does mean you need nodes with public IPs, and also a corresponding firewall rule to allow the traffic in.

You could also do this with hostNetwork, but we decided against it (to also c/p from our FAQ):

Why did you use hostPort and not hostNetwork for your networking?
The decision was made not to use hostNetwork, as the benefits of having isolated network namespaces between game server processes give us the ability to run sidecar containers, and provides an extra layer of security to each game server process.
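For contrast, the hostNetwork alternative is just a flag on the Pod spec - the whole Pod shares the Node’s network namespace, so every listening port is exposed and sidecar containers can no longer bind ports in an isolated namespace (a sketch, with a hypothetical image name):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: game-server
spec:
  hostNetwork: true   # Pod shares the Node's network namespace
  containers:
    - name: game-server
      image: example/game-server:latest  # hypothetical image
```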

Now if you really want to use a LB (and maybe you do!) for UDP traffic, you might also want to look at Quilkin - a UDP proxy for game server traffic, that can route based on packet contents. For TCP traffic, depending on the type of TCP traffic you are doing (assuming websockets/http/gRPC?), there is likely a proxy you could also use similarly to route traffic based on some kind of header.


Thanks so much for this! Sometimes the simple solution is the best one.

I can’t believe I overlooked the hostPort field…
I wasn’t aware of the Agones project. Looks amazing and I think I’ll be leveraging it going forward!
