DNS for discovery of NodePort services from outside the cluster


#1

Hey,

I have a Problem with some legacy application. This application has (like so many of em) a server and a client side. We’re now deploying the server side in kubernetes. The application already has an architecture which fits at least OK to the architecture :slight_smile:

Now for the problem: Each of the application containers registers services with a central service registry (also running as pod). It does so by adding an entry with the own host and port where services are exposed (and: it is NOT HTTP, but plain TCP for those services). Now if anyone requires a certain service, it visits this registry and retrieves the actual location of the service and contacts it. I know this mechanism does not really fit to how things should be in kubernetes, but I cannot rewrite a few M lines of code overnight :frowning:. OK. So there are multiple problems:

  1. Since each service registers itself by providing its own hostname (the Kubernetes service name), all other parties need to be able to resolve that hostname (cluster-internal pods will get this via KubeDNS/CoreDNS, right?). The problem is cluster-external applications (clients): first they need to find the location of the registry, which is itself a Kubernetes service (also type NodePort). I can get this to work by telling the client the IP/port of any node. But when the client then fetches service information from the registry, it cannot connect to any of the hosts listed there, because there is no DNS for them. So what I need is a DNS service that resolves the service names to the IP of a node (ideally the node the service is actually running on).
  2. Since each service registers itself by providing its own port, ports need to be identity-mapped - thus I'm currently using NodePort services (this is OK for me right now, but if anyone has a better idea, please tell me!).
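To make the identity mapping concrete, here is a minimal sketch of what such a NodePort service could look like (the names and the port number are made up for illustration). `port`, `targetPort`, and `nodePort` are pinned to the same value, so the port the container registers is also the port reachable on every node. Note that nodePorts are restricted to the range 30000-32767 by default, which constrains which ports can be identity-mapped this way:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: legacy-backend            # hypothetical service name
spec:
  type: NodePort
  selector:
    app: legacy-backend
  ports:
    - protocol: TCP
      port: 30100                 # cluster-internal service port
      targetPort: 30100           # port the container actually listens on
      nodePort: 30100             # port exposed on every node (default range 30000-32767)
```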

Any hints, help, etc. would be greatly appreciated!
Markus


#2

Hi Markus,

the problem is in fact a NAT problem, and the classical approach to solve it is to avoid NAT altogether, or to use split-horizon DNS. Your question goes in that direction.

We experimented a lot with avoiding NAT entirely and using exactly one DNS that is publicly available. However, if you plan to export services through NAT, you can either

  1. specify the entries manually in another DNS (which will only work in almost-static environments), or
  2. specify an algorithm to derive external DNS names (e.g. from internal DNS names via a suffix, substitution, whatever).
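Option 2 can be as small as a pure string substitution. A sketch, where both the internal suffix (`.svc.cluster.local` is the common Kubernetes default) and the public suffix (`.k8s.example.com`, entirely made up) are assumptions:

```python
def external_name(internal: str,
                  internal_suffix: str = ".svc.cluster.local",
                  external_suffix: str = ".k8s.example.com") -> str:
    """Derive an externally resolvable DNS name from an internal one
    by swapping the cluster suffix for a public suffix."""
    if internal.endswith(internal_suffix):
        internal = internal[:-len(internal_suffix)]
    # e.g. "registry.default.svc.cluster.local" -> "registry.default.k8s.example.com"
    return internal + external_suffix
```

The same function can be applied on the client side or inside a DNS forwarder, as long as both sides agree on the two suffixes.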

If you just need the address information of services, I would avoid using a registry at all: it's proprietary and therefore has to be implemented everywhere at the application layer. Instead, the registry itself should provide the endpoint (like a load balancer does), or the names should resolve to it via DNS.

The CoreDNS project lacks proper zone management - we're discussing that issue with the developer(s) - CoreDNS needs a way to specify SOA and NS records.

There is a DNS export project around (https://github.com/kubernetes-incubator/external-dns), but it is more focused on public cloud providers than on internal DNS infrastructures.

However, the Kubernetes API exposes all services, so a simple DNS exporter can be written in a few lines of code.
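As a rough sketch of how small such an exporter can be: the function below answers A-record queries from a plain name-to-IP table. In a real exporter that table would be kept up to date from the Kubernetes API (e.g. by polling `/api/v1/services` and picking a node IP per service) - that wiring is omitted here, and all names are invented for illustration:

```python
import socket
import struct

def build_dns_response(query: bytes, records: dict) -> bytes:
    """Answer a DNS A query from a service-name -> node-IP table.

    `records` maps fully qualified query names to IPv4 addresses,
    e.g. {"registry.default": "10.0.0.5"}.
    """
    txid = query[:2]
    # Walk the QNAME labels in the question section (starts at offset 12).
    labels, pos = [], 12
    while query[pos] != 0:
        length = query[pos]
        labels.append(query[pos + 1:pos + 1 + length].decode())
        pos += 1 + length
    qname = ".".join(labels)
    question = query[12:pos + 5]                  # QNAME + QTYPE + QCLASS
    ip = records.get(qname)
    if ip is None:
        # NXDOMAIN: QR=1, RD=1, RA=1, RCODE=3; no answer records.
        return txid + struct.pack(">HHHHH", 0x8183, 1, 0, 0, 0) + question
    answer = (b"\xc0\x0c"                         # compression pointer to QNAME
              + struct.pack(">HHIH", 1, 1, 30, 4)  # TYPE A, CLASS IN, TTL 30s, RDLEN 4
              + socket.inet_aton(ip))
    return txid + struct.pack(">HHHHH", 0x8180, 1, 1, 0, 0) + question + answer
```

A real exporter would wrap this in a UDP socket bound to port 53 and refresh the table when services change; a short TTL (30 s above) keeps the failover delay small.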

To publish your registry, you could also work with SRV records in DNS (like _registry._tcp.example.com). However, I wouldn't recommend that for internal services where you want quick failover: because of DNS caching, a failover can take several minutes to propagate.
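For reference, an SRV record of that shape would look like this in a zone file (the names, priority/weight values, and port are illustrative only; the fields after `SRV` are priority, weight, port, and target host):

```
_registry._tcp.example.com. 300 IN SRV 10 5 30100 node1.example.com.
```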


#3

Thank you very much for the insights! I fear there is still a lot of work down the road for me :slight_smile:, but this helps me sort it out.