Source IP on port 22 for a non-cloud-native application?

Hi

I’m trying to run an SSH honeypot (https://github.com/honeytrap/honeytrap) on my Kubernetes cluster.

I am using a managed Kubernetes (Digital Ocean).

I’m facing a problem: I want to expose port 22 and have the traffic routed to my Pod with the source IP intact, so that I can see the source IP address for traffic analysis.

Exposing a Service on port 22 works, except that the IP showing up in my honeypot is an internal (10.x) IP, not the real source IP that I wanted.
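For reference, the Service is roughly shaped like this (the selector label and targetPort below are illustrative, not my exact manifest):

  apiVersion: v1
  kind: Service
  metadata:
    name: honeytrap
  spec:
    type: LoadBalancer      # Digital Ocean provisions an external load balancer for this
    selector:
      app: honeytrap        # illustrative label matching the honeypot Pod
    ports:
    - name: ssh
      protocol: TCP
      port: 22              # port exposed on the load balancer
      targetPort: 22        # port the honeytrap container listens on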

I have tried two methods to get the actual source IP:

  1. Use NodePort
  ports:
  - port: 22
    nodePort: 22
    name: "ssh"
    protocol: TCP

For security reasons, this is restricted:

The Service "honeytrap" is invalid: spec.ports[0].nodePort: Invalid value: 22: provided port is not in the valid range. The range of valid ports is 30000-32767
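A nodePort inside the allowed range would presumably be accepted, something like the sketch below (30022 is an arbitrary example), but then clients would have to connect on a high port rather than 22, which rather defeats the point of an SSH honeypot.

  ports:
  - port: 22
    targetPort: 22       # port the honeytrap container listens on
    nodePort: 30022      # must fall within the default 30000-32767 range
    name: "ssh"
    protocol: TCP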
  2. Proxy Protocol

Digital Ocean load balancers offer the “Proxy Protocol”, so I tried turning this on:

  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"

But this breaks my SSH honeypot (honeytrap). If this were nginx (a more cloud-native application) I would be fine, but honeytrap does not understand the Proxy Protocol.

I know that there are ways around this by managing my own Kubernetes cluster (which I don’t want to do). Are there any other options?

Cluster information:

Kubernetes version: 1.15.2-do.0
Cloud being used: Digital Ocean
Installation method: Digital Ocean
Host OS: Digital Ocean Managed
CNI and version: Digital Ocean Managed
CRI and version: Digital Ocean Managed

Does DO support the externalTrafficPolicy: Local setting? That is what you need.
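On a Service of type LoadBalancer it is a single extra field in the spec; roughly (the ports are copied from your example):

  spec:
    type: LoadBalancer
    externalTrafficPolicy: Local   # deliver traffic only to nodes running the Pod, preserving the client source IP
    ports:
    - port: 22
      name: "ssh"
      protocol: TCP

The trade-off is that traffic is only routed to nodes that actually run the Pod; the load balancer’s health checks mark the other nodes as unhealthy.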


I’ve just tried externalTrafficPolicy: Local and it does not have the desired effect, so it seems they don’t support it!

Thanks - I have reported it (https://www.digitalocean.com/community/questions/get-client-source-ip-with-kubernetes-load-balancer-service).

Any thoughts on a possible workaround? Unfortunately there aren’t any firewall rules I can set up on this host to forward port 22 to a known port above 30000.

A horrible workaround might be to use hostPort or, even worse, hostNetwork and connect to the worker IP?

I guess there should be better options, but I can’t think of any right now. I haven’t used that approach myself; maybe, combined with changes on the app side, it could be simplified?

Or, if you only run one Pod, use a node selector so the Pod runs on a single node, and connect to that node’s IP and NodePort? Would one Pod be enough for your use case?
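Roughly like this in the Pod spec (the hostname value is a placeholder for one of your worker nodes):

  spec:
    nodeSelector:
      kubernetes.io/hostname: worker-node-1   # placeholder; pins the Pod to one known node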

Hacky workaround #1: Use a hostPort and a specific node or set of nodes.
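A sketch of #1 (the image reference is illustrative): the container binds port 22 directly on whichever node it lands on, so connections to that node’s IP should arrive with the real source address.

  containers:
  - name: honeytrap
    image: honeytrap/honeytrap   # illustrative image reference
    ports:
    - containerPort: 22
      hostPort: 22               # bind port 22 on the node itself
      protocol: TCP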

Hacky workaround #2: Run a sidecar that receives the PROXY protocol, logs the source IP, and then splices the connection to the real SSH server.
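A sketch of #2 (both image references and the local port are placeholders, not a tested setup): the sidecar terminates the PROXY protocol on port 22, logs the real client address, and forwards the plain TCP stream to the honeypot over localhost.

  containers:
  - name: proxy-protocol-sidecar
    image: example/proxy-protocol-splicer   # placeholder for any PROXY-protocol-aware TCP proxy
    ports:
    - containerPort: 22                     # receives PROXY protocol from the DO load balancer
      protocol: TCP
  - name: honeytrap
    image: honeytrap/honeytrap              # illustrative image reference
    ports:
    - containerPort: 2222                   # the sidecar forwards plain SSH traffic here via localhost
      protocol: TCP

The Service (with the do-loadbalancer-enable-proxy-protocol annotation from above) would then target port 22 on the sidecar rather than the honeypot directly. Note the honeypot itself still sees a local source address; the real client IPs end up in the sidecar’s logs instead.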

For what it’s worth, I was able to solve this in a slightly less hacky way.

I used mmproxy; I’ve written up the results here: https://andrewmichaelsmith.com/2020/02/preserving-client-ip-in-kubernetes/
