You can do this in a couple of different ways: through an Ingress
resource, Ingress - Kubernetes , or through kubectl using the port-forward
command, Use Port Forwarding to Access Applications in a Cluster - Kubernetes .
I use the Ingress resource a lot; you just need to remember to update your hosts
file to point the DNS name at the cluster.
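For reference, a minimal Ingress might look something like this (the hostname, service name, and port are placeholders, not from your setup):

```yaml
# Hypothetical Ingress routing a hostname to an in-cluster Service.
# You'd add the hostname to your hosts file, pointing at a cluster node
# (or the ingress controller's IP).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: myapp.example.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
```

For quick one-off access you can skip all that and just run something like `kubectl port-forward svc/my-service 8080:80`, then hit localhost:8080.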
Can you please elaborate on why they don’t? Then we can try to look at something that does, together.
The application deployed is Cassandra, run as a StatefulSet as described in the link here:
Example: Deploying Cassandra with Stateful Sets - Kubernetes
I thought the NodePort created a problem because the DataStax driver looks for connections through the 30xxx Kubernetes port. But I saw that this is not the only problem: even exposing the default 9042 port doesn’t make the Cassandra nodes reachable (since they have private IPs within the cluster).
I excluded the LoadBalancer option as I am not running on a public cloud provider (but I saw you can manually define an IP using the externalIPs field in the YAML file).
So it seems that the only option, if you want to run Cassandra on Kubernetes, is to make the DataStax connection from within the Kubernetes cluster. I imagine a viable solution is to have a service that is exposed (through NodePort or LoadBalancer) and then manages the connections with Cassandra. Or am I missing some other obvious solution?
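To make the NodePort idea concrete, here is a sketch of what such a Service might look like, assuming the StatefulSet’s pods carry the label `app: cassandra` (as in the linked example); the service name and nodePort value are placeholders:

```yaml
# Hypothetical NodePort Service exposing the CQL port.
# A client outside the cluster would connect to <any-node-IP>:30042.
apiVersion: v1
kind: Service
metadata:
  name: cassandra-external
spec:
  type: NodePort
  selector:
    app: cassandra
  ports:
  - port: 9042        # service port inside the cluster
    targetPort: 9042  # Cassandra's CQL port on the pod
    nodePort: 30042   # must be in the 30000-32767 range
```

Note the caveat you already hit: the DataStax driver will still discover the pods’ internal IPs from the contact point, so this alone may not be enough unless the driver is configured to translate those addresses (the driver has an address translation mechanism for exactly this kind of NAT situation).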
If you need direct-to-pod access then your pods need to be integrated with your outer network. If you run your pods in an overlay, for example, then pod IPs are not “really” on your network.