OK, got this sorted out now.
It boils down to the kind of Service being used: `ClusterIP`.

> ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
If I want to connect to a Pod or Deployment directly from outside of the cluster (with something like Postman, pgAdmin, etc.) and I want to do it using a Service, I should be using `NodePort`:

> NodePort: Exposes the service on each Node's IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You'll be able to contact the NodePort service, from outside the cluster, by requesting `<NodeIP>:<NodePort>`.
So in my case, if I want to continue using a Service, I’d change my Service manifest to:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: NodePort
  selector:
    component: server
  ports:
    - port: 5000
      targetPort: 5000
      nodePort: 31515
```
Make sure to manually set `nodePort: <port>` (by default it has to fall in the 30000-32767 range); otherwise a random port in that range is assigned, which is a pain to use.
Then I'd get the minikube IP with `minikube ip` and connect to the Pod with `192.168.99.100:31515`.
At that point, everything worked as expected.
But that means having separate sets of development (`NodePort`) and production (`ClusterIP`) manifests, which is probably totally fine. But I want my manifests to stay as close to the production version (i.e. `ClusterIP`) as possible.
There are a couple ways to get around this:
- Using something like Kustomize, where you set a base manifest and then have overlays for each environment that change only the relevant info, avoiding manifests that are mostly duplicative.
- Using `kubectl port-forward`. I think this is the route I am going to go. That way I can keep my one set of production manifests, but when I want to QA Postgres with pgAdmin I can do:

  ```shell
  kubectl port-forward services/postgres-cluster-ip-service 5432:5432
  ```

  Or for the back-end and Postman:

  ```shell
  kubectl port-forward services/server-cluster-ip-service 5000:5000
  ```
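For the Kustomize option, a rough sketch of the layout might look like this (the file names and patch contents here are my assumptions, not something I've tested): a `base/` directory holds the production `ClusterIP` manifests, and a `overlays/dev/` directory patches only the Service type.

`base/kustomization.yaml`:

```yaml
resources:
  - server-cluster-ip-service.yaml
```

`overlays/dev/kustomization.yaml`:

```yaml
resources:
  - ../../base
patches:
  - path: nodeport-patch.yaml
```

`overlays/dev/nodeport-patch.yaml` (a strategic merge patch that flips the Service to `NodePort` without duplicating the rest of the manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: NodePort
  ports:
    - port: 5000
      nodePort: 31515
```

Then `kubectl apply -k overlays/dev` for development and `kubectl apply -k base` for production.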
I'm playing with doing this through the `ingress-service.yaml` using `nginx-ingress`, but don't have that working quite yet. Will update when I do. But for me, `port-forward` seems the way to go, since I can just have one set of production manifests that I don't have to alter.
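For reference, the kind of `ingress-service.yaml` I'm attempting would look roughly like this (the path and rewrite annotation are assumptions for my setup, and on minikube the controller first needs `minikube addons enable ingress`):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    # Strip the /api prefix before forwarding to the server Service
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /api/?(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: server-cluster-ip-service
                port:
                  number: 5000
```

With that, requests to `<minikube ip>/api/...` would route to the `ClusterIP` Service, so the Service itself never has to change type.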