"Could not get a response" from Postman, fetching Express server

Kubernetes version: 1.16
Cloud being used: bare-metal
Installation method: Docker for Mac
Host OS: macOS 10.14.6
CNI and version: v0.7.5
CRI and version: v1.14.0

I’m kind of stumped as to how to expose the back-end to something like Postman. I just keep getting “Could not get any response” from localhost:5000 (and about 20 other URLs I’ve tried in Postman), even though kubectl shows the deployment running.

docker run -p 5000:5000 exampleapp/server

Runs just fine: the server responds with "Hello World!" when I fetch localhost:5000 from Postman.

With k8s, not so much.

Here are the various configs:

ingress-service.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: localhost
      http:
        paths:
          - path: /?(.*)
            backend:
              serviceName: server-cluster-ip-service
              servicePort: 5000

server-cluster-ip-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: server
  ports:
    - port: 5000
      targetPort: 5000

server-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: server
  template:
    metadata:
      labels:
        component: server
    spec:
      containers:
        - name: server
          image: exampleapp/server
          ports:
            - containerPort: 5000

Dockerfile

FROM node:12.10-alpine

WORKDIR "/app"
COPY ./package.json ./
RUN npm install
COPY . .

CMD ["npm", "run", "dev"]
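
For completeness, CMD ["npm", "run", "dev"] assumes package.json defines a dev script; something along these lines (the nodemon command here is just an example of what that script might be):

package.json (relevant part)

{
  "scripts": {
    "dev": "nodemon index.js"
  }
}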

index.js

const express = require('express');

// Create the Express application
const app = express();

// Set the listening port
// Web front-end is running on port 3000
const port = 5000;

// Set root route
app.get('/', (req, res) => res.send('Hello World!'));

// Listen on the port
app.listen(port, () => console.log(`Example app listening on port ${port}`));

How are you querying it when you run it in Kubernetes?

The Express server should be running on port 5000, and it apparently is: I see Example app listening on port 5000 three times when I start it up with k8s.

I’m trying to get a response from it by doing the following in Postman, some of which are entirely redundant (localhost vs 127.0.0.1)… just a simple GET:

localhost:5000
localhost:5000/
localhost/
127.0.0.1:5000
127.0.0.1:5000/
127.0.0.1/
192.168.99.100:5000 //minikube ip
192.168.99.100:5000/
192.168.99.100/
172.17.0.7:5000 //one of the pod ips
172.17.0.7:5000/
172.17.0.7/

I’ve tried each of these prefixed with http:// and https://, plus a number of things I knew wouldn’t work; I’m just desperate at this point to figure out what is going on.
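
In case it helps, this is how I’m confirming that the Deployment and Service are up, and that the Service actually has endpoints (names match the manifests above):

kubectl get pods -l component=server
kubectl get svc server-cluster-ip-service
kubectl get endpoints server-cluster-ip-service

If get endpoints comes back empty, the Service selector isn’t matching the Pod labels.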

I don’t know what Postman is; a fancy curl, I guess?

In any case, if you are running Docker for Mac, it creates a VM with its own IP. I’m not sure how the networking is set up there, but I’m quite sure you won’t hit it on localhost.

I’m not sure how you connected Docker for Mac and Kubernetes (maybe there is a built-in way), but my guess is that you will have to connect to the VM that Docker for Mac created underneath.

If connections to the VM are allowed on all ports, then once you get the VM IP you should be able to connect to the app. To do that, though, you will need the Service exposing your app to be of type NodePort, and then connect to the VM on that port (you can see it by running kubectl get svc -o yaml and checking the nodePort field).
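
Something like this, once the Service is NodePort (the values below are made up, just to show where to look):

kubectl get svc server-cluster-ip-service -o yaml
...
spec:
  ports:
    - nodePort: 30080    # connect to <VM IP>:30080
      port: 5000
      targetPort: 5000
  type: NodePort
...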

But I’ve never used the Kubernetes integration in Docker for Mac, so this is guesswork; maybe there is something simpler provided. I don’t really know :-/

OK, got this sorted out now.

It boils down to the kind of Service being used: ClusterIP.

ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.

If I want to connect to a Pod or Deployment directly from outside of the cluster (from something like Postman, pgAdmin, etc.), and I want to do it through a Service, I should be using NodePort:

NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.

So in my case, if I want to continue using a Service, I’d change my Service manifest to:

apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: NodePort
  selector:
    component: server
  ports:
    - port: 5000
      targetPort: 5000
      nodePort: 31515

Make sure to set nodePort: <port> manually, otherwise a random port is assigned, which is a pain to use. (Note that nodePort must fall within the 30000-32767 range by default.)

Then I’d get the minikube IP with minikube ip and connect to the Service at 192.168.99.100:31515.

At that point, everything worked as expected.
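
In other words, the command-line equivalent of the Postman request:

curl http://192.168.99.100:31515/
Hello World!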

But that means having separate sets of development (NodePort) and production (ClusterIP) manifests, which is probably fine; still, I want my manifests to stay as close as possible to the production version (i.e. ClusterIP).

There are a couple of ways to get around this:

  1. Using something like Kustomize, where you define a base and then an overlay per environment that changes only the relevant fields, avoiding manifests that are mostly duplicates (see the sketch after this list).

  2. Using kubectl port-forward. I think this is the route I’m going to take. That way I keep my one set of production manifests, and when I want to QA Postgres with pgAdmin I can do:

    kubectl port-forward services/postgres-cluster-ip-service 5432:5432

    Or for the back-end and Postman:

    kubectl port-forward services/server-cluster-ip-service 5000:5000
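
For the Kustomize option, here’s a minimal sketch of what I have in mind (the directory layout and file names are just illustrative):

base/kustomization.yaml

resources:
  - server-deployment.yaml
  - server-cluster-ip-service.yaml

overlays/dev/kustomization.yaml

bases:
  - ../../base
patchesStrategicMerge:
  - service-nodeport.yaml

overlays/dev/service-nodeport.yaml

apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: NodePort
  ports:
    - port: 5000
      nodePort: 31515

Then kubectl apply -k overlays/dev would give me the NodePort version locally, while kubectl apply -k base stays pure ClusterIP for production.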

I’m also playing with doing this through ingress-service.yaml using nginx-ingress, but I don’t have that working quite yet; I’ll update when I do. For now, though, port-forward seems the way to go, since I can keep one set of production manifests that I don’t have to alter.

ishraqiyun77:

> So in my case, if I want to continue using a Service, I’d change my Service manifest to type: NodePort, making sure to set nodePort: <port> manually, otherwise it is kind of random and a pain to use.

Glad you could make this work. I should have expanded more, sorry! But this is exactly what I meant; you just put it very clearly :)

One other question: why do you need Kubernetes on your laptop? I can see wanting it in staging or somewhere before prod, but do you need it for local development of your app?

I mean, there are tons of valid use cases. But sometimes you can just continue local development as you were doing before.