Cannot Access the Service

Cluster information:

Kubernetes version: 1.24.1
Cloud being used: Azure
Installation method: Helm Chart

Explanation

I am very new to Kubernetes and decided to migrate a little side project of mine, hosted here: GitHub - MrDesjardins/realtimepixel

I am trying to expose a frontend service that connects to a backend service, which in turn connects to a Redis service. Locally, I have a docker-compose setup, and it works. Now, moving to Kubernetes, I have had some success (everything is green under Azure), but the exposed IP does not return anything. I added an Ingress but haven’t seen a difference compared to only exposing the frontend service as a LoadBalancer.

Debug YAML output

The Helm chart produces the following output. I do not apply that output directly; I use az login and Helm to install everything on Azure (see the GitHub Action if relevant: realtimepixel/k8sdeploy.yml at master · MrDesjardins/realtimepixel · GitHub).
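
For context, the deploy flow boils down to something like the following (a sketch of the commands involved; the resource group and cluster names are hypothetical placeholders, and the chart path depends on the repo layout):

az login
# hypothetical resource group / cluster names
az aks get-credentials --resource-group my-rg --name my-aks-cluster
helm upgrade --install realtimepixel ./chart --namespace realtimepixel-prod --create-namespace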

---
# Source: realtimepixel/templates/service_backend.yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-service
  namespace: realtimepixel-prod
  labels:
    app: backend-service
spec:
  type: NodePort
  ports:
    - port: 3500
      targetPort: 3500
  selector:
    app: backend-pod
---
# Source: realtimepixel/templates/service_frontend.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  namespace: realtimepixel-prod
  labels:
    app: frontend-service
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 3501
  selector:
    app: frontend-pod
---
# Source: realtimepixel/templates/service_redis.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  namespace: realtimepixel-prod
  labels:
    app: redis-service
spec:
  type: NodePort
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: redis-pod
---
# Source: realtimepixel/templates/deployment_backend.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
  namespace: realtimepixel-prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "backend-pod"
  template:
    metadata:
      labels:
        app: "backend-pod"
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: backend-pod
          securityContext:
            {}
          image: "realtimepixel.azurecr.io/realtimepixel_backend:123123"
          imagePullPolicy: Always
          ports:
            - containerPort: 3500
          env:
              - name: SERVER_IP
                value: "backend-service"
              - name: SERVER_PORT
                value: "80"
              - name: REDIS_IP
                value: "redis-service"
              - name: REDIS_PORT
                value: "6379"
              - name: CLIENT_IP
                value: "frontend-service"
              - name: DOCKER_CLIENT_PORT_FORWARD
                value: "80"
          # livenessProbe:
          #   httpGet:
          #     path: /health
          #     port: http
          # readinessProbe:
          #   httpGet:
          #     path: /health
          #     port: http
          resources:
            {}
---
# Source: realtimepixel/templates/deployment_frontend.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  namespace: realtimepixel-prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "frontend-pod"
  template:
    metadata:
      labels:
        app: "frontend-pod"
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: frontend-pod
          securityContext:
            {}
          image: "realtimepixel.azurecr.io/realtimepixel_frontend:123123"
          imagePullPolicy: Always
          ports:
            - containerPort: 3501
          resources:
            {}
---
# Source: realtimepixel/templates/deployment_redis.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: realtimepixel-prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "redis-pod"
  template:
    metadata:
      labels:
        app: "redis-pod"
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: redis-pod
          securityContext:
            {}
          image: "realtimepixel.azurecr.io/realtimepixel_redis:123123"
          imagePullPolicy: Always
          ports:
            - containerPort: 6379
          resources:
            {}
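
For reference, everything above lands in the realtimepixel-prod namespace, so the state of the whole stack can be inspected in one shot with:

kubectl get all -n realtimepixel-prod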

Can someone guide me on what my mistake is?

Thank you

https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/

Hi,
I already see the services and pods running (green).

(Screenshots: the Workloads and Services & Ingress views in the Azure portal.)

I tried to connect to the pod and got unexpected behavior:

kubectl -n realtimepixel-prod get pods

Gives:

NAME                                   READY   STATUS    RESTARTS   AGE
backend-deployment-6df94df499-xcrpl    1/1     Running   0          4m49s
frontend-deployment-5954559b5b-4wclm   1/1     Running   0          4m50s
redis-deployment-6bb4d55b47-bx6fg      1/1     Running   0          4m50s

Then:

kubectl exec -it frontend-deployment-5954559b5b-4wclm -n realtimepixel-prod -- bash

Inside the pod, nslookup does not exist, so I left the pod.
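
(Side note: when an image ships no debugging utilities at all, kubectl debug can attach an ephemeral debug container to the running pod. Ephemeral containers are still beta in Kubernetes 1.24, so this may or may not work on a given cluster; the pod and container names here are the ones from this thread:)

kubectl debug -it frontend-deployment-5954559b5b-4wclm -n realtimepixel-prod --image=busybox --target=frontend-pod -- sh
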
Using a throwaway busybox pod instead:

kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -n realtimepixel-prod sh

nslookup frontend-service
Server:    10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Name:      frontend-service
Address 1: 10.0.208.65 frontend-service.realtimepixel-prod.svc.cluster.local

The article talks about checking DNS:

nslookup kubernetes.default

Returns:

Server:    10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

Where it starts to go wrong is when I try to access the service from a pod in the cluster:

wget -qO- 10.0.208.65

Returns:

wget: can't connect to remote host (10.0.208.65): Connection refused
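
The same test can also be run against the Service name rather than the ClusterIP, which exercises DNS and the Service together:

wget -qO- http://frontend-service.realtimepixel-prod.svc.cluster.local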

Verifying the ports:

kubectl get service frontend-service -n realtimepixel-prod -o json

I receive:

{
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "annotations": {
            "meta.helm.sh/release-name": "realtimepixel",
            "meta.helm.sh/release-namespace": "realtimepixel-prod"
        },
        "creationTimestamp": "2022-07-19T00:09:03Z",
        "finalizers": [
            "service.kubernetes.io/load-balancer-cleanup"
        ],
        "labels": {
            "app": "frontend-service",
            "app.kubernetes.io/managed-by": "Helm"
        },
        "name": "frontend-service",
        "namespace": "realtimepixel-prod",
        "resourceVersion": "257181",
        "uid": "d13b1e49-2ea8-4027-a8f7-e7f9395f1862"
    },
    "spec": {
        "allocateLoadBalancerNodePorts": true,
        "clusterIP": "10.0.208.65",
        "clusterIPs": [
            "10.0.208.65"
        ],
        "externalTrafficPolicy": "Cluster",
        "internalTrafficPolicy": "Cluster",
        "ipFamilies": [
            "IPv4"
        ],
        "ipFamilyPolicy": "SingleStack",
        "ports": [
            {
                "nodePort": 30194,
                "port": 80,
                "protocol": "TCP",
                "targetPort": 3501
            }
        ],
        "selector": {
            "app": "frontend-pod"
        },
        "sessionAffinity": "None",
        "type": "LoadBalancer"
    },
    "status": {
        "loadBalancer": {
            "ingress": [
                {
                    "ip": "20.119.105.76"
                }
            ]
        }
    }
}
  • Is the Service port you are trying to access listed in spec.ports[]? Port 80 is defined :white_check_mark:
  • Is the targetPort correct for your Pods (some Pods use a different port than the Service)? Target to 3501 is correct :white_check_mark:
  • If you meant to use a numeric port, is it a number (9376) or a string “9376”? :white_check_mark:
  • If you meant to use a named port, do your Pods expose a port with the same name? I don’t mean to use named ports.
  • Is the port’s protocol correct for your Pods? Default one is used :white_check_mark:
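
For reference, a quicker way to pull just the port mapping out of that JSON:

kubectl get service frontend-service -n realtimepixel-prod -o jsonpath='{.spec.ports[0].port} -> {.spec.ports[0].targetPort}{"\n"}'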

Testing the service pods:

kubectl get pods -l app=frontend-pod -n realtimepixel-prod

Produces:

NAME                                   READY   STATUS    RESTARTS   AGE
frontend-deployment-5954559b5b-4wclm   1/1     Running   0          24m

Checking the Service endpoints:

kubectl get endpoints frontend-service -n realtimepixel-prod

Gives:

NAME               ENDPOINTS         AGE
frontend-service   10.244.0.7:3501   39h
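
One can cross-check that this endpoint IP really belongs to the frontend pod, since -o wide includes pod IPs:

kubectl get pods -l app=frontend-pod -n realtimepixel-prod -o wide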

Connecting with busybox again to try to reach the pod directly:

kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -n realtimepixel-prod sh

:warning: Then :warning:

wget -qO- 10.244.0.7:3501
wget: can't connect to remote host (10.244.0.7): Connection refused

I double-checked on the Azure portal: the pod is running (green check) and has the IP 10.244.0.7. I double-checked the port; it is really 3501. I gave port 80 a try just to check; same cannot-connect error.
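
A useful check at this point (assuming the frontend image ships ss or netstat, which slim images often do not) would be to look at what the process inside the container actually listens on:

kubectl exec -it frontend-deployment-5954559b5b-4wclm -n realtimepixel-prod -- sh -c 'ss -lntp || netstat -lntp'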

I think there is an issue there, right? The linked article does not give much information about what needs to be changed. Let me know if you have a clue.

This is telling you that the pod is not actually listening on 3501.

Okay, but isn’t the configuration specifying port 3501?

# Source: realtimepixel/templates/deployment_frontend.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  namespace: realtimepixel-prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "frontend-pod"
  template:
    metadata:
      labels:
        app: "frontend-pod"
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: frontend-pod
          securityContext:
            {}
          image: "realtimepixel.azurecr.io/realtimepixel_frontend:123123"
          imagePullPolicy: Always
          ports:
            - containerPort: 3501
          resources:
            {}

Let me search for how I can get more information about the pod/port situation.

Just because you told Kubernetes it is on that port doesn’t mean it IS. You still need to tell the app itself to listen on that port (flags, env, etc.).
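
For instance (a sketch only; the variable names here are hypothetical and depend entirely on what the frontend code reads), the Deployment could pass the port down so the app binds where containerPort says it does:

containers:
  - name: frontend-pod
    image: "realtimepixel.azurecr.io/realtimepixel_frontend:123123"
    ports:
      - containerPort: 3501
    env:
      # Hypothetical names: the app must read these and bind to
      # 0.0.0.0:3501 (not 127.0.0.1) to be reachable from outside.
      - name: HOST
        value: "0.0.0.0"
      - name: PORT
        value: "3501"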

Thanks, I think you found the issue.

I am not passing some variables properly since I moved to Kubernetes. For example, the frontend is not connecting to the backend service’s internal DNS name and still relies on some code that was fine with Docker but needs adjustment for Kubernetes.
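
For reference, once the app listens correctly, the backend should be reachable from any pod in the namespace through its Service DNS name (e.g. from the busybox pod used earlier):

wget -qO- http://backend-service.realtimepixel-prod.svc.cluster.local:3500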

I’ll post in a few hours/days whether passing the environment variables correctly fixes the issue or not. I feel I am on the right path to fixing my issue now.
Thank you.

Glad to help! You are not the first to hit these sorts of issues, which is why that doc exists :slight_smile:

Tim

Hello, I wanted to follow up after I had some time to experiment.

As mentioned, the issue was that the pods were running but the code inside them was not behaving correctly. I refactored to use a reverse proxy so that all my calls from the browser reach the frontend service (the LoadBalancer), and I made sure the pods receive all their environment variables. Once everything was in better shape, I ended up with the same green pods and services, but this time the external IP returned the expected result.
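
For anyone landing here later, the shape of that reverse-proxy setup is roughly the following (a sketch only, assuming nginx serves the frontend; the file paths and the /api/ route are hypothetical, but the backend address is the Service DNS name from the manifests above):

# Hypothetical: an nginx.conf mounted into the frontend container via a ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-nginx-conf
  namespace: realtimepixel-prod
data:
  default.conf: |
    server {
      listen 3501;
      root /usr/share/nginx/html;   # built frontend assets (hypothetical path)
      location /api/ {
        # Browser calls hit the frontend LoadBalancer and get proxied
        # to the backend Service's cluster-internal DNS name.
        proxy_pass http://backend-service.realtimepixel-prod.svc.cluster.local:3500/;
      }
    }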

The link provided in this thread showed me some debugging techniques I wasn’t aware of and helped me understand Kubernetes better.

This issue is officially over! :slight_smile: