Can't connect to Kubernetes Cluster on external IP


#1

I’m trying to access a .NET Web API which I Dockerized and deployed to a Kubernetes cluster on Microsoft Azure.

The application works fine on the local Docker machine. The cluster is running, the deployment succeeded, and the pods were created. Everything I check looks fine, but I cannot access my application through the external cluster IP (load balancer). This is my YAML deployment file:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ohmioapi-deployment
    spec:
      selector:
        matchLabels:
          app: ohmioapi
      replicas: 1
      template:
        metadata:
          labels:
            app: ohmioapi
        spec:
          containers:
          - name: ohmioapi
            image: ohmiocontainers.azurecr.io/ohmioapi:latest
            imagePullPolicy: Always
            ports:
            - containerPort: 15200
          imagePullSecrets:
            - name: acr-auth
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: ohmioapi
      labels:
        app: ohmioapi
    spec:
      selector:
        app: ohmioapi
      ports:
      - port: 15200
        nodePort: 30200
        protocol: TCP
      type: LoadBalancer

Can anyone give a hint on where to start looking? Thanks!

#2

Hi @ericpap.
I had this issue and, in my case, it was related to the health check.
Since my pods did not respond with 200 at /, the requests resulted in 503 answers.
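
If that's your case, a readiness probe pointed at an endpoint that actually returns 200 keeps the pod out of the load-balancer rotation until it's healthy. A minimal sketch to add under the container in the Deployment (the `/healthz` path is an assumption — use whatever path your API answers 200 on):

    readinessProbe:
      httpGet:
        path: /healthz   # assumed health endpoint; adjust to your API
        port: 15200
      initialDelaySeconds: 5
      periodSeconds: 10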


#3

A couple things to think about:

  • LoadBalancer services typically work well on managed Kubernetes offerings on public clouds (GKE, AKS, EKS). They may not work on minikube and are sometimes restricted in private deployments, depending on policy.

  • I would do the following:
    $ kubectl get svc --all-namespaces
    Then look for the load balancer, check that it has a public IP, and see whether you can telnet to it on the port it exposes.
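
Going one step further, this sketch checks that the Service actually has ready backends (service name and port are taken from the question; substitute your own):

    # Does the Service have an external IP yet?
    # "<pending>" means the cloud load balancer has not been provisioned.
    kubectl get svc ohmioapi

    # Does the Service's selector match any ready pods? An empty ENDPOINTS
    # column means the labels don't match or the pods aren't ready.
    kubectl get endpoints ohmioapi

    # Events at the bottom often show why provisioning or health checks fail.
    kubectl describe svc ohmioapi

    # Can you reach the app from outside? (use the EXTERNAL-IP from above)
    telnet <EXTERNAL-IP> 15200

An empty endpoints list with running pods usually points at a selector/label mismatch or a failing readiness check rather than a networking problem.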

Hope that helps.