Cannot get nginx-ingress-controller to work

Asking for help...

I have installed the bitnami/nginx-ingress-controller chart via Helm.
Everything installs fine and the nodes/pods/services all come up with no issues.
However, when I add Ingress rules, none of them work. I can confirm that the
nginx-blog service is up via curl, and if I switch it to a NodePort the web
service comes up fine, but I cannot get this ingress controller to work.
What am I doing wrong here?
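
For reference, this is roughly how I confirmed the backend itself works (the NodePort test is from when I temporarily switched the service type; node IPs are redacted, so the placeholders below are not my real values):

# Hit the nginx-blog service from inside the cluster -- this responds fine
kubectl run curl-test -n blog --rm -it --image=curlimages/curl --restart=Never -- \
  curl -s http://nginx-blog.blog.svc.cluster.local/

# With the service temporarily switched to NodePort, it also answers on the node directly
# curl -s http://<node-ip>:<node-port>/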

Thank you.
-ED

### Cluster information:

Kubernetes version: 1.26.1
Cloud being used: Hosted VPS
Installation method: manual
Host OS: Debian 11 (bullseye)
CNI and version: Calico v3.25.0 (cluster version 3.26.0-0.dev-281-g5fdc7ed3e12a)
CRI and version: docker://23.0.1

user@computer$ kubectl get all --all-namespaces
NAMESPACE         NAME                                                                  READY   STATUS    RESTARTS      AGE
blog              pod/nginx-blog-7bc4648bc9-khwnq                                       1/1     Running   0             4h40m
ingress           pod/ingress-controller-nginx-ingress-controller-8ccbbb989-kq6z6       1/1     Running   0             8h
ingress           pod/ingress-controller-nginx-ingress-controller-default-backeng5jfg   1/1     Running   0             8h
kube-system       pod/calico-kube-controllers-56dd5794f-lp5kb                           1/1     Running   0             90d
kube-system       pod/calico-node-bh5xq                                                 1/1     Running   0             90d
kube-system       pod/calico-node-fswld                                                 1/1     Running   6 (30d ago)   90d
kube-system       pod/calico-node-p4s8q                                                 1/1     Running   0             90d
kube-system       pod/calicoctl                                                         1/1     Running   0             91d
kube-system       pod/coredns-787d4945fb-mnxtw                                          1/1     Running   0             91d
kube-system       pod/coredns-787d4945fb-w7cgf                                          1/1     Running   0             91d
kube-system       pod/etcd-node1                                                        1/1     Running   0             91d
kube-system       pod/kube-apiserver-node1                                              1/1     Running   0             91d
kube-system       pod/kube-controller-manager-node1                                     1/1     Running   0             91d
kube-system       pod/kube-proxy-mjvnv                                                  1/1     Running   6 (30d ago)   91d
kube-system       pod/kube-proxy-xrkzf                                                  1/1     Running   0             91d
kube-system       pod/kube-proxy-xvhgs                                                  1/1     Running   0             91d
kube-system       pod/kube-scheduler-node1                                              1/1     Running   0             91d
tigera-operator   pod/tigera-operator-54b47459dd-l5j97                                  1/1     Running   0             91d

NAMESPACE     NAME                                                                  TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
blog          service/nginx-blog                                                    ClusterIP      10.100.23.181    <none>        80/TCP,443/TCP               4h40m
default       service/kubernetes                                                    ClusterIP      10.96.0.1        <none>        443/TCP                      91d
ingress       service/ingress-controller-nginx-ingress-controller                   LoadBalancer   10.110.37.212    <pending>     80:31338/TCP,443:31337/TCP   8h
ingress       service/ingress-controller-nginx-ingress-controller-default-backend   ClusterIP      10.103.148.154   <none>        80/TCP                       8h
kube-system   service/kube-dns                                                      ClusterIP      10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP       91d

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   3         3         3       3            3           kubernetes.io/os=linux   90d
kube-system   daemonset.apps/kube-proxy    3         3         3       3            3           kubernetes.io/os=linux   91d

NAMESPACE         NAME                                                                          READY   UP-TO-DATE   AVAILABLE   AGE
blog              deployment.apps/nginx-blog                                                    1/1     1            1           4h40m
ingress           deployment.apps/ingress-controller-nginx-ingress-controller                   1/1     1            1           8h
ingress           deployment.apps/ingress-controller-nginx-ingress-controller-default-backend   1/1     1            1           8h
kube-system       deployment.apps/calico-kube-controllers                                       1/1     1            1           90d
kube-system       deployment.apps/coredns                                                       2/2     2            2           91d
tigera-operator   deployment.apps/tigera-operator                                               1/1     1            1           91d

NAMESPACE         NAME                                                                                     DESIRED   CURRENT   READY   AGE
blog              replicaset.apps/nginx-blog-7bc4648bc9                                                    1         1         1       4h40m
ingress           replicaset.apps/ingress-controller-nginx-ingress-controller-8ccbbb989                    1         1         1       8h
ingress           replicaset.apps/ingress-controller-nginx-ingress-controller-default-backend-775555d6b5   1         1         1       8h
kube-system       replicaset.apps/calico-kube-controllers-56dd5794f                                        1         1         1       90d
kube-system       replicaset.apps/coredns-787d4945fb                                                       2         2         2       91d
tigera-operator   replicaset.apps/tigera-operator-54b47459dd                                               1         1         1       91d




user@computer$ kubectl get ingress --all-namespaces
NAMESPACE   NAME           CLASS   HOSTS   ADDRESS           PORTS   AGE
blog        ingress-blog   nginx   blog    123.123.123.123   80      15h


$ kubectl describe node
Name:               node2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node2
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 123.123.123.321/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 192.168.215.0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 24 Feb 2023 15:16:05 -0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  node2
  AcquireTime:     <unset>
  RenewTime:       Sat, 27 May 2023 06:32:50 -0700
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Sat, 25 Feb 2023 07:02:50 -0800   Sat, 25 Feb 2023 07:02:50 -0800   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Sat, 27 May 2023 06:27:52 -0700   Wed, 10 May 2023 05:05:43 -0700   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Sat, 27 May 2023 06:27:52 -0700   Wed, 10 May 2023 05:05:43 -0700   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Sat, 27 May 2023 06:27:52 -0700   Wed, 10 May 2023 05:05:43 -0700   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Sat, 27 May 2023 06:27:52 -0700   Wed, 10 May 2023 05:05:43 -0700   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  123.123.123.321
  Hostname:    node2
Capacity:
  cpu:                4
  ephemeral-storage:  203248744Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8147772Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  187314042161
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8045372Ki
  pods:               110
System Info:
  Machine ID:                 478debb80e5c9b97cb40353563f26a33
  System UUID:                bce69726-90ef-4c72-b1a4-31d00beaaca4
  Boot ID:                    5bf611ca-4a05-4fe5-a5e0-191a5a631727
  Kernel Version:             5.10.0-12-amd64
  OS Image:                   Debian GNU/Linux 11 (bullseye)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://23.0.1
  Kubelet Version:            v1.26.1
  Kube-Proxy Version:         v1.26.1
Non-terminated Pods:          (4 in total)
  Namespace                   Name                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                           ------------  ----------  ---------------  -------------  ---
  blog                        nginx-blog-7bc4648bc9-khwnq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17h
  kube-system                 calico-node-p4s8q              250m (6%)     0 (0%)      0 (0%)           0 (0%)         90d
  kube-system                 calicoctl                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         91d
  kube-system                 kube-proxy-xvhgs               0 (0%)        0 (0%)      0 (0%)           0 (0%)         91d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                250m (6%)  0 (0%)
  memory             0 (0%)     0 (0%)
  ephemeral-storage  0 (0%)     0 (0%)
  hugepages-1Gi      0 (0%)     0 (0%)
  hugepages-2Mi      0 (0%)     0 (0%)
Events:              <none>


Name:               node1
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node1
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 321.321.321.321/22
                    projectcalico.org/IPv4IPIPTunnelAddr: 192.168.141.192
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 24 Feb 2023 07:50:38 -0800
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  node1
  AcquireTime:     <unset>
  RenewTime:       Sat, 27 May 2023 06:32:55 -0700
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Sat, 25 Feb 2023 07:02:33 -0800   Sat, 25 Feb 2023 07:02:33 -0800   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Sat, 27 May 2023 06:29:50 -0700   Fri, 24 Feb 2023 07:50:35 -0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Sat, 27 May 2023 06:29:50 -0700   Fri, 24 Feb 2023 07:50:35 -0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Sat, 27 May 2023 06:29:50 -0700   Fri, 24 Feb 2023 07:50:35 -0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Sat, 27 May 2023 06:29:50 -0700   Fri, 24 Feb 2023 07:50:39 -0800   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  321.321.321.321
  Hostname:    node1
Capacity:
  cpu:                6
  ephemeral-storage:  103036536Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16395164Ki
  pods:               110
Allocatable:
  cpu:                6
  ephemeral-storage:  94958471421
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16292764Ki
  pods:               110
System Info:
  Machine ID:                 978d204eafa44840a5c211f2224cddeb
  System UUID:                978d204e-afa4-4840-a5c2-11f2224cddeb
  Boot ID:                    a7d41862-fef6-4884-a648-ec31233a544e
  Kernel Version:             5.10.0-21-cloud-amd64
  OS Image:                   Debian GNU/Linux 11 (bullseye)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://23.0.1
  Kubelet Version:            v1.26.1
  Kube-Proxy Version:         v1.26.1
Non-terminated Pods:          (10 in total)
  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
  kube-system                 calico-kube-controllers-56dd5794f-lp5kb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         90d
  kube-system                 calico-node-bh5xq                          250m (4%)     0 (0%)      0 (0%)           0 (0%)         90d
  kube-system                 coredns-787d4945fb-mnxtw                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (1%)     91d
  kube-system                 coredns-787d4945fb-w7cgf                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (1%)     91d
  kube-system                 etcd-node1                                 100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         91d
  kube-system                 kube-apiserver-node1                       250m (4%)     0 (0%)      0 (0%)           0 (0%)         91d
  kube-system                 kube-controller-manager-node1              200m (3%)     0 (0%)      0 (0%)           0 (0%)         91d
  kube-system                 kube-proxy-xrkzf                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91d
  kube-system                 kube-scheduler-node1                       100m (1%)     0 (0%)      0 (0%)           0 (0%)         91d
  tigera-operator             tigera-operator-54b47459dd-l5j97           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                1100m (18%)  0 (0%)
  memory             240Mi (1%)   340Mi (2%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:              <none>


Name:               node3
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node3
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 123.123.123.123/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 192.168.224.192
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 24 Feb 2023 15:15:16 -0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  node3
  AcquireTime:     <unset>
  RenewTime:       Sat, 27 May 2023 06:32:54 -0700
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Wed, 26 Apr 2023 12:25:43 -0700   Wed, 26 Apr 2023 12:25:43 -0700   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Sat, 27 May 2023 06:31:19 -0700   Wed, 10 May 2023 04:26:42 -0700   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Sat, 27 May 2023 06:31:19 -0700   Wed, 10 May 2023 04:26:42 -0700   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Sat, 27 May 2023 06:31:19 -0700   Wed, 10 May 2023 04:26:42 -0700   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Sat, 27 May 2023 06:31:19 -0700   Wed, 10 May 2023 04:26:42 -0700   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  123.123.123.123
  Hostname:    node3
Capacity:
  cpu:                4
  ephemeral-storage:  203248744Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8147932Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  187314042161
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8045532Ki
  pods:               110
System Info:
  Machine ID:                 eecb43c14ffabbedded1dbeb63f269b9
  System UUID:                ac799180-c4b6-42de-bc87-b10f40564ec5
  Boot ID:                    42d1a8fd-2f0c-47e4-aae0-02a4eaca9a27
  Kernel Version:             5.10.0-21-amd64
  OS Image:                   Debian GNU/Linux 11 (bullseye)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://23.0.1
  Kubelet Version:            v1.26.1
  Kube-Proxy Version:         v1.26.1
Non-terminated Pods:          (4 in total)
  Namespace                   Name                                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                                               ------------  ----------  ---------------  -------------  ---
  ingress                     ingress-controller-nginx-ingress-controller-8ccbbb989-kq6z6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20h
  ingress                     ingress-controller-nginx-ingress-controller-default-backeng5jfg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20h
  kube-system                 calico-node-fswld                                                  250m (6%)     0 (0%)      0 (0%)           0 (0%)         90d
  kube-system                 kube-proxy-mjvnv                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         91d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                250m (6%)  0 (0%)
  memory             0 (0%)     0 (0%)
  ephemeral-storage  0 (0%)     0 (0%)
  hugepages-1Gi      0 (0%)     0 (0%)
  hugepages-2Mi      0 (0%)     0 (0%)
Events:              <none>


user@computer$ kubectl get nodes
NAME      STATUS   ROLES           AGE   VERSION
node2    Ready    <none>          91d   v1.26.1
node1    Ready    control-plane   91d   v1.26.1
node3    Ready    <none>          91d   v1.26.1



user@computer$ kubectl describe ingress ingress-blog -n blog
Name:             ingress-blog
Labels:           <none>
Namespace:        blog
Address:          123.123.123.123
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  blog        
              /   nginx-blog:80 (192.168.215.15:8080)
Annotations:  <none>
Events:       <none>
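
For completeness, the Ingress above was created from a manifest roughly like the one below (reconstructed from the describe output; the pathType and exact layout are assumptions, and my original file may differ slightly):

# Re-create the Ingress rule shown above in the blog namespace
kubectl apply -n blog -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-blog
spec:
  ingressClassName: nginx
  rules:
    - host: blog
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-blog
                port:
                  number: 80
EOF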


user@computer$ helm list --all-namespaces
NAME                NAMESPACE REVISION  UPDATED                                 STATUS    CHART                           APP VERSION
ingress-controller  ingress   1         2023-05-26 09:36:48.957022455 -0700 PDT deployed  nginx-ingress-controller-9.7.1  1.7.1      
nginx-blog          blog      1         2023-05-26 13:32:39.937922211 -0700 PDT deployed  nginx-14.2.2                    1.24.0

Hey,

I’m not familiar with the nginx-ingress-controller from Bitnami myself, but I think it is a lot like the common nginx ingress controller. What stands out to me is that your ingress controller is not receiving an external IP address (ingress   service/ingress-controller-nginx-ingress-controller   LoadBalancer   10.110.37.212   <pending>   80:31338/TCP,443:31337/TCP   8h); its status is stuck at Pending. This can have many causes. For example, your cluster provider may limit the number of IP addresses you are allowed to use, so it is worth making sure that is not the case.
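
A quick way to see why the address stays Pending is to check the service's events, and you can also bypass the LoadBalancer entirely by testing through the NodePorts the service already exposes (31338 for HTTP in your output). Replace <node-ip> with the address of one of your nodes:

# Look at the LoadBalancer service and any events explaining the pending external IP
kubectl describe svc ingress-controller-nginx-ingress-controller -n ingress

# Send a request to the controller's HTTP NodePort with the Host header your Ingress expects
curl -v -H "Host: blog" http://<node-ip>:31338/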

If you didn’t try that already, you can always try uninstalling and reinstalling the Helm chart.
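
Something along these lines should do it, assuming the chart came from the Bitnami repo and you pass again any values you originally used:

# Remove the existing release, refresh the repo, and install it again
helm uninstall ingress-controller -n ingress
helm repo update
helm install ingress-controller bitnami/nginx-ingress-controller -n ingress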

I hope this helps.

Mike