Internal ingress controller on GKE cluster

Cluster information:

Kubernetes version: v1.14.10-gke.27
Cloud being used: GKE

We’ve installed an internal ingress controller using the stable/nginx-ingress Helm chart, like this (note that with `helm upgrade` the release name is a positional argument, not a `--name` flag):

```bash
helm upgrade --install nginx-internal-ssl stable/nginx-ingress -f values.yaml
```

With this values.yaml file (keys nested properly; flat dotted keys like `rbac.create` are not expanded in a values file, only with `--set`):

```yaml
rbac:
  create: true

controller:
  publishService:
    enabled: true
  ingressClass: nginx-ingress-ssl
  service:
    loadBalancerIP: 10.128.0.110
    annotations:
      cloud.google.com/load-balancer-type: "Internal"
```
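To confirm how Helm actually parsed these values, one can dump the user-supplied values for the release (release name taken from the command above):

```bash
# Show the values Helm recorded for this release, as it parsed them.
helm get values nginx-internal-ssl
```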

The service is created correctly:

```bash
$ kubectl get svc
NAME                                          TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)
nginx-internal-ssl-nginx-ingress-controller   LoadBalancer   10.47.149.60   10.128.0.110   80:32480/TCP,443:31274/TCP
```
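One way to double-check that GCP really provisioned an internal load balancer for this Service (the filter on the IP is just one possible query) is to inspect the forwarding rule:

```bash
# For an internal load balancer, loadBalancingScheme should be INTERNAL.
gcloud compute forwarding-rules list \
  --filter="IPAddress=10.128.0.110" \
  --format="table(name, IPAddress, loadBalancingScheme)"
```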

After installing the ingress controller, we’ve created an Ingress object whose ingress class points to this controller, using the following annotation:

```yaml
annotations:
  kubernetes.io/ingress.class: nginx-ingress-ssl
```
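For context, the full Ingress object looks roughly like this. The backend service name and port are placeholders; only the Ingress name, host, and annotation are taken from the outputs below:

```yaml
apiVersion: extensions/v1beta1   # matches Kubernetes 1.14
kind: Ingress
metadata:
  name: myapp
  annotations:
    kubernetes.io/ingress.class: nginx-ingress-ssl
spec:
  rules:
    - host: myapp.mydomain.internal
      http:
        paths:
          - path: /
            backend:
              serviceName: myapp-svc   # placeholder backend service
              servicePort: 80
```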

But when we check the ingress with kubectl we see this:

```bash
$ kubectl get ingress
NAME    HOSTS                     ADDRESS        PORTS     AGE
myapp   myapp.mydomain.internal   34.105.51.13   80, 443   57m
```

It shows a public IP, but there is no external load balancer defined, and that IP doesn’t expose any port or service to the Internet. Everything else works as expected (internal IP, ingress rules, …).

Have you experienced something like this? Do you know why a public IP appears instead of 10.128.0.110, the address of the internal load balancer?
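In case it helps with diagnosis, this is how we checked which Service the controller publishes into the Ingress status (deployment name taken from the `kubectl get svc` output above; the jsonpath is just one way to dump the args):

```bash
# Dump the controller's container args; when publishService is enabled the
# chart passes --publish-service=<namespace>/<service-name>, and the
# controller copies that Service's IP into the Ingress status.
kubectl get deployment nginx-internal-ssl-nginx-ingress-controller \
  -o jsonpath='{.spec.template.spec.containers[0].args}'
```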

Many thanks in advance!
Marc


I have the same problem. Did you solve it?