Internal ingress controller on GKE cluster

Cluster information:

Kubernetes version: v1.14.10-gke.27
Cloud being used: GKE

We’ve installed an internal load balancer using the stable/nginx-ingress Helm chart like this:
helm upgrade --install nginx-internal-ssl stable/nginx-ingress -f values.yaml

With this values.yaml file:

```yaml
rbac:
  create: true
controller:
  publishService:
    enabled: true
  ingressClass: nginx-ingress-ssl
  service:
    annotations:
      cloud.google.com/load-balancer-type: "Internal"
```

The service is created correctly:

```bash
$ kubectl get svc
NAME                                          TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)
nginx-internal-ssl-nginx-ingress-controller   LoadBalancer   <cluster IP>    <internal IP>    80:32480/TCP,443:31274/TCP
```

After installing the ingress controller, we created an Ingress object whose ingressClass points to the controller we deployed, using this annotation:

kubernetes.io/ingress.class: nginx-ingress-ssl
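For reference, a minimal Ingress manifest wiring that annotation might look like this (the host comes from the output below; the backend service name and port are placeholders, since our real manifest isn't shown here):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myapp
  annotations:
    # Ties this Ingress to the nginx-ingress-ssl controller class
    kubernetes.io/ingress.class: nginx-ingress-ssl
spec:
  rules:
    - host: myapp.mydomain.internal
      http:
        paths:
          - path: /
            backend:
              serviceName: myapp   # placeholder: our actual backend Service
              servicePort: 80
```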

But when we check the ingress with kubectl we see this:

```bash
$ kubectl get ingress
NAME    HOSTS                     ADDRESS       PORTS     AGE
myapp   myapp.mydomain.internal   <public IP>   80, 443   57m
```

It shows a public IP, but there is no external load balancer defined, and that IP doesn’t expose any port or service to the Internet. Everything else works as expected (internal IP, ingress rules, …).
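In case it helps with debugging: since controller.publishService.enabled is true, the controller should copy the IP of its own LoadBalancer Service into the Ingress status, so the two can be compared directly. A sketch (release and Ingress names taken from the output above):

```bash
# IP the internal LoadBalancer Service actually received
kubectl get svc nginx-internal-ssl-nginx-ingress-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# Address published on the Ingress object
kubectl get ingress myapp \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```

If these differ, something other than the controller’s publish-service is setting the Ingress address.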

Have you experienced something like this? Do you know why a public IP appears instead of the one that belongs to the internal load balancer?

Many thanks in advance!