GKE Ingress issue: backend reported UNHEALTHY

Hello All,

I am running an Ingress on a GKE cluster and have an issue with the backend being reported as UNHEALTHY. In short, the Ingress is connected to a Service, the Service has endpoints, and the pod behind the endpoint has a liveness probe that is passing. I am not sure what the problem is; any help would be great. I have captured the outputs below.

1) The Ingress reports the backend as UNHEALTHY:

kk@jumphost:~$ kubectl get ingress jenkins-ingress
NAME              HOSTS   ADDRESS          PORTS   AGE
jenkins-ingress   *       34.107.221.217   80      13h
kk@jumphost:~$ kubectl describe ingress jenkins-ingress
Name:             jenkins-ingress
Namespace:        default
Address:          34.107.221.217
Default backend:  jenkins-service:8080 (10.4.2.15:8080)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     jenkins-service:8080 (10.4.2.15:8080)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"jenkins-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"jenkins-service","servicePort":8080}}}
  ingress.kubernetes.io/backends:         {"k8s-be-31308--007ae74e899741f4":"UNHEALTHY"}
  ingress.kubernetes.io/forwarding-rule:  k8s-fw-default-jenkins-ingress--007ae74e899741f4
  ingress.kubernetes.io/target-proxy:     k8s-tp-default-jenkins-ingress--007ae74e899741f4
  ingress.kubernetes.io/url-map:          k8s-um-default-jenkins-ingress--007ae74e899741f4
Events:                                   <none>
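For what it's worth, the UNHEALTHY status in the `ingress.kubernetes.io/backends` annotation comes from the GCE load balancer's own health check, which is separate from the kubelet probes, so it can fail even while the pod is Ready. A sketch of how I have been inspecting it, assuming gcloud is configured for the cluster's project (the backend name is taken from the annotation above):

```
# Ask the load balancer directly which instances it considers healthy
gcloud compute backend-services get-health k8s-be-31308--007ae74e899741f4 --global

# Inspect the health check the load balancer is actually using;
# the request path and port are the usual suspects
gcloud compute health-checks list
gcloud compute http-health-checks list
```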

2) Looking at the Service, it has endpoints:

kk@jumphost:~$ kubectl get svc jenkins-service
NAME              TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
jenkins-service   NodePort   10.8.9.248   <none>        8080:31308/TCP   6d13h
kk@jumphost:~$ kubectl describe svc jenkins-service
Name:                     jenkins-service
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"jenkins-service","namespace":"default"},"spec":{"ports":[{"port":...
Selector:                 app=jenkins
Type:                     NodePort
IP:                       10.8.9.248
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31308/TCP
Endpoints:                10.4.2.15:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
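One thing I have also been checking at the node level: the GCE health-check probes originate from Google's documented ranges (130.211.0.0/22 and 35.191.0.0/16), and a firewall rule must allow them to reach the nodes on the NodePort (31308 here). A sketch of the check, assuming gcloud is pointed at the right project; the matching rule is normally auto-created for the cluster:

```
# List firewall rules that admit the health-check source ranges;
# one of them should cover the NodePort range on the cluster nodes
gcloud compute firewall-rules list \
  --filter="sourceRanges.list():130.211.0.0/22"
```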

3) The pod behind the endpoint has a liveness probe configured, and it is passing:

kk@jumphost:~$ kubectl describe pod jenkins-deployment-b5664c765-jsz47
Name:           jenkins-deployment-b5664c765-jsz47
Namespace:      default
Priority:       0
Node:           gke-cluster-1-default-pool-5a375c66-jq71/10.128.0.4
Start Time:     Tue, 17 Mar 2020 12:14:59 +0000
Labels:         app=jenkins
                pod-template-hash=b5664c765
Annotations:    kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container jenkins
Status:         Running
IP:             10.4.2.15
IPs:            <none>
Controlled By:  ReplicaSet/jenkins-deployment-b5664c765
Containers:
  jenkins:
    Container ID:   docker://9f31b3cdec6df6dbfa2606e8231ab1e828618529c770fb381805bbb09e982a7e
    Image:          jenkins:2.60.3
    Image ID:       docker-pullable://jenkins@sha256:eeb4850eb65f2d92500e421b430ed1ec58a7ac909e91f518926e02473904f668
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 17 Mar 2020 12:15:02 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Liveness:     http-get http://:8080/login delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /test-pd from task-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jlcsw (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  task-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  my-disk-claim-1
    ReadOnly:   false
  default-token-jlcsw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jlcsw
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
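One detail from the GKE docs that may be relevant: the GCE ingress controller derives its health check from the pod's readinessProbe, not the livenessProbe, and with no readinessProbe it defaults to `GET /` expecting HTTP 200. Jenkins does not return 200 on `/` for anonymous requests, so the load-balancer check can fail even though the liveness probe on `/login` passes. A sketch of what I plan to try, adding a readinessProbe on the same `/login` path to the jenkins container in the Deployment (field names as in the standard pod spec; the rest of the spec unchanged):

```yaml
# Added to the jenkins container spec of the Deployment
readinessProbe:
  httpGet:
    path: /login        # returns 200 for anonymous users, unlike /
    port: 8080
  initialDelaySeconds: 30   # give Jenkins time to start up
  periodSeconds: 10
```

From what I have read, the ingress-created health check may not pick up the new path on its own, in which case deleting and recreating the Ingress forces it to be regenerated.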

Any help here?

Hi All,

I am facing the same issue.
Any help fixing this issue would be appreciated.
Thanks,
Siva