MicroK8s: nginx ingress controller relies on Calico, which is failing to get the right authorization info to interact with the cluster

Hi everyone! I am trying to set up an ingress to expose an HTTP/HTTPS service to the public. The cluster was set up with MicroK8s. Before setting up the ingress, I tested the service with curl against both the ClusterIP:port and the NodePort, and both work well. However, when I set up the ingress to expose HTTP, it doesn't work.
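For reference, the checks looked roughly like this (the ClusterIP and NodePort below are placeholders, not the real values from the cluster):

# ClusterIP:port of the console service (placeholder IP)
curl http://10.152.183.42:9090/
# NodePort on the node itself (placeholder port)
curl http://10.50.50.139:30909/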

This is the manifest, ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minio-ingress
  namespace: minio-operator
  annotations:
    kubernetes.io/ingress.class: public
spec:
  rules:
  - host: k8master.snac.com
    http:
      paths:
      - path: /storage-console
        pathType: Prefix
        backend:
          service:
            name: console
            port:
              number: 9090
      - path: /storage-operator
        pathType: Prefix
        backend:
          service:
            name: operator
            port:
              number: 4222
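
As an aside, with networking.k8s.io/v1 the kubernetes.io/ingress.class annotation is deprecated in favor of spec.ingressClassName, which may be why the description below reports Ingress Class: <none>. You can list the classes the cluster actually knows about with:

microk8s kubectl get ingressclass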

This is the description of the ingress:

Name:             minio-ingress
Labels:           <none>
Namespace:        minio-operator
Address:          
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host                         Path  Backends
  ----                         ----  --------
  k8master.snac.com
                               /storage-console    console:9090 (10.1.168.39:9090)
                               /storage-operator   operator:4222 (10.1.168.38:4222)
Annotations:                   kubernetes.io/ingress.class: public
Events:                        <none>

The ingress controller pod is stuck in ContainerCreating. Please see the pod description below:

Namespace:        ingress
Priority:         0
Service Account:  nginx-ingress-microk8s-serviceaccount
Node:             snac-holmes/10.50.50.139
Start Time:       Fri, 10 Feb 2023 03:27:39 +0000
Labels:           controller-revision-hash=74f957997f
                  name=nginx-ingress-microk8s
                  pod-template-generation=1
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    DaemonSet/nginx-ingress-microk8s-controller
Containers:
  nginx-ingress-microk8s:
    Container ID:  
    Image:         registry.k8s.io/ingress-nginx/controller:v1.2.0
    Image ID:      
    Ports:         80/TCP, 443/TCP, 10254/TCP
    Host Ports:    80/TCP, 443/TCP, 10254/TCP
    Args:
      /nginx-ingress-controller
      --configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf
      --tcp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-tcp-microk8s-conf
      --udp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-udp-microk8s-conf
      --ingress-class=public
       
      --publish-status-address=127.0.0.1
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:10254/healthz delay=10s timeout=5s period=10s #success=1 #failure=3
    Readiness:      http-get http://:10254/healthz delay=0s timeout=5s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       nginx-ingress-microk8s-controller-j6prb (v1:metadata.name)
      POD_NAMESPACE:  ingress (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2xq9q (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-2xq9q:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason                  Age                    From     Message
  ----     ------                  ----                   ----     -------
  Warning  FailedCreatePodSandBox  2m3s (x539 over 120m)  kubelet  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f3e2ab6a960e68c7589e727475281999d1e45ed27d5551a20fc9534ce05dead7": plugin type="calico" failed (add): error getting ClusterInformation: connection is unauthorized: Unauthorized

Both controller pods show the same state:

kylie@snac-k8master-2:~$ k get all -n ingress
NAME                                          READY   STATUS              RESTARTS   AGE
pod/nginx-ingress-microk8s-controller-j6prb   0/1     ContainerCreating   0          123m
pod/nginx-ingress-microk8s-controller-fqf5t   0/1     ContainerCreating   0          123m

I noticed that the error message is odd; it says:

(combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f3e2ab6a960e68c7589e727475281999d1e45ed27d5551a20fc9534ce05dead7": plugin type="calico" failed (add): error getting ClusterInformation: connection is unauthorized: Unauthorized
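
Since the failure comes from the calico CNI plugin rather than from nginx itself, a reasonable first check is the state of the calico-node pods (the k8s-app=calico-node label selector is my assumption based on the stock Calico manifest MicroK8s ships):

# List the Calico node agents; label selector assumed, adjust if needed
microk8s kubectl get pods -n kube-system -l k8s-app=calico-node -o wide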

Do you have any advice or ideas? Thanks so much!


Hi all! I have figured out the solution to this problem, and it is quite strange: just delete all the “calico-node” pods in the “kube-system” namespace. The calico-node pods are recreated automatically, and the ingress works.
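
In command form, that is roughly the following (the k8s-app=calico-node label selector is an assumption; if your pods are labeled differently, delete them by name instead):

# Delete the calico-node pods; their DaemonSet recreates them automatically
microk8s kubectl delete pod -n kube-system -l k8s-app=calico-node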


Hi! Thanks for sharing the solution. Deleting the “calico-node” pods also works for me, but the issue tends to reappear after some time. Did the problem return for you after a while, or has it been permanently resolved on your side? Were there any additional steps you took to prevent it from happening again?