Dashboard service - connection refused

I am trying to use the Kubernetes dashboard add-on with microk8s.

I installed the dashboard add-on and followed the port-forward example from the docs, choosing 9100 as a free host port in my case:

microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 9100:443
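
For anyone reproducing this, it is worth first sanity-checking that the service exists and exposes the expected ports; going by the forwarding error below, the service maps 443 to 8443 on the pod:

microk8s kubectl -n kube-system get service kubernetes-dashboard   # should list port 443/TCP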

It does not connect to the service when I try to browse to https://localhost:9100:

E0708 15:16:30.506413 6793 portforward.go:400] an error occurred forwarding 9100 -> 8443: error forwarding port 8443 to pod 0352a3aa5b7388f8343070b529193594254023338dbfb976d5ac7f86405e8b2c, uid : failed to execute portforward in network namespace "/var/run/netns/cni-2443edf3-16c6-8a66-6fe1-37afe9c4eb79": socat command returns error: exit status 1, stderr: "2020/07/08 15:16:30 socat[6868] E connect(5, AF=2 127.0.0.1:8443, 16): Connection refused\n"

I had similar connection-refused issues with the private Docker registry shipped with microk8s, which is set up to listen on 32000. When I switched instead to a custom private registry I put together on localhost:5000, it worked fine. Any idea what the cause might be?
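
For completeness, the built-in registry here is the one from microk8s enable registry. If I remember the add-on's defaults right, it runs in its own namespace as a NodePort service, so it can be inspected with something like this (namespace and service name are from memory, so treat them as assumptions):

microk8s kubectl -n container-registry get service registry   # assumed names; NodePort should show 32000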

I'm using Ubuntu 18.04 with microk8s v1.18.4.

Are you following the instructions from here: https://microk8s.io/docs/addon-dashboard ? Does this happen with all ports (higher than 9100)?

Yes, I have been following the instructions from the addon-dashboard link. I tried 10443 to begin with and hit the issue, so, to be sure, I retried on another port that I use and know to be free (i.e. the one I mentioned in my post).

I have just now reset iptables to a very basic state, to see if it could be something firewall-related, but the issue persists.
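
By "a very basic state" I mean roughly default-accept policies with all rules flushed, along these lines:

sudo iptables -P INPUT ACCEPT    # accept everything by default
sudo iptables -P FORWARD ACCEPT  # pod-to-pod traffic traverses FORWARD
sudo iptables -P OUTPUT ACCEPT
sudo iptables -F                 # flush all rules in the filter table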

OK, I just checked and there are problems with the dashboard pod itself:

kubectl -n kube-system get pods
NAME                                              READY   STATUS             RESTARTS   AGE
coredns-588fd544bf-fwg9v                          0/1     Running            9          44h
dashboard-metrics-scraper-db65b9c6f-dvhh4         1/1     Running            9          41h
heapster-v1.5.2-58fdbb6f4d-6krl7                  4/4     Running            32         41h
hostpath-provisioner-75fdc8fccd-rfwvb             0/1     CrashLoopBackOff   132        40h
kubernetes-dashboard-67765b55f5-w2jtg             0/1     CrashLoopBackOff   154        41h
monitoring-influxdb-grafana-v4-6dc675bf8c-bbmpb   2/2     Running            16         41h

I will investigate further.
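
To dig into the CrashLoopBackOff pods, the obvious next step is something like this (pod names taken from the listing above):

kubectl -n kube-system describe pod kubernetes-dashboard-67765b55f5-w2jtg        # recent events for the pod
kubectl -n kube-system logs kubernetes-dashboard-67765b55f5-w2jtg --previous     # logs from the last crashed container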

Looking at the coredns logs, it doesn't seem to be behaving well either:

E0709 11:27:57.164070 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.152.183.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.152.183.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
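
10.152.183.1 is the ClusterIP of the kubernetes service (the API server) in microk8s's default service range, so these timeouts mean the pod cannot reach the API server at all. That can be cross-checked with:

microk8s kubectl get service kubernetes   # should show ClusterIP 10.152.183.1, port 443/TCP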

I suspect something is blocking communication between the pods and the k8s API server. Can you have a look at the first couple of common issues in https://microk8s.io/docs/troubleshooting#heading--common-issues ?
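
From memory, the first couple of common issues on that page boil down to checks along these lines (paraphrased, so double-check against the doc itself):

microk8s inspect                  # collects logs and warns about common misconfigurations
sudo iptables -P FORWARD ACCEPT   # pods cannot reach the API server if FORWARD traffic is dropped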

I have it working now. I do not have ufw enabled, but earlier, when I mentioned I'd reset iptables to a basic state and it still didn't work, I should have restarted microk8s after flushing iptables. After doing that just now, it works.
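
For anyone else who lands here, the sequence that worked for me was roughly:

sudo iptables -F   # flush firewall rules (I had already done this earlier)
microk8s stop
microk8s start

i.e. the iptables flush only helped once microk8s was restarted, presumably because the cluster re-creates its networking rules on startup.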