Working with registries and containerd in MicroK8s

We recently released MicroK8s with containerd support and noticed that some of our users were not comfortable configuring and interacting with image registries. We have taken the time to go through the common workflows and document how to properly configure the containerd service so it can pull images correctly.

The scenarios we cover include:

  • Working with locally built images without a registry
  • Working with public registries
  • Working with MicroK8s’ registry add-on
  • Working with a private registry

We have also covered common pitfalls such as the “server gave HTTP response to HTTPS client” error and how to resolve them.
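For reference, when the push side is Docker, the usual fix for that error is to tell the Docker daemon that the registry is insecure (plain HTTP). Assuming the registry is exposed on localhost:32000, /etc/docker/daemon.json would contain:

{
  "insecure-registries" : ["localhost:32000"]
}

followed by a sudo systemctl restart docker.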

The full documentation is available here. Please get in touch if you have further questions or feedback.


I use MicroK8s v1.15.1 (720), installed as a classic snap (192MB),
with microk8s.daemon-containerd enabled and active.
When I build a microservice and push the Docker image to the local MicroK8s registry,
I get the following error:
Excerpt from Jenkins Build:
Successfully built 5bd29c0a420d
19:46:32 Successfully tagged localhost:32000/quarkus-demo:latest
19:46:32 + docker push localhost:32000/quarkus-demo
19:46:33 The push refers to repository [localhost:32000/quarkus-demo]
19:46:58 Get http://localhost:32000/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

The MicroK8s docs don't explain how to configure the container daemon.
When I execute: sudo netstat -a -p | grep 32000
I get:
tcp6 3 0 [::]:32000 [::]:* LISTEN 31751/kube-proxy
tcp6 149 0 localhost6.locald:32000 localhost6.locald:58340 CLOSE_WAIT -
tcp6 0 0 localhost6.locald:41484 localhost6.locald:32000 ESTABLISHED 31758/containerd
tcp6 245 0 localhost6.locald:32000 localhost6.locald:58370 CLOSE_WAIT -
tcp6 315 0 localhost6.locald:32000 localhost6.locald:41484 ESTABLISHED -
But there is no service listening on localhost:32000.

What's wrong with this picture?
Can you help me?
Thanks
Lutz

Hi @lstrobel

Have you enabled the registry with microk8s.enable registry? This add-on starts the registry service backed by local storage.
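If not, enabling the add-on and checking that its pod comes up looks roughly like this (the exact pod name will differ):

microk8s.enable registry
microk8s.kubectl get pods -n container-registry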

What does the output of microk8s.kubectl get all --all-namespaces look like?

Thanks

Thank you @kjackal for your fast answer.
Yes, the registry is enabled:

lstrobel@microk8s:~$ microk8s.status
microk8s is running
addons:
knative: disabled
jaeger: disabled
fluentd: disabled
gpu: disabled
storage: enabled
registry: enabled
rbac: disabled
ingress: enabled
dns: disabled
metrics-server: disabled
linkerd: disabled
prometheus: disabled
istio: disabled
dashboard: enabled

And in the output of kctl get all --all-namespaces I can see that the registry pod is not running:
lstrobel@microk8s:~$ kctl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
ci pod/nexus-54d7869dd8-29kl2 1/1 Running 15 69d
container-registry pod/registry-6c99589dc-lgszm 0/1 Pending 0 22h
default pod/default-http-backend-5769f6bc66-nkdvp 1/1 Running 22 73d
default pod/nginx-ingress-microk8s-controller-gs4qp 1/1 Running 22 73d
default pod/quarkus-demo-57848f5dd6-w4f7j 0/1 ImagePullBackOff 0 2d2h
default pod/quarkus-demo-69875b8dd6-vqftg 0/1 ContainerCreating 0 2d4h
kube-system pod/heapster-v1.5.2-6b5d7b57f9-jg6jt 4/4 Running 166 111d
kube-system pod/hostpath-provisioner-58564cb894-c288q 0/1 CrashLoopBackOff 5 24h
kube-system pod/kube-dns-6bfbdd666c-xqlpc 2/3 Running 6959 111d
kube-system pod/kubernetes-dashboard-6fd7f9c494-xqqzk 0/1 CrashLoopBackOff 3437 111d
kube-system pod/monitoring-influxdb-grafana-v4-78777c64c8-nzgj8 2/2 Running 91 111d
kube-system pod/tiller-deploy-765dcb8745-5mnmx 1/1 Running 13 66d

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ci service/nexus-service NodePort 10.152.183.248 8081:32032/TCP 70d
container-registry service/registry NodePort 10.152.183.54 5000:32000/TCP 22h
default service/default-http-backend ClusterIP 10.152.183.55 80/TCP 73d
default service/hello-node NodePort 10.152.183.122 8080:30114/TCP 70d
default service/jenkins-operator-http-example ClusterIP 10.152.183.118 8080/TCP 66d
default service/jenkins-operator-slave-example ClusterIP 10.152.183.128 50000/TCP 66d
default service/kubernetes ClusterIP 10.152.183.1 443/TCP 112d
default service/quarkus-demo NodePort 10.152.183.113 8080:31580/TCP 2d4h
kube-system service/heapster ClusterIP 10.152.183.242 80/TCP 111d
kube-system service/kube-dns ClusterIP 10.152.183.10 53/UDP,53/TCP 111d
kube-system service/kubernetes-dashboard ClusterIP 10.152.183.235 443/TCP 111d
kube-system service/monitoring-grafana ClusterIP 10.152.183.133 80/TCP 111d
kube-system service/monitoring-influxdb ClusterIP 10.152.183.62 8083/TCP,8086/TCP 111d
kube-system service/tiller-deploy ClusterIP 10.152.183.243 44134/TCP 66d

NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
default daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 73d

NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
ci deployment.apps/nexus 1/1 1 1 70d
container-registry deployment.apps/registry 0/1 1 0 22h
default deployment.apps/default-http-backend 1/1 1 1 73d
default deployment.apps/quarkus-demo 0/1 1 0 2d4h
kube-system deployment.apps/heapster-v1.5.2 1/1 1 1 111d
kube-system deployment.apps/hostpath-provisioner 0/1 1 0 24h
kube-system deployment.apps/kube-dns 0/1 1 0 111d
kube-system deployment.apps/kubernetes-dashboard 0/1 1 0 111d
kube-system deployment.apps/monitoring-influxdb-grafana-v4 1/1 1 1 111d
kube-system deployment.apps/tiller-deploy 1/1 1 1 66d

NAMESPACE NAME DESIRED CURRENT READY AGE
ci replicaset.apps/nexus-54d7869dd8 1 1 1 70d
container-registry replicaset.apps/registry-6c99589dc 1 1 0 22h
default replicaset.apps/default-http-backend-5769f6bc66 1 1 1 73d
default replicaset.apps/quarkus-demo-55b4c6b8ff 0 0 0 2d4h
default replicaset.apps/quarkus-demo-57848f5dd6 1 1 0 2d2h
default replicaset.apps/quarkus-demo-69875b8dd6 1 1 0 2d4h
kube-system replicaset.apps/heapster-v1.5.2-5c5498f57c 0 0 0 111d
kube-system replicaset.apps/heapster-v1.5.2-6b5d7b57f9 1 1 1 111d
kube-system replicaset.apps/heapster-v1.5.2-89b48dff 0 0 0 111d
kube-system replicaset.apps/hostpath-provisioner-58564cb894 1 1 0 24h
kube-system replicaset.apps/kube-dns-6bfbdd666c 1 1 0 111d
kube-system replicaset.apps/kubernetes-dashboard-6fd7f9c494 1 1 0 111d
kube-system replicaset.apps/monitoring-influxdb-grafana-v4-78777c64c8 1 1 1 111d
kube-system replicaset.apps/tiller-deploy-765dcb8745 1 1 1 66d

kctl describe pod … registry shows a warning at the end:
Events:
Type Reason Age From Message
Warning FailedScheduling 5m39s (x783 over 19h) default-scheduler pod has unbound immediate PersistentVolumeClaims

Maybe this is the problem.

When you run microk8s.enable registry, the storage add-on is also enabled because the registry needs to claim a persistent volume. I see that the hostpath-provisioner pod is crash-looping. Can you share the output of microk8s.kubectl logs -n kube-system pod/hostpath-provisioner-58564cb894-c288q and microk8s.kubectl describe nodes? We need to see why the hostpath provisioner (which provides the default storage class) is failing.
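To see whether the registry's claim is what is stuck, checking the claim and the available storage classes should also help (a quick sketch; the claim lives in the container-registry namespace):

microk8s.kubectl get pvc -n container-registry
microk8s.kubectl get storageclass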

Sorry, but I'm a new user and can only have 5 links in a post :hot_face:, so I cannot post the describe nodes results. Do you have an analysis command that does not generate so many links?

This is the log entry from the hostpath provisioner:
F0805 17:44:52.652866 1 hostpath-provisioner.go:162] Error getting server version: Get https://10.152.183.1:443/version: dial tcp 10.152.183.1:443: i/o timeout

I have freshly reinstalled MicroK8s, but the result is the same.
Here is the pod list:
lstrobel@microk8s:~$ kctl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
heapster-v1.5.2-5c5498f57c-nx7jr 4/4 Running 4 23h
hostpath-provisioner-6d744c4f7c-rh94v 0/1 CrashLoopBackOff 7 18m
kube-dns-6bfbdd666c-7nb4r 1/3 CrashLoopBackOff 533 23h
kubernetes-dashboard-6fd7f9c494-tddlk 1/1 Running 267 23h
monitoring-influxdb-grafana-v4-78777c64c8-7w2sl 2/2 Running 2 23h

What do you think: is there any hope of fixing it?

Could you open an issue here: https://github.com/ubuntu/microk8s/issues and attach the tarball produced by microk8s.inspect? Thank you.

It looks like I got it.
microk8s.inspect showed 2 warnings, so running
sudo iptables -P FORWARD ACCEPT
and
sudo ufw allow in on cbr0 && sudo ufw allow out on cbr0
followed by a MicroK8s restart seems to have cleared the crash loops.
Thank you for your help.
Lutz

But one problem remains.
The MicroK8s registry does not seem to listen on localhost:32000 over IPv4, only over IPv6 (tcp6).
lstrobel@microk8s:~$ sudo netstat -a -p | grep 32000
tcp6 2 0 [::]:32000 [::]:* LISTEN 20016/kube-proxy
tcp6 245 0 localhost6.locald:32000 localhost6.locald:43058 CLOSE_WAIT -
tcp6 0 0 localhost6.locald:43042 localhost6.locald:32000 FIN_WAIT2 -
tcp6 149 0 localhost6.locald:32000 localhost6.locald:43042 CLOSE_WAIT -
tcp6 0 0 localhost6.locald:43058 localhost6.locald:32000 FIN_WAIT2 -
So a docker push fails.
It is annoying.

Congrats @lstrobel, you have a K8s cluster now!

We have a few users facing the same problem with IPv6 and the registry [1][2]. What worked for most people is to edit /etc/hosts and comment out the ::1 localhost ip6-localhost ip6-loopback line (suggested in [1]).
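That is, the line in /etc/hosts goes from

::1 localhost ip6-localhost ip6-loopback

to

# ::1 localhost ip6-localhost ip6-loopback

so that localhost resolves to 127.0.0.1 and docker push localhost:32000/quarkus-demo connects over IPv4.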

[1] https://github.com/ubuntu/microk8s/issues/196#issuecomment-443869365
[2] https://github.com/ubuntu/microk8s/issues/498

This is unbelievable.
Thanks

Not sure if this is the best way to ask the question, but I'm looking for advice on how to get the local registry working, and on the best approach when using MicroK8s clustered. My setup works just fine on a single-node MicroK8s: images are pushed to and retrieved from the registry on localhost, happy system.

Once I add a second node, anything scheduled to that node dies on image pull. That seems logical enough: the YAML looks for the image on localhost, and as I understand it the registry only runs on the master node, so it's bound to fail?

Ideally that would not involve ‘hardcoding’ the host any more than necessary, to keep it portable…

@MarkS you are right that the registry add-on is not suited out of the box for a multi-node cluster. There are a couple of points you, as the cluster admin, need to address.

First, you will need storage shared across nodes. This would allow the registry pod to re-spawn if needed without losing the stored images.
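For example, one option is an NFS-backed PersistentVolume for the registry's claim to bind to. A minimal sketch, assuming a hypothetical NFS server at 10.0.0.2 exporting /srv/registry (the access modes and storage class would have to match whatever the registry's PersistentVolumeClaim requests):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-nfs
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.2   # hypothetical NFS server
    path: /srv/registry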

Second, you will need to expose the registry at a fixed endpoint and configure the containerd instances to be aware of it. For exposing the registry you have the option of a NodePort service, so that the registry is available on a specific port on every node, or you could use a load balancer (MetalLB is available as an add-on with microk8s enable metallb). For configuring containerd you will need to update /var/snap/microk8s/args/containerd-template.toml with the registry endpoint and restart MicroK8s.
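As a sketch, assuming the registry ends up reachable at a hypothetical 10.0.0.5:32000, the mirrors section of /var/snap/microk8s/args/containerd-template.toml on each node would gain an entry along these lines:

[plugins.cri.registry]
  [plugins.cri.registry.mirrors]
    [plugins.cri.registry.mirrors."10.0.0.5:32000"]
      endpoint = ["http://10.0.0.5:32000"]

After editing the file, restart MicroK8s with microk8s.stop followed by microk8s.start.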

We would like to hear what your approach would be, especially in the case of the storage solution. Packaging a multi-node-friendly storage solution as an add-on would be great.

You may also want to look at the Images and Registries section in the docs [1].

[1] https://microk8s.io/docs
