We recently released MicroK8s with containerd support and noticed that some of our users were not comfortable configuring and interacting with image registries. We have taken the time to go through the common workflows and document how to properly configure the containerd service so it can pull images correctly.
The scenarios we cover include:
- Working with locally built images without a registry
- Working with public registries
- Working with MicroK8s’ registry add-on
- Working with a private registry
We have also covered common pitfalls such as the “server gave HTTP response to HTTPS client” error and how to resolve them.
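For reference, the usual fix for that error on the Docker side is to list the registry under insecure-registries in /etc/docker/daemon.json. A minimal sketch, assuming the registry listens on localhost:32000 and that you have no other daemon.json settings (the tee below overwrites the file):

```bash
# Tell Docker it may talk to this registry over plain HTTP
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "insecure-registries": ["localhost:32000"]
}
EOF
sudo systemctl restart docker
```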
The full documentation is available here. Please get in touch if you have further questions or feedback.
I use MicroK8s: installed: v1.15.1 (720) 192MB classic,
with microk8s.daemon-containerd: simple, enabled, active.
If I try to build a microservice and push the Docker image to the local MicroK8s registry, I get the following error:
Excerpt from Jenkins Build:
Successfully built 5bd29c0a420d
19:46:32 Successfully tagged localhost:32000/quarkus-demo:latest
19:46:32 + docker push localhost:32000/quarkus-demo
19:46:33 The push refers to repository [localhost:32000/quarkus-demo]
19:46:58 Get http://localhost:32000/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
The MicroK8s docs don’t explain how to configure the container daemon for this.
When I execute: sudo netstat -a -p | grep 32000
I get:
tcp6 3 0 [::]:32000 [::]:* LISTEN 31751/kube-proxy
tcp6 149 0 localhost6.locald:32000 localhost6.locald:58340 CLOSE_WAIT -
tcp6 0 0 localhost6.locald:41484 localhost6.locald:32000 ESTABLISHED 31758/containerd
tcp6 245 0 localhost6.locald:32000 localhost6.locald:58370 CLOSE_WAIT -
tcp6 315 0 localhost6.locald:32000 localhost6.locald:41484 ESTABLISHED -
But there is no service listening to localhost:32000.
What’s wrong with this picture?
Can you help me?
Thanks,
Lutz
When you run microk8s.enable registry, the storage add-on is also enabled because the registry needs to claim a persistent volume. I see that the hostpath-provisioner pod is crash-looping. Can you share the output of microk8s.kubectl logs -n kube-system pod/hostpath-provisioner-58564cb894-c288q and microk8s.kubectl describe nodes? We need to see why the hostpath provisioner (which provides the default storage class) is failing.
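For anyone else hitting this, a few quick checks along the same lines (a sketch; pod names will differ on your system):

```bash
# Is the registry's persistent volume claim bound?
microk8s.kubectl get pvc --all-namespaces
# Is a default storage class present?
microk8s.kubectl get storageclass
# Find the exact hostpath-provisioner pod name to use in the logs command above
microk8s.kubectl get pods -n kube-system | grep hostpath-provisioner
```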
Sorry, but I’m a new user and can only have 5 links in a post, so I cannot provide the describe nodes output. Do you have an analysis command which does not generate so many links?
This is the log entry from the hostpath provisioner:
F0805 17:44:52.652866 1 hostpath-provisioner.go:162] Error getting server version: Get https://10.152.183.1:443/version: dial tcp 10.152.183.1:443: i/o timeout
I have reinstalled MicroK8s from scratch, but the result is the same.
Here is the pod list:
lstrobel@microk8s:~$ kctl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
heapster-v1.5.2-5c5498f57c-nx7jr 4/4 Running 4 23h
hostpath-provisioner-6d744c4f7c-rh94v 0/1 CrashLoopBackOff 7 18m
kube-dns-6bfbdd666c-7nb4r 1/3 CrashLoopBackOff 533 23h
kubernetes-dashboard-6fd7f9c494-tddlk 1/1 Running 267 23h
monitoring-influxdb-grafana-v4-78777c64c8-7w2sl 2/2 Running 2 23h
It looks like I got it.
microk8s.inspect showed 2 warnings, so running:
sudo iptables -P FORWARD ACCEPT
and
sudo ufw allow in on cbr0 && sudo ufw allow out on cbr0
followed by a MicroK8s restart seemed to clear the crash loops.
Thank you for your help.
Lutz
But one problem remains.
The MicroK8s registry does not seem to listen on localhost:32000 (IPv4) but only on tcp6.
lstrobel@microk8s:~$ sudo netstat -a -p | grep 32000
tcp6 2 0 [::]:32000 [::]:* LISTEN 20016/kube-proxy
tcp6 245 0 localhost6.locald:32000 localhost6.locald:43058 CLOSE_WAIT -
tcp6 0 0 localhost6.locald:43042 localhost6.locald:32000 FIN_WAIT2 -
tcp6 149 0 localhost6.locald:32000 localhost6.locald:43042 CLOSE_WAIT -
tcp6 0 0 localhost6.locald:43058 localhost6.locald:32000 FIN_WAIT2 -
So a docker push fails.
It is annoying.
We have a few users facing the same problem with IPv6 and the registry [1][2]. What worked for most people is to edit /etc/hosts and comment out the ::1 localhost ip6-localhost ip6-loopback line (suggested in [1]).
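A minimal sketch of that edit, assuming the stock Ubuntu /etc/hosts layout (back the file up first, and adjust the pattern if your line differs):

```bash
# Keep a backup, then comment out the IPv6 localhost entry
sudo cp /etc/hosts /etc/hosts.bak
sudo sed -i 's/^::1/#::1/' /etc/hosts
```

After this, localhost resolves to 127.0.0.1 only, which is what worked for the users in [1].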
Not sure if this is the best way to ask the question, but I’m looking for advice on how to get the local registry working / the best approach when using MicroK8s clustered. My setup works just fine on a single-node MicroK8s: images are pushed to and retrieved from the registry on localhost, happy system.
Once I add a second node, anything scheduled to that node dies on image pull. That seems logical enough: the YAML looks for the image on localhost, and as I understand it the registry is only on the master node, so it’s going to fail?
Ideally the solution would not involve “hardcoding” the host any more than necessary, to keep it portable…
@MarkS you are right that the registry add-on is not suited out of the box for a multi-node cluster. There are a couple of points you, as the cluster admin, need to address.
First, you will need storage shared across nodes. This would allow the registry pod to re-spawn if needed without losing the stored images.
Second, you will need to expose the registry at a fixed endpoint and configure the containerd instances to be aware of it. For exposing the registry you have the option of a NodePort service, so that the registry is available on a specific port on every node, or you could use a load balancer (MetalLB is available as an add-on with microk8s enable metallb). For configuring containerd you will need to update /var/snap/microk8s/args/containerd-template.toml with the registry endpoint and restart MicroK8s, as in the sketch below.
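A sketch of the containerd side, assuming the registry ends up exposed at 10.0.0.10:32000 (a hypothetical endpoint; substitute your own NodePort or load-balancer address):

```bash
# Append a mirror entry for the registry endpoint, then restart MicroK8s
sudo tee -a /var/snap/microk8s/args/containerd-template.toml >/dev/null <<'EOF'

[plugins.cri.registry.mirrors."10.0.0.10:32000"]
  endpoint = ["http://10.0.0.10:32000"]
EOF
microk8s.stop && microk8s.start
```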
We would like to hear what your approach would be, especially in the case of the storage solution. Packaging a multi-node-friendly storage solution as an add-on would be great.
You may also want to look at the Images and Registries section in the docs [1].