Migrating to a Kubernetes version with containerd instead of Docker as the container runtime

Dear community

I have some concerns about the use of containerd as the CRI runtime in Kubernetes clusters from version 1.20 onwards (in AKS from 1.19.x).

I am about to upgrade my AKS cluster to 1.19.x, and I understand the trade-offs described in the announcement Don’t Panic: Kubernetes and Docker | Kubernetes. But it is still unclear to me whether switching the nodes to the containerd runtime is something I have to do myself or not.

I have been reading more about this in Container runtimes | Kubernetes, but the following is not clear to me:
If I upgrade my cluster to a version that uses containerd as the runtime instead of Docker (via dockershim), can I expect all my nodes to have the containerd runtime installed automatically?
Or,
Will the upgrade just remove Docker as the container runtime, so that I have to install a suitable runtime on my nodes myself?

The thing is, my production cluster only pulls Docker images that are built and pushed outside of the cluster, so if I upgrade to a 1.19.x version that uses containerd as the runtime, I should not expect issues or need to make changes for containerd, since all images produced by docker build will work on all CRI implementations according to the Dockershim Deprecation FAQ | Kubernetes.

You generally don’t have to worry about this on managed platforms. According to the AKS docs, if you’re on 1.19, you’re already using containerd.

On a self-managed cluster (e.g. deployed with kubeadm), you would be responsible for installing containerd; it would not be installed automatically for you.
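
Either way, you can confirm what each node is actually running, because the kubelet reports its runtime in the node status (`kubectl get nodes -o wide` shows it too). Here is a minimal sketch using the official Kubernetes Python client, assuming you have a kubeconfig pointing at the cluster:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (~/.kube/config by default).
config.load_kube_config()

v1 = client.CoreV1Api()

# Each node reports its runtime (e.g. "containerd://1.4.x" or
# "docker://19.03.x") in status.nodeInfo.containerRuntimeVersion.
for node in v1.list_node().items:
    print(node.metadata.name, node.status.node_info.container_runtime_version)
```

Nodes still on dockershim report `docker://...`, while containerd nodes report `containerd://...`.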

Docker images are OCI-compatible images and should work with any OCI-compatible runtime (the vast majority of runtimes are; there are a few edge cases in the research community).

If you rely on mounting docker.sock into your pods for CI, that won’t work anymore with containerd. No one should do that anyway; it’s insecure. Things like kaniko exist for CI builds.
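
If you want to check whether anything in your cluster still depends on the Docker socket before upgrading, one option is to scan pod specs for hostPath mounts of /var/run/docker.sock. A rough sketch with the Python client (the exact-path check is a simplification; a pod mounting a parent directory like /var/run would slip through):

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

DOCKER_SOCK = "/var/run/docker.sock"

# Flag any pod that mounts the Docker socket from the host; these
# workloads will break once the node runtime is containerd.
for pod in v1.list_pod_for_all_namespaces().items:
    for vol in pod.spec.volumes or []:
        if vol.host_path and vol.host_path.path == DOCKER_SOCK:
            print(f"{pod.metadata.namespace}/{pod.metadata.name} mounts {DOCKER_SOCK}")
```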

One problem with containerd is that it does not pick up CA certificate changes on the host without being restarted. This looks like it might be fixed in 1.5, but I’m not sure; it hasn’t rolled out to Ubuntu yet. This was a problem that directly affected me, but I do weird stuff.

I’ve yet to run into any other quirks between Docker and containerd as my container runtime.