Upgrading MicroK8s

As detailed in the documentation for selecting a channel, patch release updates (e.g. 1.20.x to 1.20.x+1) happen automatically for the installed version of MicroK8s. This page covers intentionally upgrading to a new minor version (e.g. 1.20 to 1.21).

MicroK8s makes use of snap channels. Channels restrict automatic updates to new versions published in that channel, ensuring that minor version upgrades only occur when the user asks for them.

MicroK8s channels follow the upstream release versions of Kubernetes. This makes it easy to specify a version of Kubernetes to use when installing MicroK8s, and to restrict updates to non-breaking changes within the same minor version. To upgrade to a new version, it is necessary to refresh the snap to point to a different channel, but some additional steps may be required on a running cluster to minimise disruption (as detailed below).

ⓘ IMPORTANT NOTE: Workloads running in the cluster, unless specified otherwise in the documentation, will NOT be upgraded as part of a MicroK8s upgrade. This includes both enabled add-ons and the CNI. Currently, the most effective way to upgrade add-ons is to use microk8s disable <add-on> and then re-enable them. Please make sure to read the release notes for add-on-specific details before upgrading. The latest manifest of the default CNI (Calico) is always under /snap/microk8s/current/upgrade-scripts/000-switch-to-calico/resources/calico.yaml. Before applying it, please make sure you patch it with any customisations needed in your current setup.
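For example, to refresh an add-on after an upgrade, disable it and enable it again. This is a sketch only, using the dashboard add-on; substitute the add-ons your cluster actually uses, and remember that disabling an add-on removes its resources:

microk8s disable dashboard
microk8s enable dashboard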

Holding upgrades

By default, upgrades are delivered by the snapd daemon, which checks the store regularly to see if an updated version of the snap is available (for the currently installed channel). This is an important part of the security mechanism of delivering MicroK8s as a snap - making sure users get timely updates to resolve security issues and bugs.
However, you may not want a running node or cluster of MicroK8s to upgrade on the default schedule, for example to mitigate against unexpected downtime.
For this reason, snap upgrades can be held to specific times/dates:

To delay any refreshes for a specified period…

sudo snap refresh --hold=24h microk8s

…to defer all snap refreshes on the system until a specific date…

sudo snap set system refresh.hold=2023-02-18T15:22:04+00:00

…or simply to stop updates altogether…

sudo snap refresh --hold microk8s

More details on setting the refresh parameters, including further examples of using --hold, can be found in the Snap documentation.

If you hold or turn off snap refreshes, it is important to remember that the software you are running may no longer be the most up-to-date patched version available.
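When you are ready to resume automatic refreshes, an existing hold can be lifted (on recent snapd versions) with:

sudo snap refresh --unhold microk8s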

Cluster upgrades

Kubernetes project releases strive to be backwards compatible at the API level, so upgrading a cluster should be possible. However, as a cluster administrator you should be aware of the changes introduced in each release before attempting an upgrade. There are often important changes that affect the behaviour of the cluster and the workloads it can serve. Please consult the upstream release notes of the release you are targeting, and pay particular attention to the deprecation notices for that release.

Also be aware of the following constraints set by MicroK8s during an upgrade:

  • "skip-level" updates are NOT tested. Only upgrade through one minor release (e.g. 1.19 to 1.20) at a time.

  • Downgrading (e.g. 1.20 to 1.19) is not tested or supported.

  • Any configuration changes must be migrated incrementally across multiple versions to ensure the configuration is retained.

  • Customisation (e.g. changes to the arguments for Kubernetes services) WILL be carried over to the upgraded version, but please be aware that version changes may make these customisations incompatible with the updated cluster.

If an upgrade is not possible, you can instead re-install MicroK8s targeting the desired version.
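Before refreshing, it can be helpful to check which channel the node currently tracks and which channels are available:

sudo snap info microk8s

The tracking field of the output shows the current channel, and the channels list shows the versions you could refresh to.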

Upgrade a single node cluster

Refreshing the MicroK8s snap to the desired channel effectively upgrades the cluster. For instance, to upgrade a v1.20 cluster to v1.21 simply run snap refresh with the new channel name:

sudo snap refresh microk8s --channel=1.21/stable
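After the refresh completes, a quick sanity check confirms that the node is ready and reports the new version:

microk8s status --wait-ready
microk8s kubectl get nodes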

Upgrade a multi-node cluster

Kubernetes allows for some degree of version skew between components. This allows a multi-node cluster to be upgraded one node at a time.

For instance, if you have an existing cluster on v1.20 and you want to upgrade to v1.21, you should perform the following on each node:

microk8s kubectl drain <node> --ignore-daemonsets

At this point the node you drained should have no workload pods. Running the command:

microk8s kubectl get po -A -o wide

…should only show daemon set pods.

To upgrade the node, run:

sudo snap refresh microk8s --channel=1.21/stable

After the new version has been fetched and the snap is updated, the node should register with the new version:

microk8s kubectl get no

The last step is to resume pod scheduling on the upgraded node with:

microk8s kubectl uncordon <node>

These steps should be repeated on all the nodes in the cluster.
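To make the sequence concrete, here is a minimal sketch of the per-node loop. It assumes three hypothetical nodes (node-1, node-2, node-3) reachable over SSH; drain and uncordon can run from any machine with cluster access, while the snap refresh must run on the node being upgraded:

for node in node-1 node-2 node-3; do
  # Evict workload pods from the node (daemon set pods stay put)
  microk8s kubectl drain "$node" --ignore-daemonsets
  # Refresh the snap on the node itself
  ssh "$node" -- sudo snap refresh microk8s --channel=1.21/stable
  # Wait until the upgraded node reports ready, then resume scheduling
  ssh "$node" -- microk8s status --wait-ready
  microk8s kubectl uncordon "$node"
done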

A detailed example of upgrading a three-node cluster is available in our How to section.

Revert a failed upgrade attempt

If for any reason an upgrade does not result in a working cluster, you can revert the node to its state before the latest refresh with:

sudo snap revert microk8s

For diagnostic purposes, you may wish to run:

microk8s inspect

…before reverting the upgrade. This collects a tarball of information about the running cluster/node.
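If you want to see which revision a revert would fall back to, the locally retained revisions of the snap can be listed with:

sudo snap list --all microk8s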


Hi there, I'm slightly confused w.r.t. these docs about how to proceed to upgrade my 3 node cluster from 1.30/stable to 1.31/stable while keeping the cluster running. Or is that simply not a realistic option?

My confusion centers around the fact that it seems that it's recommended to disable addons and then re-enable them post upgrade. But DNS, vital for running the cluster, is a 'core addon'. Disabling the dns addon will as far as I know disable dns for the whole cluster, rendering it inoperable at least while doing the upgrade for all nodes.

The "detailed how to for upgrading a 3 node cluster" seems to completely ignore the whole addon situation, which according to this documentation would lead to a new k8s version with old and perhaps incompatible 'addons' which you might not be able to remove due to the addon scripts being updated alongside the snap.

As my cluster is for learning I just tried it. The disabling didn't really go smoothly, with some errors popping up (while I'm running microk8s "from the tutorial", on Ubuntu Server 24.04.1)

jelle@lenovo-03:~$ microk8s disable dns
Infer repository core for addon dns
Disabling DNS
Reconfiguring kubelet
Removing DNS manifest
deployment.apps "coredns" deleted
pod/coredns-5986966c54-849x7 condition met
serviceaccount "coredns" deleted
configmap "coredns" deleted
service "kube-dns" deleted
clusterrole.rbac.authorization.k8s.io "coredns" deleted
clusterrolebinding.rbac.authorization.k8s.io "coredns" deleted
[sudo] password for jelle: 
Removing argument --cluster-domain from nodes.
Removing argument --cluster-dns from nodes.
Restarting nodes.
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
Failed to list nodes (try 1): Command '['/snap/microk8s/7398/microk8s-kubectl.wrapper', 'get', 'node', '-o', 'json']' returned non-zero exit status 1.
DNS is disabled

After this I can indeed see that dns is disabled on all three nodes, and pods are no longer running properly (as one might expect), e.g. in argocd:

Failed to load target state: failed to generate manifest for source 1 of 1: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp: lookup argocd-repo-server: i/o timeout"