Installing MicroK8s offline or in an air-gapped environment

There are situations where it is necessary or desirable to run MicroK8s on a
machine not connected to the internet. This is possible, but there are a few
extra things to be aware of, and different strategies to choose from depending on the extent of separation from the network. This guide explains the necessary preparation and the steps required for each scenario.

Install MicroK8s in airgap environments

Prepare for deployment

The main things to consider when deploying MicroK8s in an airgap environment are:

1. Download the MicroK8s snap

From a machine that has access to the internet, download the core20 and microk8s snaps and assertion files.

NOTE: For MicroK8s versions 1.26 or earlier, the core18 snap is required instead.

sudo snap download microk8s --channel 1.27
sudo snap download core20
sudo mv microk8s_*.snap microk8s.snap
sudo mv microk8s_*.assert microk8s.assert
sudo mv core20_*.snap core20.snap
sudo mv core20_*.assert core20.assert

We will use core20.snap and microk8s.snap to install MicroK8s in the next steps. The core20.assert and microk8s.assert files are the snap assertions, required to verify the integrity of the snap packages.

2. Networking Requirements

Air-gap deployments typically come with a number of constraints and restrictions on the network connectivity of the machines. Below we discuss the requirements that the deployment needs to fulfil.

Verify networking access between machines for the Kubernetes services

Make sure that all cluster nodes are reachable from each other. Refer to Services and ports used for a list of all network ports used by MicroK8s.

Ensure machines have a default gateway

Kubernetes services use the default interface of the machine for discovery reasons:

  • kube-apiserver (part of kubelite) uses the default interface to advertise its address to other nodes in the cluster. Starting kube-apiserver without a default route will fail.
  • kubelet (part of kubelite) uses the default interface to pick the node InternalIP address.
  • A default gateway greatly simplifies the process of setting up the Calico CNI.

In case your airgap environment does not have a default gateway, you can add a dummy default route on interface eth0 using the following command:

ip route add default dev eth0

NOTE: The dummy gateway will only be used by the Kubernetes services to know which interface to use; actual connectivity to the internet is not required.

NOTE: Make sure that the dummy gateway rule survives a node reboot.
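One way to make the dummy route persist across reboots (a sketch only, assuming systemd and the eth0 interface from the command above; the unit name is hypothetical) is a oneshot service:

```ini
# /etc/systemd/system/microk8s-dummy-route.service (hypothetical unit name)
[Unit]
Description=Dummy default route for MicroK8s
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
# "replace" rather than "add" keeps the command idempotent
ExecStart=/bin/sh -c 'ip route replace default dev eth0'
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable microk8s-dummy-route.service. Alternatively, the same route can be configured through netplan or your network manager of choice.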

(Optional) Ensure proxy access

This is only required if an HTTP proxy (e.g. squid) is used to allow limited access to upstream image registries (e.g. docker.io, quay.io, rocks.canonical.com); see the Access to upstream registries via an HTTP proxy section below.

Ensure that all nodes can use the proxy to access the registry. For example, if using http://squid.internal:3128 to access docker.io, an easy way to test connectivity is:

export https_proxy=http://squid.internal:3128
curl -v https://registry-1.docker.io

3. Images

All workloads in a Kubernetes cluster run from OCI images. Kubernetes needs to be able to fetch these images and load them into the container runtime, otherwise the cluster will be unable to run any workload. For a MicroK8s deployment, you will need to fetch the images used by the MicroK8s core (Calico, CoreDNS, etc.) as well as any images needed to run your workloads.

For airgap deployments, there are three main options, ordered by ease of use.

NOTE: For a list of all images used by MicroK8s, see images.txt. This is the list of core images required to bring up MicroK8s (e.g. CoreDNS, Calico CNI, etc). Make sure that you also include any images for the workloads that you intend to run on the cluster.

NOTE: Depending on the use case, more than one of the methods below may be required.

Option A. Access to upstream registries via an HTTP proxy

In many cases, the nodes of the airgap deployment may not have direct access to upstream registries, but can reach them through an HTTP proxy. In that case, configure the container runtime to use the proxy, as described in the Configure HTTP proxy for registries section below.

Option B. Use a private registry mirror

In case regulations and/or network constraints do not allow the cluster nodes to access any upstream image registry, it is typical to deploy a private registry mirror. This is an image registry service that contains all the required OCI Images (e.g. registry, Harbor or any other OCI registry) and is reachable from all cluster nodes.

This requires three steps:

  1. Deploy and secure the registry service. This is out of scope for this document, please follow the instructions for the registry that you want to deploy.
  2. Load all images from the upstream source and push them to the registry mirror.
  3. Configure the MicroK8s container runtime (containerd) to load images from the private registry mirror instead of the upstream source. This will be described in the Configure registry mirrors section.

In order to load images into the private registry, you need a machine with access to both the upstream registry (e.g. docker.io) and the internal one. Loading the images is possible with docker or ctr.

For the examples below we assume that a private registry mirror is running at 10.100.100.100:5000.

Load images with ctr

On the machine with access to both registries, first install ctr. For Ubuntu hosts, this can be done with:

sudo apt-get update
sudo apt-get install containerd

Then, pull an image:

NOTE: For DockerHub images, prefix with docker.io/library.

export IMAGE=library/nginx:latest
export FROM_REPOSITORY=docker.io
export TO_REPOSITORY=10.100.100.100:5000

# pull the image and tag
ctr image pull "$FROM_REPOSITORY/$IMAGE"
ctr image convert "$FROM_REPOSITORY/$IMAGE" "$TO_REPOSITORY/$IMAGE"

Finally, push the image (see ctr image push --help for a complete list of supported arguments):

# push image
ctr image push "$TO_REPOSITORY/$IMAGE"
# OR, if using HTTP and basic auth
ctr image push "$TO_REPOSITORY/$IMAGE" --plain-http -u "$USER:$PASS"
# OR, if using HTTPS and a custom CA (assuming CA certificate is at `/path/to/ca.crt`)
ctr image push "$TO_REPOSITORY/$IMAGE" --ca /path/to/ca.crt

Make sure to repeat the steps above (pull, convert, push) for all the images that you need.
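The pull, convert and push steps can be scripted over a whole image list. The sketch below assumes an images.txt file with one fully qualified upstream reference per line (e.g. docker.io/library/nginx:latest); the mirror_ref helper and the file name are illustrative assumptions, not part of MicroK8s:

```shell
TO_REPOSITORY=10.100.100.100:5000

# mirror_ref: rewrite an upstream reference so it points at the private
# mirror, e.g. docker.io/library/nginx:latest becomes
# 10.100.100.100:5000/library/nginx:latest
mirror_ref() {
  echo "$TO_REPOSITORY/${1#*/}"
}

# pull, retag and push every image listed in images.txt
if [ -f images.txt ]; then
  while read -r image; do
    ctr image pull "$image"
    ctr image convert "$image" "$(mirror_ref "$image")"
    ctr image push "$(mirror_ref "$image")"
  done < images.txt
fi
```

Add the --plain-http, -u or --ca flags to ctr image push as needed for your registry, as shown above.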

Load images with docker

On the machine with access to both registries, first install docker. For Ubuntu hosts, this can be done with:

sudo apt-get update
sudo apt-get install docker.io

If needed, login to the private registry:

export TO_REPOSITORY=10.100.100.100:5000
sudo docker login "$TO_REPOSITORY"

Then pull, tag and push the image:

export IMAGE=library/nginx:latest
export FROM_REPOSITORY=docker.io
export TO_REPOSITORY=10.100.100.100:5000

sudo docker pull "$FROM_REPOSITORY/$IMAGE"
sudo docker tag "$FROM_REPOSITORY/$IMAGE" "$TO_REPOSITORY/$IMAGE"
sudo docker push "$TO_REPOSITORY/$IMAGE"

Repeat the pull, tag and push steps for all required images.

Option C. Side-load images

Image side-loading is the process of loading all required OCI images directly into the container runtime, so that they do not have to be fetched at runtime. If the image side-loading option is chosen, you then need a bundle of all the OCI images that will be used by the cluster.

See the Image side-loading page for more information on how to create a bundle of OCI images. As an example, to create a bundle of all OCI images currently in use by a MicroK8s instance and store it into images.tar, you can use:

microk8s images export-local > images.tar
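Before shipping the bundle into the airgap environment, it can be useful to sanity-check its contents. The bundle is a plain tar archive of OCI blobs and manifests; the list_bundle helper below is a hypothetical convenience, not a MicroK8s command:

```shell
# list_bundle: print the contents of an OCI image bundle (a plain tar archive)
list_bundle() {
  tar -tf "$1"
}

# inspect the first few entries of the bundle created above, if present
if [ -f images.tar ]; then
  list_bundle images.tar | head -n 20
fi
```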

Deploy MicroK8s cluster

1. Install MicroK8s

Copy the microk8s.snap, microk8s.assert, core20.snap and core20.assert files to the target node, then install with:

sudo snap ack core20.assert && sudo snap install ./core20.snap
sudo snap ack microk8s.assert && sudo snap install ./microk8s.snap --classic

Repeat the above for all nodes of the cluster.

2. Form MicroK8s cluster

NOTE: This step is not required for single-node deployments.

On one of the nodes, run the following command:

microk8s add-node --token-ttl 3600

This will print the command that needs to be used by all other nodes to join the cluster, for example:

microk8s join 10.0.0.10:25000/asd6fa8sd67857a587dsa65f87a/fg6sdf87g65

After a while, you should be able to see all the cluster nodes in the output of microk8s kubectl get node. The nodes will most likely be in the NotReady state, since we still need to ensure that the container runtime can fetch images.

3. Configure container runtime

Option A. Configure HTTP proxy for registries

Edit /var/snap/microk8s/current/args/containerd-env and set HTTP_PROXY, HTTPS_PROXY and NO_PROXY. For example, if your proxy is at http://squid.internal:3128, append the following lines:

HTTP_PROXY=http://squid.internal:3128
HTTPS_PROXY=http://squid.internal:3128
NO_PROXY=10.0.0.0/8,192.168.0.0/16,127.0.0.1,172.16.0.0/12

Then restart MicroK8s with:

sudo snap restart microk8s

NOTE: For more information, see Installing behind a proxy.

Option B. Configure registry mirrors

This requires that you have already set up a registry mirror, as explained in Use a private registry mirror.

Assuming the registry mirror is at 10.100.100.100:5000, edit /var/snap/microk8s/current/args/certs.d/docker.io/hosts.toml and make sure it looks like this:

HTTP registry

# /var/snap/microk8s/current/args/certs.d/docker.io/hosts.toml
[host."http://10.100.100.100:5000"]
capabilities = ["pull", "resolve"]

HTTPS registry

You will have to specify the registry CA certificate as well. Copy the certificate to /var/snap/microk8s/current/args/certs.d/docker.io/ca.crt, then add:

# /var/snap/microk8s/current/args/certs.d/docker.io/hosts.toml
[host."https://10.100.100.100:5000"]
capabilities = ["pull", "resolve"]
ca = "/var/snap/microk8s/current/args/certs.d/docker.io/ca.crt"
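containerd resolves mirror configuration per upstream registry, so each registry that your workloads pull from needs its own directory under certs.d. For example, assuming the same mirror also hosts your quay.io images (an assumption about your mirror, not a MicroK8s default), an analogous file would be:

```toml
# /var/snap/microk8s/current/args/certs.d/quay.io/hosts.toml
[host."http://10.100.100.100:5000"]
capabilities = ["pull", "resolve"]
```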

Option C. Side-load images

For MicroK8s 1.25 or newer, copy the images.tar file to one of the cluster nodes and run the following command:

microk8s images import < images.tar

In older MicroK8s versions, copy images.tar to all nodes and run the following on each node:

microk8s ctr image import - < images.tar

NOTE: See the image side-loading page for more details.


Might be worth mentioning the specific domains that would need access through a proxy, which we found to be:

Also, MicroK8s defaults to using Google’s DNS servers, so if you don’t allow access to those you should run microk8s enable dns:${ip-of-dns-server}.


Sadly, I can't get this to work. When attempting to set up one of the cluster installations, running the following command fails: sudo snap ack microk8s.assert && sudo snap install ./microk8s.snap --classic. Whereas the install instructions for core20 work just fine.

Upon executing the microk8s.snap --classic install, it declares that it is "Ensuring that the prerequisites for "microk8s" are available". Obviously this fails because I am on an air-gapped network.

Any Ideas?

Sorry for not responding sooner. Did you have any update on this? Otherwise I will schedule some time to check that the process is consistent.

@evilnick

On the networking requirements, I also needed to make some tweaks to the Calico daemonset in order for the pods to come up okay.

I would see this error log

# microk8s.kubectl logs -n kube-system calico-node-drp64
Defaulted container "calico-node" out of: calico-node, upgrade-ipam (init), install-cni (init)
2024-04-30 20:10:19.343 [INFO][10] startup/startup.go 427: Early log level set to info
2024-04-30 20:10:19.343 [INFO][10] startup/utils.go 126: Using NODENAME environment for node name management-01
2024-04-30 20:10:19.343 [INFO][10] startup/utils.go 138: Determined node name: management-01
2024-04-30 20:10:19.343 [INFO][10] startup/startup.go 94: Starting node management-01 with version v3.25.1
2024-04-30 20:10:19.345 [INFO][10] startup/startup.go 432: Checking datastore connection
2024-04-30 20:10:21.905 [INFO][10] startup/startup.go 456: Datastore connection verified
2024-04-30 20:10:21.905 [INFO][10] startup/startup.go 104: Datastore is ready
2024-04-30 20:10:26.932 [INFO][10] startup/customresource.go 102: Error getting resource Key=GlobalFelixConfig(name=CalicoVersion) Name="calicoversion" Resource="GlobalFelixConfigs" error=the server could not find the requested resource (get GlobalFelixConfigs.crd.projectcalico.org calicoversion)
2024-04-30 20:10:26.946 [INFO][10] startup/startup.go 485: Initialize BGP data
2024-04-30 20:10:26.947 [WARNING][10] startup/autodetection_methods.go 99: Unable to auto-detect an IPv4 address: no valid IPv4 addresses found on the host interfaces
2024-04-30 20:10:26.947 [WARNING][10] startup/startup.go 507: Couldn't autodetect an IPv4 address. If auto-detecting, choose a different autodetection method. Otherwise provide an explicit address.
2024-04-30 20:10:26.947 [INFO][10] startup/startup.go 391: Clearing out-of-date IPv4 address from this node IP=""
2024-04-30 20:10:26.955 [WARNING][10] startup/utils.go 48: Terminating
Calico node failed to start

And I had to update it like this in order for the pod to get into the Running state:

# microk8s.kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=kubernetes-internal-ip

Let’s please update the link for docs/image-sideloading to /docs/sideload.


Well, it shouldn't be /docs/anything, it should link to here: /t/image-side-loading/20437