MicroK8s in LXD

MicroK8s can also be installed inside an LXD container. This is a great way, for example, to test out clustered MicroK8s without the need for multiple physical hosts.

Installing LXD

You can install LXD as a snap:

sudo snap install lxd
sudo lxd init
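`lxd init` walks you through storage and network configuration interactively; the storage backend you pick (ZFS vs a plain directory on ext4) determines which MicroK8s profile you'll need later. For unattended setups, a minimal sketch using the `--auto` flag, which accepts default answers (it assumes the lxd snap is installed and prints a hint otherwise):

```shell
# Non-interactive LXD initialisation with default answers.
# Guarded so it degrades gracefully when lxd is not installed.
if command -v lxd >/dev/null 2>&1; then
  sudo lxd init --auto
else
  echo "lxd not found; run 'sudo snap install lxd' first"
fi
```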

Add the MicroK8s LXD profile

MicroK8s requires some specific settings to work within LXD (these are explained in more detail below). These can be applied using a custom profile. The first step is to create a new profile to use:

lxc profile create microk8s

Once created, we’ll need to add the rules. There are two versions of the profile, one for ZFS and one for ext4; pick the one that matches your LXD storage backend. A section at the end of this document describes what these rules do.

Download the profile:

# for ZFS
wget https://raw.githubusercontent.com/ubuntu/microk8s/master/tests/lxc/microk8s-zfs.profile -O microk8s.profile

# for ext4
wget https://raw.githubusercontent.com/ubuntu/microk8s/master/tests/lxc/microk8s.profile -O microk8s.profile

We can now feed that file into the LXD profile.

lxc profile edit microk8s < microk8s.profile
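Optionally, you can inspect the profile to confirm the rules were applied. A minimal check (guarded so it prints a hint rather than failing when LXD is not installed or the profile doesn't exist yet):

```shell
# Show the 'microk8s' profile if the lxc client is available.
if command -v lxc >/dev/null 2>&1; then
  lxc profile show microk8s
else
  echo "lxc not found; install LXD first"
fi
```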

And then clean up.

rm microk8s.profile

Start an LXD container for MicroK8s

We can now create the container that MicroK8s will run in.

lxc launch -p default -p microk8s ubuntu:20.04 microk8s

Note that this command applies the ‘default’ profile first, to pick up your existing system settings (networking, storage, etc.), and then layers the ‘microk8s’ profile on top. The order is important: later profiles override earlier ones.
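Before proceeding, it's worth checking that the container came up. A quick check (assuming the container was launched as above; the STATE column should read RUNNING):

```shell
# List the container to confirm it is running.
if command -v lxc >/dev/null 2>&1; then
  lxc list microk8s
else
  echo "lxc not found; install LXD first"
fi
```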

Install MicroK8s in an LXD container

Access the container’s shell:

lxc exec microk8s -- bash

Then, once inside the shell:

sudo snap install microk8s --classic
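Installation can take a minute or two. To verify from the host that MicroK8s came up, you can run `microk8s status` inside the container; the `--wait-ready` flag blocks until the cluster reports ready. A sketch, assuming the container is named `microk8s` as above:

```shell
# Wait for MicroK8s inside the container to report ready.
if command -v lxc >/dev/null 2>&1; then
  lxc exec microk8s -- microk8s status --wait-ready
else
  echo "lxc not found; install LXD first"
fi
```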

Explanation of the custom rules

  • boot.autostart: “true”: Always start the container when LXD starts, so that it comes back up when the host boots.

  • linux.kernel_modules: Comma-separated list of kernel modules to load before starting the container.

  • lxc.apparmor.profile=unconfined: Disable AppArmor, allowing the container to talk to a number of host subsystems (e.g. /sys) (see [1]). By default AppArmor blocks nested hosting of containers, but Kubernetes needs to host Docker containers. Those Docker containers are confined based on their own profiles, so we rely on confining them rather than the host. If you can account for the needs of the Docker containers, you could tighten the AppArmor profile instead of disabling it completely, as suggested in [1].

  • lxc.cap.drop=: Do not drop any capabilities [2]. For justification see above.

  • lxc.mount.auto=proc:rw sys:rw: Mount proc and sys read-write [3]. For privileged containers, LXC over-mounts part of /proc as read-only to avoid damage to the host; without this setting Kubernetes will complain with messages like “Failed to start ContainerManager open /proc/sys/kernel/panic: permission denied”.

  • security.nesting: “true”: Support running LXD (nested) inside the container.

  • security.privileged: “true”: Run the container in privileged mode, without user-namespace isolation [4, 5]. This is needed because hosted Docker containers may need access to, for example, storage devices (see comment in [6]).

  • devices: disable /sys/module/nf_conntrack/parameters/hashsize and /sys/module/apparmor/parameters/enabled: Hide these two host-owned files from the LXD container, so that the container can neither disable AppArmor nor resize the host’s connection-tracking table [7].
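Putting the rules above together, the profile looks roughly like the following. This is an illustrative sketch only: the downloaded microk8s.profile is the authoritative version, and the exact kernel module list and device names may differ from what is shown here.

```yaml
name: microk8s
config:
  boot.autostart: "true"
  # Illustrative module list; see the downloaded profile for the real one.
  linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay,br_netfilter
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
devices:
  # Mount /dev/null over two host files to hide them from the container.
  aadisable:
    path: /sys/module/nf_conntrack/parameters/hashsize
    source: /dev/null
    type: disk
  aadisable1:
    path: /sys/module/apparmor/parameters/enabled
    source: /dev/null
    type: disk
```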