MicroK8s in LXD

MicroK8s can also be installed inside an LXD container. This is a great way, for example, to test out clustered MicroK8s without the need for multiple physical hosts.

Installing LXD

You can install LXD via snaps:

sudo snap install lxd
sudo lxd init
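
lxd init will ask a series of questions about storage and networking; the defaults are fine for this guide. If you'd rather skip the questions and accept the defaults non-interactively, you can run instead:

sudo lxd init --auto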

Add the MicroK8s LXD profile

MicroK8s requires some specific settings to work within LXD (these are explained in more detail below). These can be applied using a custom profile. The first step is to create a new profile to use:

lxc profile create microk8s

Once created, we’ll need to add the rules. There are two versions of the rules: one for ZFS-backed storage pools and one for ext4; pick the one matching your LXD storage pool. There is a section at the end of this document describing what these rules do.

Download the profile:

# for ZFS
wget https://raw.githubusercontent.com/ubuntu/microk8s/master/tests/lxc/microk8s-zfs.profile -O microk8s.profile

# for ext4
wget https://raw.githubusercontent.com/ubuntu/microk8s/master/tests/lxc/microk8s.profile -O microk8s.profile

We can now pipe that file into the LXD profile.

cat microk8s.profile | lxc profile edit microk8s
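
To confirm the rules were applied, you can display the profile back:

lxc profile show microk8s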

And then clean up.

rm microk8s.profile

Start an LXD container for MicroK8s

We can now create the container that MicroK8s will run in.

lxc launch -p default -p microk8s ubuntu:20.04 microk8s

Note that this command applies the ‘default’ profile first, so that existing system settings (networking, storage, etc.) are picked up, and then layers the ‘microk8s’ profile on top. The order is important.
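
If you want to check the result of the profile stacking, you can view the container’s configuration with both profiles merged:

lxc config show microk8s --expanded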

Install MicroK8s in an LXD container

First, we’ll need to install MicroK8s within the container.

lxc exec microk8s -- sudo snap install microk8s --classic
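
The snap can take a little while to start all of the Kubernetes services; you can block until everything is up with:

lxc exec microk8s -- sudo microk8s status --wait-ready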

Load AppArmor profiles on boot

When the LXD container boots, it needs to load the AppArmor profiles required by MicroK8s, or else you may see an error such as:

cannot change profile for the next exec call: No such file or directory

To automate the profile loading, first enter the LXD container with:

lxc shell microk8s

Then create an rc.local file to perform the profile loading:

cat > /etc/rc.local <<EOF
#!/bin/bash

apparmor_parser --replace /var/lib/snapd/apparmor/profiles/snap.microk8s.*
exit 0
EOF

Make the rc.local executable:

chmod +x /etc/rc.local
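
The rc-local service will run this file on the next boot. Since we’re still inside the container, we can also load the profiles immediately, without a reboot, by restarting the service by hand:

systemctl restart rc-local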

Accessing MicroK8s services within LXD

Assuming you left the default bridged networking in place when you initially set up LXD, minimal effort is required to access MicroK8s services inside the LXD container.

Simply note the eth0 interface IP address from

lxc list microk8s

and use this to access services running inside the container.
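
For example, to show just the container name and its IPv4 address:

lxc list microk8s -c n4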

Exposing services to the node

You’ll need to expose the deployment or service to the container itself before you can access it via the LXD container’s IP address. This can be done using kubectl expose. This example will expose the deployment’s port 80 to a port assigned by Kubernetes.

Microbot

In this example, we will use Microbot as it provides a simple HTTP endpoint to expose. These steps can be applied to any other deployment.

First, let’s deploy Microbot (please note this image only works on x86_64).

lxc exec microk8s -- sudo microk8s kubectl create deployment microbot --image=dontrebootme/microbot:v1

Then check that the deployment has come up.

lxc exec microk8s -- sudo microk8s kubectl get all

NAME                            READY   STATUS    RESTARTS   AGE
pod/microbot-6d97548556-hchb7   1/1     Running   0          21m

NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes         ClusterIP   10.152.183.1     <none>        443/TCP        21m

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/microbot   1/1     1            1           21m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/microbot-6d97548556   1         1         1       21m

As we can see, Microbot is running. Let’s expose it to the LXD container.

lxc exec microk8s -- sudo microk8s kubectl expose deployment microbot --type=NodePort --port=80 --name=microbot-service

We can now get the assigned port. In this example, it’s 32750.

lxc exec microk8s -- sudo microk8s kubectl get service microbot-service

NAME               TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
microbot-service   NodePort   10.152.183.188   <none>        80:32750/TCP   27m

With this, we can access Microbot from our host using the container’s address that we noted earlier.

curl 10.245.108.37:32750

Dashboard

The dashboard addon has a built-in helper. Start the Kubernetes dashboard with:

lxc exec microk8s -- microk8s dashboard-proxy

and replace 127.0.0.1 in the URL it prints with the container’s IP address we noted earlier.
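
For example, if dashboard-proxy reports the dashboard at https://127.0.0.1:10443 (check the port in its actual output), you would browse to something like:

https://10.245.108.37:10443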

Explanation of the custom rules

  • boot.autostart: "true": Always start the container when LXD starts. This is needed so the container comes up when the host boots.

  • linux.kernel_modules: A comma-separated list of kernel modules to load before starting the container (a quick way to verify them is shown after this list).

  • lxc.apparmor.profile=unconfined: Disable AppArmor. This allows the container to talk to a number of host subsystems (e.g. /sys) (see [1]). By default, AppArmor blocks nested hosting of containers, but Kubernetes needs to host Docker containers. Docker containers need to be confined based on their own profiles, so we rely on confining them rather than the host container. If you can account for the needs of the Docker containers, you could tighten the AppArmor profile instead of disabling it completely, as suggested in [1].

  • lxc.cap.drop=: Do not drop any capabilities [2]. For justification, see above.

  • lxc.mount.auto=proc:rw sys:rw: Mount /proc and /sys read-write [3]. For privileged containers, LXC over-mounts part of /proc as read-only to avoid damage to the host; without this setting, Kubernetes complains with messages like “Failed to start ContainerManager open /proc/sys/kernel/panic: permission denied”.

  • security.nesting: "true": Support running LXD (nested) inside the container.

  • security.privileged: "true": Run the container in privileged mode, without kernel namespaces [4, 5]. This is needed because hosted Docker containers may need access to, for example, storage devices (see the comment in [6]).

  • devices: disable /sys/module/nf_conntrack/parameters/hashsize and /sys/module/apparmor/parameters/enabled: Hide two host-owned files from the LXD container, so that containers cannot disable AppArmor or set the size of the connection tracking table [7].
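
Because the container shares the host’s kernel, you can sanity-check that the modules listed in linux.kernel_modules were actually loaded by running lsmod from inside the container, for example:

lxc exec microk8s -- lsmod | grep ip_vs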

Citations


Thanks, needed a few tweaks to get it working:

cat > lxd-init.yaml <<EOF
config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  name: lxdbr0
  type: ""
  project: default
storage_pools:
- config:
    size: 12GB
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
projects: []
cluster: null
EOF
cat lxd-init.yaml | sudo lxd init --preseed
rm lxd-init.yaml
lxc profile create microk8s
cat > microk8s.profile <<EOF
config:
  boot.autostart: "true"
  linux.kernel_modules: ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh,ip_tables,ip6_tables,netlink_diag,nf_nat,overlay,br_netfilter,nf_conntrack_ipv4
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw cgroup:rw
    lxc.cgroup.devices.allow=a
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
  security.syscalls.intercept.bpf: "true"
  security.syscalls.intercept.bpf.devices: "true"
  security.syscalls.intercept.mknod: "true"
  security.syscalls.intercept.setxattr: "true"
description: ""
devices:
  aadisable:
    path: /sys/module/nf_conntrack/parameters/hashsize
    source: /sys/module/nf_conntrack/parameters/hashsize
    type: disk
  aadisable1:
    path: /sys/module/apparmor/parameters/enabled
    source: /dev/null
    type: disk
  aadisable2:
    path: /dev/zfs
    source: /dev/zfs
    type: disk
  aadisable3:
    path: /dev/kmsg
    source: /dev/kmsg
    type: disk
  aadisable4:
    path: /sys/fs/bpf
    source: /sys/fs/bpf
    type: disk
name: microk8s
used_by: []
EOF
cat microk8s.profile | lxc profile edit microk8s
rm microk8s.profile
lxc launch -p default -p microk8s ubuntu:20.04 microk8s
sleep 10
lxc exec microk8s -- sudo snap install microk8s --classic
lxc shell microk8s
cat > /etc/rc.local <<EOF
#!/bin/bash

apparmor_parser --replace /var/lib/snapd/apparmor/profiles/snap.microk8s.*
exit 0
EOF
chmod +x /etc/rc.local
systemctl restart rc-local
echo 'L /dev/kmsg - - - - /dev/null' > /etc/tmpfiles.d/kmsg.conf
exit
lxc exec microk8s -- sh -c "echo '--conntrack-max-per-core=0' >> /var/snap/microk8s/current/args/kube-proxy"
lxc restart microk8s
lxc exec microk8s -- sudo swapoff -a
lxc exec microk8s -- sudo microk8s.kubectl create deployment microbot --image=dontrebootme/microbot:v1
lxc exec microk8s -- sudo microk8s.kubectl get all

Up to now I have been happy with kind.

Why do you think MicroK8s is better?

Does anyone have a working profile to run MicroK8s within LXD?

The profiles mentioned here don’t work:

security.nesting = true  --> no good
linux.kernel_modules ----> no good


Thanks to this article I was able to implement a 6-node MicroK8s cluster from a single VM: I created the VM with Multipass and then provisioned 6 lightweight VMs (containers) inside it. The support received on the MicroK8s GitHub was also very helpful in solving a networking-related problem.

Being able to reproduce a cloud-like environment within a single VM running on Windows 10 Pro is really convenient. Multipass is a game changer for those using Linux on Windows 10.

Thank you very much for these great initiatives: MicroK8s, LXD, and Multipass.


@Erik_Lonroth Just off the top of my head, aren’t these ‘profile’ settings, not project settings?

I have spent a lot of time on this issue, can anybody help please!

Then I got:

error: unable to contact snap store

Some information:
ryze@ubuntu:~$ lxc exec microk8s -- sudo curl --verbose api.snapcraft.io

* Trying 185.125.188.58:80
* TCP_NODELAY set
* connect to 185.125.188.58 port 80 failed: Connection timed out
* Trying 185.125.188.54:80
* TCP_NODELAY set
* After 84870ms connect time, move on!
* connect to 185.125.188.54 port 80 failed: Connection timed out
* Trying 185.125.188.59:80
* TCP_NODELAY set
* After 42434ms connect time, move on!
* connect to 185.125.188.59 port 80 failed: Connection timed out
* Trying 185.125.188.55:80
* TCP_NODELAY set
* After 21215ms connect time, move on!
* connect to 185.125.188.55 port 80 failed: Connection timed out
* Failed to connect to api.snapcraft.io port 80: Connection timed out
* Closing connection 0
curl: (28) Failed to connect to api.snapcraft.io port 80: Connection timed out

ryze@ubuntu:~$ lxc network list
+-----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| NAME            | TYPE     | MANAGED | IPV4           | IPV6                      | DESCRIPTION | USED BY | STATE   |
+-----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| br-2420f6858d6a | bridge   | NO      |                |                           |             | 0       |         |
+-----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| docker0         | bridge   | NO      |                |                           |             | 0       |         |
+-----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| enp0s3          | physical | NO      |                |                           |             | 0       |         |
+-----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| lxdbr0          | bridge   | YES     | 10.78.125.1/24 | fd42:d07d:153f:5503::1/64 |             | 2       | CREATED |
+-----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+

ryze@ubuntu:~$ lxc network show lxdbr0
config:
  ipv4.address: 10.78.125.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:d07d:153f:5503::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/microk8s
- /1.0/profiles/default
managed: true
status: Created
locations:
- none

ryze@ubuntu:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:19:5b:97 brd ff:ff:ff:ff:ff:ff
    inet 172.17.51.124/24 metric 100 brd 172.17.51.255 scope global dynamic enp0s3
       valid_lft 590949sec preferred_lft 590949sec
    inet6 fe80::a00:27ff:fe19:5b97/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:dd:b0:a5:36 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: br-2420f6858d6a: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:ff:d1:ca:19 brd ff:ff:ff:ff:ff:ff
    inet 192.168.49.1/24 brd 192.168.49.255 scope global br-2420f6858d6a
       valid_lft forever preferred_lft forever
13: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:24:cc:d2 brd ff:ff:ff:ff:ff:ff
    inet 10.78.125.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:d07d:153f:5503::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe24:ccd2/64 scope link
       valid_lft forever preferred_lft forever
15: veth0152dd73@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether 5a:06:53:93:64:1e brd ff:ff:ff:ff:ff:ff link-netnsid 0

Hi @liu_ryze. Did you resolve this issue? Can you confirm that you can contact any other URL from within the container?