MicroK8s in LXD

MicroK8s can also be installed inside an LXD VM. This is a great way, for example, to test out clustered MicroK8s without the need for multiple physical hosts.

Why an LXD virtual machine and not a container? In order to run certain Kubernetes services, the LXD container would need to be a privileged container. While this is possible, it is not the recommended pattern as it allows the root user in the container to be the root user on the host. Also, newer versions of Ubuntu and systemd require operations (such as mounting to the /proc directory) that cannot be safely handled with privileged containers. By using virtual machines, we ensure that the Kubernetes environment remains well isolated.

Installing LXD

You can install LXD via snaps:

sudo snap install lxd
sudo lxd init

Start an LXD VM for MicroK8s

We can now create the VM that MicroK8s will run in.

lxc launch ubuntu:22.04 k8s-vm --vm -c limits.cpu=2 -c limits.memory=4GB

Install MicroK8s in an LXD VM

First, we’ll need to install MicroK8s within the VM.

lxc exec k8s-vm -- sudo snap install microk8s --classic

Accessing MicroK8s Services Within LXD

Assuming you kept the default bridged networking when you initially set up LXD, minimal effort is required to access MicroK8s services inside the LXD VM.

Simply note the eth0 interface IP address from

lxc list k8s-vm

and use this to access services running inside the VM.
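If you would rather capture the address in a script than read it off the table, `lxc list` can print just the IPv4 column (`-c 4`) in CSV format. The address below is a sample for illustration; your VM will have its own.

```shell
# Sample output of: lxc list k8s-vm -c 4 --format csv
# The CSV value looks like "10.245.108.37 (eth0)" -- an address
# followed by the interface label. The address here is illustrative.
sample="10.245.108.37 (eth0)"
VM_IP=${sample%% *}   # strip the trailing "(eth0)" interface label
echo "$VM_IP"         # → 10.245.108.37
```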

Exposing Services to the Node

You’ll need to expose the deployment or service to the VM itself before you can access it via the LXD VM’s IP address. This can be done using kubectl expose. This example will expose the deployment’s port 80 to a port assigned by Kubernetes.

Microbot

In this example, we will use Microbot as it provides a simple HTTP endpoint to expose. These steps can be applied to any other deployment.

First, let’s deploy Microbot (please note this image only works on x86_64).

lxc exec k8s-vm -- sudo microk8s kubectl create deployment microbot --image=dontrebootme/microbot:v1

Then check that the deployment has come up.

lxc exec k8s-vm -- sudo microk8s kubectl get all

NAME                            READY   STATUS    RESTARTS   AGE
pod/microbot-6d97548556-hchb7   1/1     Running   0          21m

NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes         ClusterIP   10.152.183.1     <none>        443/TCP        21m

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/microbot   1/1     1            1           21m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/microbot-6d97548556   1         1         1       21m

As we can see, Microbot is running. Let’s expose it to the LXD VM.

lxc exec k8s-vm -- sudo microk8s kubectl expose deployment microbot --type=NodePort --port=80 --name=microbot-service

We can now get the assigned port. In this example, it’s 32750.

lxc exec k8s-vm -- sudo microk8s kubectl get service microbot-service

NAME               TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
microbot-service   NodePort   10.152.183.188   <none>        80:32750/TCP   27m
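If you want the assigned port programmatically rather than by reading the table, the PORT(S) column has the form `<port>:<nodePort>/<proto>`. The sketch below parses a sample value copied from the output above; the jsonpath query in the comment is the standard kubectl way to ask for the NodePort directly.

```shell
# Sample PORT(S) value from the service listing above; the NodePort
# is the number after the colon.
ports="80:32750/TCP"
nodeport=${ports#*:}      # drop "80:"  -> "32750/TCP"
nodeport=${nodeport%%/*}  # drop "/TCP" -> "32750"
echo "$nodeport"          # → 32750
# Equivalently, inside the VM, kubectl can report it directly:
#   microk8s kubectl get service microbot-service \
#     -o jsonpath='{.spec.ports[0].nodePort}'
```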

With this, we can access Microbot from our host by using the VM’s address that we noted earlier.

curl 10.245.108.37:32750

Dashboard

The dashboard addon has a built-in helper. Start the Kubernetes dashboard

lxc exec k8s-vm -- microk8s dashboard-proxy

and replace 127.0.0.1 with the VM’s IP address we noted earlier.
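For example, `dashboard-proxy` typically reports a URL on port 10443; substituting the VM's address gives the URL reachable from the host. The values below are samples (the port and address on your system may differ):

```shell
# Sample values only: the VM address is the one noted earlier from
# "lxc list k8s-vm", and 10443 is the port dashboard-proxy usually
# reports on 127.0.0.1.
VM_IP="10.245.108.37"
DASH_PORT="10443"
echo "https://${VM_IP}:${DASH_PORT}"   # → https://10.245.108.37:10443
```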


Thanks, needed a few tweaks to get it working:

cat > lxd-init.yaml <<EOF
config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  name: lxdbr0
  type: ""
  project: default
storage_pools:
- config:
    size: 12GB
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
projects: []
cluster: null
EOF
cat lxd-init.yaml | sudo lxd init --preseed
rm lxd-init.yaml
lxc profile create microk8s
cat > microk8s.profile <<EOF
config:
  boot.autostart: "true"
  linux.kernel_modules: ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh,ip_tables,ip6_tables,netlink_diag,nf_nat,overlay,br_netfilter,nf_conntrack_ipv4
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw cgroup:rw
    lxc.cgroup.devices.allow=a
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
  security.syscalls.intercept.bpf: "true"
  security.syscalls.intercept.bpf.devices: "true"
  security.syscalls.intercept.mknod: "true"
  security.syscalls.intercept.setxattr: "true"
description: ""
devices:
  aadisable:
    path: /sys/module/nf_conntrack/parameters/hashsize
    source: /sys/module/nf_conntrack/parameters/hashsize
    type: disk
  aadisable1:
    path: /sys/module/apparmor/parameters/enabled
    source: /dev/null
    type: disk
  aadisable2:
    path: /dev/zfs
    source: /dev/zfs
    type: disk
  aadisable3:
    path: /dev/kmsg
    source: /dev/kmsg
    type: disk
  aadisable4:
    path: /sys/fs/bpf
    source: /sys/fs/bpf
    type: disk
name: microk8s
used_by: []
EOF
cat microk8s.profile | lxc profile edit microk8s
rm microk8s.profile
lxc launch -p default -p microk8s ubuntu:20.04 microk8s
sleep 10
lxc exec microk8s -- sudo snap install microk8s --classic
lxc shell microk8s
cat > /etc/rc.local <<EOF
#!/bin/bash

apparmor_parser --replace /var/lib/snapd/apparmor/profiles/snap.microk8s.*
exit 0
EOF
chmod +x /etc/rc.local
systemctl restart rc-local
echo 'L /dev/kmsg - - - - /dev/null' > /etc/tmpfiles.d/kmsg.conf
exit
echo '--conntrack-max-per-core=0' >> /var/snap/microk8s/current/args/kube-proxy
lxc restart microk8s
lxc exec microk8s -- sudo swapoff -a
lxc exec microk8s -- sudo microk8s.kubectl create deployment microbot --image=dontrebootme/microbot:v1
lxc exec microk8s -- sudo microk8s.kubectl get all

So far I have been happy with kind.

Why do you think MicroK8s is better?

Does anyone have a working profile to run MicroK8s within LXD?

The profiles mentioned here don’t work:

security.nesting = true  --> no good
linux.kernel_modules ----> no good


Thanks to this article I was able to implement a 6-node MicroK8s cluster within a single VM: I created the VM with Multipass, then provisioned six lightweight VMs (containers) inside it. The support I received on the MicroK8s GitHub was also very helpful in solving a networking-related problem.

Being able to reproduce a cloud-like environment within a single VM running on Windows 10 Pro is really convenient; Multipass is a game changer for those using Linux on Windows 10.

Thank you very much for these great initiatives: MicroK8s, LXD, and Multipass.


@Erik_Lonroth Just off the top of my head, aren’t these profile settings, not project settings?

I have spent a lot of time on this issue. Can anybody help, please?

Then I got:

error: unable to contact snap store

Some information:

ryze@ubuntu:~$ lxc exec microk8s -- sudo curl --verbose api.snapcraft.io

*   Trying 185.125.188.58:80...
* TCP_NODELAY set
* connect to 185.125.188.58 port 80 failed: Connection timed out
*   Trying 185.125.188.54:80...
* TCP_NODELAY set
* After 84870ms connect time, move on!
* connect to 185.125.188.54 port 80 failed: Connection timed out
*   Trying 185.125.188.59:80...
* TCP_NODELAY set
* After 42434ms connect time, move on!
* connect to 185.125.188.59 port 80 failed: Connection timed out
*   Trying 185.125.188.55:80...
* TCP_NODELAY set
* After 21215ms connect time, move on!
* connect to 185.125.188.55 port 80 failed: Connection timed out
* Failed to connect to api.snapcraft.io port 80: Connection timed out
* Closing connection 0
curl: (28) Failed to connect to api.snapcraft.io port 80: Connection timed out

ryze@ubuntu:~$ lxc network list
+-----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
|      NAME       |   TYPE   | MANAGED |      IPV4      |           IPV6            | DESCRIPTION | USED BY |  STATE  |
+-----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| br-2420f6858d6a | bridge   | NO      |                |                           |             | 0       |         |
+-----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| docker0         | bridge   | NO      |                |                           |             | 0       |         |
+-----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| enp0s3          | physical | NO      |                |                           |             | 0       |         |
+-----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+
| lxdbr0          | bridge   | YES     | 10.78.125.1/24 | fd42:d07d:153f:5503::1/64 |             | 2       | CREATED |
+-----------------+----------+---------+----------------+---------------------------+-------------+---------+---------+

ryze@ubuntu:~$ lxc network show lxdbr0
config:
  ipv4.address: 10.78.125.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:d07d:153f:5503::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/microk8s
- /1.0/profiles/default
managed: true
status: Created
locations:
- none

ryze@ubuntu:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:19:5b:97 brd ff:ff:ff:ff:ff:ff
    inet 172.17.51.124/24 metric 100 brd 172.17.51.255 scope global dynamic enp0s3
       valid_lft 590949sec preferred_lft 590949sec
    inet6 fe80::a00:27ff:fe19:5b97/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:dd:b0:a5:36 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: br-2420f6858d6a: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:ff:d1:ca:19 brd ff:ff:ff:ff:ff:ff
    inet 192.168.49.1/24 brd 192.168.49.255 scope global br-2420f6858d6a
       valid_lft forever preferred_lft forever
13: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:24:cc:d2 brd ff:ff:ff:ff:ff:ff
    inet 10.78.125.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:d07d:153f:5503::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe24:ccd2/64 scope link
       valid_lft forever preferred_lft forever
15: veth0152dd73@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether 5a:06:53:93:64:1e brd ff:ff:ff:ff:ff:ff link-netnsid 0

Hi @liu_ryze. Did you resolve this issue? Can you confirm that you can reach any other URL from within the container?

I updated this page to follow the latest recommendations when it comes to privileged LXD containers and MicroK8s. As I mention at the top of the post, while deploying MicroK8s on LXD privileged containers is possible, newer versions of Ubuntu and systemd require operations (such as mounting to the /proc directory) that cannot be safely handled with privileged containers.