How to set up MicroK8s with (Micro)Ceph storage

With the 1.28 release, we introduced a new rook-ceph addon that allows users to easily set up, import, and manage Ceph deployments via Rook.

In this guide we show how to set up a Ceph cluster with MicroCeph, give it three virtual disks backed by local files, and import the Ceph cluster into MicroK8s using the rook-ceph addon.

Install MicroCeph

MicroCeph is a lightweight way of deploying a Ceph cluster with a focus on reduced ops. It is distributed as a snap and is therefore installed with:

sudo snap install microceph --channel=latest/edge

First, we need to bootstrap the Ceph cluster:

sudo microceph cluster bootstrap

In this guide, we do not cluster multiple nodes; the interested reader can look into the official docs on how to form a multi-node Ceph cluster with MicroCeph.
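For reference, forming a multi-node cluster boils down to generating a join token on the bootstrapped node and redeeming it on each additional node. A minimal sketch, where node2 is a hypothetical hostname for the joining machine:

# on the bootstrapped node: generate a join token for node2
sudo microceph cluster add node2

# on node2: redeem the token printed by the previous command
sudo microceph cluster join <token>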

At this point we can check the status of the cluster and query the list of available disks, which should be empty. The cluster status is queried with:

sudo microceph.ceph status                                                                                                                                                                                        

Its output should look like:

  cluster:                                                                                                                                                                                                                                  
    id:     b5205159-8092-4be4-9f26-8176c397c929                                                                                                                                                                                            
    health: HEALTH_OK                                                                                                                                                                                                                       
                                                                                                                                                                                                                                            
  services:                                                                                                                                                                                                                                 
    mon: 1 daemons, quorum ip-172-31-3-156 (age 22s)                                                                                                                                                                                        
    mgr: ip-172-31-3-156(active, since 14s)                                                                                                                                                                                                 
    osd: 0 osds: 0 up, 0 in                                                                                                                                                                                                                 
                                                                                                                                                                                                                                            
  data:                                                                                                                                                                                                                                     
    pools:   0 pools, 0 pgs                                                                                                                                                                                                                 
    objects: 0 objects, 0 B                                                                                                                                                                                                                 
    usage:   0 B used, 0 B / 0 B avail                                                                                                                                                                                                      
    pgs:                                                                                                                                                                                                                                    

The disk list is shown with:

sudo microceph disk list                                                                    

In our empty cluster the disk list should be:

Disks configured in MicroCeph:                                                                                        
+-----+----------+------+                                  
| OSD | LOCATION | PATH |                                                                                             
+-----+----------+------+                                                                                             
                                                                                                                      
Available unpartitioned disks on this system:                                                                         
+-------+----------+------+------+                                                                                                                                                                                                          
| MODEL | CAPACITY | TYPE | PATH |                                                                                    
+-------+----------+------+------+                                            

Add virtual disks

The following loop creates three files under /mnt that will back respective loop devices. Each virtual disk is then added as an OSD to Ceph:

for l in a b c; do
  loop_file="$(sudo mktemp -p /mnt XXXX.img)"
  sudo truncate -s 1G "${loop_file}"
  loop_dev="$(sudo losetup --show -f "${loop_file}")"
  # the block-devices plug doesn't allow accessing /dev/loopX
  # devices so we make those same devices available under alternate
  # names (/dev/sdiY)
  minor="${loop_dev##/dev/loop}"
  sudo mknod -m 0660 "/dev/sdi${l}" b 7 "${minor}"
  sudo microceph disk add --wipe "/dev/sdi${l}"
done
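If you later want to discard these virtual disks, the loop devices can be torn down again. A rough sketch of the reverse steps, assuming the OSDs are no longer needed (e.g. the whole cluster is being removed) and that /mnt holds no other images:

# detach every loop device backed by one of the /mnt images
for dev in $(sudo losetup -l -n -O NAME,BACK-FILE | awk '$2 ~ /^\/mnt\/.*\.img$/ {print $1}'); do
  sudo losetup -d "${dev}"
done
# remove the alternate device nodes and the backing files
sudo rm -f /dev/sdi{a,b,c} /mnt/*.img

Also note that the loop devices are not persistent: after a reboot they have to be recreated before the OSDs come back up.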

At this point the disks should show up in the sudo microceph.ceph status output:

  cluster:
    id:     b5205159-8092-4be4-9f26-8176c397c929
    health: HEALTH_OK
  
  services:
    mon: 1 daemons, quorum ip-172-31-3-156 (age 115s)
    mgr: ip-172-31-3-156(active, since 107s)
    osd: 3 osds: 3 up (since 25s), 3 in (since 29s)
  
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   25 MiB used, 3.0 GiB / 3 GiB avail
    pgs:     1 active+clean

And in the sudo microceph disk list output:

Disks configured in MicroCeph:
+-----+-----------------+-----------+
| OSD |    LOCATION     |   PATH    |
+-----+-----------------+-----------+
| 0   | ip-172-31-3-156 | /dev/sdia |
+-----+-----------------+-----------+
| 1   | ip-172-31-3-156 | /dev/sdib |
+-----+-----------------+-----------+
| 2   | ip-172-31-3-156 | /dev/sdic |
+-----+-----------------+-----------+

Available unpartitioned disks on this system:
+-------+----------+------+------+
| MODEL | CAPACITY | TYPE | PATH |
+-------+----------+------+------+

It is worth looking into customizing your Ceph setup at this point. Here, as this cluster is a local one that is going to be used by a local MicroK8s deployment, we set the default replica count to 2, stop standby managers from running modules (avoiding manager redirects), and set the bucket type used for chooseleaf in CRUSH rules to 0 (osd):

sudo microceph.ceph config set global osd_pool_default_size 2                               
sudo microceph.ceph config set mgr mgr_standby_modules false                                                                                                                                                      
sudo microceph.ceph config set osd osd_crush_chooseleaf_type 0

Refer to the Ceph docs to shape the cluster according to your needs.
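You can double-check that these settings took effect by dumping the cluster configuration database:

sudo microceph.ceph config dump

The three options above should be listed with the values we just set.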

Connect MicroCeph to MicroK8s

The rook-ceph addon first appeared with the 1.28 release, so we should select a MicroK8s deployment channel of 1.28 or later:

sudo snap install microk8s --channel=1.28/stable
sudo microk8s status --wait-ready

ⓘ Note: Before enabling the rook-ceph addon on a strictly confined MicroK8s, make sure the rbd kernel module is loaded with sudo modprobe rbd.
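To make the rbd module load automatically after a reboot as well, you can register it with systemd's modules-load mechanism (a standard Linux facility, not specific to MicroK8s):

echo rbd | sudo tee /etc/modules-load.d/rbd.conf
sudo modprobe rbd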

The output of enabling the addon with sudo microk8s enable rook-ceph describes the next steps needed to import a Ceph cluster:

Infer repository core for addon rook-ceph                                                                                                                                                                                                   
Add Rook Helm repository https://charts.rook.io/release                                                                                                                                                                                     
"rook-release" has been added to your repositories                                                                                                                                                                                          
...
=================================================

Rook Ceph operator v1.11.9 is now deployed in your MicroK8s cluster and
will shortly be available for use.

As a next step, you can either deploy Ceph on MicroK8s, or connect MicroK8s with an
existing Ceph cluster.

To connect MicroK8s with an existing Ceph cluster, you can use the helper command
'microk8s connect-external-ceph'. If you are running MicroCeph on the same node, then
you can use the following command:

    sudo microk8s connect-external-ceph

Alternatively, you can connect MicroK8s with any external Ceph cluster using:

    sudo microk8s connect-external-ceph \
        --ceph-conf /path/to/cluster/ceph.conf \
        --keyring /path/to/cluster/ceph.keyring \
        --rbd-pool microk8s-rbd

For a list of all supported options, use 'microk8s connect-external-ceph --help'.

To deploy Ceph on the MicroK8s cluster using storage from your Kubernetes nodes, refer
to https://rook.io/docs/rook/latest-release/CRDs/Cluster/ceph-cluster-crd/

As we have already set up MicroCeph, having it managed by Rook is done with just:

sudo microk8s connect-external-ceph

At the end of this process you should have a storage class ready to use, shown here with microk8s kubectl get storageclass:

NAME       PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-rbd   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   3h38m
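As a quick smoke test, you can create a PersistentVolumeClaim against the new storage class and check that it gets bound. A minimal sketch; the claim name ceph-rbd-test and the 1Gi size are arbitrary:

microk8s kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-test
spec:
  storageClassName: ceph-rbd
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
EOF
microk8s kubectl get pvc ceph-rbd-test

Since the storage class uses Immediate volume binding, the claim should reach the Bound state right away, at which point it can be mounted by a pod like any other volume.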

Further reading


When is microk8s going to have 1.28/stable published?

error: snap "microk8s" is not available on 1.28/stable but is available to install on the following
       channels:

       1.28/candidate  snap refresh --channel=1.28/candidate microk8s
       1.28/beta       snap refresh --channel=1.28/beta microk8s
       1.28/edge       snap refresh --channel=1.28/edge microk8s

       Please be mindful pre-release channels may include features not completely tested or
       implemented. Get more information with 'snap info microk8s'.


The 1.28 release is next week; for now you can use latest/edge. Thank you.

Even with latest/edge the command is still missing:

root@k8s-ceph-1:~# microk8s connect-external-ceph
'connect-external-ceph' is not a valid MicroK8s subcommand.
...
root@k8s-ceph-1:~# snap info microk8s | grep tracking
tracking:     latest/edge

Can you please try with sudo?

I am using the root user… but for the sake of it:

root@k8s-ceph-1:~# sudo microk8s connect-external-ceph
'connect-external-ceph' is not a valid MicroK8s subcommand.

Did microk8s enable rook-ceph complete without errors? What is the output of ls -l /var/snap/microk8s/common/plugins/? What Linux distribution are you on, so I can try to reproduce the issue? Maybe it would be best to open a GitHub issue on the MicroK8s repo and attach a microk8s.inspect tarball, and I will try to reproduce the error you are seeing.

no errors… I just re-installed everything and it worked

Well, this is the formal tutorial and it works for RBD, but my first deployment included a PostgreSQL database, which needs CephFS. How do I connect MicroK8s to a CephFS pool created in MicroCeph?

Note that the section on installing and configuring MicroCeph is a bit outdated by now.

tl;dr do this:

 sudo snap install microceph
 sudo snap refresh --hold microceph
 sudo microceph cluster bootstrap
 sudo microceph disk add loop,4G,3
 sudo ceph status

For more context take a look at the official docs, e.g. the single-node tutorial.

Please do not install from the latest/edge channel; it's highly experimental and likely to break.

@petersabaini which version works for microk8s setup?

Followed up here with some success, but it still looks far from simple, as MDS needs special setup that I could not find in the MicroCeph documentation.

microceph enable mds
Error: failed placing service mds: mds service unable to sustain on host: mds service is not active