Microk8s+microceph+cephfs - Ubuntu 22

Hi All,

I am setting up a multi-node cluster with microk8s+microceph. The setup takes no time thanks to the Ubuntu snaps and documentation. However, with the instructions found in the Ubuntu pages there is only one StorageClass created, which is based on rbd. With the rbd storage class I can only create “ReadWriteOnce” PVCs, which does not suit multi-replica pods requiring shared data storage (db statefulsets, for example).
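For context, what I'm after is a claim like the following, which an rbd-backed class cannot satisfy. This is only a sketch; the storage class name cephfs is an assumption and will depend on what your setup actually creates:

```shell
# Sketch of a ReadWriteMany PVC that needs a CephFS (not rbd) storage class.
# The storageClassName "cephfs" is an assumed name -- substitute whatever
# class your cluster ends up with.
cat > pvc-cephfs.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany        # rbd classes only support ReadWriteOnce
  storageClassName: cephfs # assumed name
  resources:
    requests:
      storage: 1Gi
EOF
cat pvc-cephfs.yaml
# apply with: microk8s kubectl apply -f pvc-cephfs.yaml
```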

Exploring the MicroK8s cluster, I can see that there are csi-cephfsplugin-* pods and the rook-ceph.cephfs.csi.ceph.com provisioner, but no cephfs storage class.

What is the correct way of achieving a cephfs-based storage class with the microk8s+microceph setup? Why does microceph not automatically create a storage class for it, as it does for rbd?

Thanks for your time and please let me know if you need more information in this regard.

I figured it out — couldn’t find it in any documentation!
After you have configured rbd with the connect-external-ceph command, and you already have your rbd storage class in microk8s, do the following:

Create a metadata pool for your CephFS:

sudo ceph osd pool create cephfs_metadata 32 32

Create a data pool for your CephFS

sudo ceph osd pool create cephfs_data 64 64
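The 32/64 figures above are pg_num/pgp_num. If you want to size them for your own cluster, the usual rule of thumb is roughly 100 PGs per OSD divided by the replica count, rounded up to a power of two, then split across your pools. A quick sketch of that arithmetic — the 3 OSDs and 3 replicas here are just example values, adjust them to your cluster:

```shell
# Rule-of-thumb PG sizing: OSDs * ~100 / replicas, rounded up to a power of two.
osds=3          # example value: number of OSDs in your cluster
replicas=3      # example value: pool replica count (pool size)
target_per_osd=100

total=$(( osds * target_per_osd / replicas ))
pg=1
while [ "$pg" -lt "$total" ]; do pg=$(( pg * 2 )); done
echo "suggested total pg_num to split across pools: $pg"
```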

Create CephFS using the two previous pools

sudo ceph fs new my_cephfs cephfs_metadata cephfs_data

Then just run “sudo microk8s connect-external-ceph” again and the storage class will show up in your microk8s deployment.
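To verify both sides after re-running the command, something like the following should do (a sketch — the exact storage class name may differ in your setup):

```shell
# Confirm the filesystem exists on the Ceph side
sudo ceph fs ls
# Confirm MicroK8s now lists a CephFS-backed storage class
microk8s kubectl get storageclass
```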


Thanks @dedumose !

I can confirm that this works.
This has been bugging me for a few days now.

@Alexander_Hermann with pleasure !

On the other hand, although with the instructions above the storage class shows up in MicroK8s, I can’t get the cluster to actually provision CephFS volumes from MicroCeph: the pod that requests the CephFS-backed PVC doesn’t start. After deeper investigation I figured it’s a storage provisioning problem — apparently MicroCeph doesn’t include an active MDS by default:

Below is the output of ceph -s:

  cluster:
    id:     334243a2-6625-4911-8e7f-9d53ccb9670b
    health: HEALTH_ERR
            1 filesystem is offline
            1 filesystem is online with fewer MDS than max_mds
 
  services:
    mon: 3 daemons, quorum 3mk1,3mk2,3mk3 (age 39m)
    mgr: 3mk1(active, since 39m), standbys: 3mk2, 3mk3
    mds: 0/0 daemons up
    osd: 3 osds: 3 up (since 39m), 3 in (since 2d)
 
  data:
    volumes: 1/1 healthy
    pools:   6 pools, 225 pgs
    objects: 237 objects, 707 MiB
    usage:   2.0 GiB used, 298 GiB / 300 GiB avail
    pgs:     225 active+clean

Notice that MDS daemons show 0/0 up and the filesystem is offline.

I remember I tried to start an mds service once, and the host machine reached 100% CPU usage and became completely unresponsive.

It’s also a bit confusing that microceph status lists mds among the services on all nodes:

microceph status
MicroCeph deployment summary:
- 3mk1 (10.128.140.59)
  Services: mds, mgr, mon, osd
  Disks: 1
- 3mk2 (10.128.140.189)
  Services: mds, mgr, mon, osd
  Disks: 1
- 3mk3 (10.128.140.120)
  Services: mds, mgr, mon, osd
  Disks: 1
microceph enable mds
Error: failed placing service mds: mds service unable to sustain on host: mds service is not active
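If anyone hits the same wall, the snap logs are probably the first place to look for why the mds placement fails. A sketch of where I’d start (log layout may vary between MicroCeph versions):

```shell
# Inspect recent MicroCeph log output for the failed mds placement
sudo snap logs microceph -n 100
# See what the filesystem itself reports about its MDS ranks
sudo ceph fs status
```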

I’ll follow up if I figure out a way to make it work; insights are also welcome.