@Alexander_Hermann with pleasure!
On the other hand, while the instructions above do make the storage class visible in MicroK8s, I can't get the cluster to actually provision CephFS from MicroCeph: the pod that requests the CephFS-backed PVC never starts. After deeper investigation I figured it's a storage provisioning problem; apparently MicroCeph doesn't run an MDS by default.
Below is the output of `ceph -s`:
```
  cluster:
    id:     334243a2-6625-4911-8e7f-9d53ccb9670b
    health: HEALTH_ERR
            1 filesystem is offline
            1 filesystem is online with fewer MDS than max_mds

  services:
    mon: 3 daemons, quorum 3mk1,3mk2,3mk3 (age 39m)
    mgr: 3mk1(active, since 39m), standbys: 3mk2, 3mk3
    mds: 0/0 daemons up
    osd: 3 osds: 3 up (since 39m), 3 in (since 2d)

  data:
    volumes: 1/1 healthy
    pools:   6 pools, 225 pgs
    objects: 237 objects, 707 MiB
    usage:   2.0 GiB used, 298 GiB / 300 GiB avail
    pgs:     225 active+clean
```
Notice the MDS daemons at 0/0 up and the filesystem offline.
I remember trying to start an MDS service once: the host machine reached 100% CPU usage and became completely unresponsive.
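For anyone scripting a health check around this, the broken state is easy to detect from the `ceph -s` output itself. Here's a small sketch that parses the services section; it uses the pasted text above as sample input via a heredoc, and on a live cluster you would capture `ceph -s` instead (the variable names are just illustrative):

```shell
# Sample 'ceph -s' services section (copied from the output above).
# On a live cluster, replace the heredoc with: status="$(ceph -s)"
status=$(cat <<'EOF'
  services:
    mon: 3 daemons, quorum 3mk1,3mk2,3mk3 (age 39m)
    mgr: 3mk1(active, since 39m), standbys: 3mk2, 3mk3
    mds: 0/0 daemons up
    osd: 3 osds: 3 up (since 39m), 3 in (since 2d)
EOF
)

# Pull the "up" count out of the "mds: 0/0 daemons up" line.
# Splitting on spaces and '/' makes the up-count the third field.
mds_up=$(printf '%s\n' "$status" | awk -F'[ /]+' '/mds:/ {print $3; exit}')

if [ "$mds_up" -eq 0 ]; then
  echo "No MDS daemons up: CephFS cannot serve any PVC"
fi
```

With zero MDS daemons up, every CephFS mount attempt (and therefore every pod waiting on a CephFS-backed PVC) will hang, which matches the symptom above.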
It's also a bit confusing that `microceph status` shows MDS services on all nodes:
```
$ microceph status
MicroCeph deployment summary:
- 3mk1 (10.128.140.59)
  Services: mds, mgr, mon, osd
  Disks: 1
- 3mk2 (10.128.140.189)
  Services: mds, mgr, mon, osd
  Disks: 1
- 3mk3 (10.128.140.120)
  Services: mds, mgr, mon, osd
  Disks: 1
```
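My reading (an assumption, not confirmed by the docs I've found so far) is that `microceph status` reports where a service is *placed*, not whether its daemon is actually running, which would explain the contradiction with `ceph -s`. The mismatch can be shown mechanically from the two outputs; again this uses the pasted text as sample data:

```shell
# 'microceph status' sample (from above): mds is listed on every node,
# even though 'ceph -s' reports "mds: 0/0 daemons up".
placed=$(cat <<'EOF'
- 3mk1 (10.128.140.59)
  Services: mds, mgr, mon, osd
- 3mk2 (10.128.140.189)
  Services: mds, mgr, mon, osd
- 3mk3 (10.128.140.120)
  Services: mds, mgr, mon, osd
EOF
)

# Count nodes whose Services line includes mds.
mds_nodes=$(printf '%s\n' "$placed" | grep -c 'Services:.*mds')
echo "mds placed on $mds_nodes node(s), yet 'ceph -s' shows 0/0 up"
```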
```
$ microceph enable mds
Error: failed placing service mds: mds service unable to sustain on host: mds service is not active
```
I'll follow up if I figure out a way to make it work; in the meantime, any insights are welcome.