MicroK8s supports a cluster-ready replicated storage solution based on OpenEBS Mayastor. This does require some initial setup and configuration, as detailed below.
Requirements
ⓘ Note: These requirements apply to ALL the nodes in a MicroK8s cluster. Please run the commands on each node. The arm64 architecture is not currently supported.
- HugePages must be enabled. Mayastor requires at least 1024 2MB HugePages. This can be achieved by running the following commands on each host:

sudo sysctl vm.nr_hugepages=1024
echo 'vm.nr_hugepages=1024' | sudo tee -a /etc/sysctl.conf
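To confirm the allocation took effect, you can inspect /proc/meminfo (an optional sanity check; HugePages_Total should report 1024):

grep HugePages /proc/meminfo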
- The nvme_fabrics and nvme_tcp modules are required on all hosts. Install the modules with:

sudo apt install linux-modules-extra-$(uname -r)

Then enable them with:

sudo modprobe nvme_tcp
echo 'nvme-tcp' | sudo tee -a /etc/modules-load.d/microk8s-mayastor.conf
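You can verify the modules are loaded before proceeding (an optional check; loading nvme_tcp also pulls in nvme_fabrics as a dependency):

lsmod | grep nvme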
- The MicroK8s DNS and Helm3 addons. These will be automatically installed if missing.
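If you prefer to enable these addons explicitly ahead of time, the standard addon commands work (optional; the mayastor addon installs them automatically otherwise):

sudo microk8s enable dns
sudo microk8s enable helm3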
Installation
Assuming you have configured your cluster as mentioned above, you can now enable Mayastor.
- Enable the Mayastor addon:

sudo microk8s enable core/mayastor --default-pool-size 20G
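The --default-pool-size flag sets the size of the sparse image file that backs each node's storage pool, so pick a value that fits your available disk space. To confirm the cluster is healthy and see which addons are enabled, you can run a general status check (not specific to Mayastor):

sudo microk8s status --wait-ready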
- Wait for the mayastor control plane and data plane pods to come up:

sudo microk8s.kubectl get pod -n mayastor
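Instead of polling manually, you can block until all pods in the namespace report ready (kubectl wait is a standard command; the timeout value here is an arbitrary choice):

sudo microk8s.kubectl wait --for=condition=Ready pod --all -n mayastor --timeout=600s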
- The mayastor addon will automatically create one MayastorPool per node in the MicroK8s cluster. This pool is backed by a sparse image file. Refer to the Mayastor documentation for information on using existing block devices.

Verify that all mayastorpools are up and running with:

sudo microk8s.kubectl get mayastorpool -n mayastor
In a 3-node cluster, the output should look like this:
NAME               NODE   STATUS   CAPACITY      USED   AVAILABLE
microk8s-m2-pool   m2     Online   21449670656   0      21449670656
microk8s-m1-pool   m1     Online   21449670656   0      21449670656
microk8s-m3-pool   m3     Online   21449670656   0      21449670656
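If a pool does not reach the Online state, describing the resource surfaces its status and recent events (standard kubectl usage; substitute your own pool name):

sudo microk8s.kubectl describe mayastorpool microk8s-m1-pool -n mayastor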
Mayastor is now deployed!
Deploy a test workload
The mayastor addon creates two storage classes:

- mayastor: This can be used in single-node clusters.
- mayastor-3: This requires at least 3 cluster nodes, as it replicates volume data across 3 storage pools, ensuring data redundancy.
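You can confirm that both storage classes exist with a standard query:

sudo microk8s.kubectl get storageclass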
Let’s create a simple pod that uses the mayastor storage class:
# pod-with-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: mayastor
  accessModes: [ReadWriteOnce]
  resources: { requests: { storage: 5Gi } }
---
apiVersion: v1
kind: Pod
metadata:
  name: test-nginx
spec:
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: test-pvc
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: pvc
          mountPath: /usr/share/nginx/html
Then, create the pod with:
sudo microk8s.kubectl create -f pod-with-pvc.yaml
Verify that our PVC and pod have been created with:
sudo microk8s.kubectl get pod,pvc
The output should look like this:
NAME READY STATUS RESTARTS AGE
pod/test-nginx 1/1 Running 0 4m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/test-pvc Bound pvc-e280b734-3224-4af3-af0b-e7ad3c4e6d79 5Gi RWO mayastor 4m
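To confirm the volume is mounted and writable, you can write a file through the pod and read it back (a quick smoke test; the file name and contents are arbitrary):

sudo microk8s.kubectl exec test-nginx -- sh -c 'echo "hello from mayastor" > /usr/share/nginx/html/index.html'
sudo microk8s.kubectl exec test-nginx -- cat /usr/share/nginx/html/index.html

When you are finished, the test resources can be removed with:

sudo microk8s.kubectl delete -f pod-with-pvc.yaml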