Addon: OpenEBS Mayastor clustered storage

Compatibility: amd64 arm64 classic
Source: Mayastor

MicroK8s supports a cluster-ready replicated storage solution based on OpenEBS Mayastor. This does require some initial setup and configuration, as detailed below.


Note: These requirements apply to ALL the nodes in a MicroK8s cluster. Please run the commands on each node.

  1. HugePages must be enabled. Mayastor requires at least 1024 2MB HugePages (2GiB of memory) on each node.

    This can be achieved by running the following commands on each host:

    sudo sysctl vm.nr_hugepages=1024
    echo 'vm.nr_hugepages=1024' | sudo tee -a /etc/sysctl.conf
  2. The nvme_fabrics and nvme_tcp modules are required on all hosts. Install the modules with:

    sudo apt install linux-modules-extra-$(uname -r)

    Then enable them with:

    sudo modprobe nvme_tcp
    echo 'nvme-tcp' | sudo tee -a /etc/modules-load.d/microk8s-mayastor.conf
  3. You should restart MicroK8s at this point:

    microk8s stop
    microk8s start
  4. The MicroK8s DNS and Helm3 addons are required. They will be installed automatically if missing.
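
Once the requirement steps above have been applied, you can sanity-check the HugePages reservation from step 1 on each node (a quick check, not part of the official procedure):

```shell
# HugePages_Total in /proc/meminfo should read 1024 after the sysctl change
grep HugePages_Total /proc/meminfo
```

If the count is 0, re-run the sysctl command and consider rebooting the node, since huge pages may not be allocatable once memory has become fragmented.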


Assuming you have configured your cluster as mentioned above, you can now enable Mayastor.

  1. Enable the Mayastor addon:

    sudo microk8s enable core/mayastor --default-pool-size 20G
  2. Wait for the mayastor control plane and data plane pods to come up:

    sudo microk8s.kubectl get pod -n mayastor
  3. The mayastor addon will automatically create one MayastorPool per node in the MicroK8s cluster. This pool is backed by a sparse image file. Refer to the Mayastor documentation for information on using existing block devices.

    Verify that all mayastorpools are up and running with:

    sudo microk8s.kubectl get mayastorpool -n mayastor

    In a 3-node cluster, the output should look like this:

    NAME               NODE   STATUS   CAPACITY      USED   AVAILABLE
    microk8s-m2-pool   m2     Online   21449670656   0      21449670656
    microk8s-m1-pool   m1     Online   21449670656   0      21449670656
    microk8s-m3-pool   m3     Online   21449670656   0      21449670656

Mayastor is now deployed!

Deploy a test workload

The mayastor addon creates two storage classes:

  • mayastor: This can be used in single-node clusters.
  • mayastor-3: This requires at least 3 cluster nodes, as it replicates volume data across 3 storage pools, ensuring data redundancy.
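
For example, a claim that requests replicated storage only needs to name the class; a minimal sketch (the claim name `redundant-pvc` is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redundant-pvc
spec:
  storageClassName: mayastor-3   # volume data replicated across 3 storage pools
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 5Gi
```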

Let’s create a simple pod that uses the mayastor storage class:

# pod-with-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: mayastor
  accessModes: [ReadWriteOnce]
  resources: { requests: { storage: 5Gi } }
---
apiVersion: v1
kind: Pod
metadata:
  name: test-nginx
spec:
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: test-pvc
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: pvc
          mountPath: /usr/share/nginx/html

Then, create the pod with:

sudo microk8s.kubectl create -f pod-with-pvc.yaml

Verify that our PVC and pod have been created with:

sudo microk8s.kubectl get pod,pvc

The output should look like this:

NAME             READY   STATUS    RESTARTS   AGE
pod/test-nginx   1/1     Running   0          4m

NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/test-pvc   Bound    pvc-e280b734-3224-4af3-af0b-e7ad3c4e6d79   5Gi        RWO            mayastor       4m

Configure storage classes

For advanced use-cases, it is possible to define a custom storage class and configure parameters for the number of replicas, the underlying protocol etc. For example, to define a storage class with 2 replicas, execute the following:

microk8s kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-2
parameters:
  repl: '2'
  protocol: 'nvmf'
  ioTimeout: '60'
  local: 'true'
provisioner: io.openebs.csi-mayastor
volumeBindingMode: WaitForFirstConsumer
EOF
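
A workload can then consume the new class by referencing it in a claim; a minimal sketch (the claim name is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-2
spec:
  storageClassName: mayastor-2   # the custom class defined above
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 5Gi
```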

For more information, see Create Mayastor StorageClass(s).

Configure MayaStor pools

By default, the MayaStor addon will create one pool per node, backed by a local image file. For production use, it is recommended that you instead use designated disks.

For convenience, a helper script is provided to easily create, list and delete mayastor pools from the cluster:


# get help
sudo snap run --shell microk8s -c '
  $SNAP_COMMON/addons/core/addons/mayastor/ --help
'

# create a mayastor pool using `/dev/sdb` on node `uk8s-1`
sudo snap run --shell microk8s -c '
  $SNAP_COMMON/addons/core/addons/mayastor/ add --node uk8s-1 --device /dev/sdb
'

# create a mayastor pool of 100GB using a sparse image file on node `uk8s-1`.
# The image file will be placed under `/var/snap/microk8s/common/mayastor/data`.
sudo snap run --shell microk8s -c '
  $SNAP_COMMON/addons/core/addons/mayastor/ add --node uk8s-1 --size 100GB
'

# list mayastor pools
sudo snap run --shell microk8s -c '
  $SNAP_COMMON/addons/core/addons/mayastor/ list
'

# delete a mayastor pool. --force removes it even if the pool is in use,
# --purge removes the backing image file.
# The mayastor pool name is required, as it appears in the output of the list command.
sudo snap run --shell microk8s -c '
  $SNAP_COMMON/addons/core/addons/mayastor/ remove microk8s-uk8s-1-pool --force --purge
'

For more information, see Create Mayastor Pool(s).


Troubleshooting

Unable to start mayastor data plane


Depending on the underlying hardware, the mayastor data plane pods may get stuck in a CrashLoopBackOff state. This can be caused by a failure to initialize the EAL. Verify this by checking the logs of the daemonset with the following command…

microk8s.kubectl logs -n mayastor daemonset/mayastor

… and check that the logs contain an error message similar to this:

EAL: alloc_pages_on_heap(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask
EAL: alloc_pages_on_heap(): Please try initializing EAL with --iova-mode=pa parameter
EAL: error allocating rte services array
EAL: FATAL: rte_service_init() failed
EAL: rte_service_init() failed
thread 'main' panicked at 'Failed to init EAL', mayastor/src/core/
stack backtrace:
0: std::panicking::begin_panic
1: mayastor::core::env::MayastorEnvironment::init
2: mayastor::main
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

For more details, see canonical/microk8s-core-addons#25.


Workaround: edit the manifest of the mayastor daemonset with:

microk8s kubectl edit -n mayastor daemonset mayastor

Then, edit the command line of the mayastor pod by adding the argument `--env-context=--iova-mode=pa`. Save and exit the editor to apply the changes, then wait for the mayastor pods to restart.
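
After saving, the container spec of the daemonset should contain the new flag. An abbreviated excerpt of what the edited manifest might look like, assuming the flag is appended to the container's args list (all other fields unchanged):

```yaml
# excerpt of the mayastor daemonset after the edit; existing args are kept
spec:
  template:
    spec:
      containers:
        - name: mayastor
          args:
            - "--env-context=--iova-mode=pa"
```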


Could it work with multi-nodes (across several types of cloud) cluster and use AWS EFS as PersistentVolume?

Please add this to the Section Requirements:
3. Please restart your Microk8s cluster (on each node):
microk8s stop
microk8s start

Otherwise the Mayastor pods cannot allocate hugepages. After running sysctl, a restart is needed on each cluster node!


I have tried all sorts of things to get OpenEBS Mayastor clustered storage to work on MicroK8s without much success. So rather than give up completely, I thought I would detail one of my failed attempts and see if anyone can figure out what I am doing wrong. Thanks in advance for any help you can give me 🙂

Failed Attempt
Here are the results of following the steps posted at MicroK8s - Addon: OpenEBS Mayastor clustered storage.

VM Setup:
3 VMs running Ubuntu 22.04 with 16GB RAM on a vSphere hypervisor. I have used these same VMs to create a 3-node microk8s cluster with good success in the past.
Microk8s removal:
removed microk8s on all 3 nodes.

microk8s stop
sudo snap remove microk8s --purge
sudo reboot

Microk8s fresh install:

snap info microk8s
latest/stable: v1.26.0 2022-12-17 (4390) 176MB classic
On all 3 nodes:

sudo snap install microk8s --classic --channel=1.26/stable  
sudo usermod -a -G microk8s $USER  
sudo chown -f -R $USER ~/.kube  
newgrp microk8s  
sudo reboot  

verify everything is ok

microk8s status  
microk8s inspect  
**Do what inspect tells you to do:**  
WARNING:  IPtables FORWARD policy is DROP. Consider enabling traffic forwarding with: sudo iptables -P FORWARD ACCEPT 
The change can be made persistent with: sudo apt-get install iptables-persistent  
sudo iptables -S  
sudo iptables-legacy -S  
sudo iptables -P FORWARD ACCEPT  
sudo apt-get install iptables-persistent  
sudo systemctl is-enabled netfilter-persistent.service  
sudo reboot  
microk8s inspect  

I still get the IPtables FORWARD warning on 2 of the 3 nodes; hopefully it is not that important.
I can ping all the IP addresses in the cluster from every node.

Followed the directions at MicroK8s - Addon: OpenEBS Mayastor clustered storage
step 1:

sudo sysctl vm.nr_hugepages=1024  
echo 'vm.nr_hugepages=1024' | sudo tee -a /etc/sysctl.conf  
sudo nvim /etc/sysctl.conf  

step 2:

sudo apt install linux-modules-extra-$(uname -r)  
sudo modprobe nvme_tcp  
echo 'nvme-tcp' | sudo tee -a /etc/modules-load.d/microk8s-mayastor.conf  
sudo nvim /etc/modules-load.d/microk8s-mayastor.conf  

step 3:

microk8s enable dns  
microk8s enable helm3  
thought we might need rbac so I enabled that also.  
microk8s enable rbac  

Created 3 node cluster.

from main node.  
sudo microk8s add-node  
go to 2nd node.  
microk8s join  
from main node.  
sudo microk8s add-node  
go to 3rd node.  
microk8s join  
microk8s status  

enable the mayastor add-on:

from main node.  
sudo microk8s enable core/mayastor --default-pool-size 20G  
go to  2nd node.  
sudo microk8s enable core/mayastor --default-pool-size 20G  
Addon core/mayastor is already enabled  
go to 3rd node.  
sudo microk8s enable core/mayastor --default-pool-size 20G  
Addon core/mayastor is already enabled  

Wait for the mayastor control plane and data plane pods to come up:

sudo microk8s.kubectl get pod -n mayastor
NAME                                     READY   STATUS              RESTARTS   AGE
mayastor-csi-962jf                       0/2     ContainerCreating   0          2m6s
mayastor-csi-l4zxx                       0/2     ContainerCreating   0          2m5s
mayastor-8pcc4                           0/1     Init:0/3            0          2m6s
msp-operator-74ff9cf5d5-jvxqb            0/1     Init:0/2            0          2m5s
mayastor-lt8qq                           0/1     Init:0/3            0          2m5s
etcd-operator-mayastor-65f9967f5-mpkrw   0/1     ContainerCreating   0          2m5s
mayastor-csi-6wb7x                       0/2     ContainerCreating   0          2m5s
core-agents-55d76bb877-8nffd             0/1     Init:0/1            0          2m5s
csi-controller-54ccfcfbcc-m94b7          0/3     Init:0/1            0          2m5s
mayastor-9q4gl                           0/1     Init:0/3            0          2m5s
rest-77d69fb479-qsvng                    0/1     Init:0/2            0          2m5s

# Still waiting
sudo microk8s.kubectl get pod -n mayastor
NAME                                     READY   STATUS     RESTARTS   AGE
mayastor-8pcc4                           0/1     Init:0/3   0          32m
msp-operator-74ff9cf5d5-jvxqb            0/1     Init:0/2   0          32m
mayastor-lt8qq                           0/1     Init:0/3   0          32m
core-agents-55d76bb877-8nffd             0/1     Init:0/1   0          32m
csi-controller-54ccfcfbcc-m94b7          0/3     Init:0/1   0          32m
mayastor-9q4gl                           0/1     Init:0/3   0          32m
rest-77d69fb479-qsvng                    0/1     Init:0/2   0          32m
mayastor-csi-962jf                       2/2     Running    0          32m
mayastor-csi-l4zxx                       2/2     Running    0          32m
etcd-operator-mayastor-65f9967f5-mpkrw   1/1     Running    1          32m
mayastor-csi-6wb7x                       2/2     Running    0          32m
etcd-6tjf7zb9dh                          0/1     Init:0/1   0          30m

Went to the trouble-shooting section at MicroK8s - Addon: OpenEBS Mayastor clustered storage

microk8s.kubectl logs -n mayastor daemonset/mayastor
output was:
Found 3 pods, using pod/mayastor-8pcc4
Defaulted container "mayastor" out of: mayastor, registration-probe (init), etcd-probe (init), initialize-pool (init)
Error from server (BadRequest): container "mayastor" in pod "mayastor-8pcc4" is waiting to start: PodInitializing

We have frequently seen this issue being caused by a vxlan bug that breaks Calico traffic. Can you check whether

microk8s kubectl patch felixconfigurations default --type=merge --patch='{"spec":{"featureDetectOverride":"ChecksumOffloadBroken=true"}}'

helps with your issue? This should unblock your pods from starting.