I'm trying to install Mayastor on a K8s cluster based on Talos 1.11.5.
I set up cp.yaml:
cluster:
  apiServer:
    admissionControl:
      - name: PodSecurity
        configuration:
          apiVersion: pod-security.admission.config.k8s.io/v1beta1
          kind: PodSecurityConfiguration
          exemptions:
            namespaces:
              - openebs
and wp.yaml:
machine:
  time:
    servers:
      - 169.254.169.123
  kernel:
    modules:
      - name: nvme_tcp
  kubelet:
    extraMounts:
      - destination: /var/openebs
        type: bind
        source: /var/openebs
        options:
          - bind
          - rshared
          - rw
      - destination: /var/local
        type: bind
        source: /var/local
        options:
          - bind
          - rshared
          - rw
  sysctls:
    vm.nr_hugepages: "1024"
  nodeLabels:
    openebs.io/engine: "mayastor"
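The vm.nr_hugepages setting above is there to satisfy the Mayastor io-engine prerequisite of at least 2 GiB of 2 MiB hugepages per storage node; the arithmetic:

```shell
# 1024 hugepages x 2 MiB per page = 2048 MiB (2 GiB) reserved per node.
pages=1024
page_size_mib=2
echo "$(( pages * page_size_mib )) MiB"   # prints: 2048 MiB
```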
Then I generated the config files:
talosctl gen config mon-k8s-cluster-test https://talos.mon.k8s.mydomain.com:6443 --with-examples=false --with-docs=false --with-kubespan --kubernetes-version v1.34.3 --talos-version v1.11 --with-secrets /root/talos/secrets.yaml --additional-sans talos.mon.k8s.mydomain.com,i-0afda05ba1085796e.eu-west-2.compute.internal,i-06a2b80b4fd162ec4.eu-west-2.compute.internal,i-09f42c1865a8d541e.eu-west-1.compute.internal,10.192.11.10,10.192.11.140,10.192.43.10 --dns-domain mon.k8s.mydomain.com --config-patch-control-plane @cp.yaml --config-patch-worker @wp.yaml --output talos
Set permissions with openebs-namespace.yaml, exempting the namespace from Pod Security enforcement:
apiVersion: v1
kind: Namespace
metadata:
  name: openebs
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/warn: privileged
    pod-security.kubernetes.io/audit: privileged
Create mayastor-values.yaml
mayastor:
  csi:
    node:
      initContainers:
        enabled: false
  loki-stack:
    loki:
      persistence:
        size: 5Gi
engines:
  local:
    lvm:
      enabled: false
    zfs:
      enabled: false
Install openebs
helm repo add openebs https://openebs.github.io/openebs
helm repo update
kubectl apply -f openebs-namespace.yaml
helm install openebs --namespace openebs openebs/openebs --set mayastor.etcd.clusterDomain="mon.k8s.mydomain.com" -f mayastor-values.yaml
All pods are running correctly:
$ kubectl get pods -n openebs
NAME READY STATUS RESTARTS AGE
openebs-agent-core-f4f6d8bc4-ctsbm 2/2 Running 0 39h
openebs-agent-ha-node-45xf2 1/1 Running 0 39h
openebs-agent-ha-node-kqllt 1/1 Running 0 39h
openebs-agent-ha-node-kz65w 1/1 Running 0 39h
openebs-agent-ha-node-lj7xx 1/1 Running 0 39h
openebs-alloy-7cx2f 2/2 Running 0 39h
openebs-alloy-9wwl8 2/2 Running 0 39h
openebs-alloy-ktcjk 2/2 Running 0 39h
openebs-alloy-lxkzc 2/2 Running 0 39h
openebs-api-rest-764b65c8b5-5pnjf 1/1 Running 0 39h
openebs-csi-controller-5c68f87b5f-xz5rb 6/6 Running 0 39h
openebs-csi-node-dt7x4 2/2 Running 0 39h
openebs-csi-node-hvzdt 3/3 Running 2 (20h ago) 20h
openebs-csi-node-pvlnh 3/3 Running 24 (20h ago) 21h
openebs-csi-node-zw8pm 2/2 Running 0 39h
openebs-etcd-0 1/1 Running 0 39h
openebs-etcd-1 1/1 Running 0 39h
openebs-etcd-2 1/1 Running 0 39h
openebs-io-engine-9dm6m 2/2 Running 0 39h
openebs-io-engine-dbjwn 2/2 Running 0 39h
openebs-io-engine-jv4nn 2/2 Running 0 39h
openebs-io-engine-nx2zp 2/2 Running 0 39h
openebs-localpv-provisioner-89447ff8-dch4z 1/1 Running 0 39h
openebs-loki-0 2/2 Running 0 39h
openebs-loki-1 2/2 Running 0 39h
openebs-loki-2 2/2 Running 0 39h
openebs-minio-0 1/1 Running 0 39h
openebs-minio-1 1/1 Running 0 39h
openebs-minio-2 1/1 Running 0 39h
openebs-nats-0 3/3 Running 0 39h
openebs-nats-1 3/3 Running 0 39h
openebs-nats-2 3/3 Running 0 39h
openebs-obs-callhome-5b8b749ff-l67bs 2/2 Running 0 39h
openebs-operator-diskpool-cc45c9cb4-x7bnt 1/1 Running 0 39h
Verify
$ helm ls -n openebs
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
openebs openebs 1 2026-01-26 17:42:25.667586078 +0000 UTC deployed openebs-4.4.0 4.4.0
Tag Worker Nodes
$ kubectl get nodes -L topology.kubernetes.io/zone
NAME STATUS ROLES AGE VERSION ZONE
i-09010310d2c9c582f Ready <none> 39h v1.34.3 eu-west-2
i-09789ad7ea1040030 Ready <none> 39h v1.34.3 eu-west-2
i-0989e0766c81e7697 Ready <none> 39h v1.34.3 eu-west-1
i-0e48216a8318f2699 Ready <none> 39h v1.34.3 eu-west-2
Create StorageClass mayastor-storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-single-replica
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
parameters:
  repl: "1"
  protocol: "nvmf"
  ioTimeout: "30"
  thin: "false"
  fsType: "xfs"
provisioner: io.openebs.csi-mayastor
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-uk-3
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
parameters:
  repl: "3"
  protocol: "nvmf"
  ioTimeout: "30"
  thin: "false"
  fsType: "xfs"
  nodeAffinityTopologyLabel: |
    topology.kubernetes.io/zone: eu-west-2
provisioner: io.openebs.csi-mayastor
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-ie-1
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
parameters:
  repl: "1"
  protocol: "nvmf"
  ioTimeout: "30"
  thin: "false"
  fsType: "xfs"
  nodeAffinityTopologyLabel: |
    topology.kubernetes.io/zone: eu-west-1
provisioner: io.openebs.csi-mayastor
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
allowVolumeExpansion: true
$ kubectl apply -f $OPENEBSFOLDER/mayastor-storage-class.yaml
Create a DiskPool for each node from this template:
apiVersion: "openebs.io/v1beta3"
kind: DiskPool
metadata:
  name: "pool-$NODE_ID"
  namespace: openebs
spec:
  node: "$NODE_ID"
  disks: ["/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_$SERIAL"]
  topology:
    labelled:
      topology.kubernetes.io/zone: $REGION
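The template above gets rendered once per node; a minimal sketch of the substitution (the instance ID, EBS volume serial, and zone below are made-up placeholder values):

```shell
#!/bin/sh
# Render the DiskPool template for one node. On a real cluster the output
# would be piped to `kubectl apply -f -` instead of written to a file.
render_diskpool() {
  node_id="$1"; serial="$2"; region="$3"
  cat <<EOF
apiVersion: "openebs.io/v1beta3"
kind: DiskPool
metadata:
  name: "pool-${node_id}"
  namespace: openebs
spec:
  node: "${node_id}"
  disks: ["/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_${serial}"]
  topology:
    labelled:
      topology.kubernetes.io/zone: ${region}
EOF
}

# Placeholder values, for illustration only.
render_diskpool "i-0123456789abcdef0" "vol0abc123def456789" "eu-west-2" > pool-example.yaml
```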
$ kubectl get diskpool -n openebs
NAME NODE STATE POOL-STATUS ENCRYPTED CAPACITY USED AVAILABLE DISK-CAPACITY MAX-EXPANDABLE-SIZE
pool-i-09010310d2c9c582f i-09010310d2c9c582f Created Online false 511.5 GiB 100 GiB 411.5 GiB 512 GiB 639.8 GiB
pool-i-09789ad7ea1040030 i-09789ad7ea1040030 Created Online false 511.5 GiB 70 GiB 441.5 GiB 512 GiB 639.8 GiB
pool-i-0989e0766c81e7697 i-0989e0766c81e7697 Created Online false 511.5 GiB 110 GiB 401.5 GiB 512 GiB 639.8 GiB
pool-i-0e48216a8318f2699 i-0e48216a8318f2699 Created Online false 511.5 GiB 110 GiB 401.5 GiB 512 GiB 639.8 GiB
My goal is to get three replicas for a PVC when I use the mayastor-uk-3 StorageClass (placed on the nodes running in the UK zone), and a single replica when I use mayastor-ie-1 (placed on the IE node).
A PVC created with mayastor-single-replica works fine, but I hit a problem with mayastor-uk-3, set up as below:
---
apiVersion: v1
kind: Namespace
metadata:
  name: mytest
  labels:
    name: mytest
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: mytest
spec:
  storageClassName: mayastor-uk-3
  accessModes: [ReadWriteOnce]
  resources: { requests: { storage: 5Gi } }
---
apiVersion: v1
kind: Pod
metadata:
  name: test-nginx
  namespace: mytest
spec:
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: test-pvc
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: pvc
          mountPath: /usr/share/nginx/html
I created them:
$ kubectl get pod,pvc -n mytest
NAME READY STATUS RESTARTS AGE
pod/test-nginx 0/2 Pending 0 30m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/test-pvc Pending mayastor-uk-3 30m
but I receive the error:
$ kubectl describe persistentvolumeclaim/test-pvc -n mytest
Name: test-pvc
Namespace: mytest
StorageClass: mayastor-uk-3
Status: Pending
Volume:
Labels:
Annotations: volume.beta.kubernetes.io/storage-provisioner: io.openebs.csi-mayastor
volume.kubernetes.io/selected-node: i-09010310d2c9c582f
volume.kubernetes.io/storage-provisioner: io.openebs.csi-mayastor
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: test-nginx
Events:
Type Reason Age From Message
Normal WaitForFirstConsumer 31m persistentvolume-controller waiting for first consumer to be created before binding
Normal Provisioning 2m56s (x16 over 31m) io.openebs.csi-mayastor_i-09789ad7ea1040030_730cf568-5041-4143-9d4b-4bf05bf4a749 External provisioner is provisioning volume for claim "mytest/test-pvc"
Warning ProvisioningFailed 2m55s (x16 over 31m) io.openebs.csi-mayastor_i-09789ad7ea1040030_730cf568-5041-4143-9d4b-4bf05bf4a749 failed to provision volume with StorageClass "mayastor-uk-3": rpc error: code = Internal desc = Operation failed: ResourceExhausted("error in response: status code '507 Insufficient Storage', content: 'RestJsonError { details: "Not enough suitable pools available, 0/1", message: "SvcError :: NotEnoughResources: Operation failed due to insufficient resources", kind: ResourceExhausted }'")
Normal ExternalProvisioning 80s (x123 over 31m) persistentvolume-controller Waiting for a volume to be created either by the external provisioner 'io.openebs.csi-mayastor' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
Any idea how I can fix this?
Thanks for your help!