Hi everybody! After a few attempts, I'm here to ask for your help.
I'm trying to deploy the Cassandra stateful application (kubernetes.io/docs/tutorials/stateful-application/cassandra/), but I'm clearly making some mistakes.
This is my Kubernetes cluster:
root@k8s-eu-1-master:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-eu-1-master Ready control-plane 41h v1.28.2
k8s-eu-1-worker-1 Ready <none> 41h v1.28.2
k8s-eu-1-worker-2 Ready <none> 41h v1.28.2
k8s-eu-1-worker-3 Ready <none> 41h v1.28.2
k8s-eu-1-worker-4 Ready <none> 41h v1.28.2
k8s-eu-1-worker-5 Ready <none> 41h v1.28.2
with NFS shared folders:
root@k8s-eu-1-master:~# df -h | grep /srv/
aa.aaa.aaa.aaa:/srv/shared-k8s-eu-1-worker-1 391G 6.1G 365G 2% /mnt/data
yy.yyy.yyy.yyy:/srv/shared-k8s-eu-1-worker-2 391G 6.1G 365G 2% /mnt/data
zz.zzz.zzz.zz:/srv/shared-k8s-eu-1-worker-3 391G 6.1G 365G 2% /mnt/data
pp.ppp.ppp.pp:/srv/shared-k8s-eu-1-worker-4 391G 6.1G 365G 2% /mnt/data
qq.qqq.qqq.qqq:/srv/shared-k8s-eu-1-worker-5 391G 6.1G 365G 2% /mnt/data
I deployed the nfs-subdir-external-provisioner (github.com/kubernetes-sigs/nfs-subdir-external-provisioner/blob/master/charts/nfs-subdir-external-provisioner/README.md#install-multiple-provisioners), specifying a different storageClassName for each provisioner:
root@k8s-eu-1-master:~# helm install k8s-eu-1-worker-1-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
> --set nfs.server=aa.aaa.aaa.aa \
> --set nfs.path=/srv/shared-k8s-eu-1-worker-1 \
> --set storageClass.name=k8s-eu-1-worker-1 \
> --set storageClass.provisionerName=k8s-sigs.io/k8s-eu-1-worker-1
NAME: k8s-eu-1-worker-1-nfs-subdir-external-provisioner
LAST DEPLOYED: Mon Nov 6 17:28:58 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
root@k8s-eu-1-master:~# helm install k8s-eu-1-worker-2-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
> --set nfs.server=yy.yyy.yyy.yyy \
> --set nfs.path=/srv/shared-k8s-eu-1-worker-2 \
> --set storageClass.name=k8s-eu-1-worker-2 \
> --set storageClass.provisionerName=k8s-sigs.io/k8s-eu-1-worker-2
NAME: k8s-eu-1-worker-2-nfs-subdir-external-provisioner
LAST DEPLOYED: Mon Nov 6 17:31:15 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
root@k8s-eu-1-master:~# helm install k8s-eu-1-worker-3-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
> --set nfs.server=zz.zzz.zzz.zz \
> --set nfs.path=/srv/shared-k8s-eu-1-worker-3 \
> --set storageClass.name=k8s-eu-1-worker-3 \
> --set storageClass.provisionerName=k8s-sigs.io/k8s-eu-1-worker-3
NAME: k8s-eu-1-worker-3-nfs-subdir-external-provisioner
LAST DEPLOYED: Mon Nov 6 17:39:25 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
root@k8s-eu-1-master:~# helm install k8s-eu-1-worker-4-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
> --set nfs.server=pp.ppp.ppp.pp \
> --set nfs.path=/srv/shared-k8s-eu-1-worker-4 \
> --set storageClass.name=k8s-eu-1-worker-4 \
> --set storageClass.provisionerName=k8s-sigs.io/k8s-eu-1-worker-4
NAME: k8s-eu-1-worker-4-nfs-subdir-external-provisioner
LAST DEPLOYED: Tue Nov 7 08:25:33 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
root@k8s-eu-1-master:~# helm install k8s-eu-1-worker-5-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
> --set nfs.server=qq.qqq.qqq.qqq \
> --set nfs.path=/srv/shared-k8s-eu-1-worker-5 \
> --set storageClass.name=k8s-eu-1-worker-5 \
> --set storageClass.provisionerName=k8s-sigs.io/k8s-eu-1-worker-5
NAME: k8s-eu-1-worker-5-nfs-subdir-external-provisioner
LAST DEPLOYED: Mon Nov 6 17:49:21 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
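Each Helm release should also have created its own StorageClass. A quick way to double-check the class/provisioner pairs (output not pasted here) is:

kubectl get storageclass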
root@k8s-eu-1-master:~# kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
k8s-eu-1-worker-1-nfs-subdir-external-provisioner 1/1 1 1 16h
k8s-eu-1-worker-2-nfs-subdir-external-provisioner 1/1 1 1 16h
k8s-eu-1-worker-3-nfs-subdir-external-provisioner 1/1 1 1 16h
k8s-eu-1-worker-4-nfs-subdir-external-provisioner 1/1 1 1 85m
k8s-eu-1-worker-5-nfs-subdir-external-provisioner 1/1 1 1 16h
root@k8s-eu-1-master:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
k8s-eu-1-worker-1-nfs-subdir-external-provisioner-74787c8dx8f4j 1/1 Running 0 16h
k8s-eu-1-worker-2-nfs-subdir-external-provisioner-ffdfb98dk9mrw 1/1 Running 0 16h
k8s-eu-1-worker-3-nfs-subdir-external-provisioner-7c9797c8jpzkv 1/1 Running 0 16h
k8s-eu-1-worker-4-nfs-subdir-external-provisioner-6bd84f54b2xx2 1/1 Running 0 86m
k8s-eu-1-worker-5-nfs-subdir-external-provisioner-84976cd7lttsn 1/1 Running 0 16h
These are the PersistentVolumeClaims:
root@k8s-eu-1-master:~# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
k8s-eu-1-worker-1-cassandra-0 Bound pvc-22d85482-9103-43b5-a93e-a52e70bdbd16 1Gi RWO k8s-eu-1-worker-1 18h
k8s-eu-1-worker-2-cassandra-0 Bound pvc-5118d0ae-b6fa-476e-b22d-a5bb3247f7fb 1Gi RWO k8s-eu-1-worker-2 18h
k8s-eu-1-worker-3-cassandra-0 Bound pvc-7a7160ea-0bf6-42de-9b35-3464930ea7d0 1Gi RWO k8s-eu-1-worker-3 18h
k8s-eu-1-worker-4-cassandra-0 Bound pvc-b7934357-6d6c-47a8-b644-28b9a0ad58b5 1Gi RWO k8s-eu-1-worker-4 18h
k8s-eu-1-worker-5-cassandra-0 Bound pvc-d587623f-f62f-4f80-b6c2-39104c568fda 1Gi RWO k8s-eu-1-worker-5 18h
and the PersistentVolumes (each one appears to be bound to the corresponding PVC):
root@k8s-eu-1-master:~# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-22d85482-9103-43b5-a93e-a52e70bdbd16 1Gi RWO Delete Bound default/k8s-eu-1-worker-1-cassandra-0 k8s-eu-1-worker-1 18h
pvc-5118d0ae-b6fa-476e-b22d-a5bb3247f7fb 1Gi RWO Delete Bound default/k8s-eu-1-worker-2-cassandra-0 k8s-eu-1-worker-2 18h
pvc-7a7160ea-0bf6-42de-9b35-3464930ea7d0 1Gi RWO Delete Bound default/k8s-eu-1-worker-3-cassandra-0 k8s-eu-1-worker-3 18h
pvc-b7934357-6d6c-47a8-b644-28b9a0ad58b5 1Gi RWO Delete Bound default/k8s-eu-1-worker-4-cassandra-0 k8s-eu-1-worker-4 18h
pvc-d587623f-f62f-4f80-b6c2-39104c568fda 1Gi RWO Delete Bound default/k8s-eu-1-worker-5-cassandra-0 k8s-eu-1-worker-5 18h
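Each of these PVs was provisioned dynamically by the matching nfs-subdir-external-provisioner, so it should point at a subdirectory of the corresponding NFS export. If it helps, the backing server and path of a PV can be read with something like this (using the first PV above; this assumes the provisioner creates NFS-backed volumes, which is its default behaviour):

kubectl get pv pvc-22d85482-9103-43b5-a93e-a52e70bdbd16 -o jsonpath='{.spec.nfs.server}:{.spec.nfs.path}'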
I tried to modify the cassandra-statefulset.yaml file from the tutorial (kubernetes.io/docs/tutorials/stateful-application/cassandra/):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra:v13
        imagePullPolicy: Always
        ports:
        - containerPort: 7000
          name: intra-node
        - containerPort: 7001
          name: tls-intra-node
        - containerPort: 7199
          name: jmx
        - containerPort: 9042
          name: cql
        resources:
          limits:
            cpu: "500m"
            memory: 1Gi
          requests:
            cpu: "500m"
            memory: 1Gi
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - nodetool drain
        env:
        - name: MAX_HEAP_SIZE
          value: 512M
        - name: HEAP_NEWSIZE
          value: 100M
        - name: CASSANDRA_SEEDS
          value: "cassandra-0.cassandra.default.svc.cluster.local"
        - name: CASSANDRA_CLUSTER_NAME
          value: "K8Demo"
        - name: CASSANDRA_DC
          value: "DC1-K8Demo"
        - name: CASSANDRA_RACK
          value: "Rack1-K8Demo"
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - /ready-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        # These volume mounts are persistent. They are like inline claims,
        # but not exactly because the names need to match exactly one of
        # the stateful pod volumes.
        volumeMounts:
        - name: k8s-eu-1-worker-1-cassandra-0
          mountPath: /srv/shared-k8s-eu-1-worker-1
        - name: k8s-eu-1-worker-2-cassandra-0
          mountPath: /srv/shared-k8s-eu-1-worker-2
        - name: k8s-eu-1-worker-3-cassandra-0
          mountPath: /srv/shared-k8s-eu-1-worker-3
        - name: k8s-eu-1-worker-4-cassandra-0
          mountPath: /srv/shared-k8s-eu-1-worker-4
        - name: k8s-eu-1-worker-5-cassandra-0
          mountPath: /srv/shared-k8s-eu-1-worker-5
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  # do not use these in production until ssd GCEPersistentDisk or other ssd pd
  volumeClaimTemplates:
  - metadata:
      name: k8s-eu-1-worker-1
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: k8s-eu-1-worker-1
      resources:
        requests:
          storage: 1Gi
  - metadata:
      name: k8s-eu-1-worker-2
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: k8s-eu-1-worker-2
      resources:
        requests:
          storage: 1Gi
  - metadata:
      name: k8s-eu-1-worker-3
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: k8s-eu-1-worker-3
      resources:
        requests:
          storage: 1Gi
  - metadata:
      name: k8s-eu-1-worker-4
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: k8s-eu-1-worker-4
      resources:
        requests:
          storage: 1Gi
  - metadata:
      name: k8s-eu-1-worker-5
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: k8s-eu-1-worker-5
      resources:
        requests:
          storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: k8s-eu-1-worker-1
provisioner: k8s-sigs.io/k8s-eu-1-worker-1
parameters:
  type: pd-ssd
metadata:
  name: k8s-eu-1-worker-2
provisioner: k8s-sigs.io/k8s-eu-1-worker-2
parameters:
  type: pd-ssd
metadata:
  name: k8s-eu-1-worker-3
provisioner: k8s-sigs.io/k8s-eu-1-worker-3
parameters:
  type: pd-ssd
metadata:
  name: k8s-eu-1-worker-4
provisioner: k8s-sigs.io/k8s-eu-1-worker-4
parameters:
  type: pd-ssd
metadata:
  name: k8s-eu-1-worker-5
provisioner: k8s-sigs.io/k8s-eu-1-worker-5
parameters:
  type: pd-ssd
---
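For comparison, if I read the upstream tutorial correctly, it uses a single volumeMount whose name matches the single volumeClaimTemplate, roughly (abbreviated):

        volumeMounts:
        - name: cassandra-data
          mountPath: /cassandra_data
  volumeClaimTemplates:
  - metadata:
      name: cassandra-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fast
      resources:
        requests:
          storage: 1Gi

My version above replaces that single pair with five, one per worker/NFS share.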
And indeed, when I apply it, something goes wrong:
root@k8s-eu-1-master:~# kubectl apply -f ./cassandraStatefulApp/cassandra-statefulset.yaml
statefulset.apps/cassandra created
Warning: resource storageclasses/k8s-eu-1-worker-5 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
The StorageClass "k8s-eu-1-worker-5" is invalid: parameters: Forbidden: updates to parameters are forbidden.
root@k8s-eu-1-master:~# kubectl get statefulsets
NAME READY AGE
cassandra 0/3 8s
root@k8s-eu-1-master:~# kubectl describe statefulsets cassandra
Name: cassandra
Namespace: default
CreationTimestamp: Tue, 07 Nov 2023 11:00:59 +0100
Selector: app=cassandra
Labels: app=cassandra
Annotations: <none>
Replicas: 3 desired | 0 total
Update Strategy: RollingUpdate
Partition: 0
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=cassandra
Containers:
cassandra:
Image: gcr.io/google-samples/cassandra:v13
Ports: 7000/TCP, 7001/TCP, 7199/TCP, 9042/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
Limits:
cpu: 500m
memory: 1Gi
Requests:
cpu: 500m
memory: 1Gi
Readiness: exec [/bin/bash -c /ready-probe.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
Environment:
MAX_HEAP_SIZE: 512M
HEAP_NEWSIZE: 100M
CASSANDRA_SEEDS: cassandra-0.cassandra.default.svc.cluster.local
CASSANDRA_CLUSTER_NAME: K8Demo
CASSANDRA_DC: DC1-K8Demo
CASSANDRA_RACK: Rack1-K8Demo
POD_IP: (v1:status.podIP)
Mounts:
/srv/shared-k8s-eu-1-worker-1 from k8s-eu-1-worker-1-cassandra-0 (rw)
/srv/shared-k8s-eu-1-worker-2 from k8s-eu-1-worker-2-cassandra-0 (rw)
/srv/shared-k8s-eu-1-worker-3 from k8s-eu-1-worker-3-cassandra-0 (rw)
/srv/shared-k8s-eu-1-worker-4 from k8s-eu-1-worker-4-cassandra-0 (rw)
/srv/shared-k8s-eu-1-worker-5 from k8s-eu-1-worker-5-cassandra-0 (rw)
Volumes: <none>
Volume Claims:
Name: k8s-eu-1-worker-1
StorageClass: k8s-eu-1-worker-1
Labels: <none>
Annotations: <none>
Capacity: 1Gi
Access Modes: [ReadWriteOnce]
Name: k8s-eu-1-worker-2
StorageClass: k8s-eu-1-worker-2
Labels: <none>
Annotations: <none>
Capacity: 1Gi
Access Modes: [ReadWriteOnce]
Name: k8s-eu-1-worker-3
StorageClass: k8s-eu-1-worker-3
Labels: <none>
Annotations: <none>
Capacity: 1Gi
Access Modes: [ReadWriteOnce]
Name: k8s-eu-1-worker-4
StorageClass: k8s-eu-1-worker-4
Labels: <none>
Annotations: <none>
Capacity: 1Gi
Access Modes: [ReadWriteOnce]
Name: k8s-eu-1-worker-5
StorageClass: k8s-eu-1-worker-5
Labels: <none>
Annotations: <none>
Capacity: 1Gi
Access Modes: [ReadWriteOnce]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 3s (x13 over 23s) statefulset-controller create Pod cassandra-0 in StatefulSet cassandra failed error: Pod "cassandra-0" is invalid: [spec.containers[0].volumeMounts[0].name: Not found: "k8s-eu-1-worker-1-cassandra-0", spec.containers[0].volumeMounts[1].name: Not found: "k8s-eu-1-worker-2-cassandra-0", spec.containers[0].volumeMounts[2].name: Not found: "k8s-eu-1-worker-3-cassandra-0", spec.containers[0].volumeMounts[3].name: Not found: "k8s-eu-1-worker-4-cassandra-0", spec.containers[0].volumeMounts[4].name: Not found: "k8s-eu-1-worker-5-cassandra-0"]
How should I specify the volumeMounts for each of the provisioners in the cassandra-statefulset.yaml file?
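My guess, from the error message, is that each volumeMount name has to match a volumeClaimTemplate name exactly, and that the -cassandra-0 suffix is only added to the generated PVC names. So the mounts should probably look more like this sketch, although I'm not sure this is the right way to spread the replicas over the different NFS shares:

        volumeMounts:
        - name: k8s-eu-1-worker-1
          mountPath: /srv/shared-k8s-eu-1-worker-1
        # ...and so on for the other four volumeClaimTemplates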