Multiple storage class support in a StatefulSet: different storage class per replica for the same volumeClaimTemplate

Cluster information:

Kubernetes version: 1.27.1
Cloud being used: Bare Metal
Installation method: Via kubeadm
Host OS: SLES 15
CNI and version:
CRI and version:

In the sample templates below, we would like a 2-replica StatefulSet whose PVCs use a different storage class per replica, so that each pod is attached to its own PVC. To that end:

  1. deployment.yaml declares a volumeClaimTemplates entry named pv-test, and that claim name is referenced in the container spec under volumeMounts.
  2. pvc.yaml pre-creates one PVC per replica, with each name chosen to match the name that volumeClaimTemplates would have generated if pvc.yaml did not exist.

Is this the correct way to handle multiple storage classes — creating a separate pvc.yaml that supplies the claim names for the 2 pods, while keeping the claim name in the StatefulSet's volumeClaimTemplates metadata so that the StatefulSet can map it to the volumeMounts name?
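For reference, the StatefulSet controller names each claim `<volumeClaimTemplate name>-<pod name>` and adopts an existing PVC with that name instead of provisioning a new one. So for a StatefulSet named `web` with a claim template named `pv-test`, a pre-created claim for replica 0 would need to look roughly like this (a sketch; the size and storage class values are taken from the values.yaml below):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-test-web-0   # <claim template>-<pod name>, not pv-test-0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: test-sc-1
```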


cat values.yaml

```yaml
images:
  name: nginx:1.14.2 # nginx:latest

updateStrategy:
  type: RollingUpdate
  partition: 1
  isImageUpgrade: false

resources:
  limits:
    cpu: 4
    memory: 4Gi
  requests:
    cpu: 4
    memory: 4Gi

nodeSelectorTerms:
  - matchExpressions:
      - key: node-pool
        operator: In
        values:
          - pool3

persistentVolumeClaim:
  storage:
    className0: test-sc-1
    className1: test-sc-2
    size:
      test: 2Gi
```

cat deployment.yaml

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
  labels:
    helm.sh/chart: {{ .Chart.Version }}
  annotations:
    product-name: web
spec:
  replicas: 2
  serviceName: web-svc-hl
  podManagementPolicy: "Parallel"
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: "web"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: "web"
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              {{- toYaml .Values.nodeSelectorTerms | nindent 14 }}
      serviceAccountName: web-usr
      containers:
        - name: web
          args:
            - /bin/sh
            - -c
            - sleep 30; touch /tmp/healthy; sleep 3600
          image: {{ .Values.images.name }}
          resources:
            limits:
              cpu: {{ .Values.resources.limits.cpu | quote }}
              memory: {{ .Values.resources.limits.memory | quote }}
            requests:
              cpu: {{ .Values.resources.requests.cpu | quote }}
              memory: {{ .Values.resources.requests.memory | quote }}
          readinessProbe:
            exec:
              command: ["ls", "/var/opt/podinstalled"]
            periodSeconds: 5
            successThreshold: 1
          lifecycle:
            preStop:
              exec:
                command: ["hostname"]
          volumeMounts:
            - name: pv-test
              mountPath: /var/test
          securityContext:
            capabilities:
              add:
                - IPC_LOCK
                - SYS_RESOURCE
                - IPC_OWNER
      terminationGracePeriodSeconds: 3
  volumeClaimTemplates:
    - metadata:
        name: pv-test
      # spec is required for the template to validate; storageClassName is
      # deliberately omitted because the PVCs are pre-created in pvc.yaml
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: {{ .Values.persistentVolumeClaim.storage.size.test }}
```

cat pvc.yaml

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # must match the name the StatefulSet derives: <claim template>-<pod name>
  name: pv-test-web-0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.persistentVolumeClaim.storage.size.test }}
  storageClassName: {{ .Values.persistentVolumeClaim.storage.className0 }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-test-web-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.persistentVolumeClaim.storage.size.test }}
  storageClassName: {{ .Values.persistentVolumeClaim.storage.className1 }}
```
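Since the two PVC documents differ only in ordinal and storage class, the same pvc.yaml could be generated with a Helm `range` loop. This is a sketch, and it assumes the storage classes are moved into a `classNames` list in values.yaml (not how the values above are structured); the claim name must still match what the StatefulSet expects, i.e. `<claim template>-<pod name>`:

```yaml
{{- range $i, $class := .Values.persistentVolumeClaim.storage.classNames }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-test-web-{{ $i }}   # matches <claim template>-<pod name>
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ $.Values.persistentVolumeClaim.storage.size.test }}
  storageClassName: {{ $class }}
{{- end }}
```

Note the `$.Values` root reference, which is needed because `.` is rebound inside `range`.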