Mounting persistent volumes of a StatefulSet from a Job

Cluster information:

Kubernetes version: 1.19.3 (k3s)
Cloud being used: bare-metal
Installation method: na
Host OS: Ubuntu
CNI and version: na
CRI and version: na

Hello

I would like to run a CronJob that mounts the volumes of the pods of a StatefulSet.
To do this, I wrote the following YAML:

```yaml
- name: …
  displayName: …
  spec: |
    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
    spec:
      jobTemplate:
        spec:
          backoffLimit: 1
          template:
            spec:
              containers:
                - image: …
                  args:
                    - …
                  volumeMounts:
                    - name: dbdata
                      mountPath: /var/lib/clickhouse
              restartPolicy: Never
              volumes:
                - name: dbdata
                  persistentVolumeClaim:
                    claimName: datadir-db-0
          selector:
            matchLabels:
              app: db
```

The problem is that “claimName” is a fixed string.
Here “datadir-db-0” is the claim of the first pod of the StatefulSet (which has the label app=db).
For the second pod of the StatefulSet, the second pod executed by the job will fail because “datadir-db-0” is not the claim it needs: “datadir-db-1” should be used instead.
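
For reference, these claim names come from the StatefulSet’s volumeClaimTemplates: Kubernetes names each PVC `<template-name>-<statefulset-name>-<ordinal>`, hence datadir-db-0, datadir-db-1, and so on. A minimal sketch of what I assume the StatefulSet looks like (image and storage size are placeholders):

```yaml
# Assumed StatefulSet: with volumeClaimTemplates named "datadir" and a
# StatefulSet named "db", Kubernetes creates the PVCs
# datadir-db-0, datadir-db-1, ... (one per replica).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: clickhouse
          image: clickhouse/clickhouse-server   # placeholder image
          volumeMounts:
            - name: datadir
              mountPath: /var/lib/clickhouse
  volumeClaimTemplates:
    - metadata:
        name: datadir
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi   # hypothetical size
```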

How can I mount the right persistent volume of the StatefulSet from each pod executed by the job?
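
One direction I have been considering (not sure it is the cleanest) is to generate one CronJob per StatefulSet replica, substituting the pod ordinal into the claim name. A rough sketch, where the replica count, names, schedule, and image are all placeholders:

```bash
# Hypothetical workaround: one CronJob per replica, so that each job pod
# mounts the claim of the matching StatefulSet pod (datadir-db-$i).
REPLICAS=2
for i in $(seq 0 $((REPLICAS - 1))); do
  cat <<EOF | kubectl apply -f -
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: db-maintenance-$i      # hypothetical name
spec:
  schedule: "0 3 * * *"        # hypothetical schedule
  jobTemplate:
    spec:
      backoffLimit: 1
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: worker
              image: busybox   # placeholder image
              volumeMounts:
                - name: dbdata
                  mountPath: /var/lib/clickhouse
          volumes:
            - name: dbdata
              persistentVolumeClaim:
                claimName: datadir-db-$i
EOF
done
```

But maybe there is a more idiomatic way to do this with a single job.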

Regards.

nikkko