Discussion about storage class and persistent volume

Could you kindly clarify how to read or edit files on a persistent volume (using the Longhorn StorageClass)? Would you kindly provide a quick explanation of this with examples?

Cluster information:

Kubernetes version: 1.24.3
Cloud being used: bare-metal (on-premise)
Installation method: kubeadm
Host OS: Ubuntu arm64 20.04
CNI and version: flannel
CRI and version: cri-o://1.23.3


Hi ramk:

When you provide a persistent volume in the pod definition, you specify where the volume is mounted in the pod’s filesystem; in the following example (copied from Volumes), the volume is mounted on /test-ebs.

The application running in the container (k8s.gcr.io/test-webserver in the example) can read and write files as it normally would on Linux. Say you want to create a file in the volume: you can just run echo "Hello world" > /test-ebs/hello.txt and read it back with cat /test-ebs/hello.txt.
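To make the point concrete, here is a minimal sketch of that pair of commands. The directory /tmp/test-ebs here just stands in for the container's mount point; inside the pod the path would be /test-ebs, exactly as declared in volumeMounts.

```shell
# Stand-in for the pod's mount point; inside the container this
# would be /test-ebs, as declared in the volumeMounts section.
mkdir -p /tmp/test-ebs

# Write a file into the "volume" and read it back: plain file I/O,
# no storage-backend-specific code involved.
echo "Hello world" > /tmp/test-ebs/hello.txt
cat /tmp/test-ebs/hello.txt
```

From the application's point of view, that is all there is to it: the mount point behaves like any other directory.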

As you created the volume (or requested one) specifying a StorageClass, the details on how to specifically “talk” to the storage backend are known to Kubernetes, so your process inside the container does not have to care about how it is actually “writing bytes” to the storage support.

If you deployed your pod in AWS and requested a volume, it would likely be an EBS volume. If you then test the same application on your on-premises cluster (with Longhorn as the storage backend), the application will keep reading and writing files exactly as it did when it was deployed in AWS.

Again, all the application running inside the pod cares about is that the volume is mounted in the pod’s filesystem, so it will write to and read from the mount point no matter what the storage backend is.

apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-ebs
      name: test-volume
  volumes:
  - name: test-volume
    # This AWS EBS volume must already exist.
    awsElasticBlockStore:
      volumeID: "<volume id>"
      fsType: ext4
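Since you asked specifically about Longhorn: instead of referencing an EBS volume directly, you would typically create a PersistentVolumeClaim against the Longhorn StorageClass and mount that. The sketch below assumes Longhorn was installed with its default StorageClass name, longhorn; the PVC and pod names are made up for illustration.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-pvc           # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn   # default name of the Longhorn StorageClass
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-longhorn          # hypothetical name
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /data         # the app reads/writes under /data
      name: test-volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: longhorn-pvc  # binds the pod to the claim above
```

The application code is identical in both cases; only the volumes section of the manifest changes.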

Best regards,

Xavi

Hi Xavi,

Thanks, this is going to work. However, I need to connect to the PV using FTP in order to access the files with read and update privileges. I tried the NFS technique and it worked, but would a PV work?

If NFS works for you, it is a good solution. I assume that you now connect via FTP to the NFS server to upload/download files that are then “mounted” as a PV in the pod.

If the pod and someone else modify the data at the same time, unexpected things may happen.

Maybe it would be much simpler to just use kubectl cp, which lets you copy files from or to containers, depending on what you need.
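As a quick sketch (using the pod name and mount path from the example above; adjust them to your setup), copying a file into and back out of the mounted volume looks like:

```shell
# Copy a local file into the pod's mounted volume
kubectl cp ./hello.txt test-ebs:/test-ebs/hello.txt

# Copy a file from the pod's volume back to the local machine
kubectl cp test-ebs:/test-ebs/hello.txt ./hello-copy.txt
```

Note that kubectl cp relies on tar being available inside the container.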

Your application (or some additional component) should be in charge of the upload or download process, instead of accessing the PV directly.

You can run a sidecar container with git and pull/push data in and out of your container, for example.
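A rough sketch of that sidecar idea follows; all names are hypothetical, and the sidecar here just sleeps so you can exec into it and run git pull/push manually against the shared mount (in practice you would run a proper sync loop or use a purpose-built image).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sync              # hypothetical name
spec:
  containers:
  - name: app
    image: k8s.gcr.io/test-webserver
    volumeMounts:
    - mountPath: /data             # app and sidecar see the same files
      name: shared-data
  - name: git-sync                 # sidecar with git available
    image: alpine/git
    command: ["sleep", "infinity"] # exec in and run git pull/push by hand
    volumeMounts:
    - mountPath: /data
      name: shared-data
  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: longhorn-pvc      # assumes an existing PVC
```

Both containers mount the same PVC, so whatever the sidecar pulls in is immediately visible to the application, and vice versa.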

Best regards,

Xavi