Persistent volume for Express.js public folder


Cluster information:

Kubernetes version: 1.12.8
Cloud being used: Azure
Installation method: Azure AKS
Host OS: Linux
CNI and version:

I am brand new to Kubernetes. I created an Express.js app that lets a user upload a video, which is then formatted by FFmpeg. Once the video is formatted, it is stored in the public folder and a link to the resource is provided to the user.

Everything worked in the Docker container and in my local Kubernetes instance. When I deployed the app to Azure I discovered that the pods are indeed stateless: videos saved in one pod are not accessible to the other pods.

I did some research and I understand that I need to create a persistent volume that the pods can share. My question is: how do I make that public folder read from and write to that volume? I am guessing I would use something like:

volumeMounts:
- name: videocms-persistent-storage
  mountPath: /public

I am not certain. Will all of the files in the pod be stored in the volume, or only the files under the mount path (/public)? Does the value I use for mountPath actually tell Kubernetes which directory in the pod to back with the file share? Or is it just a virtual directory whose name is inconsequential, with all of the pod's files stored in that file share?
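For what it's worth, my current understanding is that mountPath names a real directory inside the container: only files written under that path land on the volume, the rest of the container filesystem stays pod-local, and the mount shadows whatever the image had at that path. A minimal sketch of that reading (volume name taken from my snippet above; /public is where I assume the app writes):

```yaml
containers:
- name: videocms
  volumeMounts:
  - name: videocms-persistent-storage
    mountPath: /public   # only files under /public go to the shared volume;
                         # /tmp, /usr, etc. remain local to each pod
```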

Please describe the problem, what you want to achieve, what you tried, etc. :slight_smile:

I tried this: https://docs.microsoft.com/en-us/azure/aks/azure-files-volume. I was able to attach the volume successfully; when I ran kubectl describe pod, the volume appeared in the output.

To make a long story short, files were not being shared across the three replicas. I reached out to the Kubernetes Slack group. One user answered and recommended adding the mountOptions as suggested in the article. I did that, and it didn't fix the problem. I also tried adding a PersistentVolume and a PersistentVolumeClaim. That still didn't work, and that is where I am right now.
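One way I've been trying to sanity-check whether the share actually works (pod names below are placeholders; substitute real ones from kubectl get pods): write a file through the mount in one replica and look for it from another:

```shell
# Write a marker file through the mount in the first pod
kubectl exec node-ffmpeg-video-cms-deployment-xxxxx-aaaaa -- touch /www/var/public/test-marker

# If the share is working, the marker should be visible from a second pod
kubectl exec node-ffmpeg-video-cms-deployment-xxxxx-bbbbb -- ls /www/var/public
```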

Below is my deployment yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-ffmpeg-video-cms-deployment 
  labels:
    app: node-ffmpeg-video-cms
spec:
  replicas: 3
  template:
    metadata:
      name: node-ffmpeg-video-cms
      labels:
        app: node-ffmpeg-video-cms
    spec:
      containers:
      - name: node-ffmpeg-video-cms
        image: nodeffmpegvideocmscr.azurecr.io/node-ffmpeg-video-cms:v1
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: mystorageaccount17924
          mountPath: /www/var/public
      restartPolicy: Always
      volumes:
      - name: mystorageaccount17924
        azureFile:
          secretName: azure-secret
          shareName: node-ffmpeg-video-cms
          readOnly: false
  selector:
    matchLabels:
      app: node-ffmpeg-video-cms

---

apiVersion: v1
kind: Service
metadata:
  name: node-ffmpeg-video-cms-service
spec:
  selector:
    app: node-ffmpeg-video-cms
  ports:
    - port: 3000
  type: LoadBalancer

---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: sample-storage
  # The label is used for matching the exact claim
  labels:
    usage: sample-storage
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  azureFile:
    secretName: azure-secret
    shareName: node-ffmpeg-video-cms
    readOnly: false
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=1000
    - gid=1000
  
---

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: sample-storage-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: ""
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      usage: sample-storage
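One thing I notice about my own manifests above: the Deployment mounts the Azure File share inline (azureFile: under volumes:), so the PersistentVolume and PersistentVolumeClaim are never actually used. If I wanted the pods to go through the claim instead, I believe the Deployment's volumes section would reference it like this (a sketch reusing the names from the manifests above):

```yaml
# In the Deployment's pod spec: reference the claim instead of azureFile directly
volumes:
- name: mystorageaccount17924
  persistentVolumeClaim:
    claimName: sample-storage-claim
```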

This is the output when I use the kubectl describe pod command:

Name:           node-ffmpeg-video-cms-deployment-8547d97c69-wzq7j
Namespace:      default
Priority:       0
Node:           aks-nodepool1-22998726-0/10.240.0.4
Start Time:     Fri, 26 Jul 2019 14:55:29 -0500
Labels:         app=node-ffmpeg-video-cms
                pod-template-hash=8547d97c69
Annotations:    <none>
Status:         Running
IP:             10.244.0.24
Controlled By:  ReplicaSet/node-ffmpeg-video-cms-deployment-8547d97c69
Containers:
  node-ffmpeg-video-cms:
    Container ID:   docker://4c4f89dfc0058fcaa6fcba0b3dd66e89493715fe4373ffe625eacc0296a45ae1
    Image:          nodeffmpegvideocmscr.azurecr.io/node-ffmpeg-video-cms:v1
    Image ID:       docker-pullable://nodeffmpegvideocmscr.azurecr.io/node-ffmpeg-video-cms@sha256:2b949efa8535b59a927efbb4d7c6d24739691fa90fad86c91086dc4cfbadbe23
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 26 Jul 2019 14:55:31 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7pb5v (ro)
      /www/var/public from mystorageaccount17924 (rw)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  mystorageaccount17924:
    Type:        AzureFile (an Azure File Service mount on the host and bind mount to the pod)
    SecretName:  azure-secret
    ShareName:   node-ffmpeg-video-cms
    ReadOnly:    false
  default-token-7pb5v:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-7pb5v
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

What I am trying to achieve at a high level is described in the original post. From a technical standpoint, I use an Express.js app running in a Node/FFmpeg container. The app provides a form to upload a video file, which is formatted and provided to the user as a link.

When FFmpeg writes the file, it writes it to the public directory of the Express.js app. Without volumes the user may not be able to download the file: the pod that responds to the request for the link may not be the same pod that formatted and saved the file. From what I understand, volumes are the way to share data and files. My idea was to mount a volume for the public directory (or possibly a public/videos directory). I was thinking that would be the correct way to allow the apps running in the three pods to share a common location where they can write and read files.

Ugh, maybe someone with azure knowledge knows better. But the guide says:

If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect using the Server Message Block (SMB) protocol. This article shows you how to manually create an Azure Files share and attach it to a pod in AKS.

And I can't find where the SMB protocol is used there. Maybe the container images use it?

My guess is that this guide doesn't use SMB, just a direct mount. If so, it won't work as expected (filesystems have caches and other machinery that need to be cluster-aware for sharing; if they aren't, sharing breaks).

But I really don't know.

Lol. Clearly I don't know either. I thought the SMB protocol was set up on Azure's servers. They have a link to the Kubernetes GitHub page detailing how to use azure_file. The first step in the README is to install cifs-utils. I just assumed that Azure would have that in place?

I figured it out. I just needed to supply the correct path to volumeMounts.mountPath. It needed to be /var/www/public instead of /www/var/public. I guess I need to learn more about Linux as well!
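In case it helps anyone hitting the same mismatch: one way to keep the two paths from drifting is to define the directory once in the Deployment and hand it to the app as an environment variable (PUBLIC_DIR is a name I made up; the app would read it via process.env.PUBLIC_DIR instead of hard-coding the path):

```yaml
containers:
- name: node-ffmpeg-video-cms
  image: nodeffmpegvideocmscr.azurecr.io/node-ffmpeg-video-cms:v1
  env:
  - name: PUBLIC_DIR           # hypothetical variable the app reads at startup
    value: /var/www/public
  volumeMounts:
  - name: mystorageaccount17924
    mountPath: /var/www/public # same value, so the mount and the app agree
```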

That made it work, then? :slight_smile:

Yes! :smile: