Cluster information:
Kubernetes version: 1.12.8
Cloud being used: Azure
Installation method: Azure AKS
Host OS: Linux
CNI and version:
I am brand new to Kubernetes. I created an Express.js app that lets a user upload a video, which is then formatted by FFmpeg. Once the video is formatted, it is stored in the public folder and a link to the resource is provided to the user.
Everything worked in the Docker container and in my local Kubernetes instance. When I deployed the app to Azure, I discovered that the pods are indeed stateless and that videos saved in one pod are not accessible to the other pods.
I did some research and I understand that it is necessary to create a persistent volume that the pods can share. My question is how I can make that public folder read from and write to that volume. I am guessing that I would:
I am not certain. Will all of the files in the pod be stored in the volume, or only the files under the path on the pod (/public)? Does the value I use for the mountPath actually tell Kubernetes which directory in the pod is backed by the file share? Or is it just a virtual directory whose name is inconsequential, with all of the pod's files stored in that file share?
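To make the question concrete, here is roughly how such a mount is declared in a Deployment's pod spec. This is only a sketch; the app name, image, directory path, and claim name (video-storage) are all hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: video-app                 # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: video-app
  template:
    metadata:
      labels:
        app: video-app
    spec:
      containers:
        - name: video-app
          image: myregistry/video-app:latest   # hypothetical image
          volumeMounts:
            - name: public-files
              mountPath: /var/www/public       # only files under this path live on the volume
      volumes:
        - name: public-files
          persistentVolumeClaim:
            claimName: video-storage           # hypothetical PVC name
```

As I understand it, mountPath is a real directory path inside the container, not a virtual name: only files written under that exact path end up on the shared volume, and everything else in the pod's filesystem stays local to the pod.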
To make a long story short, I was not seeing files shared across the three replicas. I reached out to the Kubernetes Slack group, and one user answered. He recommended adding the mountOptions as suggested in the article. I did that, and it didn't fix it. I also tried adding a persistent volume and a persistent volume claim. That still didn't work, and that is where I am right now.
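For context, the PV/PVC pair I tried looked roughly like the following. The share name, secret name, and sizes are placeholders for my actual values; the azureFile block is the in-tree Azure Files plugin available on Kubernetes 1.12:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: video-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany            # all three replicas need to mount it simultaneously
  azureFile:
    secretName: azure-storage-secret   # secret holding the storage account name and key
    shareName: videos                  # hypothetical Azure Files share name
    readOnly: false
  mountOptions:                # the options suggested in the article
    - dir_mode=0777
    - file_mode=0777
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: video-storage
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # empty string binds to the pre-created PV, not a dynamic class
  resources:
    requests:
      storage: 5Gi
```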
What I am trying to achieve at a high level is described in the original post. From a technical standpoint, I use an Express.js app running in a Node/FFmpeg container. The app provides a form to upload a video file, which is formatted and then provided to the user as a link.
When FFmpeg writes the file, it writes it to the public directory of the Express.js app. Without volumes the user may not be able to download the file, because the pod that responds to the request from the link may not be the same pod that formatted and saved the file. From what I understand, volumes are the way to share data, files, etc. My idea was to mount a volume for the public directory (or possibly a public/videos directory). I was thinking that may be the correct way to allow each of the three apps running in the three pods to share a common place to write and read files.
Ugh, maybe someone with Azure knowledge knows better. But the guide says:
If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect using the Server Message Block (SMB) protocol. This article shows you how to manually create an Azure Files share and attach it to a pod in AKS.
And I don’t find where the SMB protocol is used there. Maybe the container images use it?
My guess is that this guide doesn’t use SMB, just a direct mount. Therefore it won’t work (filesystems have caches and other state that need to be cluster-aware for sharing; if they aren’t, it won’t behave as expected).
Lol. Clearly I don’t know either. I thought the SMB protocol was set up on Azure’s servers. They link to the Kubernetes GitHub page detailing how to use azure_file. The first step in the README is to install cifs-utils. I just assumed that Azure would have that in place?
I figured it out. I just needed to supply the correct path to volumeMounts.mountPath. It needed to be /var/www/public vs. /www/var/public. I guess I need to learn more about Linux as well!
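For anyone who hits the same thing, the change was literally just the mountPath value in the Deployment. mountPath must be the real absolute path of the directory inside the container (the volume name here is whatever yours is called):

```yaml
volumeMounts:
  - name: public-files          # your volume's name
    mountPath: /var/www/public  # was /www/var/public (path components in the wrong order)
```

With the wrong path, Kubernetes happily mounts the share at /www/var/public, but the app keeps writing to /var/www/public on the pod's local filesystem, so nothing is shared.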