Kubernetes version: v1.14.2
Cloud being used: bare-metal
Installation method: kubeadm bootstrapping
Host OS: RHEL 7.6
CNI and version: Weave Net 2.5.2
CRI and version: Docker 18.9.6
Edit 2: Solved via a configuration change on our storage provider's side.
I’m working on adding NFS storage to our Kubernetes staging cluster. The storage is hosted by another organization, and as part of their storage structure they generate a `.snapshot` directory at each mount point. This causes issues for us: unwanted `.snapshot` directories end up in our volumes and containers, sometimes causing programs to fail.
This is mostly an issue when using Helm charts, as these are not created or maintained by us and thus don’t take the `.snapshot` directories into account.
My current workaround is to edit the manifests where issues appear, adding a `subPath: mount` line to the volumeMounts that reference the affected persistentVolumeClaims. However, since this (so far) has to be done manually, I am hoping to find a more streamlined solution.
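For context, the workaround looks roughly like this (all names here are illustrative placeholders, not from our actual manifests):

```yaml
# Illustrative Deployment fragment. Mounting the "mount" subdirectory
# of the NFS volume instead of its root means the provider-generated
# .snapshot directory at the volume root never appears in the container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app        # placeholder name
spec:
  template:
    spec:
      containers:
        - name: app
          image: example/app:latest   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data
              subPath: mount          # hides the root-level .snapshot
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: example-pvc    # placeholder claim
```

The `mount` subdirectory has to exist inside the volume for this to work.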
Alternatively, I’ve been thinking about writing a script that adds `subPath: mount` to all relevant volumeMounts in order to make the process automatic, and I’ve been looking at the kustomize.io project to see if it could be a good alternative.
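If kustomize turns out to fit, a patch along these lines might avoid editing the rendered chart manifests at all. This is only a sketch; the resource and file names are assumptions, and the patch path depends on the actual position of the volumeMount in each container spec:

```yaml
# kustomization.yaml — illustrative sketch. Takes the output of
# `helm template` as a resource and adds subPath via a JSON patch,
# leaving the upstream chart untouched.
resources:
  - rendered-chart.yaml   # assumed file: helm template output
patches:
  - target:
      kind: Deployment
      name: example-app   # placeholder target
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/volumeMounts/0/subPath
        value: mount
```

The index-based path (`containers/0/volumeMounts/0`) would need adjusting per workload, which is part of why I’m unsure this scales better than the script idea.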
Do you guys have any suggestions? Is it, for example, possible to have Kubernetes ignore/hide or even unmount specific files or directories within the container context, or am I stuck adding sub-paths to my manifests?
Edit 1: We are currently using the nfs-client-provisioner, but since our storage service generates a `.snapshot` directory at the root of each mount point, the `.snapshot` ends up present in our volumes regardless.