Method for excluding file or directory in volume mount?

Cluster information:

Kubernetes version: v1.14.2
Cloud being used: bare-metal
Installation method: kubeadm bootstrapping
Host OS: RHEL 7.6
CNI and version: Weave Net 2.5.2
CRI and version: Docker 18.9.6


Edit 2: Solved by way of configuration, courtesy of our storage providers.

Hi, all!
I’m working on adding NFS storage to our Kubernetes stage cluster. The storage is hosted by another organization, and as part of their storage structure they use .snapshot directories that are generated at each mount point. This causes issues for us, as unwanted .snapshot directories end up in our volumes and containers, sometimes causing programs to fail.
This is mostly an issue when using Helm charts, as these are not created or maintained by us and thus don’t take the .snapshot directories into account.

My current workaround is to edit the manifests where issues appear and add a subPath: mount line to all volumeMounts that reference persistentVolumeClaims. However, since this (so far) has to be done manually, I am hoping to find a more streamlined solution.
Alternatively, I’ve been thinking about writing a script that adds subPath: mount to all relevant volumeMounts to make the process automatic, and I’ve been looking at the kustomize.io project to see if it could be a good alternative.
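
To illustrate, this is roughly what the workaround looks like in a pod spec; the claim name, image, and the “mount” sub-directory below are placeholders for whatever a given chart actually uses:

apiVersion: v1
kind: Pod
metadata:
  name: subpath-example
spec:
  containers:
    - name: app
      image: registry:2
      volumeMounts:
        - name: data
          mountPath: /var/lib/registry
          # Mount only the "mount" sub-directory of the volume, so the
          # .snapshot directory at the volume root never appears in the container
          subPath: mount
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: registry-data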

Do you have any suggestions? Is it, for example, possible to have Kubernetes ignore, hide, or even unmount specific files or directories within the container context, or am I stuck with adding sub-paths to my manifests?

Cheers,
Oscar

Edit 1: We are currently using the nfs client provisioner, but as our storage service generates a .snapshot directory at the root of each mount point, we end up with the .snapshot present in our volumes despite this.

You can’t really do that directly, but you could use an alternative like the nfs-client-provisioner to provision directories underneath your main share to hand out to your pods. Each new request creates a folder under there following the naming convention ${namespace}-${pvcName}-${pvName}, which indirectly “hides” the .snapshot directory.
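
Roughly what that ends up looking like, end to end (the StorageClass and provisioner names below are illustrative and depend on how you deploy the chart):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
# Must match the provisioner name the nfs-client-provisioner deployment
# is configured with; "example.com/nfs" is a placeholder
provisioner: example.com/nfs
parameters:
  archiveOnDelete: "false"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-data
  namespace: registry
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

A claim like this gets its own sub-directory under the share (here /share/registry-registry-data-<pvName>), and that sub-directory, not the share root, is what ends up mounted into the pod.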

That fits really well. Since I’m already using the nfs-client-provisioner, it feels like this would use a similar “fix structure” (hiding the directory at a higher level) as my current workaround, just in a neater package. Using functionality of the provisioner we already run also means it will be easier to reproduce in coming clusters that use the same storage than the manual workaround. Thanks for the tip!

Am I understanding it more or less correctly that the provisioner will point to one mount point, where the .snapshot directory is generated, and then create folders under it which in turn are mounted into my pods without creating additional mount points (and .snapshot directories)?

Yup! :slight_smile:

I’m not sure if there’s anything I’m missing, but there was no difference when moving from the previous installation of

helm install stable/nfs-client-provisioner --name nfs-client-provisioner --set nfs.server=X.X.X.X,storageClass.defaultClass=true,nfs.path=/share

to

helm install stable/nfs-client-provisioner --name nfs-client-provisioner --set nfs.server=X.X.X.X,storageClass.defaultClass=true,nfs.path=/share/stage

(i.e. adding a subdirectory to the nfs.path)

Looking at it, it does feel like this change shouldn’t have made a huge difference, but I’m not sure what I need to do to achieve the desired effect (realizing this likely wasn’t what you meant).

To illustrate, I installed a quick registry chart from helm, and this is what happens:

λ kubectl exec -it registry-docker-registry-564ff68789-vnkww sh -n registry
/ $ ls /var/lib/registry/ -lA
total 8
drwxrwxrwx   64 root     root          8192 Aug 12 12:05 .snapshot

A stab in the dark; would it be better to mount my share to one of the servers in my cluster, and point my provisioner to that server rather than the NFS server, to move one “mount-step” away from the .snapshot generation?

The .snapshot directory should not have been visible in the provisioned volumes even with the original setup, since they’re all sub-directories created under the share.

Does your storage system create .snapshot in each sub-directory automatically? I tested creating a directory at the root of ours (.test) and nothing was propagated to the directories beneath it.

As far as I understand it, it creates a .snapshot at the root of each mount point, making it visible only via that mount point. For example, I can only access the .snapshot from inside the container/volume on the disk, not from another mount point.

E.g. when I mount our NFS share on another server, that generates a .snapshot directory I can access at the mount root on that server, but I won’t see the docker registry’s .snapshot directory in the corresponding volume folder there.

It’s a pickle. I think we will contact the storage system maintainers and see if we can configure it to, for example, only create .snapshot at the very root of the share and not in any sub-directories. Until then, though, I am stuck with this. Maybe my previous workaround is the way to go for now?

Yeah – your previous suggestion might be your best bet. If you don’t mind modifying the manifests “in flight” you could use something like OPA or metacontroller to patch the pod specs with the mounts, but that might be kind of difficult to manage depending on how many things you have / how different they are.

Yeah, as mentioned, my current plan is to write some sort of script that finds all relevant mounts and adds the subPath to the manifests, but I would definitely prefer an option that makes an automated patch possible, even if getting there requires some work. Everything to streamline things where possible, and all that.
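
For instance, something along these lines with kustomize might do it, rendering the chart with helm template and patching the output rather than editing it by hand (the Deployment name and the container/volumeMount indexes below are placeholders for whatever each chart produces):

# kustomization.yaml
resources:
  - rendered-chart.yaml        # rendered with helm template beforehand
patchesJson6902:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: registry-docker-registry
    path: add-subpath.yaml

# add-subpath.yaml
# JSON patch that adds subPath: mount to the first volumeMount of the first container
- op: add
  path: /spec/template/spec/containers/0/volumeMounts/0/subPath
  value: mount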

Since all I really need to do at the moment is add one line in certain places of the manifests, I hope that means I can work something out relatively painlessly. Worst case I’ll get some practice and hopefully some new insights, which isn’t too shabby either! :sweat_smile:

Thanks for the tips and for knocking the idea around; at least this means I haven’t missed some built-in function and have been on a more or less reasonable path to working around the situation.

Cheers!

Thrilling update: Configuration is king!

Finally, the storage service providers figured out how to configure so that the .snapshot directory doesn’t show up in our mounts, so we’re back to business! :raised_hands:

Still an educational endeavor, but I am very happy to not have to customize our kubernetes setup far too much to handle this.

I’m running into the same issue. Do you know how to configure the storage so that the .snapshot directory is disabled (i.e. how the storage service providers configured it so that the .snapshot directory doesn’t show up in your mounts)? Thanks a lot.

Hi,
I’ll send an email to the technician who helped us and ask him to detail the solution, I’ll get back to you as soon as I have it.

In the meantime, I found this link that might hold the solution (I’m not sure, since I am not the one interacting with the storage settings):
https://kb.netapp.com/app/answers/answer_view/a_id/1034712/~/how-to-turn-off-access-to-.snapshot-directory-from-clients-

Description

Hide .snapshot directory from clients

Turn off access to .snapshot directory

Procedure

Run the following command on the filer:

vol options volumename nosnapdir on

This will disable access to the .snapshot directory present at client mount points and at the root of directories, and will make the .snapshot directories invisible. By default, this option is off.

Note: This requires the share to be remapped or remounted for it to take effect.

I will verify this once the technician has replied.