Kubernetes version: 1.24.7-00
Cloud being used: bare-metal
Installation method: terraform + ansible
Host OS: Ubuntu 20.04
CNI and version: weave-net v2.8.1
CRI and version: containerd v1.6.12
Hello,
To make it clearer, I will explain with a concrete case. We have a service that synchronizes user data with Google Workspace. I set up an image with some pre-configured scripts in
/opt/app/scripts
and then I deploy it to the cluster using a PersistentVolumeClaim to keep the data between pod updates.
It turns out that the deployment is overwriting the initial data in /opt/app/scripts
One approach I used was to copy the /opt/app/scripts folder to /tmp and then, via a postStart hook, dump the files onto the PV that was created.
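Roughly, the relevant bits look like this (just a sketch; the image name, PVC name, and paths below are placeholders rather than my exact manifests):

```yaml
# Sketch only - image, claim name and paths are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gcds-sync
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gcds-sync
  template:
    metadata:
      labels:
        app: gcds-sync
    spec:
      containers:
        - name: gcds-sync
          image: registry.example.com/gcds-sync:latest  # image ships /opt/app/scripts, also stashed in /tmp/scripts at build time
          volumeMounts:
            - name: scripts
              mountPath: /opt/app/scripts               # PVC mounted over the image directory
          lifecycle:
            postStart:
              exec:
                # on every container start, dump the stashed copy onto the PV
                command: ["sh", "-c", "cp -r /tmp/scripts/. /opt/app/scripts/"]
      volumes:
        - name: scripts
          persistentVolumeClaim:
            claimName: gcds-scripts
```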
Is this the best approach? If not, what is the best approach for this?
So it looks like the data-files volume is mounted at /opt/gcds/data.
It turns out that the deployment is overwriting the initial data in /opt/app/scripts
Can you help me parse that? Is it “one of the scripts is overwriting my data in /opt/gcds/data” or “something is overwriting my scripts in /opt/app/scripts” ? English is ambiguous.
I can’t say what is in /tmp when it starts up, but this hook is nuking your volumes (well, merging static data into them, I guess) every time the container starts. That seems to be on purpose, but I don’t know why.
There are a few possibilities:
1. It’s not actually mounting your volumes and you are just seeing the underlying empty directories.
2. Something in your app is wiping those dirs.
3. Something in the storage driver is wiping those dirs.
Kubernetes itself doesn’t do anything here.
Here’s what I would do to figure out whether #1 is true.
In your Dockerfile:
RUN mkdir -p data scripts logs && touch data/this_is_the_image_data scripts/this_is_the_image_scripts logs/this_is_the_image_log
Make sure the volumes have some real data in them (even just a file named “this_is_the_volume_data”).
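If you need a quick way to seed that, a throwaway pod that mounts the PVC and touches a marker file will do; the claim name and mount path here are assumptions, swap in your own:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seed-volume
spec:
  restartPolicy: Never
  containers:
    - name: seed
      image: busybox:1.36
      # write a marker file onto the PV, then list the dir so it shows up in the logs
      command: ["sh", "-c", "touch /data/this_is_the_volume_data && ls -la /data"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: gcds-data   # <- your PVC name here
```

Delete the pod afterwards; the marker file stays on the volume.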
Then run your deployment and see what you see in those dirs.
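Something like this works for peeking into the running pod (the deployment name and paths are assumptions):

```
kubectl exec deploy/gcds-sync -- ls -la /opt/app/scripts /opt/gcds/data
```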
If you see “this_is_the_volume_data”, then you know it is being mounted.
If you see “this_is_the_image_data”, then you know it is not being mounted.
If you see anything else, then you know either it is being wiped or something else is mounted.
If it is the last case, check the volume - does it still contain “this_is_the_volume_data” or was it wiped?
Further experiments:
Replace the image and command with something that just logs the contents of those dirs and then sleeps forever. This will rule out your own app (see the sketch after this list).
Mount the volumes as readOnly: true and see if anything complains.
Mount the volumes to a different dir and see if the behavior is different.
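A throwaway pod spec for the first two experiments could look something like this (the image, claim names and paths are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-debug
spec:
  containers:
    - name: debug
      image: busybox:1.36
      # no app code at all: just list the dirs, then sleep forever
      command: ["sh", "-c", "ls -laR /opt/app/scripts /opt/gcds/data; while true; do sleep 3600; done"]
      volumeMounts:
        - name: scripts
          mountPath: /opt/app/scripts
          readOnly: true        # if something tries to write here, it will fail loudly
        - name: data
          mountPath: /opt/gcds/data
          readOnly: true
  volumes:
    - name: scripts
      persistentVolumeClaim:
        claimName: gcds-scripts   # <- your PVC names here
    - name: data
      persistentVolumeClaim:
        claimName: gcds-data
```

If the marker files survive with this pod but disappear with your real image, your app (or its hook) is the culprit.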