I can’t say what is in /tmp when it starts up, but this hook is nuking your volumes (well, merging static data into them, I guess) every time the container starts. That seems to be on purpose, but I don’t know why.
1. It’s not actually mounting your volumes, and you are just seeing the underlying empty directories.
2. Something in your app is wiping those dirs.
3. Something in the storage driver is wiping those dirs.
Kubernetes itself doesn’t do anything here.
Here’s what I would do to figure out whether #1 is true:
In your Dockerfile:
RUN mkdir -p data scripts logs && touch data/this_is_the_image_data scripts/this_is_the_image_scripts logs/this_is_the_image_log
Make sure the volumes have some real data in them (even just a file named “this_is_the_volume_data”).
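If you need a quick way to seed that marker file, a one-off pod like this works (the PVC name and mount path are made up here — adjust them to your setup):

```yaml
# One-shot pod that mounts the volume and drops a marker file into it.
apiVersion: v1
kind: Pod
metadata:
  name: seed-marker
spec:
  restartPolicy: Never
  containers:
  - name: seed
    image: busybox
    command: ["touch", "/data/this_is_the_volume_data"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-data-pvc   # hypothetical PVC name
```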
Then run your deployment and see what you see in those dirs.
If you see “this_is_the_volume_data”, then you know it is being mounted.
If you see “this_is_the_image_data”, then you know it is not being mounted.
If you see anything else, then you know either it is being wiped or something else is mounted there.
If it is the last case, check the volume directly - does it still contain “this_is_the_volume_data”, or was it wiped?
Replace the image and command with something that just logs the contents of those dirs and then sleeps forever. This will rule out your own app.
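A sketch of that debug pod, assuming the volumes are mounted at /data, /scripts, and /logs (all names and paths here are hypothetical - reuse the actual volume definitions from your deployment):

```yaml
# Same volumes as the app, but a neutral image that only lists
# the dirs and then sleeps, so the app itself can't be the wiper.
apiVersion: v1
kind: Pod
metadata:
  name: volume-debug
spec:
  containers:
  - name: debug
    image: busybox
    command: ["sh", "-c", "ls -la /data /scripts /logs; sleep infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
    - name: scripts
      mountPath: /scripts
    - name: logs
      mountPath: /logs
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-data-pvc      # hypothetical
  - name: scripts
    persistentVolumeClaim:
      claimName: my-scripts-pvc   # hypothetical
  - name: logs
    persistentVolumeClaim:
      claimName: my-logs-pvc      # hypothetical
```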
Mount the volumes as readOnly: true and see if someone complains.
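In the pod spec that’s just the readOnly flag on each mount (volume name and path below are placeholders); anything that then tries to write should fail with a read-only filesystem error and show up in its logs:

```yaml
volumeMounts:
- name: data          # placeholder volume name
  mountPath: /data    # placeholder path
  readOnly: true
```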
Mount the volumes to a different dir and see if the behavior is different.
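For the alternate-path test, only the mountPath changes (names here are placeholders); if the marker file survives at the new path, whatever is wiping the original location is keyed to the path, not the volume:

```yaml
volumeMounts:
- name: data                  # placeholder volume name
  mountPath: /mnt/data-check  # hypothetical alternate path
```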