Cluster information:
Kubernetes version: 1.19
Cloud being used: AWS EKS
Installation method: EKS
Host OS:
CNI and version:
CRI and version:
Hello,
I’m using Ansible to create secrets and a deployment on EKS. I’m declaring a secret like this:
kind: Secret
metadata:
  name: appsettings-frontend
  namespace: "{{ namespace }}"
data:
  "app-settings.json": "{{ lookup('template', 'fe/appsettings.json.j2') | tojson | b64encode }}"
Then I mount the secret into the container at a subPath:
volumeMounts:
  - name: nginxconfig
    mountPath: /etc/nginx/conf.d
    readOnly: true
  - mountPath: "/usr/share/nginx/html/assets/config/app-settings.{{ build_number }}.json"
    name: appconfig
    subPath: "app-settings.json"
volumes:
  - name: nginxconfig
    configMap:
      name: nginxconf
  - name: appconfig
    secret:
      secretName: appsettings-frontend
      defaultMode: 0444
When I shell into the container, I see a bunch of app-settings.{{ build_number }}.json files. Running the mount command shows that every deployment of a new container, with a new build number, generates a new mount that persists across container re-deployments. For example, if I deploy build_number 1’s container, I get a tmpfs mount for app-settings.1.json. If I then deploy build_number 2’s container, I see mounts for both build 1’s app-settings.1.json and build 2’s app-settings.2.json. If I destroy the deployment entirely, all of those mounts go away and the container only has the single deployed mount in its filesystem.
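To make that concrete, here’s roughly what the rendered volumeMount looks like across two successive builds (the build numbers 1 and 2 are just for illustration):

# Rendered volumeMount for build_number 1 (illustrative values only):
- mountPath: "/usr/share/nginx/html/assets/config/app-settings.1.json"
  name: appconfig
  subPath: "app-settings.json"

# Rendered volumeMount for build_number 2; inside the new container,
# the old build-1 mount is still visible alongside this one:
- mountPath: "/usr/share/nginx/html/assets/config/app-settings.2.json"
  name: appconfig
  subPath: "app-settings.json"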
So… any idea how I can deploy container after container into the same deployment and have the old mount points removed, so that I only see a single mount point in the container at any given deployment iteration?
Thanks,
Paul