Cluster information:
Kubernetes version: v1.29.1
Cloud being used: (put bare-metal if not on a public cloud) bare-metal
Installation method:
Host OS: SUSE Linux Enterprise Server 15 SP5
CNI and version:
CRI and version: containerd://1.7.8
I am encountering an issue where mounted secrets are not consistently updated across all pods after shifting the system time forward. Here is the setup and the behavior observed:
Setup:
We have a Kubernetes secret mounted as a volume to multiple pods using the following volumeMounts and volumes configuration:
volumeMounts:
- name: certificate
  mountPath: /run/secrets/certificate
volumes:
- name: certificate
  secret:
    secretName: secret
    items:
    - key: tls.crt
      path: tls.crt
    - key: tls.key
      path: tls.key
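For completeness, here is a minimal sketch of how these snippets sit in the full Pod spec (the pod name, container name, and image below are placeholders, not our real values):

apiVersion: v1
kind: Pod
metadata:
  name: app-pod                      # placeholder
spec:
  containers:
  - name: app                        # placeholder
    image: registry.example/app:1.0  # placeholder
    volumeMounts:
    - name: certificate
      mountPath: /run/secrets/certificate
  volumes:
  - name: certificate
    secret:
      secretName: secret
      items:
      - key: tls.crt
        path: tls.crt
      - key: tls.key
        path: tls.key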
Test Scenario:
The system time was shifted by +9 years.
After the first time shift, secrets were correctly updated on all pods.
The system time was then shifted by an additional +1 year (making a total of +10 years from the original time).
After this second time shift, the secret was updated correctly at the secret level (as verified by kubectl get secret cert -o yaml), but not all pods received the updated version of the secret.
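For reference, this is roughly how I checked both levels after each time shift (the pod name is a placeholder):

# check the Secret object on the API server
kubectl get secret cert -o yaml

# check what a pod actually sees on the mounted path
kubectl exec <pod-name> -- ls -latR /run/secrets/certificate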
Observation:
When checking the update times recorded on the secret, we see the following entries:
2024 - updateTime value
2033 - updateTime value
2034 - updateTime value
However, checking the mounted paths on individual pods, we see inconsistent behavior:
Pod 1 and Pod 2 have the updated secret from 2034.
Pod 3 and Pod 4 still show the secret from 2033.
Here are the outputs from the pods:
Pod 1:
bash-4.4$ ls -latR /run/secrets/certificate
…2034_09_05_11_09_53.1131091486
Pod 2:
bash-4.4$ ls -latR /run/secrets/certificate
…2034_09_05_11_09_34.1131341486
Pod 3:
bash-4.4$ ls -latR /run/secrets/certificate
…2033_09_05_09_02_23.3044270727
Pod 4:
bash-4.4$ ls -latR /run/secrets/certificate
…2033_09_05_09_02_46.3383279186
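One thing I still need to check is whether the lagging pods share a node, since the secret volume refresh is handled per-node by the kubelet:

# shows which node each pod is scheduled on
kubectl get pods -o wide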
Question:
Has anyone encountered a similar issue where secrets do not update correctly on all pods after a significant time shift? Could this be related to Kubernetes internal caching or time-related behaviors? Are there specific configurations or mechanisms I should look into to force all pods to refresh their mounted secrets? Any guidance on how to ensure that secrets are updated consistently across all pods would be appreciated.
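For reference, these are the only knobs I have found so far that look related; I have not validated either of them, so treat this as where I am planning to look rather than something we have confirmed (the deployment name below is a placeholder):

# KubeletConfiguration fields that appear to control how the kubelet
# detects and propagates Secret/ConfigMap changes into mounted volumes
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# change detection strategy: Watch (default), Cache, or Get
configMapAndSecretChangeDetectionStrategy: Watch
# how often the kubelet re-syncs running pods, which bounds how long
# an updated secret can take to appear in the mounted volume
syncFrequency: 1m

The only brute-force workaround I can think of is restarting the affected workloads so the secret volume is re-projected from scratch:

kubectl rollout restart deployment <deployment-name>

but I would prefer to understand why some kubelets stop picking up the update after the time shift.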