Kubernetes version: 1.20.6
Cloud being used: bare-metal
Installation method: kubeadm
Host OS: Ubuntu 18.04.6 LTS
CNI and version: 0.8.7-00
CRI and version: 1.13.0-01
I have a Secret mounted as a volume in several pods. A CronJob rotates one of its fields, a token, with a new token daily. That works as expected, but the problem is the delay before the new token is projected into these pods. The delay is up to about one minute, and each pod picks the new value up at a different time within that window (one pod after 10 seconds, another after 30 seconds, a third after a full minute). We need to invalidate the old token, but because of these staggered delays we don't know when doing so is safe. Is it at least possible for the updated Secret to be projected at the same time on all pods that mount it as a volume?
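For reference, the rotation setup described above could look roughly like this. This is a minimal sketch, not the original poster's manifest: the Secret name (api-token), key (token), schedule, and image are all assumptions, and on Kubernetes 1.20 CronJob is still batch/v1beta1.

```yaml
apiVersion: batch/v1beta1        # batch/v1 from Kubernetes 1.21+
kind: CronJob
metadata:
  name: token-rotator            # hypothetical name
spec:
  schedule: "0 3 * * *"          # daily rotation
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: token-rotator  # needs RBAC allowing "patch" on the Secret
          restartPolicy: Never
          containers:
          - name: rotate
            image: bitnami/kubectl:1.20      # any image with kubectl works
            command:
            - /bin/sh
            - -c
            # Overwrite the "token" key of the (hypothetical) Secret "api-token".
            # stringData lets us pass the plain value; the API server encodes it.
            - >
              kubectl patch secret api-token
              -p "{\"stringData\":{\"token\":\"$(head -c 24 /dev/urandom | base64 | tr -d '\n')\"}}"
```

From the moment this Job completes, the propagation delays described above start ticking independently on each node's kubelet.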
I have read all the related official and unofficial Kubernetes documentation on the topic; here is the official summary:
- Mounted Secrets are updated automatically - Secrets | Kubernetes
When a volume contains data from a Secret, and that Secret is updated, Kubernetes tracks this and updates the data in the volume, using an eventually-consistent approach.
Note: A container using a Secret as a subPath volume mount does not receive automated Secret updates.
The kubelet keeps a cache of the current keys and values for the Secrets that are used in volumes for pods on that node. You can configure the way that the kubelet detects changes from the cached values. The configMapAndSecretChangeDetectionStrategy field in the kubelet configuration controls which strategy the kubelet uses. The default strategy is Watch.
Updates to Secrets can be either propagated by an API watch mechanism (the default), based on a cache with a defined time-to-live, or polled from the cluster API server on each kubelet synchronisation loop.
As a result, the total delay from the moment when the Secret is updated to the moment when new keys are projected to the Pod can be as long as the kubelet sync period + cache propagation delay, where the cache propagation delay depends on the chosen cache type (following the same order listed in the previous paragraph, these are: watch propagation delay, the configured cache TTL, or zero for direct polling).
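To put rough numbers on that formula, here is a small sketch. The 60-second sync period is the kubelet's documented default syncFrequency (1 minute); treating the watch propagation delay as a tunable parameter is my assumption, since the docs do not give a figure for it.

```python
# Worst-case delay from Secret update to projection into the pod,
# per the docs: kubelet sync period + cache propagation delay.
SYNC_PERIOD_S = 60  # kubelet syncFrequency default: 1 minute

def worst_case_delay(strategy, cache_ttl_s=60, watch_delay_s=0):
    """Return the worst-case propagation delay in seconds for a
    given configMapAndSecretChangeDetectionStrategy value."""
    cache_delay = {
        "Watch": watch_delay_s,  # watch propagation delay
        "Cache": cache_ttl_s,    # configured cache TTL
        "Get": 0,                # direct polling: no cache delay
    }[strategy]
    return SYNC_PERIOD_S + cache_delay

print(worst_case_delay("Get"))    # 60: the sync period alone remains
print(worst_case_delay("Cache"))  # 120: sync period plus cache TTL
```

This makes the trade-off concrete: even with `Get`, the kubelet sync period still puts a floor of about a minute under the worst case.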
- Kubelet Configuration (v1beta1) | Kubernetes
I am also looking for a way to minimize this delay. Any clue?
The only idea I have is to change the kubelet parameter configMapAndSecretChangeDetectionStrategy from Watch (the default) to Get. I am not sure whether that will improve the situation or how it will impact Kubernetes performance. Moreover, the kubelet sync period will still apply, and maybe we should not try to tune it, as there is nothing officially documented for it. Has anybody tested that?
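If we went down that road, the change would presumably look like the following kubelet configuration fragment. This is an untested sketch: Get makes the kubelet fetch the Secret directly from the API server on every sync loop, so API server load grows with the number of pods, and the commented syncFrequency tweak is exactly the undocumented tuning mentioned above.

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Default is Watch; the alternatives are Cache and Get.
configMapAndSecretChangeDetectionStrategy: Get
# syncFrequency (default 1m) still bounds how quickly mounted volumes
# are refreshed, regardless of the strategy chosen above.
# syncFrequency: 10s   # undocumented territory -- tune at your own risk
```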
If nothing can be done on this front, is there any other Kubernetes method to achieve my goal?