In a Kubernetes rollout, run the application in a container with modified arguments without terminating the container/pod

I would like to update the parameters of an already running application on the fly.
The application running inside the container gets notified of parameter changes (e.g. changing an environment variable in the YAML and applying the change). Based on the updated env values, the application relaunches itself with the new settings.

Is this possible in a Kubernetes deployment environment? Any suggestions on how I can solve this problem?

No. With environment variables, that is one limitation: you can’t change them without redeploying the pod, because the process was already launched in that environment.

With ConfigMaps/Secrets you have more flexibility: when mounted as volumes they are auto-updated on the filesystem, which might make this easier to achieve.
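As a minimal sketch (the names `app-config`, `my-app`, and the mount path are made up for the example), mounting a ConfigMap as a volume looks like this; files under the mount path are updated in place when the ConfigMap changes, after a delay tied to the kubelet sync period:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config             # hypothetical name
data:
  settings.conf: |
    log_level=info
---
# Pod spec fragment: mount the ConfigMap as a volume so the
# container sees /etc/app/settings.conf updated in place.
spec:
  containers:
    - name: app
      image: my-app:1.0        # hypothetical image
      volumeMounts:
        - name: config
          mountPath: /etc/app
  volumes:
    - name: config
      configMap:
        name: app-config
```

Note that this auto-update only applies to volume mounts (and not to `subPath` mounts); a ConfigMap consumed via `env`/`envFrom` is still fixed at process start, as above.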

However, you can’t change the flags in the YAML without a Kubernetes rollout (which, basically, means killing the running pods). You can’t change the pod spec without triggering a new deployment.

There are alternatives if you really need this, but they live at the app layer. And this is really independent of Kubernetes: if you want your app to read new values and apply a change without stopping in any way, it doesn’t matter how you are running it (Kubernetes or not); that logic belongs at the app layer.

One way, for example, is to watch for changes to the config file (with inotify or similar) and have the logic in your app for a hot reload, or to re-exec itself.
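The idea above can be sketched with a simple, portable mtime poll (the class name and structure are made up for illustration; a real app might use inotify via a library instead of polling):

```python
import os


class ConfigWatcher:
    """Track a config file's modification time and report changes.

    Sketch of the app-layer hot-reload idea: the application calls
    `changed()` periodically (or wires the same check to inotify
    events) and re-reads its config, or re-execs itself, when the
    file has been rewritten.
    """

    def __init__(self, path):
        self.path = path
        self._mtime = os.stat(path).st_mtime

    def changed(self):
        """Return True once per modification of the watched file."""
        mtime = os.stat(self.path).st_mtime
        if mtime != self._mtime:
            self._mtime = mtime
            return True
        return False
```

For a full relaunch rather than an in-process reload, the change handler could call `os.execv(sys.executable, [sys.executable] + sys.argv)`, which replaces the process image with a freshly started one without the container ever exiting.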

But are you really sure you need any of this? Please note that immutable deployments are very useful too, and adding this kind of functionality can create problems that otherwise don’t exist at all, so be sure you understand the trade-offs.

For example, if you auto-update a configuration, you can introduce a syntax error; your app might then crash at startup, and all your pods might crash at the same time, causing downtime. And there is no trivial way to roll back, as changes to ConfigMaps are not easy to revert with native Kubernetes tools.

If, instead, you create a new, version-controlled ConfigMap, you can create a new deployment referencing it: a syntax error will only prevent that rollout from moving forward (assuming you use a readiness probe) and won’t cause downtime. Rolling back is then quite easy: you just reference the old ConfigMap, or use kubectl rollout history.
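A sketch of that versioned pattern (all names here are hypothetical): the ConfigMap name carries a version, so changing the config means creating `app-config-v2` and updating the Deployment to reference it, which triggers an ordinary rollout gated by the readiness probe:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels: {app: my-app}
  template:
    metadata:
      labels: {app: my-app}
    spec:
      containers:
        - name: app
          image: my-app:1.0     # hypothetical image
          readinessProbe:       # a broken config keeps the rollout
            httpGet:            # from progressing instead of taking
              path: /healthz    # every pod down at once
              port: 8080
          volumeMounts:
            - name: config
              mountPath: /etc/app
      volumes:
        - name: config
          configMap:
            name: app-config-v1   # bump to app-config-v2 to roll out
```

Because each config version is a distinct, immutable object, rolling back is just pointing the Deployment at the previous name (or using kubectl rollout undo).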

So, while some of this is possible, you lose an immutability property that also simplifies several other things. It’s a trade-off, and you know what is best for your use case. But please be aware of the trade-off :slight_smile:

Thank you for such a detailed explanation.

Np. Please keep us posted with what you finally do and how it works :slight_smile: