How to dynamically configure a container

Hi all.
If I have two pods with the same base config, what's the approach to dynamically altering the configuration of one based on the configuration of the other?

This is for a master/slave type setup.
I can run a shell script / kubectl with a post-configuration command to do this … but is there a better or alternative way?

I was wondering if there was a way in the manifest to say something like …
if a container with label (x) already exists … then do this.

thanks

Hi @wolverine,

I’m not sure I understand your question. By “base config”, do you mean a YAML template?

If so, you could use Kustomize, which gives you a way to create a base template and then reuse that base template for different use cases.

Generally, this is used to have base templates for an app to deploy, and then to specialize these templates for each environment (dev, staging, prod).

In your case, you could have a base template for your app, and then specialize it for master and slave.
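As a rough sketch, that layout could look like this (the file names, `redis` StatefulSet name, and `role` label are assumptions, not from your setup):

```yaml
# base/kustomization.yaml -- the shared template
resources:
  - statefulset.yaml

---
# overlays/master/kustomization.yaml -- specializes the base for the master
resources:
  - ../../base
nameSuffix: -master
commonLabels:
  role: master
```

An `overlays/slave/kustomization.yaml` would do the same with `nameSuffix: -slave` and `role: slave`; you then render each variant with `kustomize build overlays/master` (or `kubectl apply -k overlays/master`).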

@feloy
Thanks for the reply.
Currently I’ve expanded it a bit and I'm using a Python script embedded via a ConfigMap to determine who is master and who is slave at container startup.
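For reference, the role decision at startup can be sketched as a pure function plus a Sentinel lookup; the Sentinel address, the master name `mymaster`, and the `POD_IP` downward-API variable are assumptions, not details from your script:

```python
def decide_role(reported_master_ip: str, my_ip: str) -> str:
    """Return the role this pod should take at startup: 'master' if
    Sentinel already reports this pod's IP as the current master,
    otherwise 'slave'."""
    return "master" if reported_master_ip == my_ip else "slave"

# At container startup (sketch):
#   1. Ask a Sentinel who the current master is, e.g.
#        redis-cli -h sentinel -p 26379 SENTINEL get-master-addr-by-name mymaster
#   2. Compare the answer against this pod's own IP (exposed through the
#      downward API as POD_IP), then configure redis accordingly:
#        role = decide_role(reported_master_ip, os.environ["POD_IP"])
```

Keeping the comparison in a pure function like this makes the "who am I?" logic easy to test separately from the Sentinel call.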

The hard part now is trying to have the pods fail back after a failover. Here is the issue.

I have a master/slave pair that I determine and configure at the initial build. The arbiter of the master/slave pair is a cluster of Sentinels (Redis). Once they detect an issue with the master, they promote the slave to master.

The problem is that if the (old) master crashed … and is rebuilt by the ReplicaSet/StatefulSet … it has a new IP, but nothing else in the stack knows about it. I can create a Service for the node so I get a static IP … but internally I don't think Redis will know about it.

That's fine: I can destroy the old master on startup and reconfigure it as a slave based on that condition … but what happens when the entire stack goes down and everything tries to restart together … sort of a race condition.

Need to reconcile that somehow.

@wolverine
Deployments are made for stateless workloads.

If you have a stateful workload, you can use a StatefulSet instead of a Deployment, which gives each Pod a stable identity (a stable name and, through a headless Service, a stable DNS hostname).
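As a minimal sketch (the `redis` names, label, and image are assumptions), a StatefulSet bound to a headless Service gives each replica a stable DNS name like `redis-0.redis`, which survives a pod being rebuilt even though its IP changes:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  clusterIP: None        # headless: gives each pod a stable DNS entry
  selector:
    app: redis
  ports:
    - port: 6379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis     # pods resolve as redis-0.redis, redis-1.redis, ...
  replicas: 2
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7
```

Using those stable DNS names (rather than pod IPs) in the Sentinel configuration is one way to avoid the "rebuilt master has a new IP" problem.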

But when you have to manage several instances (master/slave), you generally need to develop your own specialized operator.

You can look at the operators for Kafka, PostgreSQL, etc. as examples; they make it possible to deploy these solutions in Kubernetes with high availability.

With a specialized operator, the operator will create the pod of the new master and will be able to store its IP somewhere.

@feloy
Thanks; I'm using a StatefulSet (with a persistent volumeClaimTemplate).
The documentation I have seen on master/slave is fairly basic. Do you have a good example/link for developing a basic operator of my own? (I'm reading up on that topic now.)

Another way I was thinking of was to assign a label dynamically to the master pod and the slave pod. But I'm not sure how to do that as part of container startup (prior to the DB config).
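One possible way (a sketch, not a tested recipe) is to have the container label its own Pod at startup with `kubectl`, getting its own name from the downward API. The `redis-labeler` ServiceAccount and the `decide-role` helper are hypothetical, `kubectl` must be available in the image, and the ServiceAccount needs RBAC permission to patch pods:

```yaml
# Fragment of a pod template (sketch)
spec:
  serviceAccountName: redis-labeler   # assumption: RBAC allows "patch" on pods
  containers:
    - name: redis
      image: redis:7
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name   # downward API: this pod's name
      command: ["sh", "-c"]
      args:
        - |
          ROLE=$(decide-role)            # hypothetical: your master/slave detection
          kubectl label pod "$POD_NAME" role="$ROLE" --overwrite
          exec redis-server
```

With the label applied, a Service with `selector: {role: master}` would always point at whichever pod currently holds the master role.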

You can have a look at kubebuilder, a framework that can help you with writing an operator.

Also, you can have a look at this talk (it's for an old version of kubebuilder; the details have changed, but the theory is still the same): https://youtu.be/Fp0QUf0Bwm0

@feloy
Thanks again !!!