Sandboxing contexts to safely run concurrent kubectl calls

I’m building a system that stands up and takes down pods/deployments frequently. These kubectl run calls are subject to the current context, which is set by the most recent kubectl config use-context <ctx>. Please correct me if that is incorrect.

My new pods are stood up concurrently due to external factors. How can I ensure each kubectl run uses the correct context? Do I have to use a lock, as in LOCK ; kubectl config use-context ; kubectl run ; UNLOCK, or is there a smarter way, such as a hypothetical kubectl run --with-context <ctx> ...?
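
For concreteness, the locking pattern I have in mind is something like this rough sketch (the lock file path, the staging context name, and the pod details are all placeholders):

```bash
# Serialize the context switch and the run behind a file lock, since
# `kubectl config use-context` mutates shared state in ~/.kube/config.
flock /tmp/kubectl.lock -c '
  kubectl config use-context staging &&
  kubectl run mypod --image=busybox --restart=Never -- sleep 3600
'
```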

There are a couple of ways you could do that.

First, you could set the current context in the config file by running kubectl config use-context <ctx>.
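
A minimal sketch of that approach (the staging context and pod details are placeholders):

```bash
# Point the kubeconfig's current-context at the desired context;
# every kubectl call after this uses it by default.
kubectl config use-context staging
kubectl run mypod --image=busybox --restart=Never -- sleep 3600
```

Keep in mind this rewrites the shared current-context, so concurrent callers would still have to serialize around it, as you suspected.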

Alternatively, you can use separate config files for the contexts you want to run in, either by passing one explicitly with kubectl --kubeconfig=config-demo or by setting an environment variable that points at the separate config: export KUBECONFIG=<KUBECONFIGFILE>.
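
For example (the file paths are just illustrative):

```bash
# Option A: point a single invocation at a dedicated kubeconfig file.
kubectl --kubeconfig=config-demo get pods

# Option B: export KUBECONFIG so everything in this shell/process uses it.
export KUBECONFIG=$HOME/.kube/config-demo
kubectl get pods
```

Because each concurrent worker process can carry its own KUBECONFIG value, the workers never touch each other's current context.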

I prefer the second method as it’s a little clearer, but it really depends on how you want to organize things.

Thanks for the reply @macintoshprime. A bit of digging suggests I can use kubectl run --context <named-context> <name> --image=<image> -- <bin> instead of kubectl run <name> --image=<image> -- <bin>, which would use whatever the current context is. Does that sound reasonable?
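
Concretely, something like this (the context and pod details are placeholders):

```bash
# Pin the context for this one invocation only; the shared
# current-context in ~/.kube/config is left untouched.
kubectl run mypod --context=staging --image=busybox --restart=Never -- sleep 3600
```

Since the flag is scoped to a single invocation, there’s no shared state to lock.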

I’m not sure if --context works with kubectl create yet, so that could be another hiccup.

That sounds good; there isn’t really a wrong way to declare the context, it just depends on your preference for how you want to manage them.

Here are the docs on configuring access to multiple clusters, in case you ever need a reference down the road: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/

--context is a global kubectl flag, so you shouldn’t have any issue using it with create. You can put it right after kubectl, as in kubectl --context <ctx> ..., to make it clearer if you like.
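
For instance (the context and deployment names are placeholders):

```bash
# The global --context flag works the same way with create:
kubectl --context=staging create deployment web --image=nginx
```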
