Dedicated container per session

I believe I have a fairly unique use case. I am looking at wrapping a legacy native Windows application that has a lot of static state with a REST interface, and then deploying a React front end to interact with that REST interface. This leads me to need session affinity, and there are some straightforward methods to achieve that, but because of the static state the container is tightly bound to a particular session. It's basically a legacy desktop process with a lot of singleton and static state, hosted in a container.

What I am wondering is: is it possible to make the container have the same lifetime as the client session?

When a new session is created, we need to create a dedicated container and give the session affinity to it. When the session is closed, or expires after some time threshold, the container is destroyed.

If this ends up being hard to do, my plan B would be to just use session affinity to the container and then either interact with a child process via some sort of IPC, or make the container a reverse proxy behind which I can spin up internal web servers.

Cloud being used: Azure
Host OS: Windows

You can use session affinity with an Ingress controller (like ingress-nginx).
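For reference, with ingress-nginx that affinity is just a couple of annotations on the Ingress. A rough sketch of creating one programmatically with Go and client-go (the app name, namespace, and host are made up):

```go
package main

import (
	"context"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes this runs in-cluster with a service account allowed to manage Ingresses.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pathType := networkingv1.PathTypePrefix
	ing := &networkingv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{
			Name: "legacy-app", // placeholder name
			Annotations: map[string]string{
				// ingress-nginx: pin each client to one backend via a cookie.
				"nginx.ingress.kubernetes.io/affinity":            "cookie",
				"nginx.ingress.kubernetes.io/session-cookie-name": "SESSION",
			},
		},
		Spec: networkingv1.IngressSpec{
			Rules: []networkingv1.IngressRule{{
				Host: "app.example.com", // placeholder host
				IngressRuleValue: networkingv1.IngressRuleValue{
					HTTP: &networkingv1.HTTPIngressRuleValue{
						Paths: []networkingv1.HTTPIngressPath{{
							Path:     "/",
							PathType: &pathType,
							Backend: networkingv1.IngressBackend{
								Service: &networkingv1.IngressServiceBackend{
									Name: "legacy-app",
									Port: networkingv1.ServiceBackendPort{Number: 80},
								},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := client.NetworkingV1().Ingresses("default").Create(context.TODO(), ing, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```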

I don't follow why you need to spin up one container per session, though. If you do need that, then you need to create a container that interacts with the Kubernetes API (you can use kubectl if it's very simple, or use Go or another language) to spin the session container up.

I'm not sure how you need to handle this, but something like: a request arrives for a new session, it blocks until a container has been created, and from then on the session is always routed to that container?
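Roughly like this with client-go, if it helps (a sketch only: the image name and label are made up, and the polling loop should really be a watch with a timeout):

```go
package controller

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSessionPod creates a dedicated pod for one session, then blocks
// until it is running. "legacy-app:latest" and the "session" label are placeholders.
func createSessionPod(ctx context.Context, client kubernetes.Interface, sessionID string) (*corev1.Pod, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "session-" + sessionID,
			Labels: map[string]string{"session": sessionID}, // a per-session Service can select on this
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "legacy-app",
				Image: "legacy-app:latest",
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		return nil, err
	}
	// Poll until the pod is running; a real controller would use a watch instead.
	for {
		p, err := client.CoreV1().Pods("default").Get(ctx, pod.Name, metav1.GetOptions{})
		if err != nil {
			return nil, err
		}
		if p.Status.Phase == corev1.PodRunning {
			return p, nil
		}
		time.Sleep(time.Second)
	}
}
```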

If one container can handle more than one session, then you can use an Ingress with session affinity, perhaps?

The container lifetime can be as long as you need, provided neither the node nor the container crashes. You may want to consider a StatefulSet for this, as each pod in a StatefulSet has a stable identity.

Also, you might want to use a persistent volume for the storage (so it is not deleted on pod crashes or restarts). You can use local volumes (these use the host's disk space, but with a cleaner interface than hostPath) or one of the other volume types that Azure provides.
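To illustrate how those two fit together, here is a sketch of a StatefulSet with a per-pod volume claim, built with client-go types; the image is made up, and managed-csi is just the storage class I'd expect on AKS, so check what your cluster actually offers:

```go
package controller

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// sessionStatefulSet builds a StatefulSet whose pods get stable names
// (legacy-app-0, legacy-app-1, ...) and a persistent volume each.
func sessionStatefulSet() *appsv1.StatefulSet {
	replicas := int32(1)
	storageClass := "managed-csi" // placeholder; check your AKS storage classes
	labels := map[string]string{"app": "legacy-app"}
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "legacy-app"},
		Spec: appsv1.StatefulSetSpec{
			ServiceName: "legacy-app", // headless Service giving each pod stable DNS
			Replicas:    &replicas,
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "legacy-app",
						Image: "legacy-app:latest", // placeholder image
						VolumeMounts: []corev1.VolumeMount{{
							Name:      "data",
							MountPath: `C:\data`, // Windows container path
						}},
					}},
				},
			},
			// Each replica gets its own PVC that survives pod restarts.
			VolumeClaimTemplates: []corev1.PersistentVolumeClaim{{
				ObjectMeta: metav1.ObjectMeta{Name: "data"},
				Spec: corev1.PersistentVolumeClaimSpec{
					AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
					StorageClassName: &storageClass,
					Resources: corev1.ResourceRequirements{
						Requests: corev1.ResourceList{
							corev1.ResourceStorage: resource.MustParse("10Gi"),
						},
					},
				},
			}},
		},
	}
}
```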

But, to fully understand: how is this handled today, without Kubernetes?

Basically, it's a legacy native C/C++ library with a lot of static state that is session specific, because traditionally it has only been used in a desktop application. So one container cannot serve multiple sessions. There is currently no production setup, but there is a prototype which basically uses a web server that spins up a child process for each session and then talks to it over IPC, so we route each session to a specific process internally. Obviously the ideal scenario is to remove the static state within the library, which would then allow us to serve multiple sessions from one container. That is the long-term plan, but I am looking at a mid-term plan to span the gap.
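To make the prototype concrete, it is roughly this shape (a simplified sketch, not the actual code: the worker path and the X-Session-Id header are invented, and it uses naive line-based IPC where the real thing would need proper framing and per-session locking):

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"sync"
)

// One child process per session; stdin/stdout pipes are the IPC channel.
type session struct {
	cmd    *exec.Cmd
	stdin  io.WriteCloser
	stdout *bufio.Reader
}

var (
	mu       sync.Mutex
	sessions = map[string]*session{}
)

// getSession returns the child process for this session, starting one if needed.
func getSession(id string) (*session, error) {
	mu.Lock()
	defer mu.Unlock()
	if s, ok := sessions[id]; ok {
		return s, nil
	}
	cmd := exec.Command(`C:\legacy\worker.exe`) // placeholder path to the legacy binary
	stdin, err := cmd.StdinPipe()
	if err != nil {
		return nil, err
	}
	out, err := cmd.StdoutPipe()
	if err != nil {
		return nil, err
	}
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	s := &session{cmd: cmd, stdin: stdin, stdout: bufio.NewReader(out)}
	sessions[id] = s
	return s, nil
}

func handler(w http.ResponseWriter, r *http.Request) {
	// Hypothetical header carrying the session ID; a cookie works just as well.
	s, err := getSession(r.Header.Get("X-Session-Id"))
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	// Line-based request/reply for illustration only.
	fmt.Fprintln(s.stdin, r.URL.RawQuery)
	reply, _ := s.stdout.ReadString('\n')
	fmt.Fprint(w, reply)
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}
```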

Containers seem like a nice way to encapsulate each session.

Does this explain the scenario well? The storage will probably be something like you mention, plus external storage like Dropbox/Google Drive/SharePoint or some other provider that we can auto-save to, but those details aren't really important for the container setup.

Another way to think of it would be something like MS Word: build a custom web interface to interact with Word, and on the web server literally run MS Word. All interactions with that Word document need to be routed to the same instance because it is working off a specific .docx file. You could share the .docx on disk, but then every node would need to open and save the document on each request, which could be a bottleneck depending on the size of the document.

Just to build on the Word example: this library works on large flat data files, so it may take seconds just to open the data sets.

It's always a difficult trade-off between maintaining those things and spending the time to do them differently and better.

I'd first go for running the PoC in one pod: one container, and spawn the processes there. It is far from ideal, but I think it might be worth doing that rather than forcing a totally different architecture onto something that was architected in a completely different way.

Of course, if this doesn't run in production yet and it's an option to push back until its issues are solved or reasonably worked around, that is the best from the maintenance POV. But there might be reasons that make that not an option (like business reasons or whatever :-/)

If you want to explore whether Kubernetes can make this more manageable, I think you should play with a simple controller and the Kubernetes API. I'd start with kubectl, as this seems simple, and a pod with a service account (with enough permissions) handles the auth just fine. And kubectl is quite powerful yet easy for simple prototypes, IMHO. But I don't have many ideas for such a weird requirement :smiley:
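For example, the controller could literally shell out to kubectl, something like this sketch (it assumes kubectl is baked into the image and the pod's service account is allowed to create and delete pods; the image and naming are made up):

```go
package controller

import (
	"fmt"
	"os/exec"
)

// createSessionPod creates a dedicated pod for a session by shelling out to kubectl.
// Inside a pod, kubectl picks up the mounted service account credentials automatically.
func createSessionPod(sessionID string) error {
	cmd := exec.Command("kubectl", "run", "session-"+sessionID,
		"--image=legacy-app:latest",   // placeholder image
		"--labels=session="+sessionID, // lets a per-session Service select this pod
		"--restart=Never")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl run: %v: %s", err, out)
	}
	return nil
}

// deleteSessionPod tears the pod down when the session ends or times out.
func deleteSessionPod(sessionID string) error {
	if out, err := exec.Command("kubectl", "delete", "pod", "session-"+sessionID).CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl delete: %v: %s", err, out)
	}
	return nil
}
```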

Take a look at the pod lifecycle docs and StatefulSets, though, as those have stronger identity guarantees and might be more suitable for creating that kind of resource for your use case.