Run pods without a Docker container, or with a "lightweight" alternative

Good day,

Are there any plugins or methods by which one could run an application via K8S without the application being "containerized" in a Docker image/container? My client has a hard requirement that they can't/won't run their app within a Docker container (they need to minimize CPU overhead, won't rewrite pieces of their program to run in a container rather than natively on CentOS/Ubuntu bare-metal hardware, among other things). However, they still want to benefit from using K8S to schedule/bring up/tear down pods automatically.

Is there some sort of "lightweight container/alternative" and/or a plugin that can accommodate such a use case? If not, is there some sort of "K8S design pattern" that would suit such an (ugly) deployment model? E.g., is there some way to have a simple pod running a minimal application (e.g. a shell) that launches/manages/terminates a service/daemon on the node itself? Additionally, if such a hybrid approach is viable, is there a way for the pod to supply the service binary to the node (so we don't have to manually install/uninstall the native application by hand)?

I realize this is a horrible/hacky way to use K8S, but if there’s a practical way to accommodate it, I have to try.

One last note: the application must run natively, and can’t be run in a VM.

Thank you.

To my knowledge there is nothing like that. Containers themselves generally have little to no overhead beyond initial start time and I/O to the layered filesystem (and writing to that is considered a bad practice anyway). If the container overhead really were too much, I'd just go diskless PXE and manage the servers themselves as ephemeral systems.

There is no way that I know of. You either run it as a container or you can't schedule it with Kubernetes; there is no way around it.

That said, the hacks you suggest are probably possible (the devil is in the details, though: data persistence, sysctl knobs, etc.). But in that case it will still be running as a container, which is exactly what your client said they can't/won't accept. Right? I'm confused 🙂
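For completeness, the "pod that manages a host daemon" hack would look roughly like the privileged DaemonSet below. This is a hypothetical, untested sketch: every name, image, and path (`host-daemon-launcher`, `mydaemon`, `/opt/mydaemon`, the payload image) is a placeholder I made up, not anything that exists. The idea is: an init container copies the native binary from an image onto the node via a hostPath volume (answering the "supply the service" question), and the main container, with hostPID and privileged, uses nsenter to start and stop the daemon in the host's namespaces.

```yaml
# Hypothetical sketch only -- all names, images, and paths are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: host-daemon-launcher
spec:
  selector:
    matchLabels:
      app: host-daemon-launcher
  template:
    metadata:
      labels:
        app: host-daemon-launcher
    spec:
      hostPID: true                   # share the node's PID namespace
      initContainers:
      # "Supply" the native binary: copy it from the image onto the node.
      - name: install
        image: example.com/mydaemon-payload   # placeholder image carrying the binary
        command: ["cp", "/payload/mydaemon", "/host-opt/mydaemon"]
        volumeMounts:
        - name: host-opt
          mountPath: /host-opt
      containers:
      - name: launcher
        image: busybox                # placeholder; must provide nsenter
        securityContext:
          privileged: true            # required to nsenter into the host
        command: ["/bin/sh", "-c"]
        # Enter the host's namespaces via PID 1, start the daemon natively,
        # then sleep so the pod stays Running while the daemon lives.
        args:
        - |
          nsenter --target 1 --mount --uts --ipc --net -- /opt/mydaemon &
          sleep infinity
        lifecycle:
          preStop:
            exec:
              # Tear the native daemon down when the pod is deleted.
              command: ["nsenter", "--target", "1", "--mount", "--pid",
                        "--", "pkill", "-f", "/opt/mydaemon"]
      volumes:
      - name: host-opt
        hostPath:
          path: /opt
```

Note the trade-off: the daemon itself runs natively on the node with no containment, but the launcher pod is still a (privileged) container, so this only sidesteps containment for the workload, not for the K8S machinery around it.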

"Hacks" is an appropriate conclusion: the goal is to leverage K8S as much as possible while having the application live as a temporary, non-containerized daemon running directly on the node.

My client has a hard requirement that they can't/won't run their app within a Docker container (they need to minimize CPU overhead, won't rewrite pieces of their program to run in a container rather than natively on CentOS/Ubuntu bare-metal hardware, among other things).

While we strive to meet users where they are, there are some cases where FUD makes it difficult to make progress. If this were my client, I would work to understand EXACTLY what this requirement means, and co-develop benchmarks and proofs-of-concept to either a) show their fears are unfounded (CPU overhead should be a non-issue) or b) bring concrete problems to the discussion.

Is there some sort of “lightweight container/alternative” and/or a plugin that can accommodate such a use case?

Containers are the lightweight thing.

We have a plugin interface (CRI) which you could, in theory, implement however you want, but I don't expect you'll do much better than the Docker, containerd, or CRI-O implementations that already exist, unless you disable containment (cgroups, namespaces) entirely, and I have no idea what happens when you do that.
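For reference, CRI is a gRPC API (RuntimeService and ImageService) served over a local Unix socket, and the kubelet is simply pointed at whichever implementation serves it. So wiring in a custom runtime is just a kubelet configuration change; the hard part is implementing the API. A hypothetical sketch, where the socket path and shim name are made up:

```yaml
# Hypothetical: /run/my-cri-shim.sock is a placeholder for the socket your
# custom CRI implementation would serve its gRPC API on.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/my-cri-shim.sock
```

In principle such a shim could "run" pods as bare host processes, but at that point you've given up everything the container runtimes provide (isolation, image handling, resource accounting), which is why I wouldn't expect it to end well.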