K8s for on-prem servers: one consistent K8s flavor, or multiple different flavors co-existing?


Hi K8s experts:

Due to business reasons, we run all applications on on-prem servers (NOT in public clouds like Azure, AWS, or GCP), so our choice of K8s solutions is limited to the following:

  • OpenShift
  • Charmed Kubernetes (from Canonical, the company behind Ubuntu)
  • HPE Ezmeral
  • Vanilla Kubernetes (fully open source)
  • Minikube or MicroK8s (though these mainly target single-node/dev use), etc.

We have many different server platforms, for example VMs in VMware, many physical servers, etc., and we plan to install K8s on most of these servers (both VMs and physical). All these servers are connected via a high-speed network.

My question is: should we try our best to install the same K8s flavor on all our servers (VM or physical)? The intuition is to keep all software/applications running on the same K8s. For example, all applications run on OpenShift, or all applications run on Charmed, rather than some applications on an OpenShift cluster, some on a Charmed cluster, some on MicroK8s, etc. I assume a consistent K8s across all servers and applications would be helpful in the long run, rather than deploying multiple different types of K8s platforms.

I appreciate your insight and expert opinion.

Thanks in advance,


IMO, yes. Each one is going to differ slightly in which versions of K8s it supports and how it's installed and maintained. You also probably don't want to wrestle with multiple different support contracts.


@mrbobbytables Thanks for the insight! I have one more question: our on-prem environment includes various hardware platforms, for example many VMs (on VMware), many physical servers, etc.
I wonder if the following scenario makes sense and is realistically workable: we build a K8s cluster with some worker nodes; at first it might be small (only a few worker nodes), but we might gradually expand it with more worker nodes over time. One wrinkle is that the worker nodes could be on different hardware platforms, for example some worker nodes on VMs (on VMware), some on HPE Apollo servers, and some on Dell servers, etc. (but all running Linux).
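For what it's worth, mixing hardware platforms in one cluster is a common pattern: Kubernetes tracks node properties as labels, and you can add your own labels per platform, then steer workloads with a nodeSelector. A minimal sketch (the node name, label key, and image here are hypothetical, not from your environment):

```yaml
# First, label a node by its hardware platform, e.g.:
#   kubectl label node worker-dell-01 hardware=dell-physical
#
# Then pin a workload to that platform with a nodeSelector:
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  nodeSelector:
    hardware: dell-physical   # only schedules onto nodes carrying this label
  containers:
    - name: app
      image: nginx:1.25       # placeholder image
```

Workloads without a nodeSelector remain free to land on any node, so the heterogeneous hardware mostly stays invisible to applications that don't care about it.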

Also, the applications running on the nodes differ as well. Do you think the above scenario makes sense and is reasonable?
Should we create one big K8s cluster covering all worker nodes (which could be on different hardware platforms or running different applications), or should we create multiple small K8s clusters, one per application?
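In the one-big-cluster option, per-application isolation is typically done with namespaces and resource quotas rather than separate clusters. A hedged sketch of what that looks like (the namespace name and limits are made-up examples):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: app-billing            # hypothetical per-application namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: app-billing-quota
  namespace: app-billing
spec:
  hard:
    requests.cpu: "20"         # cap total CPU requested by this app
    requests.memory: 64Gi      # cap total memory requested by this app
    pods: "100"                # cap pod count in the namespace
```

This keeps one control plane to operate while still preventing one application from starving the others, which is one of the usual motivations people cite for splitting into many small clusters.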

I appreciate your insight!