Hi, I’m from a software development company that also provides application hosting. We haven’t used Kubernetes yet, but we’re intrigued by the idea of using it in production after testing it.
We have many customers, each with many applications to host. Currently, a customer orders a VPS (or EC2 servers in a network) and we manage it to host and monitor their applications. This is becoming a pain because we have so many servers to manage.
My idea is to create a single Kubernetes cluster to host all of the applications and isolate each customer’s apps into their own namespace (with resource limits and dedicated nodes).
However, we’re not sure this is a best practice. Most of our apps are stateful and monolithic, so how chaotic would it be if we implemented this idea?
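To make the question concrete, the "namespace with limits and dedicated nodes" idea could look roughly like this. The namespace name `customer-a`, the node label `tenant=customer-a`, and the quota values are all illustrative; node pinning via the annotation shown also requires the `PodNodeSelector` admission plugin to be enabled on the API server:

```yaml
# Illustrative per-customer namespace; names and values are made up.
apiVersion: v1
kind: Namespace
metadata:
  name: customer-a
  annotations:
    # Alpha annotation; only honored when the PodNodeSelector
    # admission plugin is enabled. Pins this namespace's pods
    # to nodes labeled tenant=customer-a.
    scheduler.alpha.kubernetes.io/node-selector: "tenant=customer-a"
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: customer-a-quota
  namespace: customer-a
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    persistentvolumeclaims: "10"
```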
Since you don’t have experience managing Kubernetes (I assume), a better idea may be to use a managed Kubernetes service from AWS (you mentioned EC2). EKS provides a managed control plane, and if you don’t want to manage any EC2 instances yourself, you could consider Fargate to deploy your clients’ applications.
If you are familiar with IAM policies, it may be easier for you to properly isolate each client’s applications from your own (and from other clients’ applications). AWS also offers its own monitoring services, as an alternative to the Prometheus + Grafana + ELK/OpenSearch stack you would otherwise need to deploy to monitor a self-managed cluster and its applications.
You can fully isolate applications in a self-managed cluster, but it requires some knowledge of Network Policies and RBAC. Namespaces on their own do not always isolate applications from each other (it depends on the configuration). It’s old (2016), but in Kubernetes Namespaces: use cases and insights one of the reasons given not to rely on Namespaces is:

> In some circumstances Kubernetes Namespaces will not provide the isolation that you need. This may be due to geographical, billing or security factors. For all the benefits of the logical partitioning of namespaces, there is currently no ability to enforce the partitioning. Any user or resource in a Kubernetes cluster may access any other resource in the cluster regardless of namespace. So, if you need to protect or isolate resources, the ultimate namespace is a separate Kubernetes cluster against which you may apply your regular security|ACL controls.
In 2017, with Kubernetes 1.8, Network Policies provided “the ability to enforce the partitioning” that was lacking in the 2016 article above, but you have to configure them yourself, starting with the default policies:

> By default, if no policies exist in a namespace, then all ingress and egress traffic is allowed to and from pods in that namespace.
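In practice, that means each tenant namespace needs an explicit default-deny policy before any isolation exists. A minimal sketch (the namespace name `customer-a` is illustrative; the empty `podSelector` matches every pod in the namespace):

```yaml
# Default-deny for one tenant namespace: blocks all ingress and
# egress for every pod until more specific allow policies exist.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: customer-a   # illustrative tenant namespace
spec:
  podSelector: {}         # empty selector = all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```

Note that enforcement depends on your CNI plugin; some network plugins ignore NetworkPolicy objects entirely.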
If your goal is to move away from managing “so many servers”, note that a self-managed cluster just trades managing “many EC2 instances” for managing “many EC2 instances” plus having to deal with Kubernetes on top.
My advice would be to focus on your core business (coding) and use a managed Kubernetes service from your favorite hyperscaler (AWS/Azure/GCP).
I’ve been working on exactly this use case — namespace-based multi-tenancy for hosting providers.
@xavi makes a great point about Network Policies and RBAC not being enough out of the box. We ran into the same issue. Namespaces alone don’t give you real isolation; you need at minimum:

- Network-level isolation (not just NetworkPolicies, but ideally a separate subnet/CIDR per tenant via something like Kube-OVN)
- Resource quotas enforced per namespace (CPU, memory, storage, EIPs)
- RBAC tied to an identity provider (Keycloak works well), so each client only sees their own namespace
- Per-tenant billing/metering, otherwise you can’t actually charge clients properly
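The RBAC piece of the list above can be sketched as a namespaced Role plus a RoleBinding to a group claim coming from the identity provider. All names here are illustrative, and the group mapping assumes the API server is configured for OIDC against something like Keycloak:

```yaml
# Illustrative: confine one customer's IdP group to its own namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tenant-admin
  namespace: customer-a
rules:
  - apiGroups: ["", "apps", "batch"]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: customer-a-admins
  namespace: customer-a
subjects:
  - kind: Group
    name: customer-a          # group claim mapped from the IdP (assumed)
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tenant-admin
  apiGroup: rbac.authorization.k8s.io
```

Because the binding is a RoleBinding (not a ClusterRoleBinding), the group’s permissions stop at the namespace boundary.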
We ended up building a platform around this pattern: Organization → Project (namespace), where each project gets its own VPC-like network segment, quotas, and RBAC roles. It also supports running KubeVirt VMs alongside containers in the same namespace, which is useful when clients have legacy monolithic apps that aren’t containerized yet (which sounds like your case).
It’s called Kube-DC. There’s also a live sandbox if you want to see how the namespace isolation works in practice without installing anything.
For your specific situation with stateful monolithic apps, I’d actually suggest starting with KubeVirt VMs inside isolated namespaces rather than trying to containerize everything at once. That way you get the multi-tenancy benefits without rewriting your clients’ apps.
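For a sense of what that looks like, a KubeVirt VM is just another namespaced object, so it inherits the namespace’s quotas, RBAC, and network policies. A minimal sketch (name, namespace, sizing, and disk image are all illustrative, and assume KubeVirt is already installed in the cluster):

```yaml
# Illustrative KubeVirt VM living inside a tenant namespace.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-app-vm
  namespace: customer-a
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            cpu: "2"
            memory: 2Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/ubuntu:22.04
```

For real workloads you would typically replace `containerDisk` (ephemeral) with a PersistentVolumeClaim-backed disk, since the apps in question are stateful.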