Kubernetes namespace as a Service

Hi, I’m from a software development company that also provides application hosting. We haven’t used Kubernetes yet, but we’re intrigued by the idea of using it in production after testing it.

We have many customers, each with many applications to host. Currently, a customer orders a VPS (or EC2 servers in a network), and we manage it to host and monitor their applications. This is kind of a pain because we have so many servers to manage.

My idea is to create a single Kubernetes cluster to host all of the applications and isolate each customer’s apps in their own namespace (with resource limits, and pinned to specific nodes).
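Roughly, I’m imagining something like this per customer namespace; all names and numbers below are just placeholders:

```yaml
# Hypothetical quota for one customer's namespace (names/values are placeholders)
apiVersion: v1
kind: ResourceQuota
metadata:
  name: customer-a-quota
  namespace: customer-a
spec:
  hard:
    requests.cpu: "4"        # total CPU requested by all pods in the namespace
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"               # cap on the number of pods
```

For the “specific nodes” part, I guess we’d use nodeSelectors or taints/tolerations on each customer’s workloads.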

However, we’re not really sure whether this is a best practice. Most of our apps are stateful and monolithic; how chaotic would it be if we implemented this idea?

Since you don’t have experience managing Kubernetes (I assume), perhaps a better idea would be to use a managed Kubernetes service from AWS (you mentioned EC2). EKS provides a managed control plane, and if you don’t want to manage any EC2 instances at all, you could consider using Fargate to deploy your clients’ applications.
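As a sketch only (assuming you use eksctl; the cluster and namespace names are made up), an EKS ClusterConfig that runs each customer’s namespace on Fargate might look like this:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: hosting-cluster       # hypothetical cluster name
  region: eu-west-1           # pick your region
fargateProfiles:
  - name: customer-a
    selectors:
      - namespace: customer-a # pods created in this namespace get scheduled onto Fargate
```

One caveat given your stateful workloads: EKS pods on Fargate only support EFS for persistent storage, not EBS, so check that your applications can live with that.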

If you are familiar with IAM policies, it may be easier for you to properly isolate your clients’ applications from each other. AWS also offers its own monitoring services, and even managed versions of Prometheus, Grafana, and ELK/OpenSearch, which is the stack you would otherwise have to deploy and operate yourself to monitor a self-managed cluster and its applications.

You can fully isolate applications in a self-managed cluster, but it requires some knowledge of Network Policies and RBAC. Namespaces on their own do not always isolate applications from each other (it depends on the configuration). This is old (2016), but in Kubernetes Namespaces: use cases and insights, one of the reasons given not to use Namespaces is:

> In some circumstances Kubernetes Namespaces will not provide the isolation that you need. This may be due to geographical, billing or security factors. For all the benefits of the logical partitioning of namespaces, there is currently no ability to enforce the partitioning. Any user or resource in a Kubernetes cluster may access any other resource in the cluster regardless of namespace. So, if you need to protect or isolate resources, the ultimate namespace is a separate Kubernetes cluster against which you may apply your regular security/ACL controls.
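For the RBAC half of that isolation, a minimal sketch of scoping a customer’s user to their own namespace could look like this (all names here are hypothetical):

```yaml
# Namespace-scoped Role: permissions apply only inside customer-a
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: customer-a-admin
  namespace: customer-a
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments", "statefulsets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# Bind the Role to the customer's user, again only within customer-a
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: customer-a-admin-binding
  namespace: customer-a
subjects:
  - kind: User
    name: customer-a-user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: customer-a-admin
  apiGroup: rbac.authorization.k8s.io
```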

In 2017, with Kubernetes 1.8, Network Policies provided “the ability to enforce the partitioning” that was lacking in the 2016 article above, but you have to configure them yourself, starting with the default policies:

> By default, if no policies exist in a namespace, then all ingress and egress traffic is allowed to and from pods in that namespace.
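A common starting point, following the default-deny pattern from the Kubernetes documentation, is a deny-all policy per customer namespace (the namespace name is hypothetical), after which you allow specific traffic back with additional policies:

```yaml
# Deny all ingress and egress for every pod in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: customer-a
spec:
  podSelector: {}      # empty selector = all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```

Note that you also need a CNI plugin that actually enforces Network Policies (e.g. Calico or Cilium); with a plugin that doesn’t, the policies are silently ignored.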

If your goal is to move away from managing “so many servers”, be aware that with a self-managed cluster you would just be trading the management of many EC2 instances for the management of many EC2 instances (now as cluster nodes), plus having to deal with Kubernetes itself.

My advice would be to focus on your core business (coding) and use a managed Kubernetes service from your favorite hyperscaler (AWS/Azure/GCP).
