Multiple namespaces vs multiple clusters?

Hi Guys,

I’m working on moving a big group of ~70 engineers (6 teams), all developing one product, to Kubernetes.

Do you have a recommendation on whether we should go with one cluster and multiple namespaces, or with multiple clusters? Any pros & cons?

Thanks!
Idan


Depends on your physical resources and human resources, I would say.

If your engineers are familiar with Kubernetes operations and your budget allows it, I would go with multiple clusters for better isolation and to avoid a single point of failure. Otherwise, one cluster with multiple namespaces, RBAC enabled, and network policies should work.
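As a rough sketch of the network-policy side (the namespace name is a placeholder, and this only takes effect if your CNI plugin enforces NetworkPolicy, e.g. Calico): a default rule letting pods receive traffic only from within their own namespace:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: same-namespace-only
      namespace: team-a            # hypothetical team namespace
    spec:
      podSelector: {}              # applies to every pod in the namespace
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector: {}      # with no namespaceSelector, this means "any pod in this same namespace"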


They aren’t all familiar with K8s, but we can take the time to teach them.

We run on top of Azure, so there’s no physical resource problem, just costs maybe.

The advantage I see in using namespaces is that we share the same infra, both in terms of scale and internal networks, so we can also use things like a service mesh (Envoy, Istio, etc.) for network policies, advanced routing, canary releases, and so on.
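For example, canary routing in a shared cluster could look something like this Istio VirtualService - the service name, subsets, and weights are made up, and the subsets would come from a DestinationRule (not shown):

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: checkout               # hypothetical service
      namespace: team-a
    spec:
      hosts:
        - checkout                 # in-mesh service host
      http:
        - route:
            - destination:
                host: checkout
                subset: stable     # subset defined in a DestinationRule
              weight: 90
            - destination:
                host: checkout
                subset: canary
              weight: 10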

For HA, we can have two clusters instead of one.

I would separate your clusters based on concerns. Keep development, testing & production in separate clusters for sure (you don’t want dev taking down prod). You can then namespace your workloads and implement a strict RBAC policy to maintain separation of concerns.
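As a minimal sketch of the per-namespace RBAC idea (team and namespace names are placeholders): bind each team’s group to the built-in edit ClusterRole, scoped to that team’s namespace by the RoleBinding:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: team-a-edit
      namespace: team-a            # the binding is scoped to this namespace
    subjects:
      - kind: Group
        name: team-a-devs          # group name as asserted by your identity provider
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: edit                   # built-in ClusterRole, narrowed to team-a by this RoleBinding
      apiGroup: rbac.authorization.k8s.io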

The most important question is: do you have the right tools and manpower to effectively manage multiple clusters? That should give you a better idea of the feasibility of one or more clusters.

I just went through the process of justifying a switch from individual Azure Resource Group clusters for our two-product, six-cluster deployment to just two clusters: one PreProd cluster and one Prod cluster.

We use acs-engine, so we pay for both master nodes and agent-pool nodes. Each cluster has a minimum of 3 masters and 3 agent pools. Our pre-prod clusters are underutilized; even our prod clusters are at times. Combining our 4-6 pre-prod clusters into a single multi-tenant cluster split by namespaces will save us a lot of VM costs, increase our utilization ratio, and decrease the administrative overhead for our DevOps staff. Another benefit is that you can apply things like Istio once, to all the “clusters”, and things like Spinnaker benefit as well.

Update: we went with one cluster and multiple namespaces, so far so good. It lets us manage all the services more easily, with one ingress controller, centralized log collection via fluentd, centralized monitoring with Prometheus, cluster-wide policies, etc. For now it looks like a good approach.
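To illustrate the ingress consolidation (host, service, and namespace names here are made up): each team keeps its own Ingress objects in its namespace, and the one shared controller picks them all up:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: checkout
      namespace: team-a                  # each team owns the Ingress objects in its own namespace
    spec:
      ingressClassName: nginx            # one shared controller serves every namespace
      rules:
        - host: checkout.example.com     # hypothetical host
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: checkout
                    port:
                      number: 80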

Clarification: we have several production clusters in each geographical region, plus separate clusters for development, staging, and production.


@Idan - we’re trying to implement a similar scheme for our GKE clusters: Clusters for environment separation; Namespaces for workload separation by team/ownership.

A couple of questions to understand your experience with the setup you described:

  1. How do you manage access to the different namespaces across teams - do you use a shared service account per namespace, or do you grant namespace access to individual user accounts via RoleBindings?
  2. Do you pre-generate kubeconfig for each namespace?
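For context, what we had in mind for (2) is a single shared kubeconfig with one context per namespace - all names below are placeholders:

    # fragment of a shared kubeconfig: one cluster and user, a context per namespace
    contexts:
      - name: team-a
        context:
          cluster: prod-cluster          # placeholder cluster/user names
          user: alice
          namespace: team-a
      - name: team-b
        context:
          cluster: prod-cluster
          user: alice
          namespace: team-b
    current-context: team-a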

We are using the Guard project for AAD-based authentication to the API server, so the kubeconfig we distribute has no credentials in it; users authenticate with their personal AAD accounts.

We then use RBAC to grant specific AAD groups permissions in specific namespaces.
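Concretely, a binding for one namespace looks something like this - the group object ID and namespace below are placeholders, and in our setup the group subject is the AAD group’s object ID as surfaced by Guard:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: team-a-aad
      namespace: team-a
    subjects:
      - kind: Group
        name: "00000000-0000-0000-0000-000000000000"   # AAD group object ID (placeholder)
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: edit                   # built-in ClusterRole, namespace-scoped by this binding
      apiGroup: rbac.authorization.k8s.io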

Why do you think we can’t have one cluster with different namespaces to reflect the environments (dev, test, prod), with proper RBAC + autoscaling, rather than multiple clusters? What exact performance concerns, or risk of one environment dragging down another, are we talking about?

We have two clusters per supported geo (for DR purposes), and all eng teams are deploying to the same clusters with different namespaces.

Our SRE team is responsible for provisioning, configuring, monitoring, and upgrading the clusters. They are also responsible for the layers on top of K8s, such as Prometheus, log collection, Guard, PSPs, RBAC, etc.

That said, we still keep the dev clusters separate, since it allows us to operate them differently in terms of:

  1. Response to a crisis / SLA - if a dev cluster is down, it’s not as urgent as a production cluster
  2. Access control and security - Isolated env, JIT, etc.
  3. Stability - dev clusters tend to accumulate a lot of garbage and legacy stuff
  4. Billing - it’s easier to divide the two
  5. Upgrades - we can test K8s upgrades in a different environment than prod - you cannot upgrade a specific namespace in K8s!

Finally, if you are going to have several clusters for production anyway - in our case we have 10 production clusters - then from a COGS perspective it doesn’t really matter whether you have 10 or 11 clusters. I can understand why consolidating is more appealing when you would otherwise have only one cluster.


Hi Idan/Team,
Greetings!!
Is there any guidance from the CNCF or the cloud providers on this specific topic (namespaces vs. multi-cluster) - something like best-practice recommendations?
What are the specific use cases for choosing one approach over the other, and the limitations / advantages of each?