How hard is Multi-Tenancy?


#1

Hello! This is my first post, and it is a question.

According to the documentation, Kubernetes can host different tenants to allow separate lines of work to coexist within the same cluster. However, according to blog posts and other articles from a year or more ago, multi-tenancy was still in its early stages. Have there been improvements since then?

From what I see, it is functional and perfectly valid for trusted environments where you can rely on all users, but how do you handle adverse situations? Is it possible, today, to break the separation between tenants or to reach the API? What are the main vulnerabilities?

Thank you.


#2

It really depends on which attack vectors you are willing to accept; it is a trade-off.

Bugs in Docker happen. You can use network policies to restrict networking, but you can still have noisy neighbors depending on what the pods are doing and where things are scheduled, and so on. It really is a trade-off.

You may want separate Kubernetes clusters for different workloads, or even several different AWS/Google/other cloud accounts, to isolate things further.

I don’t know how to answer in a more useful way. There really are tons of things to consider: some risks you may want to accept, some you may not.


#3

My main focus is isolation. As I have tested, from a pod in one namespace you can connect to a pod in another namespace and even resolve its DNS name, so there is multi-tenancy, but you cannot rely on it in environments where pods/users may be hostile.

As I understand it right now, if I want isolation in my cluster, Kubernetes cannot handle it by itself, so I need to use separate Kubernetes clusters or spend time writing case-specific network policies. Is that right?

Thank you.

EDIT: I have seen this, which seems like a clean way to handle isolation. I’m going to try it and update with more information.


#4

Well, it depends on which layer you need isolation at.

For what you are describing, the network policies I mentioned in my first comment should be enough. Or am I missing something?

With that alone, though, pods from different namespaces might still be scheduled on the same node. That may or may not be a risk for your use case. Depending on how you need to manage this, you might be able to use node labels to avoid it.
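As a sketch of the label approach just mentioned (node, namespace, and label names here are illustrative, not from this thread), you could label a set of nodes per tenant and pin that tenant's pods to them with a `nodeSelector`:

```yaml
# First label a node for the tenant, e.g.:
#   kubectl label node worker-1 tenant=team-a
apiVersion: v1
kind: Pod
metadata:
  name: team-a-app
  namespace: team-a
spec:
  # Only schedule this pod on nodes labeled tenant=team-a,
  # so tenants never share a node.
  nodeSelector:
    tenant: team-a
  containers:
    - name: app
      image: nginx
```

For stronger guarantees you could combine this with taints and tolerations, so that other tenants' pods are actively repelled from those nodes rather than merely not attracted to them.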

As I said, it really depends on what you need. Network communication can be restricted with the NetworkPolicy Kubernetes object. Hopefully that is enough for you? :slight_smile:
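As an illustrative sketch (the namespace name is an assumption), a default-deny policy per tenant namespace, plus an allow rule for same-namespace traffic, would block the cross-namespace connections described earlier in the thread:

```yaml
# Deny all ingress to every pod in the team-a namespace by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}      # empty selector = applies to all pods in the namespace
  policyTypes:
    - Ingress
---
# Then explicitly allow traffic that originates inside the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: team-a
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}   # any pod in team-a; other namespaces stay blocked
  policyTypes:
    - Ingress
```

Note that these objects only take effect if the cluster's network plugin actually enforces NetworkPolicy, which is exactly the Flannel-vs-Calico point raised below.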


#5

Excuse me, but I did not know anything about network policies because we use Flannel and, as far as I have read, it does not support them. However, I am going to set up a test server with Calico and see how isolation actually works.

Pods from different namespaces on the same node are not a problem for us, as long as there are quotas and limits in place to prevent DoS. I appreciate the explanation, though.
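For reference, a minimal sketch of the quotas and limits mentioned above (namespace name and values are illustrative):

```yaml
# Cap the total resources a tenant namespace can consume.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
---
# Give containers that omit resource fields sane defaults,
# so the quota above can be enforced against every pod.
apiVersion: v1
kind: LimitRange
metadata:
  name: team-a-defaults
  namespace: team-a
spec:
  limits:
    - type: Container
      default:
        cpu: 500m
        memory: 512Mi
      defaultRequest:
        cpu: 250m
        memory: 256Mi
```

The LimitRange matters because, once a ResourceQuota sets hard limits on CPU/memory, pods without explicit requests and limits are rejected at admission time.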

I’m going to run the tests and then update with more info.

Thank you very much.


#6

Great!

Yeah, not all network overlays support network policies, but some certainly do. I hope one of those works well for your use case :slight_smile: