Kubernetes: How To Ensure That Pod A Only Ends Up On Nodes Where Pod B Is Running

Cluster information:

Kubernetes version: 1.15
Cloud being used: AWS
Host OS: Linux
CNI and version: vpc-cni 1.5.5

Question:

I have two use cases where teams want Pod A to end up only on Nodes where Pod B is running. They often have many copies of Pod B running on a Node, but they only want one copy of Pod A running on that same Node.

Currently they are using DaemonSets to manage Pod A, which is not effective because Pod A then ends up on a lot of nodes where Pod B is not running. I would prefer not to restrict the eligible nodes with labels, because that would limit the Node capacity for Pod B (i.e. if we have 100 nodes and 20 are labeled, then Pod B's possible capacity is only 20).

In short, how can I ensure that one copy of Pod A runs on any Node with at least one copy of Pod B running?

I haven't run into your particular case, but you can probably use affinities: Assigning Pods to Nodes - Kubernetes
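
To make that concrete, here is a minimal sketch of what inter-pod affinity could look like for Pod A. The labels `app: pod-a` / `app: pod-b` and the image are placeholders, not from your setup. The `podAffinity` term only lets Pod A schedule onto nodes already running at least one Pod B, and the `podAntiAffinity` term against Pod A's own label keeps it to at most one copy per node:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-a
spec:
  replicas: 3
  selector:
    matchLabels:
      app: pod-a
  template:
    metadata:
      labels:
        app: pod-a
    spec:
      affinity:
        # Only schedule onto nodes that already run at least one Pod B.
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values: ["pod-b"]
            topologyKey: kubernetes.io/hostname
        # Repel Pod A's own label so at most one copy lands per node.
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values: ["pod-a"]
            topologyKey: kubernetes.io/hostname
      containers:
      - name: pod-a
        image: example/pod-a:latest  # placeholder image
```

Two caveats: the rules are `IgnoredDuringExecution`, so Pod A is not evicted if the last Pod B later leaves the node, and you still size `replicas` yourself. Unlike a DaemonSet, nothing guarantees a Pod A on every node that has a Pod B, and surplus replicas will sit Pending.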

To give more background on the use case: users are creating DaemonSets on a shared cluster, and when big scaling events are caused by other users, they end up with lots of pods scheduled on nodes for no reason. Really, they chose a DaemonSet with the idea that if any of their pods are on a Node, their DaemonSet pod should be on that Node as well. So I'm trying to come up with a solution where people's DaemonSet pods only end up on nodes running their own pods.

I think you need custom logic. Either run a loop that labels nodes with something like "has-user-a: yes" and pair it with a DaemonSet that selects on that label, or write your own DaemonSet-like controller to schedule the helper you want onto the nodes you want. A sketch of the first option is below.
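
If you go the labeling route, the DaemonSet half could look something like this sketch. The label `has-user-a` is the hypothetical one from above, and the loop that applies it (e.g. `kubectl label node <node> has-user-a=yes --overwrite`) is left to your custom logic:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: pod-a
spec:
  selector:
    matchLabels:
      app: pod-a
  template:
    metadata:
      labels:
        app: pod-a
    spec:
      # The DaemonSet controller only places pods on nodes matching this
      # selector, i.e. the nodes your labeling loop has marked.
      nodeSelector:
        has-user-a: "yes"
      containers:
      - name: pod-a
        image: example/pod-a:latest  # placeholder image
```

Note the loop also has to remove the label when the user's last pod leaves a node, otherwise the helper pod lingers there.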

Might be useful: Inter-pod affinity and anti-affinity