Hi Team,
Let us take a case where we cannot use replica counts to scale down pods in a particular namespace.
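For context, when replica counts are usable, scaling a workload down is just a matter of setting replicas to zero so the controller stops recreating pods. A minimal sketch (deployment and namespace names are only illustrations, and this requires a live cluster to run):

```
# Hypothetical deployment "my-app" in namespace "team-a":
# set replicas to 0 so no pods are recreated after termination.
kubectl scale deployment my-app --namespace team-a --replicas=0
```

The case below is one where this route is not available.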
As per the default Kubernetes behavior, whenever a pod is deleted or terminated, the scheduler will try to reschedule it on the same node or on any other available node in the cluster. To avoid rescheduling of terminated pods, we have to apply unschedulable taints at the node level. Since the taints apply to the whole node, we may then have to add tolerations for this taint to all the pods in the other namespaces.
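As a concrete sketch of the node-level approach described above (the node name, taint key, and value are assumptions for illustration, not from our setup): the taint goes on the node, and every pod in the other namespaces then needs a matching toleration.

```
# Node spec fragment: taint the node so pods without a matching
# toleration are not scheduled there (key/value are illustrative).
spec:
  taints:
  - key: maintenance
    value: "true"
    effect: NoSchedule
---
# Pod spec fragment needed in every OTHER namespace so those
# pods can still be scheduled despite the node-level taint.
spec:
  tolerations:
  - key: maintenance
    operator: Equal
    value: "true"
    effect: NoSchedule
```

The second fragment is exactly the cross-namespace change the rest of this mail is about.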
So, in order to terminate pods in one namespace, we end up applying tolerations to the pods in all other namespaces, because the taints are node-level. I feel this is a limitation: in this case, a user who belongs to one namespace cannot drain their pods without impacting users in other namespaces.
It would be good to have namespace-level taints to restrict pod rescheduling within the intended namespace; this would avoid changes in other namespaces.
Is there a reason Kubernetes does not provide taints at the namespace level? Or,
is there any other way to confine/restrict the behavior above to the intended namespace?