Kubernetes version: v1.15
Cloud being used: bare-metal
I have a deployment with a service exposing the pods of this deployment.
I can see that one of the pods of this deployment is not working correctly, but the liveness and readiness probes do not detect the problem, so this pod is still exposed through the service.
I would like to isolate this pod, so:
- the service removes this ill pod from the list of its endpoints, and
- in parallel, I can troubleshoot the ill pod without interfering with production (bypassing the service and accessing the pod directly via its IP, or with a port-forward to this specific pod).
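For the direct-access part, something like the following should work (the pod name and ports here are just placeholders, not from your cluster):

```shell
# Forward a local port to the ill pod, bypassing the service entirely
kubectl port-forward pod/my-app-7d4b9c-xyz12 8080:80

# Or find the pod IP to hit it directly from inside the cluster
kubectl get pod my-app-7d4b9c-xyz12 -o jsonpath='{.status.podIP}'
```

Both approaches reach the pod without going through the service's endpoint list, so production traffic is unaffected by your debugging session.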
Moreover, I would like to create another pod, so production can continue with a fresh, working pod.
I can think of two solutions for removing this ill pod from the service's endpoints and starting a new pod:
Increase the number of replicas of the deployment to create a new pod, then manually edit the Endpoints resource associated with the service => I'm not sure what the effect would be; will the Endpoints controller tolerate the manual edit, or just reconcile it away?
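Concretely, I imagine the first option would look something like this (deployment and endpoints names are placeholders):

```shell
# Scale up so a replacement pod exists before isolating the ill one
kubectl scale deployment my-app --replicas=4

# Manually remove the ill pod's address from the Endpoints resource.
# Caveat: the Endpoints controller continuously reconciles this object
# from the service's selector, so a hand edit may simply be overwritten
# on the next sync.
kubectl edit endpoints my-app
```

My understanding is that as long as the service has a selector, the controller owns the Endpoints object, which is what makes me doubt this approach.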
Change the labels of the ill pod so it is no longer selected by either the service or the Deployment => the pod will be removed from the service's endpoints and, as a side effect, will also no longer be controlled by the Deployment => the Deployment will start a new pod to restore the replica count.
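The second option would be a single label change, assuming the service and Deployment both select on an `app` label (pod name and label are placeholders):

```shell
# Overwrite the selector label so neither the Service nor the
# Deployment's ReplicaSet matches this pod any more
kubectl label pod my-app-7d4b9c-xyz12 app=my-app-debug --overwrite

# The ReplicaSet now sees one pod missing and creates a replacement;
# the Endpoints controller drops the relabeled pod from the service.
kubectl get pods -l app=my-app
```

The relabeled pod keeps running and keeps its IP, so it stays available for debugging while production traffic flows only to the matching pods.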
I like the second solution. Is this good practice, or are there unwanted side effects?