So last night something happened in my ESXi lab environment and one of my node VMs completely shut off. Not gracefully.
When I ran kubectl get pods, I saw that the pods that had been running on that node showed Terminating, and new pods had spun up on the other node. However, since the old pods were still "Terminating", it appears they were still being included in the Service's endpoints. So the Service was not fully functional, because it was trying to route traffic to pods that were terminating.
Stability was restored when the node booted back up: the pods completed their termination and the Services came back online.
Is there something I can configure that would force-delete Terminating pods after X amount of time, or when the node is gone?
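For context, the kind of knob I have in mind is the taint-based eviction timeout. Kubernetes automatically adds tolerations for the not-ready/unreachable node taints with a 300-second default, and (if I understand it right) you can shorten that per pod. A rough sketch of what that might look like (pod name, image, and the 30s value are just made-up examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app        # hypothetical pod name
spec:
  containers:
    - name: app
      image: nginx
  tolerations:
    # Evict sooner than the 300s default when the node goes NotReady/unreachable.
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 30
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 30
```

That said, with a hard node failure the kubelet can never confirm the shutdown, so I believe pods can still hang in Terminating until the node comes back or someone runs kubectl delete pod <name> --grace-period=0 --force. Happy to be corrected on any of this.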