Kubernetes version: 1.25.10-gke.2700
Cloud being used: gcp
We started noticing an increasing volume of 502 responses when we scale down the number of pods serving our application. After some research we found that after a pod receives a SIGTERM there is a window during which kube-proxy has not yet updated iptables to remove the pod from the service's backends. Is there a config/workaround to prevent requests from reaching a terminating pod?
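For reference, the mitigation we keep seeing suggested is a `preStop` sleep, so the pod keeps accepting traffic while the endpoint removal propagates to kube-proxy; a minimal sketch (the container name, image, and 10-second value are placeholders, not our actual config):

```yaml
# Hypothetical Deployment pod spec fragment.
spec:
  # Must exceed the preStop sleep plus the app's own shutdown time.
  terminationGracePeriodSeconds: 30
  containers:
    - name: myapp          # placeholder name
      image: myapp:latest  # placeholder image
      lifecycle:
        preStop:
          exec:
            # Delay SIGTERM delivery so in-flight and newly routed
            # requests are still served while iptables rules update.
            command: ["sleep", "10"]
```

Does this cover the race fully, or is there still a gap where new connections can land on the terminating pod?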