Our worker service is a long-running service. When a scale-in or a deployment happens, we expect the Pods to finish their existing work (a job can stay alive for up to 1 week) and then exit.
What I have tried: I created a Deployment with 10 Pods, set `terminationGracePeriodSeconds: 604800`, and then scaled down to 1 replica. That works well.
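For reference, here is a minimal sketch of the setup I described (the image name and labels are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  replicas: 10
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      # After a Pod receives SIGTERM (on scale-in or rollout), the kubelet
      # waits up to this long for the container to exit on its own before
      # force-killing it with SIGKILL. 604800 s = 7 days.
      terminationGracePeriodSeconds: 604800
      containers:
        - name: worker
          image: example.com/worker:latest  # placeholder image
```

Note this assumes the worker process handles SIGTERM itself: it must stop accepting new work, drain in-flight work, and exit when done.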
The question is that our service will have hundreds of Pods, which means the worst case is hundreds of Pods sitting in Terminating status, running for 7 days before exiting. Is this workable in the K8s world, or are there any potential issues? Thanks for any comments!