Adding preStop to the Istio sidecars

Hi everyone,

I am facing a problem with pods that run Istio sidecars when serving gRPC requests. When I delete a pod that is serving gRPC requests (e.g. when downscaling a deployment), my desired behavior is that, while the pod is in the Terminating state, it stops accepting new requests but keeps serving the in-flight ones until it actually terminates. I tried to achieve this by increasing `terminationGracePeriodSeconds` and by adding a preStop command that waits 10s before the container is removed:
```yaml
- /bin/sh
- -c
- /bin/sleep 10
```
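For reference, a minimal deployment snippet combining both settings would look roughly like this (the container name and image are placeholders, not from my actual deployment):

```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      # Extended grace period so the pod has time to drain
      terminationGracePeriodSeconds: 60
      containers:
        - name: grpc-server              # placeholder name
          image: example/grpc-server:1.0 # placeholder image
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "/bin/sleep 10"]
```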

I expected that a pod in the Terminating state would finish the ongoing requests and only then shut down. However, it seems that as soon as the pod enters the Terminating state, all connections are closed immediately, and the gRPC requests fail on the client side with the following error:

```
<AioRpcError of RPC that terminated with:
	status = StatusCode.UNAVAILABLE
	details = "upstream connect error or disconnect/reset before headers. reset reason: connection termination"
	debug_error_string = "UNKNOWN:Error received from peer ipv4: {grpc_message:"upstream connect error or disconnect/reset before headers. reset reason: connection termination", grpc_status:14, created_time:"2023-04-27T00:41:55.091286058+00:00"}"
>
```

I should note that while the preStop hook is running, the application is still up (judging by its logs), but the connections seem to have already been removed from the Kubernetes Service. However, if I remove the Istio containers, everything works as expected: the in-flight requests are served and the connections are closed only after the preStop period.
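My current suspicion is that the istio-proxy container receives SIGTERM at the same time as the application and Envoy starts draining connections immediately. If I understand the Istio docs correctly, the drain window is controlled by the proxy's `terminationDrainDuration` setting, which can apparently be overridden per pod via the `proxy.istio.io/config` annotation — though I have not verified that this solves my case:

```yaml
metadata:
  annotations:
    # Assumption: keep the sidecar draining for 30s instead of the default
    proxy.istio.io/config: |
      terminationDrainDuration: 30s
```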

Is there a way to add the same preStop hook to the Istio sidecar containers too? I'm using the following label to inject the sidecars:
labels: “true”