Cluster information:
Kubernetes version: 1.26.6
Cloud being used: AKS
Installation method: Azure
Host OS: Linux
CNI and version: ??
CRI and version: ??
Hi, I am fairly new to k8s, so please be patient with me.
I have an in-memory service written in C#. It exposes a gRPC interface and relies heavily on Interlocked.Increment to keep a count of something.
This all works well. However, to handle the case where a pod is moved or replaced, I need to transfer the data to the next instance of the pod.
I am currently trying to do this with the IHostApplicationLifetime interface, hooking into the ApplicationStopping callback.
Again, this all seems fine and works when I test it locally as a standalone process, mimicking the same steps.
In the shutdown hook, I drain the in-flight requests using custom code and then write the in-memory data to the underlying storage, a Persistent Volume.
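For context, here is a stripped-down sketch of what I mean (the real code is larger; CounterStore, ShutdownFlusher and the /data/counts.json path are just placeholder names, and /data is assumed to be where the Persistent Volume is mounted):

```csharp
using System.IO;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// Thread-safe counter, as in the real service.
public sealed class CounterStore
{
    private long _count;
    public void Increment() => Interlocked.Increment(ref _count);
    public long Read() => Interlocked.Read(ref _count);
}

// Flushes the in-memory state to the mounted Persistent Volume on shutdown.
public sealed class ShutdownFlusher : IHostedService
{
    private readonly IHostApplicationLifetime _lifetime;
    private readonly CounterStore _store;

    public ShutdownFlusher(IHostApplicationLifetime lifetime, CounterStore store)
    {
        _lifetime = lifetime;
        _store = store;
    }

    public Task StartAsync(CancellationToken cancellationToken)
    {
        // ApplicationStopping fires when the host begins graceful shutdown,
        // i.e. after the container receives SIGTERM.
        _lifetime.ApplicationStopping.Register(() =>
        {
            // The custom request draining happens here, then the state is persisted
            // to the Persistent Volume mount (placeholder path).
            File.WriteAllText("/data/counts.json",
                JsonSerializer.Serialize(new { count = _store.Read() }));
        });
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken) => Task.CompletedTask;
}
```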
I chose this approach because I had read that, with the Recreate deployment strategy, the next pod would then have access to the same data.
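For reference, this is roughly the shape of the Deployment I am using (names and image are placeholders; the PVC is created separately with accessModes: [ReadWriteOnce]):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: counter-service
spec:
  replicas: 1
  strategy:
    type: Recreate           # my understanding: old pod terminated before the new one is created
  selector:
    matchLabels:
      app: counter-service
  template:
    metadata:
      labels:
        app: counter-service
    spec:
      terminationGracePeriodSeconds: 60   # time allowed for the drain + flush
      containers:
        - name: counter-service
          image: counter-service:placeholder
          volumeMounts:
            - name: state
              mountPath: /data
      volumes:
        - name: state
          persistentVolumeClaim:
            claimName: counter-state      # placeholder PVC name
```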
What I seem to notice is that the SIGTERM is sent and the pod starts its shutdown, while at the SAME TIME the next pod is created. The first pod writes to disk, but the second can't find the data.
What appears to be happening is that the second pod gains access to a "copy" of the volume (maybe?) and so never sees the saved data.
Can anyone confirm whether the volume is cloned? That doesn't seem correct to me. My understanding is that it should be the exact same storage under the hood, that only one pod should have access at a time, and that the second pod therefore should not be able to start before the first pod has fully terminated and finished its cleanup.
Hope that this all makes sense.