I/O Fencing on k8s

Hi, I have several questions about I/O behavior in k8s.

I’m trying to configure a Kubernetes cluster with 2+ pods that use a shared disk (or persistent volumes backed by the same physical disk). The same application (a DBMS) that manages this shared disk will be running in each pod.

Under this configuration, I want to know when the I/O of a given pod has terminated. This is required because the DBMS must guarantee the consistency of the database.

Typically on Linux, an application sends an I/O request to the kernel, where it is queued by the I/O scheduler, so the request is processed asynchronously.
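To illustrate the point, here is a minimal Go sketch (the file path is hypothetical): `Write` returns as soon as the data reaches the kernel page cache, and only `Sync` (fsync) blocks until the kernel reports the data durable on the device.

```go
package main

import (
	"log"
	"os"
)

func main() {
	// Hypothetical WAL file on the shared disk.
	f, err := os.OpenFile("/data/db/wal.log", os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0o644)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Write returns once the data is in the kernel page cache;
	// the actual device I/O happens later, asynchronously.
	if _, err := f.Write([]byte("commit record\n")); err != nil {
		log.Fatal(err)
	}

	// Sync (fsync) blocks until the kernel reports the data as written
	// to the device — the only point where this I/O is known to be done.
	if err := f.Sync(); err != nil {
		log.Fatal(err)
	}
}
```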

Therefore, on bare-metal Linux (2+ nodes instead of 2+ pods), the application (DBMS) can guarantee that a node’s I/O has terminated by shutting down the node itself, because a powered-off node can issue no further I/O.

However, under k8s the configuration changes from 2+ nodes to 2+ pods, and terminating a pod instead of a node may not guarantee that its I/O has terminated.
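As I understand it, when a pod is deleted the kubelet sends SIGTERM to the containers and only sends SIGKILL after `terminationGracePeriodSeconds` expires, so the best the DBMS itself can do is flush on SIGTERM. A sketch of such a handler, assuming a hypothetical WAL file (this obviously doesn’t help when the process crashes):

```go
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGTERM)

	f, err := os.OpenFile("/data/db/wal.log", os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0o644)
	if err != nil {
		log.Fatal(err)
	}

	<-sig // kubelet sends SIGTERM when the pod is being terminated

	// Flush everything still buffered before the grace period ends;
	// after Sync returns, the kernel has confirmed these writes.
	if err := f.Sync(); err != nil {
		log.Fatal(err)
	}
	f.Close()
	os.Exit(0)
}
```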

Can I assume that I/O requests a pod has submitted to the host node’s kernel will not be processed after the pod is terminated? If not, is there a way to know the exact point at which they are done (e.g. via pod/PV/PVC status)?
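The closest signal I’ve found so far is watching for the pod object’s deletion with client-go: as far as I understand, the `Deleted` watch event fires only after the kubelet has confirmed the containers have stopped (though I don’t think this covers an unreachable node or a force delete). A sketch, with hypothetical namespace and pod name:

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	w, err := cs.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=dbms-0", // hypothetical pod name
	})
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		if ev.Type == watch.Deleted {
			// The API object is gone: the kubelet has reported the
			// containers as terminated.
			log.Println("pod fully terminated")
			return
		}
	}
}
```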

Also, I am considering unbinding the persistent volume from its PVC, but I’m not sure this would prevent any remaining in-flight I/O.
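Related to that: for CSI-backed volumes, I believe attach/detach state is tracked by `VolumeAttachment` objects, so one could at least observe whether the volume is still attached to the old node before letting another pod write. A sketch, assuming a CSI driver and a hypothetical PV named `shared-db-pv`:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	atts, err := cs.StorageV1().VolumeAttachments().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}

	attached := false
	for _, a := range atts.Items {
		pv := a.Spec.Source.PersistentVolumeName
		if pv != nil && *pv == "shared-db-pv" && a.Status.Attached {
			fmt.Printf("still attached to node %s\n", a.Spec.NodeName)
			attached = true
		}
	}
	if !attached {
		fmt.Println("volume detached from all nodes")
	}
}
```

But even if the detach is visible here, I’m unsure whether it guarantees that I/O already queued in the old node’s kernel has completed or been discarded.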

Thank you in advance! :grinning: