Shutting down one of the master nodes (NotReady) still shows its pods as RUNNING


Cluster information:

Kubernetes version v1.24.7
VMware Nodes
Nodes' OS: Oracle Linux 8.5

We have 6 nodes (3 master, 3 worker) and are doing some resilience testing.

After we shut down one of the master nodes (it goes NotReady), the pods on it are still shown as RUNNING indefinitely, i.e. the situation never remedies itself.
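For context, if I understand taint-based eviction correctly, pods admitted through the default chain carry NoExecute tolerations like the following (injected by the DefaultTolerationSeconds admission plugin), so we would expect them to be deleted roughly five minutes after the node goes NotReady:

```yaml
# Default tolerations added to pod specs by the DefaultTolerationSeconds
# admission plugin; 300 s is the default timeout before eviction.
tolerations:
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300
```

Is it possible these tolerations are missing or overridden on our pods, or that taint-based eviction is otherwise not kicking in?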

If we start the node again, kubectl starts reporting proper status for it.

We do not run application pods on the master nodes, but this behavior is still unexpected, and over time it could disable our environment.

There are similar bugs that were fixed in v1.18/v1.19, but this is v1.24, so those fixes should already be included.

Also, all pods are part of Deployments/ReplicaSets; there are no standalone pods on these nodes.

Please advise what we are doing incorrectly and how to proceed.

Best Regards & Thanks!