Node down - pods still shown as Running for hours, others stuck in Terminating

We have the same (or a related) issue on k3s v1.21.14, also on-prem.
Sometimes a node fails and all pods on that node get stuck in Terminating. Replacement pods do come up on other nodes, but it is still not possible to connect to some services (Kubeflow, for example) until the Terminating pods are deleted completely or the failed node recovers and cleans up the old pods by itself.
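For anyone hitting the same thing, the manual cleanup ends up looking roughly like the sketch below (node, pod, and namespace names are placeholders). Note that `--grace-period=0 --force` just removes the pod object from the API server without waiting for the unreachable kubelet, which is what finally lets the Service endpoints and controllers move on; only do this when the node is genuinely dead, since the old containers may still be running on it.

```sh
# Find pods still bound to the failed node (placeholder node name):
kubectl get pods -A --field-selector spec.nodeName=<failed-node> -o wide

# Force-delete a stuck pod so its endpoints are removed and the
# replacement can actually take over traffic:
kubectl delete pod <stuck-pod> -n <namespace> --grace-period=0 --force
```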
It is mind-boggling to me why this happens, since the whole point of Kubernetes is to keep everything available.