Why "kubectl get pods" show pod still running while its node was poweroff?

Cluster information:

Kubernetes version: v1.20.4
Cloud being used: bare-metal
Installation method: kubeadm init
Host OS: CentOS 8
CNI and version: Calico 3.18.1
CRI and version: Docker 20.10.5

I created a pod and then shut down the node it was running on, but "kubectl get pods" still shows the pod's status as "Running". Why? Shouldn't it be dead?

Sometimes the status changes to "Terminating" instead, and it stays stuck there.

Is there some kind of probe from the master? And if so, what is the timeout before the status changes?

What’s the expected correct status?
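
For context, these are the commands I have been using to watch the status change; nothing exotic, just standard kubectl:

  kubectl get nodes --watch
  kubectl get pods -o wide --watch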

Hi! We took a stab at answering this question in office hours; check it out (about 35 minutes in).

This was my mistake. The Deployment's replicas on the shut-down node were moved to another healthy node after a few minutes.
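
For the record, the "few minutes" seems to line up with the 300-second default tolerationSeconds that the DefaultTolerationSeconds admission plugin injects into pods. It can be inspected on any pod (the pod name below is a placeholder):

  kubectl get pod <pod-name> -o jsonpath='{.spec.tolerations}'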

A side question: I'd like to customize the failover time, for example with tolerations like these (completed here into a full Pod manifest; the pod name simply mirrors the container name):

apiVersion: v1
kind: Pod
metadata:
  name: multitool3
spec:
  containers:
  - name: multitool3
    image: praqma/network-multitool
  tolerations:
  # Tolerate the "unreachable" taint for only 60s, then get evicted.
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 60
  # Same for the "not-ready" taint.
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 60
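
To try it out, I apply the manifest and confirm the tolerations took effect (the filename is just what I saved it as):

  kubectl apply -f multitool3.yaml
  kubectl get pod multitool3 -o jsonpath='{.spec.tolerations}'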

Then what's the difference between "unreachable" and "not-ready"?

DaemonSets are also special: their pods keep the "Running" status while the node is down, for example kube-proxy-dh7pq and calico-node-xkwfh. Why?
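
In case it helps anyone reproduce this, comparing the tolerations on those DaemonSet pods against a normal pod's should show what differs (pod names are the ones from my cluster above):

  kubectl -n kube-system get pod kube-proxy-dh7pq -o jsonpath='{.spec.tolerations}'
  kubectl -n kube-system get pod calico-node-xkwfh -o jsonpath='{.spec.tolerations}'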