How do we debug a cluster node that becomes "NotReady" with "PLEG is not healthy"?

After updating the cluster, one of the worker nodes has the following error: PLEG is not healthy: pleg was last seen active 5m54.105590048s ago; threshold is 3m0s

The cluster nodes become "NotReady" with the following reason: PLEG is not healthy: pleg was last seen active <h>h<m>m<s>s ago;

What could be the root cause, and where should I look to investigate this issue?

This issue happens after the update, but once the node is restarted it goes from NotReady back to Ready. My question is: why does it only show up at the time of the update?

Rancher log:

Warning FailedSync error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded

Check whether the kubelet on that node is running and review its logs. If you're using k3s, the kubelet is rolled into the all-in-one k3s process; I'm not sure of the exact command for that, but I would start by looking at the k3s service itself.
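To make that concrete, here is a rough diagnostic sequence, assuming a systemd-based node with SSH access. The node name is a placeholder, and the runtime unit name (docker vs. containerd) depends on your setup. Note that PLEG (the kubelet's Pod Lifecycle Event Generator) goes unhealthy when its periodic "relist" call to the container runtime doesn't complete in time, so the DeadlineExceeded error above usually points at a slow or hung runtime rather than the kubelet itself.

```shell
# 1. From a machine with kubectl access: check node conditions and recent
#    events for PLEG-related messages. <node-name> is a placeholder.
kubectl describe node <node-name>

# 2. On the affected node: confirm the kubelet is running and search its
#    logs around the time of the update.
systemctl status kubelet
journalctl -u kubelet --since "1 hour ago" | grep -i -e pleg -e "not healthy"

# 3. PLEG health depends on the container runtime answering relist calls.
#    Check the runtime service and verify it responds promptly; a long hang
#    here matches the "context deadline exceeded" error.
systemctl status containerd    # or: systemctl status docker
crictl ps                      # should return quickly; a hang implicates the runtime

# 4. On k3s, the kubelet is embedded in the k3s service, so inspect that
#    unit's logs instead.
journalctl -u k3s --since "1 hour ago" | grep -i pleg
```

If the runtime is the culprit, restarting it (or the node, as you observed) clears the condition, which is consistent with the issue appearing only during the update window while the runtime was being replaced.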