Newbie question: say I have a cluster named “myCluster” and Rancher (the UI) reports that 2 nodes are having issues, so I’m thinking of draining them and restarting them.
Is there a way to drain each of them and restart each with kubectl?
If so, how?
Update:
I did some quick research and found:
kubectl drain <node name>
but how and why do cordon and uncordon come into play here?
Thanks.
When you run:
kubectl drain
Kubernetes first cordons that node, meaning no new pods will be scheduled on it. The existing pods are then evicted and recreated on other nodes.
$ k drain worker02 --ignore-daemonsets
node/worker02 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-x6trg, kube-system/weave-net-s2cg9
evicting pod default/testd-96769dccb-fqhxh
evicting pod default/testd-96769dccb-bg6p7
pod/testd-96769dccb-bg6p7 evicted
$ k get nodes
NAME       STATUS                     ROLES                  AGE   VERSION
master01   Ready                      control-plane,master   30d   v1.21.0
worker01   Ready                      <none>                 29d   v1.21.0
worker02   Ready,SchedulingDisabled   <none>                 29d   v1.21.0
$
pod/testd-96769dccb-fqhxh evicted
node/worker02 evicted
$
$ k uncordon worker02
You can run k uncordon to make the node schedulable again. In a way, cordon and uncordon act like a stop/start (or restart) of your node from the scheduler’s point of view: everything running on the drained node gets evicted and recreated on other nodes, and uncordon lets new pods land on it again.
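To answer the original question end to end, the per-node drain → restart → uncordon cycle can be sketched as below. This is a sketch, not an exact procedure from the thread: the ssh/reboot step is an assumption about how you restart the machine, and the function only prints each command so you can review the plan before running anything.

```shell
#!/bin/sh
# plan_restart prints the drain -> reboot -> uncordon steps for one node.
# It echoes the commands rather than executing them, so nothing changes
# until you run the printed lines yourself.
plan_restart() {
  node="$1"
  # drain cordons the node (STATUS shows SchedulingDisabled) and evicts
  # its pods; --ignore-daemonsets is needed because DaemonSet pods
  # (kube-proxy, weave-net, ...) cannot be evicted.
  echo "kubectl drain $node --ignore-daemonsets"
  # Restarting the machine itself happens outside kubectl; ssh + reboot
  # is just one assumed way of doing it.
  echo "ssh $node sudo reboot"
  # Once the node reports Ready again, allow scheduling on it.
  echo "kubectl uncordon $node"
}

plan_restart worker02
```

Repeat the cycle for the second node only after the first is back to Ready, so you never have both unhealthy nodes out of the cluster at the same time.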
$ which k
k: aliased to kubectl
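If you want the same shorthand, a common convention (not something specific to this thread) is to define the alias in your shell rc file:

```shell
# e.g. in ~/.bashrc or ~/.zshrc
alias k=kubectl
```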
Hope this helps.
Awesome, perfect answer, thank you very much.
Could you also take a look at my other question about increasing the quota for a namespace?