The behavior would be the same in both instances. When the pod fails, it will come back up on a node that has capacity for the new instance. That could even be the same node it was running on (assuming you are using Deployments), depending on any taints/affinities you have set up.
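In case it helps, this is roughly what taints/affinities look like in a pod spec. Just a sketch; the taint key `dedicated` and the node label `disktype` are made-up examples, substitute whatever your cluster actually uses:

```yaml
# Pod spec fragment (e.g. inside a Deployment's pod template).
# The taint key "dedicated" and node label "disktype" are placeholders.
spec:
  # Allow scheduling onto nodes tainted with dedicated=batch:NoSchedule
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "batch"
      effect: "NoSchedule"
  # Only schedule onto nodes labelled disktype=ssd
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values: ["ssd"]
```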
Awesome! As long as you have more than 1 worker node in the cluster, that will work.
If you need the control plane to be HA as well, you will want more than one control plane node too. Typically you would run at least 3 worker nodes and 3 control plane nodes to ensure HA.
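If you go that route with kubeadm (an assumption here, since the thread hasn't said which installer you use), the usual pattern is to put a load balancer in front of the API servers and point the cluster at it. `lb.example.com` below is a placeholder:

```yaml
# kubeadm ClusterConfiguration for a stacked HA control plane.
# "lb.example.com" stands in for a load balancer that fronts
# kube-apiserver on every control plane node.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: "lb.example.com:6443"
```

The additional control plane nodes then join with `kubeadm join ... --control-plane`.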
If a node goes down, the workloads that were running on it will get rescheduled onto another node.
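Note that the rescheduling is not instant. By default, pods tolerate a not-ready/unreachable node for 300 seconds before they get evicted, and you can tighten that per pod. A sketch of those tolerations:

```yaml
# Kubernetes adds these tolerations to pods by default (with
# tolerationSeconds: 300), which is why eviction from a dead node
# takes around 5 minutes. Setting them explicitly shortens the window.
tolerations:
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 60   # evict after 60s instead of the 300s default
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 60
```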
I think two very different things are being confused here.
The link you posted explains different ways to have an HA Kubernetes control plane (etcd, kube-apiserver, kube-controller-manager, and so on). That is basically having HA on the masters/controllers.
That has nothing to do with a pod being rescheduled to another node when a worker node fails: one is about the worker nodes, the other is about the controllers.
If I understand correctly, you want a pod running on a worker node to be rescheduled to another node if the node it was running on fails. Is that correct?
If that is the case, the link does not apply. The link is about the controllers; ignore it for this.
And if that is correct, this is probably already handled, with no effort required on your side. If the pod is created with a Kubernetes resource of "kind: Deployment", Kubernetes will guarantee it gets recreated elsewhere (note that it may take some time to detect that a node is down), as long as there is enough capacity on the other nodes.
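To make that concrete, here is a minimal Deployment sketch (the name `nginx-demo` and the image are just illustrative). With `replicas: 3` and more than one worker node, the Deployment controller recreates pods on surviving nodes when a node dies:

```yaml
# Minimal Deployment; name and image are illustrative placeholders.
# The Deployment controller keeps 3 replicas running, so if the node
# hosting one of them dies, a replacement pod is scheduled on another
# node with spare capacity (once the node is detected as down).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          resources:
            requests:
              cpu: 100m
              memory: 64Mi
```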