HA Kubernetes

Hi,
I like to understand things.

Let's say we have 2 master nodes and 3 workers.
In HA Kubernetes, do we have 3 instances of the same pod, one running on each worker?

In non-HA Kubernetes, if a pod fails, it is not restarted on another worker, right?
Thanks

The behavior would be the same in both cases. When the pod fails, it comes back up on a node that has capacity for the new instance. That could even be the same node it was already running on (assuming you are using Deployments), depending on any taints/affinities you may have set up.
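For example, a minimal Deployment sketch (the name and image are just placeholders) where the controller keeps the requested number of replicas running, recreating a pod on whichever node has capacity if one dies:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical name
spec:
  replicas: 2                 # the controller always tries to keep 2 pods running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25   # placeholder image
```

If one of those pods (or the node it is on) dies, the Deployment's ReplicaSet notices the shortfall and the scheduler places a replacement pod on a node that has room.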

Then why is HA Kubernetes needed, if the pod is restarted on another node when the node it was running on fails?

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/

Which deployment setup are you looking at?

I'd like to have a solution where, when a node fails, the pods that were running on it are started on another node.

Awesome :slight_smile: As long as you have more than 1 worker node in the cluster, that will work.

If you also need the control plane to be HA, you will want more than one control plane node as well. Typically you would have at least 3 worker nodes and 3 control plane nodes to ensure HA.

If a node goes down, the workloads that were running on it will get rescheduled onto another node.
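If you also want the replicas spread over different workers, so a single node failure never takes out all of them at once, one option is a topology spread constraint. A sketch (names, labels, and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                                    # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      topologySpreadConstraints:
        - maxSkew: 1                           # keep per-node pod counts within 1 of each other
          topologyKey: kubernetes.io/hostname  # spread across individual nodes
          whenUnsatisfiable: ScheduleAnyway    # prefer spreading, but still schedule if it can't
          labelSelector:
            matchLabels:
              app: web
      containers:
        - name: web
          image: nginx:1.25                    # placeholder image
```

That way the remaining replicas keep serving traffic while the lost one is being rescheduled.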

I think two very different things are being confused here.

The link you posted explains different ways to have an HA Kubernetes control plane (i.e. etcd, kube-apiserver, kube-controller-manager, etc.). That is basically having HA on the masters/controllers.

That has nothing to do with a pod being rescheduled to another node when a worker node fails: one is about worker nodes, the other is about the controllers.

If I understand correctly, you want a pod running on a worker node to be rescheduled to another node if the node it was running on fails. Is that correct?

If that is the case, the link is not relevant here; it is about the controllers. You can ignore it for this.

If that is correct, this probably already happens, with no effort required on your side. If the pod is created with a Kubernetes resource of kind: Deployment, Kubernetes will guarantee that (note that it may take some time to detect that a node is down), as long as there is enough capacity on the other nodes.
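To put a rough number on "some time": on a default cluster, pods on a node that becomes unreachable are usually evicted and rescheduled after about 5 minutes, because the default node.kubernetes.io/unreachable and not-ready tolerations are added with tolerationSeconds: 300. If you want failover to happen faster, you can override those tolerations in the pod template. A sketch, assuming a 60-second budget:

```yaml
# Goes under spec.template.spec in the Deployment
tolerations:
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 60   # evict ~60s after the node is marked unreachable (default is 300)
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 60
```

The exact timing also depends on the node monitor settings on the control plane, so treat these numbers as a ballpark.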