I am learning Kubernetes and struggling to understand control-plane redundancy. I understand that control-plane nodes run additional pods/services, such as etcd and the kube-apiserver. If I understand it correctly, kube-proxy handles the internal redundancy (etcd redundancy across multiple control-plane nodes, for example), but for client redundancy to the kube-apiserver, I believe something like HAProxy is required to facilitate that?
I have seen differing opinions on where that HAProxy should be placed to provide the redundancy, and I am not even sure HAProxy is the right choice given that I am running Cilium (which, as I understand it, also includes a proxy of some sort).
Should HAProxy be external to Kubernetes, or are there highly redundant ways to deploy it within Kubernetes?
When does HAProxy need to be installed? Does it need to be up before the first node is configured, can it be added with the second node, or only after three nodes are up and running?
Is HAProxy the recommended way of providing redundant API access, or are there other methods that achieve the same thing?
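For concreteness, this is the kind of setup I mean by "HAProxy in front of the API": a plain TCP load balancer that health-checks each kube-apiserver. The backend IPs below are just examples for the three control-plane nodes I am planning; I am not sure this config is actually correct or complete:

```
# /etc/haproxy/haproxy.cfg (sketch; IPs are my hypothetical control-plane nodes)
frontend kube-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube-apiserver-backends

backend kube-apiserver-backends
    mode tcp
    option tcp-check
    balance roundrobin
    server cp1 10.1.1.101:6443 check
    server cp2 10.1.1.102:6443 check
    server cp3 10.1.1.103:6443 check
```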
Why do we deploy redundancy for things like etcd, but API-server redundancy is not just included in Kubernetes by default?
I am sure this is discussed somewhere in a way that should have answered these questions, but my quest to understand these pieces has only left me more confused.
Cluster information:
Kubernetes version: v1.34.1
Cloud being used: Baremetal
Installation method: kubeadm?
Host OS: Rocky Linux 10.0 (Red Quartz)
CNI and version: Cilium v1.18.2
CRI and version: containerd://2.1.3
This never starts. The kubeadm process seems fine until it tries to call the kube-apiserver, which always fails. I can get node one to come up by changing everything that points to 10.1.1.100:6443 to the host IP 10.1.1.101, and then it works fine, but then I cannot add a second node because I don’t have a stable control-plane IP. I cannot figure out what dependency I am missing.
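For reference, this is roughly the init I am attempting on the first control-plane node, with 10.1.1.100:6443 as the intended load-balancer/VIP endpoint (nothing currently answers on that address, which I suspect is the problem; the pod CIDR is just an example value):

```
# Sketch of my kubeadm init on the first control-plane node.
# --control-plane-endpoint is the stable address kubeadm bakes into
# the certificates and kubeconfigs; here it is the VIP 10.1.1.100.
sudo kubeadm init \
  --control-plane-endpoint "10.1.1.100:6443" \
  --upload-certs \
  --pod-network-cidr "10.244.0.0/16"
```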