Where and how to deploy an HAProxy for control-plane redundancy

I am learning Kubernetes and struggling to understand control-plane redundancy. I understand that control-plane nodes contain additional pods/services, such as the etcd service and the API service. If I understand it correctly, kube-proxy handles the internal redundancy (etcd redundancy with multiple control-plane nodes, as an example), but redundant client access to the kube-apiserver requires something like HAProxy to facilitate it?

I have seen different opinions on where that HAProxy should be placed to facilitate that redundancy, and I am not even sure HAProxy is the right choice if I am running Cilium (which I think also has a proxy of some sort?).

  • Should the HAProxy be external to Kubernetes, or are there highly redundant ways to deploy HAProxy within Kubernetes?
  • When does HAProxy need to be installed? Does it need to be up before the first node is configured? Can it be added with the second node, or after three nodes are up and running?
  • Is HAProxy the recommended way of providing API access redundancy, or can I use other methods to achieve the same thing?
  • Why do we deploy redundancy for things like etcd, but we don’t just include API redundancy in Kubernetes by default?

I am sure this is discussed somewhere in a way that should have answered these questions, but my quest to understand these pieces has only left me more confused.

Cluster information:

Kubernetes version: v1.34.1
Cloud being used: Baremetal
Installation method: kubeadm?
Host OS: Rocky Linux 10.0 (Red Quartz)
CNI and version: Cilium v1.18.2
CRI and version: containerd://2.1.3

Hi,

I believe you should place the LB outside of the k8s cluster.

You can use any LB that you like; I do not see any reason why you would be forced to use HAProxy.
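One common way to make the external LB itself redundant is to run two HAProxy hosts that share the 10.1.1.100 VIP via keepalived (VRRP), so the VIP fails over if a host or its haproxy process dies. A minimal sketch, with the interface name `eth0`, the `pidof` path, and the password being assumptions to adjust for your environment:

```
# /etc/keepalived/keepalived.conf on the primary LB host
vrrp_script chk_haproxy {
    script "/usr/bin/pidof haproxy"   # mark this node down if haproxy dies
    interval 2
    weight -20
}

vrrp_instance K8S_API {
    state MASTER             # use BACKUP on the second LB host
    interface eth0           # assumption: adjust to your NIC name
    virtual_router_id 51
    priority 101             # use a lower priority (e.g. 100) on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme   # assumption: pick your own shared secret
    }
    virtual_ipaddress {
        10.1.1.100
    }
    track_script {
        chk_haproxy
    }
}
```

The backup host carries the same config with `state BACKUP` and a lower `priority`; whichever host holds the VIP answers on 10.1.1.100:6443.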

I have been trying to deploy an HAProxy with this configuration (in this example, 10.1.1.100 is the HAProxy front-end IP):

global
    log stdout format raw local0
    maxconn 4000
    user haproxy
    group haproxy
    daemon

defaults
    mode tcp
    log global
    option tcplog
    option dontlognull
    retries 3
    timeout connect 10s
    timeout client 1m
    timeout server 1m

frontend kubernetes
    mode tcp
    bind *:6443
    default_backend kubernetes-masters

backend kubernetes-masters
    mode tcp
    balance source
    server master-1 10.1.1.101:6443 check
    server master-2 10.1.1.102:6443 check
    server master-3 10.1.1.103:6443 check
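As an aside, the plain `check` above is only a TCP connect test, so HAProxy will keep routing to an apiserver that accepts connections but is not healthy. Since the apiserver serves an HTTPS `/healthz` endpoint on 6443, the backend can be tightened to an HTTP health check over TLS; a sketch of the adjusted backend, assuming an HAProxy version that supports this `option httpchk` form:

```
backend kubernetes-masters
    mode tcp
    balance source
    option httpchk GET /healthz
    http-check expect status 200
    server master-1 10.1.1.101:6443 check check-ssl verify none
    server master-2 10.1.1.102:6443 check check-ssl verify none
    server master-3 10.1.1.103:6443 check check-ssl verify none
```

`check-ssl` makes the health probe speak TLS and `verify none` skips certificate verification for the probe; traffic itself is still passed through unmodified in `mode tcp`.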

But then I am unclear how to configure the first node to have a stable IP. This is my kube-admin.yaml for configuration of the first node.

kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta4
kubernetesVersion: 1.34.1
controlPlaneEndpoint: "10.1.1.100:6443"
clusterName: KUBE-C1
networking:
  podSubnet: 10.211.0.0/16
  serviceSubnet: 10.212.0.0/16
apiServer:
  certSANs:
    - 10.1.1.100
    - 10.1.1.101
    - 10.1.1.102
    - 10.1.1.103
---
kind: InitConfiguration
apiVersion: kubeadm.k8s.io/v1beta4
localAPIEndpoint:
  advertiseAddress: 10.1.1.100
  bindPort: 6443
nodeRegistration:
  kubeletExtraArgs:
  - name: node-ip
    value: 10.1.1.101

---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd

This never starts. The kubeadm process seems fine until it tries to call the kube-apiserver, but that always fails. I can get node one to come up by changing everything that points to 10.1.1.100:6443 to the host ip 10.1.1.101 and then it works fine, but I cannot add a node because I don’t have a stable IP. I cannot figure out what dependency I am missing.