N00b question: high-availability control plane - how to set it up?

I want a high-availability control plane, so that if one node goes down
everything keeps working.
Using Debian 11.3 with k8s 1.24.2 and keepalived 2.1.5.

So I built the first control-plane node:

kubeadm init --control-plane-endpoint "k8scluster:6443" --upload-certs --skip-phases=addon/kube-proxy

Then I added the two other nodes:

kubectl label node k8sn2 node-role.kubernetes.io/worker=worker
kubectl label node k8sn3 node-role.kubernetes.io/worker=worker
kubectl label node k8sn2 node-role.kubernetes.io/control-plane=control-plane
kubectl label node k8sn3 node-role.kubernetes.io/control-plane=control-plane
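(For reference, I understand that additional control-plane nodes are normally joined with the command `kubeadm init` prints, roughly like the following - the token, hash, and certificate key here are placeholders, not my real values:)

```shell
# run on k8sN2 and k8sN3; values below are placeholders printed by kubeadm init
kubeadm join k8scluster:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane \
  --certificate-key <certificate-key>
```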

On node k8sN2:

kubectl label node k8sn1 node-role.kubernetes.io/worker=worker

root@k8sN2:~# kubectl get nodes
NAME    STATUS   ROLES                  AGE   VERSION
k8sn1   Ready    control-plane,worker   19m   v1.24.2
k8sn2   Ready    control-plane,worker   15m   v1.24.2
k8sn3   Ready    control-plane,worker   15m   v1.24.2
root@k8sN2:~# cat /etc/hosts
localhost   k8scluster   k8sN1   k8sN2   k8sN3

For the cluster endpoint IP (the virtual IP that k8scluster resolves to) I use keepalived:

2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:be:b2:f2 brd ff:ff:ff:ff:ff:ff
inet brd scope global enp1s0
valid_lft forever preferred_lft forever
inet scope global enp1s0
valid_lft forever preferred_lft forever
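(My keepalived setup is roughly along these lines - this is a minimal sketch, the interface name, VIP, and priority are placeholders, not my exact config:)

```conf
# /etc/keepalived/keepalived.conf - sketch with placeholder values
vrrp_instance VI_1 {
    state MASTER              # BACKUP on the other two nodes
    interface enp1s0          # assumption: same NIC as shown above
    virtual_router_id 51
    priority 100              # lower priority on the backup nodes
    advert_int 1
    virtual_ipaddress {
        192.0.2.100/24        # placeholder VIP that k8scluster resolves to
    }
}
```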

If I shut down node k8sN1, the command "kubectl get nodes" no longer works.

What did I do wrong?