Hi,
I want a highly available control plane, so that if one node goes down everything keeps working.
I'm using Debian 11.3 with Kubernetes 1.24.2 and keepalived 2.1.5.
So I built the first control-plane node:
kubeadm init --control-plane-endpoint "k8scluster:6443" --upload-certs --skip-phases=addon/kube-proxy
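I then joined the other two nodes with the join command that kubeadm init printed; from memory it was along these lines (token and CA cert hash redacted):

kubeadm join k8scluster:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>   # exact values redacted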
After joining, I labeled the two other nodes (a quick label check follows the commands):
kubectl label node k8sn2 node-role.kubernetes.io/worker=worker
kubectl label node k8sn3 node-role.kubernetes.io/worker=worker
kubectl label node k8sn2 node-role.kubernetes.io/control-plane=control-plane
kubectl label node k8sn3 node-role.kubernetes.io/control-plane=control-plane
and on node k8sn2 I also labeled the first node as a worker:
kubectl label node k8sn1 node-role.kubernetes.io/worker=worker
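To double-check, the labels can be listed like this:

kubectl get nodes --show-labels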
root@k8sN2:~# kubectl get nodes
NAME    STATUS   ROLES                  AGE   VERSION
k8sn1   Ready    control-plane,worker   19m   v1.24.2
k8sn2   Ready    control-plane,worker   15m   v1.24.2
k8sn3   Ready    control-plane,worker   15m   v1.24.2
root@k8sN2:~# cat /etc/hosts
127.0.0.1 localhost
172.16.254.30 k8scluster
172.16.254.31 k8sN1
172.16.254.32 k8sN2
172.16.254.33 k8sN3
For the cluster endpoint IP I use keepalived; 172.16.254.30 is the virtual IP behind the k8scluster name.
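My keepalived.conf is essentially the following on each node (virtual_router_id, priority, and the password here are placeholders; state and priority differ per node):

vrrp_instance VI_1 {
    state MASTER                 # BACKUP on k8sN2 and k8sN3
    interface enp1s0
    virtual_router_id 51         # placeholder
    priority 101                 # lower on the backup nodes
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass <secret>       # placeholder
    }
    virtual_ipaddress {
        172.16.254.30
    }
}

On k8sN1 the VIP is currently held on enp1s0: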
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:be:b2:f2 brd ff:ff:ff:ff:ff:ff
    inet 172.16.254.31/16 brd 172.16.255.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet 172.16.254.30/32 scope global enp1s0
       valid_lft forever preferred_lft forever
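A quick sanity check against the VIP also works (-k skips CA verification):

curl -k https://k8scluster:6443/healthz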
But if I shut down the node k8sN1, the command “kubectl get nodes” no longer works.
What did I do wrong?
Thanks