Cluster information:
Kubernetes version: 1.30
Installation method: kubeadm, on-premise (RHEL 8.6)
CNI and version: 1.4
CRI and version: 1.7.16
I successfully joined the new master node to the existing Kubernetes cluster, then drained and deleted the old master node. However, when I run `kubeadm reset` on the old master node to clean it up, the cluster goes down.
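A rough sketch of the sequence described above (node names `old-master` and `new-master` are placeholders, not the real hostnames):

```shell
# Drain the old control-plane node (placeholder name: old-master)
kubectl drain old-master --ignore-daemonsets --delete-emptydir-data

# Remove it from the cluster
kubectl delete node old-master

# On the old master itself: wipe the local kubeadm state
# (this is the step after which the cluster goes down)
kubeadm reset
```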
To join the new master node to the existing cluster, I used the token and certificate key from the old master node.
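The join itself was done along these lines (the load balancer endpoint, token, CA cert hash, and certificate key below are placeholders standing in for the real values):

```shell
# Placeholders: <lb-endpoint>, <token>, <hash>, <cert-key>
kubeadm join <lb-endpoint>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane \
  --certificate-key <cert-key>
```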
While installing the control-plane components on the new master node, I set the load balancer and initial node IP to point at the new master node, and updated the Flannel configuration accordingly. Despite these steps, the cluster still goes down after resetting the old master node.
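One thing I have not yet confirmed is whether the old master was still listed as an etcd member at the time of the reset. A way to check (assuming a stacked etcd topology and the default kubeadm certificate paths; `etcd-new-master` is a placeholder for the actual etcd pod name):

```shell
# List etcd members from any remaining control-plane node
# (assumes stacked etcd; pod name follows the etcd-<node-name> pattern)
kubectl -n kube-system exec etcd-new-master -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list
```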