I have two K8s clusters running on different versions, and I would like to merge them into one.
How can we do that?
first cluster-
vagrant@master01:~$ k get nodes
NAME       STATUS   ROLES                  AGE    VERSION
master01   Ready    control-plane,master   4d1h   v1.21.0
worker01   Ready    <none>                 4d     v1.21.0
worker02   Ready    <none>                 4d     v1.21.0
vagrant@master01:~$
Second Cluster-
vagrant@master19-01:~$ k get nodes
NAME          STATUS   ROLES                  AGE   VERSION
master19-01   Ready    control-plane,master   46h   v1.20.0
worker19-01   Ready    <none>                 46h   v1.20.0
worker19-02   Ready    <none>                 46h   v1.20.0
vagrant@master19-01:~$
I would like to merge the second cluster into the first cluster. Please help.
As far as I know, there is no way to "merge" two clusters. The workload state lives in the API server and its etcd; merging the clusters would imply somehow merging the data in etcd, and I don't think that's doable.
I think the only way is to back up the storage and migrate the workloads from cluster2 into cluster1.
Thank you for your reply.
I am thinking of taking a backup of all the namespaces as a YAML file from one cluster and just applying that YAML in the other cluster.
On cluster one-
kubectl get all --all-namespaces -o yaml > all_deploy_services_from_cluster1.yaml
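One caveat: `kubectl get all` does not actually return all resource kinds; ConfigMaps, Secrets, PVCs, Ingresses, RBAC objects, and CRDs are skipped. A per-namespace export loop that names the extra kinds explicitly is safer. This is a minimal sketch, not a complete backup; the resource list and output file names are illustrative:

```shell
# Export common namespaced resources from every namespace.
# "kubectl get all" only covers workloads and services --
# ConfigMaps, Secrets, PVCs, etc. must be listed explicitly.
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  kubectl get deploy,sts,ds,svc,cm,secret,pvc,ingress \
    -n "$ns" -o yaml > "backup_${ns}.yaml"
done
```

Note that the exported YAML will still carry cluster-specific fields (status, resourceVersion, clusterIP) that may need stripping before re-applying elsewhere.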
By "merging the clusters", do you want to consolidate pods in the one cluster, use the workers and/or control-plane nodes in the same cluster, or both?
Consolidate pods in one cluster
The only way I know how to do this would just be to apply the YAMLs from the dying cluster to the new cluster.
2/3. Consolidate cluster resources:
Apply workloads from the dying cluster to the aggregate cluster.
Drain and remove the old cluster's nodes (workers/masters).
Join those workers and/or control-plane nodes to the new cluster.
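Assuming both clusters were built with kubeadm (the Vagrant prompts suggest so), the drain/remove/join steps above might look like the following. The endpoint, token, and hash are placeholders you would take from the gaining cluster's control plane:

```shell
# On the gaining cluster's control plane: mint a fresh join command.
kubeadm token create --print-join-command

# On the dying cluster: drain and remove a worker.
kubectl drain worker19-01 --ignore-daemonsets --delete-emptydir-data
kubectl delete node worker19-01

# On the worker itself: wipe the old cluster state, then join the new one.
sudo kubeadm reset -f
sudo kubeadm join <master01-endpoint>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```

The two clusters are on different minor versions (v1.20.0 vs v1.21.0), so the joining nodes' kubelet/kubeadm packages should be upgraded to match the gaining cluster first.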
Unless you're running at near capacity on the gaining cluster, or the dying cluster has a large number of pods, you should be able to get them running on the new one. And if you run into resource constraints, additional workers take seconds to join the new cluster.
I am exploring using https://velero.io
Velero is an open source tool to safely backup and restore, perform disaster recovery, and migrate Kubernetes cluster resources and persistent volumes.
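With Velero, a migration is typically a backup on the source cluster followed by a restore on the target. A sketch, assuming the Velero server is already installed in both clusters against a shared object-storage bucket (the backup name is illustrative):

```shell
# On the second (source) cluster: back up everything except system namespaces.
velero backup create cluster2-migration \
  --exclude-namespaces kube-system,kube-public,kube-node-lease

# On the first (target) cluster, pointed at the same backup storage:
velero restore create --from-backup cluster2-migration
```

Unlike the plain `kubectl get ... -o yaml` approach, Velero can also snapshot persistent volumes, which matters if the workloads have state.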
I think this can solve the problem. Thank you everyone!