Manual upgrade & backup of a K8s cluster

I manually installed a Kubernetes cluster of 3 nodes (1 master, 2 workers). Now I want to perform an upgrade of the k8s version (say, from 1.7 to 1.11). As the gap is large, the preferred method would be to forcefully reinstall all the required packages. Is there a better way to do this? If yes, could you please tell me how?

Assuming I do the upgrade by re-installing packages, I would want to manually back up everything (configuration, namespaces, and especially persistent volumes). On the Kubernetes homepage I found Juju recommended, but as I’m not running Juju, what would be an alternative to do it manually?

Thank you!

Such a long gap is not tested (like a 1.11 master with 1.7 nodes). And usually each release has some “action required” items, for example alpha features that change the on-disk format for some resources, YAML fields, etc.

The safe advice is to install a new cluster and migrate your services there, especially if you only have 2 workers.

But it really depends on what features you are using (alpha features that might have changed or no longer exist, etc.).

Looking at the “action required” items for each release and going one release at a time, one by one, might be okay too.
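
If the cluster happens to be kubeadm-managed, each hop could look roughly like the sketch below (the exact patch versions and the apt packaging are only examples; a hand-rolled install would need the equivalent binary swaps instead):

# Rough sketch of one upgrade hop (1.7 -> 1.8); repeat for 1.9, 1.10, 1.11.
# Versions and package names below are assumptions, not tested values.

# On the master: install the next kubeadm, review the plan, then apply it
sudo apt-get update && sudo apt-get install -y kubeadm=1.8.15-00
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.8.15

# On every node: move kubelet/kubectl to the same minor version
sudo apt-get install -y kubelet=1.8.15-00 kubectl=1.8.15-00
sudo systemctl restart kubelet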

As ratha said, you can manually upgrade the cluster one version at a time. You can back up your cluster in two ways.

  1. Back up all resources as YAML files (Kubernetes API resources only):

kubectl get all --all-namespaces -o yaml > backupAPI.yaml
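
Note that get all skips several kinds (ConfigMaps, Secrets, PersistentVolumes, …), so exporting those separately is worth doing, and the dumps can later be fed back with kubectl apply (file names below are just examples):

# "get all" does not cover every kind; export the rest explicitly
kubectl get configmaps,secrets,pvc --all-namespaces -o yaml > backupExtra.yaml
kubectl get pv -o yaml > backupPV.yaml

# Re-create the dumped objects on the new/upgraded cluster
kubectl apply -f backupAPI.yaml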

  2. Take a snapshot of etcd, where the cluster information is stored:

ETCDCTL_API=3 etcdctl snapshot save snapshot.db

This saves the etcd snapshot locally.
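
Before relying on the file, a quick sanity check of the snapshot (same file name as above):

# Show the snapshot's hash, revision, total keys and size
ETCDCTL_API=3 etcdctl snapshot status snapshot.db --write-out=table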

Restore:

ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
 --name=master \
 --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
 --data-dir /var/lib/etcd-from-backup \
 --initial-cluster=master=https://127.0.0.1:2380 \
 --initial-cluster-token=etcd-cluster-1 \
 --initial-advertise-peer-urls=https://127.0.0.1:2380 \
 snapshot restore /tmp/snapshot-pre-boot.db

(Adjust the --initial-cluster list to the number of masters you have; the default shown above is a single master on localhost.)

Modify the etcd static Pod manifest /etc/kubernetes/manifests/etcd.yaml and update these flags:

--data-dir=/var/lib/etcd-from-backup
--initial-cluster-token=etcd-cluster-1

Also make sure the volume and volumeMount definitions point at the new data directory:

    volumeMounts:
    - mountPath: /var/lib/etcd-from-backup
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd-from-backup
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
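
Once the kubelet has restarted the etcd static Pod with the new data directory, a quick check that the restore took effect:

# The etcd and control-plane Pods should come back up in kube-system
kubectl get pods -n kube-system

# The previously backed-up objects should be visible again
kubectl get all --all-namespaces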

Just to add to my post above: it’s better to include the certs when taking the backup:

ETCDCTL_API=3 etcdctl snapshot save backup-etcd/snapshot.db \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
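
Archiving the certificates themselves alongside the snapshot is also a common practice (the archive path below is just an example):

# Keep a copy of the cluster PKI (including the etcd certs) next to the snapshot
sudo tar -czf backup-etcd/pki-backup.tar.gz /etc/kubernetes/pki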

Thanks
AjithReddy

sudo ETCDCTL_API=3 etcdctl snapshot restore /home/yourpath/etcd_backup.db \
  --initial-cluster etcd-restore=https://host1:2380 \
  --initial-advertise-peer-urls https://host1:2380 \
  --name etcd-restore \
  --data-dir /var/lib/etcd
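
After pointing etcd at the restored data directory, a health check along these lines can confirm the member is serving again (assuming the kubeadm cert paths used earlier in this thread):

ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key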