I am using the commands below for etcd backup and restore. Of course, they worked successfully on a cluster created by kubeadm, but they are not working on a cluster created the hard way. Please provide your suggestions.
Backup:
ETCDCTL_API=3 etcdctl snapshot save mysnapshot.db --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key
List members:
ETCDCTL_API=3 etcdctl member list --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key
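For a cluster built with Kubernetes the Hard Way, etcd typically runs as a systemd service and the client certificates are not under /etc/kubernetes/pki/etcd/. A sketch of the same backup, assuming the default certificate locations from that guide (/etc/etcd/ca.pem, /etc/etcd/kubernetes.pem, /etc/etcd/kubernetes-key.pem):
ETCDCTL_API=3 etcdctl snapshot save mysnapshot.db \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem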
This doesn’t answer your question directly, but I wouldn’t recommend etcd backup as a way of protecting a Kubernetes cluster. Look at https://stateful.kubernetes.sh/ for other options.
ETCDCTL_API=3 etcdctl snapshot restore /tmp/snapshot-pre-boot.db \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
--name=master \
--data-dir=/var/lib/etcd-from-backup \
--initial-cluster=master=https://127.0.0.1:2380 \
--initial-cluster-token=etcd-cluster-1 \
--initial-advertise-peer-urls=https://127.0.0.1:2380
After running the above command, depending on whether etcd runs as a static pod or as a service, update its configuration to pass the new --initial-cluster-token and --data-dir path accordingly, then restart it.
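If etcd runs as a systemd service, the restart step might look like this; a minimal sketch, assuming the unit is named etcd and its unit file has already been edited to point --data-dir at /var/lib/etcd-from-backup:
sudo systemctl daemon-reload
sudo systemctl restart etcd
sudo systemctl status etcd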
Please check out the following project that simplifies etcd backup. It obviates the need to explicitly create etcd snapshots and also provides the benefit of automatically backing up the snapshot file to any S3 bucket.
Update the etcd pod to use the new data directory and cluster token by modifying the pod definition file at /etc/kubernetes/manifests/etcd.yaml. When this file is updated, the etcd pod is automatically re-created, as it is a static pod placed under the /etc/kubernetes/manifests directory.
Update --data-dir to use the new target location:
--data-dir=/var/lib/etcd-from-backup
Update --initial-cluster-token to specify the new cluster:
--initial-cluster-token=etcd-cluster-1
Update the volumes and volume mounts to point to the new path, then verify as shown below.
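A quick way to check that the manifest edits took effect and that kubelet has recreated the static pod; a minimal sketch, assuming the manifest path above:
# confirm the flags in the manifest
grep -E "data-dir|initial-cluster-token" /etc/kubernetes/manifests/etcd.yaml
# watch the etcd static pod restart in kube-system
kubectl get pods -n kube-system -w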
ETCDCTL_API=3 etcdctl member list --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --endpoints=https://127.0.0.1:2379
Check whether your pods, deployments, and services have been recreated.
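For example, to list them across all namespaces:
kubectl get pods,deployments,services --all-namespaces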
Copy and paste the complete script below, then run the command to verify it.
cat << 'EOF' > etcd_snapshot_backup.sh
# How can I save the etcd backup snapshot in a single command? Author: Lindos_tech_geeks
echo -n "Please enter the location to save the backup : "
read loc
ETCDCTL_API=3 etcdctl snapshot save "$loc" --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key
echo "You are verifying the output of the saved snapshot $loc"
ETCDCTL_API=3 etcdctl --write-out=table snapshot status "$loc"
EOF
#Then run the command
sh etcd_snapshot_backup.sh
If you are using a browser-based shell, creating the file with cat can sometimes add junk characters; in that case, copy only the script body into a vim editor instead.
Sample O/P
master $ vim etcd_snapshot_backup.sh
master $ sh etcd_snapshot_backup.sh
Please enter the location to save the backup : /root/etcdbackup.db
Snapshot saved at /root/etcdbackup.db
You are verifying the output of the saved snapshot /root/etcdbackup.db
+----------+----------+------------+------------+
|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| 4743dec6 |     3245 |       1466 |     3.4 MB |
+----------+----------+------------+------------+
master $
Hello, @swaroopcs88
Use server.key and server.crt instead of apiserver-etcd-client.crt and apiserver-etcd-client.key. The server.key and server.crt files are located at /etc/kubernetes/pki/etcd/.
Thank you, Tej. What is the --data-dir value? Will it be the location where the snapshot was saved?
I am not seeing --initial-cluster-token in the etcd.yaml file.
Please advise.