ETCD - backup and restore management

I tried to back up etcd on my local cluster, but it seems passing only the endpoint didn't work properly; only this command worked for me:

ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
     --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
     snapshot save <backup-file>

Are those certs mandatory to back up the etcd db?

While restoring, why do these additional parameters need to be passed in, such as:

ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
     --name=master \
     --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
     --data-dir /var/lib/etcd-from-backup \
     --initial-cluster=master=https://127.0.0.1:2380 \
     --initial-cluster-token etcd-cluster-1 \
     --initial-advertise-peer-urls=https://127.0.0.1:2380 \
     snapshot restore ...

When kubelet is not working properly, you can use journalctl -u kubelet to check its logs. The config file normally resides at /var/lib/kubelet/config.yaml, and the apiserver URL port should be 6443 in general. You can then use systemctl restart kubelet to restart the service and check its status, as in the sketch below.
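
A minimal troubleshooting sketch along those lines (paths assume a kubeadm-provisioned node):

journalctl -u kubelet -n 50 --no-pager      # recent kubelet log entries
head /var/lib/kubelet/config.yaml           # kubelet configuration file
grep server /etc/kubernetes/kubelet.conf    # apiserver URL; expect port 6443
systemctl restart kubelet                   # restart the service
systemctl status kubelet                    # confirm it is active (running)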

The etcd server sometimes has a different CA from the kube-apiserver, so do check that carefully.
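
One way to compare the two CAs, assuming the kubeadm manifest locations used elsewhere in this thread: etcd declares its own CA via --trusted-ca-file, while the apiserver declares the CA it uses to verify etcd via --etcd-cafile.

grep trusted-ca-file /etc/kubernetes/manifests/etcd.yaml
grep etcd-cafile /etc/kubernetes/manifests/kube-apiserver.yaml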

For easier editing in vim:

set number
set tabstop=2
set expandtab
set shiftwidth=2

Hi, can you provide the link for:

ETCDCTL_API=3 etcdctl --endpoints=https://....

Here is the link: Operating etcd clusters for Kubernetes | Kubernetes

This is the complete setup to back up and validate the status of the backup:

ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key member list

ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save <backup-file>

ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot status <backup-file>
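
For a more readable summary, snapshot status also accepts --write-out=table (the file name here is a placeholder):

ETCDCTL_API=3 etcdctl --write-out=table snapshot status <backup-file>
# prints a table with the snapshot's hash, revision, total keys and total size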

alias k='kubectl'
alias kgp='kubectl get pods'
alias kgs='kubectl get service'
alias kd='kubectl delete'
alias kcf='kubectl create -f'
alias kaf='kubectl apply -f'
alias kgpa='kubectl get pods --all-namespaces'
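
To keep these aliases across sessions, one option is to append them to your shell profile (a sketch, assuming bash):

cat >> ~/.bashrc <<'EOF'
alias k='kubectl'
alias kgp='kubectl get pods'
# ...add the remaining aliases here
EOF
source ~/.bashrc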

vi ~/.vimrc
set number
set tabstop=2
set expandtab
set shiftwidth=2
set cursorline

Here's what my lecturer told me about the steps:
To make use of etcdctl for tasks such as backup and restore, make sure that you set ETCDCTL_API to 3.

You can do this by exporting the variable ETCDCTL_API prior to using the etcdctl client. This can be done as follows:

Backup

master $ export ETCDCTL_API=3
master $ etcdctl -h | grep -A 1 API
    API VERSION:
            3.3
master $
master $ head -n 35 /etc/kubernetes/manifests/etcd.yaml  | grep -A 20 containers
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://172.17.0.12:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://172.17.0.12:2380
    - --initial-cluster=master=https://172.17.0.12:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379,https://172.17.0.12:2379
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://172.17.0.12:2380
    - --name=master
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: k8s.gcr.io/etcd:3.4.3-0
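
The manifest above provides everything the backup command needs; roughly, the flags map as follows (a reference sketch, not from the lecturer's notes):

# etcd manifest flag     -> etcdctl flag for the backup
# --trusted-ca-file      -> --cacert
# --cert-file            -> --cert
# --key-file             -> --key
# --listen-client-urls   -> --endpoints (pick the https://127.0.0.1:2379 entry)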

master $ etcdctl \
> --endpoints=https://127.0.0.1:2379 \
> --cacert=/etc/kubernetes/pki/etcd/ca.crt \
> --cert=/etc/kubernetes/pki/etcd/server.crt \
> --key=/etc/kubernetes/pki/etcd/server.key \
> snapshot save /tmp/snapshot-pre-boot.db
Snapshot saved at /tmp/snapshot-pre-boot.db
master $

Restore, referencing the configuration from /etc/kubernetes/manifests/etcd.yaml, adding in --initial-cluster-token=etcd-cluster-1, and modifying --data-dir=/var/lib/etcd to point to a new location, --data-dir=/var/lib/etcd-from-backup:

ETCDCTL_API=3 etcdctl snapshot restore /tmp/snapshot-pre-boot.db \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
--name=master \
--data-dir=/var/lib/etcd-from-backup \
--initial-cluster=master=https://127.0.0.1:2380 \
--initial-cluster-token=etcd-cluster-1 \
--initial-advertise-peer-urls=https://127.0.0.1:2380

Next, edit /etc/kubernetes/manifests/etcd.yaml and replace all entries that reference /var/lib/etcd (the --data-dir flag, the volumeMount, and the hostPath volume) with /var/lib/etcd-from-backup.
Next, add the line --initial-cluster-token=etcd-cluster-1 to the container command section. A quick way to make the path replacements is shown below.
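
A hedged shortcut for the path edits (back up the manifest first; this assumes the kubeadm default manifest shown earlier):

cp /etc/kubernetes/manifests/etcd.yaml /tmp/etcd.yaml.bak    # keep a copy
sed -i 's|/var/lib/etcd|/var/lib/etcd-from-backup|g' /etc/kubernetes/manifests/etcd.yaml

Because etcd runs as a static pod, the kubelet notices the manifest change and recreates the pod automatically.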

Next, validate that the cluster is restored with kubectl get all --all-namespaces.

It may take a while for the restore to complete, depending on how large the snapshot is.
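
To double-check that etcd itself came back healthy, a final check along the same lines (same endpoint and certs as above):

kubectl -n kube-system get pods | grep etcd
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health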