I tried to back up etcd on a local cluster, but passing only the endpoint didn't work; only this command worked for me:
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
snapshot save <backup-file>
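(Side note: you can sanity-check the snapshot afterwards with snapshot status, which reads the local file and therefore needs no endpoint or certs; the path below is just a placeholder.)
# verify the saved snapshot (file path is a placeholder)
ETCDCTL_API=3 etcdctl --write-out=table snapshot status /opt/etcd-backup.db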
Are those certs mandatory for backing up the etcd DB?
And when restoring, why do these additional parameters need to be passed in? For example:
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
--name=master \
--cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
--data-dir /var/lib/etcd-from-backup \
--initial-cluster=master=https://127.0.0.1:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-advertise-peer-urls=https://127.0.0.1:2380 \
snapshot restore ...
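(For what it's worth, after the restore you still have to point etcd at the new data directory; on a kubeadm cluster that means editing the static pod manifest. The paths and the etcd-data volume name below assume kubeadm defaults.)
# point the etcd static pod at the restored data dir (kubeadm defaults assumed)
grep -n "data-dir\|etcd-data" /etc/kubernetes/manifests/etcd.yaml
vi /etc/kubernetes/manifests/etcd.yaml   # change --data-dir and the etcd-data hostPath to /var/lib/etcd-from-backup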
When the kubelet is not working properly, you can use journalctl -u kubelet to check its logs. The kubelet config normally resides at /var/lib/kubelet/config.yaml, and the kubeconfig it uses to reach the API server is /etc/kubernetes/kubelet.conf (the API server port should generally be 6443). Then you can use systemctl restart kubelet to restart the service and check its status again.
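A rough checklist for that, assuming kubeadm's default paths:
systemctl status kubelet                              # is the service running?
journalctl -u kubelet --no-pager | tail -n 50         # recent kubelet logs
grep server /etc/kubernetes/kubelet.conf              # API server URL, usually https://<host>:6443
systemctl daemon-reload && systemctl restart kubelet  # restart after fixing config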
Also note that the etcd server sometimes has a different CA from the kube-apiserver (kubeadm generates a separate one under /etc/kubernetes/pki/etcd/), so check carefully which one you pass.
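One way to double-check, assuming kubeadm's default paths:
# compare the two CA subjects; the kube-apiserver's --etcd-cafile flag shows which CA it trusts for etcd
openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -subject
openssl x509 -in /etc/kubernetes/pki/etcd/ca.crt -noout -subject
grep etcd-cafile /etc/kubernetes/manifests/kube-apiserver.yaml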