When the kubelet is not working properly, you can use journalctl -u kubelet to check its status. The kubelet config file normally resides under /var/lib/kubelet, and the API server URL in the kubelet's kubeconfig should generally point to port 6443. Once the config is fixed, you can use systemctl restart kubelet to restart the service and check its status again.
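A minimal troubleshooting sequence on the affected node might look like this (paths assume a kubeadm-provisioned cluster):

journalctl -u kubelet --no-pager | tail -n 50     # recent kubelet logs
cat /var/lib/kubelet/config.yaml                  # kubelet configuration
grep server: /etc/kubernetes/kubelet.conf         # API server URL, normally https://<control-plane>:6443
systemctl daemon-reload                           # reload unit files if the service definition changed
systemctl restart kubelet
systemctl status kubelet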
The etcd server sometimes has a different CA from the kube-apiserver, so do check that carefully.
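A quick way to confirm which CA and certificates etcd is actually using (assuming it runs as a kubeadm static pod) is to grep its manifest:

grep -E 'trusted-ca-file|cert-file|key-file' /etc/kubernetes/manifests/etcd.yaml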
This is the complete setup to back up etcd and validate the status of the backup:
ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key member list
ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save <backup-file>
ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot status <backup-file>
alias k='kubectl'
alias kgp='kubectl get pods'
alias kgs='kubectl get service'
alias kd='kubectl delete'
alias kcf='kubectl create -f'
alias kaf='kubectl apply -f'
alias kgpa='kubectl get pods --all-namespaces'
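To keep the aliases across shell sessions, they can also be appended to ~/.bashrc (assuming a bash shell), for example:

echo "alias k='kubectl'" >> ~/.bashrc   # repeat for the other aliases
source ~/.bashrc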
vi ~/.vimrc
set number
set tabstop=2
set expandtab
set shiftwidth=2
set cursorline
Here's what my lecturer told me about the steps:
To make use of etcdctl for tasks such as back up and restore, make sure that you set the ETCDCTL_API to 3.
You can do this by exporting the variable ETCDCTL_API prior to using the etcdctl client. This can be done as follows:
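export ETCDCTL_API=3   # make etcdctl use the v3 API for this shell session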
Restore the snapshot while referencing the configuration from /etc/kubernetes/manifests/etcd.yaml, adding in --initial-cluster-token=etcd-cluster-1 and modifying --data-dir=/var/lib/etcd to point to a new location: --data-dir=/var/lib/etcd-from-backup.
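Put together, a sketch of that restore command (the snapshot path is a placeholder; flags such as --name and --initial-cluster can be copied from the manifest if your setup needs them):

ETCDCTL_API=3 etcdctl snapshot restore <backup-file> \
  --data-dir=/var/lib/etcd-from-backup \
  --initial-cluster-token=etcd-cluster-1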
Next, edit /etc/kubernetes/manifests/etcd.yaml and replace all data-dir entries that have /var/lib/etcd with /var/lib/etcd-from-backup.
Next, add the line --initial-cluster-token=etcd-cluster-1 to the container's command section (see the sketch after these steps).
Next, validate that the cluster is restored with kubectl get all --all-namespaces.
It may take a while for the restore to complete, depending on how large it is.
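For illustration, a sketch of the relevant part of /etc/kubernetes/manifests/etcd.yaml after those edits (only the changed flags are shown; the rest of the manifest stays as kubeadm generated it):

spec:
  containers:
  - command:
    - etcd
    - --data-dir=/var/lib/etcd-from-backup
    - --initial-cluster-token=etcd-cluster-1
    # ...the remaining etcd flags stay unchanged...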
I tried all the ways above and my etcd container comes up in docker ps -a | grep etcd …
But the etcd static pod does not come up; it shows as Pending. Can you tell me why …
I had the exact same issue.
The way to resolve it is:
update the volumeMounts to reflect the new data path, in this case the new data directory "/var/lib/etcd-from-backup".
So, the new volumeMounts section looks like this:
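The snippet is missing above; a sketch of what the updated sections typically look like in a kubeadm-generated etcd.yaml (only the data volume is shown, and the matching hostPath volume usually needs the same change):

# under spec.containers[0]:
volumeMounts:
- mountPath: /var/lib/etcd-from-backup
  name: etcd-data
# under spec:
volumes:
- hostPath:
    path: /var/lib/etcd-from-backup
    type: DirectoryOrCreate
  name: etcd-data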
What worked for me was:
Step 1: save the db file:
ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /opt/snapshot-pre-boot.db
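Step 2 (not shown in the post) would be the restore itself, something along the lines of:

ETCDCTL_API=3 etcdctl snapshot restore /opt/snapshot-pre-boot.db --data-dir=/var/lib/etcd-from-backup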
Step 3: edit the etcd yaml file. Start by identifying the lines that need to be updated: cat /etc/kubernetes/manifests/etcd.yaml | grep -i lib/etcd -n
Replace them all with lib/etcd-from-backup,
and then add the initial cluster token to the spec.containers[0].command list: --initial-cluster-token=etcd-cluster-1
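If you prefer to do the path replacement in one shot, a sed one-liner like this should also work (run it only once, then re-check with the grep above):

sed -i 's|/var/lib/etcd|/var/lib/etcd-from-backup|g' /etc/kubernetes/manifests/etcd.yaml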
I generally get Error: expected sha256 when trying to restore from a backup. In that case, I used --skip-hash-check=true and it worked for me. Here are my steps:
ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /tmp/etcd-backup.db
ETCDCTL_API=3 etcdctl --write-out=table snapshot status /tmp/etcd-backup.db
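The restore with the hash check skipped would then look roughly like this (the data directory here is an assumption; use whatever path your manifest points to):

ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db \
  --data-dir=/var/lib/etcd-from-backup \
  --skip-hash-check=true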
After you restore the etcd backup to, let's say, the /var/lib/last-backup directory, go into the static pod manifest and update the hostPath; that change will be reflected in your etcd container.