Cluster information:
Kubernetes version: 1.25.3
Cloud being used: bare-metal
Installation method: k3s
Host OS: Ubuntu 22.04
I’m trying to install k3s on a master and a worker node, both running Ubuntu 22.04. The master installs without a problem, but the worker hangs at [INFO] systemd: Starting k3s-agent. The worker’s log files say:
Oct 27 14:22:32 myworker k3s[69044]: E1027 14:22:32.202428 69044 server.go:291] "Unable to authenticate the request due to an error" err="Post \"https://127.0.0.1:6444/ap>
Oct 27 14:22:32 myworker k3s[69044]: E1027 14:22:32.230764 69044 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://127.0.0.1:6444/ap>
Oct 27 14:22:32 myworker k3s[69044]: E1027 14:22:32.288643 69044 kubelet.go:2448] "Error getting node" err="node \"myworker\" not found"
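Those errors point at https://127.0.0.1:6444, which I believe is the agent’s local load balancer that proxies requests to the master’s 6443. A quick probe on the worker shows whether anything is listening there (pure bash, no extra tools assumed):

```shell
# On the worker: check whether anything is listening on the agent's
# local load-balancer port (6444). Uses bash's /dev/tcp redirection,
# so no extra tools are required.
if timeout 2 bash -c 'echo > /dev/tcp/127.0.0.1/6444' 2>/dev/null; then
  echo "something is listening on 127.0.0.1:6444"
else
  echo "nothing listening on 127.0.0.1:6444"
fi
```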
The master can see the worker node but it says “NotReady”:
NAME       STATUS     ROLES                  AGE   VERSION
mymaster   Ready      control-plane,master   20m   v1.25.3+k3s1
myworker   NotReady   <none>                 11m   v1.25.3+k3s1
I’ve installed Docker, and it’s running on both the master and the worker. They are both on an internal subnet. I have another server that I tried to set up as a worker node; it has the same issue.
I’ve tried several different ways of installing the master node after looking here:
curl -sSL https://get.k3s.io | INSTALL_K3S_EXEC="server --docker --debug --node-external-ip=<master-ip> --advertise-address=<master-ip>" sh -
curl -sSL https://get.k3s.io | INSTALL_K3S_EXEC='server --docker' sh -
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--write-kubeconfig ~/.kube/config --write-kubeconfig-mode 666 --tls-san <master-ip> --node-external-ip=<master-ip>" sh -
The install for the worker node seems fairly standard; I’ve tried it with and without the --docker flag:
curl -sfL https://get.k3s.io | K3S_URL=https://<master-ip>:6443 K3S_TOKEN=${TOKEN} sh -s - --docker
I’ve opened ports:
sudo ufw allow 443/tcp
sudo ufw allow 6443/tcp
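Worth noting: 443 and 6443 are the only ports I’ve opened so far; the k3s requirements docs also list 8472/udp (flannel VXLAN) and 10250/tcp (kubelet), which I haven’t opened. To rule out basic reachability from the worker, I can run a probe like this (pure bash; MASTER_IP defaults to 127.0.0.1 only so the snippet runs as-is — substitute the master’s internal address):

```shell
# Reachability probe from the worker to the master's API port.
# MASTER_IP is a placeholder default; replace with the master's
# internal address before running for real.
MASTER_IP="${MASTER_IP:-127.0.0.1}"
if timeout 3 bash -c "echo > /dev/tcp/${MASTER_IP}/6443" 2>/dev/null; then
  echo "6443 reachable on ${MASTER_IP}"
else
  echo "6443 not reachable on ${MASTER_IP}"
fi
```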
Obtained the token on the master:
cat /var/lib/rancher/k3s/server/node-token
Created the directories .kube and .kube/config, then copied the config over:
sudo cat /etc/rancher/k3s/k3s.yaml > /home/ubuntu/.kube/config/k3sconfig.yaml
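For kubectl to pick up the copied file, I then export it (same path as above):

```shell
# Point kubectl at the copied kubeconfig (path from the step above)
export KUBECONFIG=/home/ubuntu/.kube/config/k3sconfig.yaml
```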
changed ownership:
chown ubuntu /home/ubuntu/.kube/config/k3sconfig.yaml
chgrp ubuntu /home/ubuntu/.kube/config/k3sconfig.yaml
Followed these instructions, these, and these, and looked through the flags to see if there’s anything that could help. It looks like the worker node itself is having issues, rather than the connection to the master.
This is the output from kubectl describe node myWorker:
Name:               myWorker
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=myWorker
                    kubernetes.io/os=linux
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 28 Oct 2022 11:05:18 +0000
Taints:             node.kubernetes.io/unreachable:NoExecute
                    node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
                    node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "myWorker" not found
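For reference, this is what I’m running on the worker to gather more detail (each command is skipped if the tool isn’t installed, so the snippet is safe to copy-paste):

```shell
# Collect k3s-agent debugging info on the worker; skip tools that
# aren't installed so the snippet runs anywhere.
for cmd in journalctl systemctl; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "== $cmd output =="
    case "$cmd" in
      journalctl) journalctl -u k3s-agent --no-pager -n 50 || true ;;
      systemctl)  systemctl status k3s-agent --no-pager || true ;;
    esac
  else
    echo "== $cmd not available =="
  fi
done
```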
How do I get the node to a Ready state? How do I debug it further? I’m new to Kubernetes and hoping there’s a simple fix!