I’ve installed MicroK8s on 64-bit Raspberry Pi OS (formerly Raspbian) through snap on three RPis - a master node and two worker nodes (which joined the cluster). All works fine except for one detail:
I hoped not to have the master node be part of the cluster, and when I remove it (remove-node) everything still works as far as I can tell (deployed a test container, configured everything with MetalLB and such), but I get around 100MB of logs each day with entries like:
microk8s.daemon-kubelet[11580]: E0728 17:03:39.371723 11580 kubelet.go:2268] node "my-master-host-name" not found
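That one line repeats over and over. Something along these lines is enough to see the volume (the systemd unit name is my guess, based on the log prefix and the usual snap.<name>.<service> naming):

# follow the kubelet log live on the master
journalctl -u snap.microk8s.daemon-kubelet -f
# rough byte count of the last day of kubelet log entries
journalctl -u snap.microk8s.daemon-kubelet --since "24 hours ago" | wc -c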
BTW - it all looks like a clean install (I sorted out all issues and microk8s inspect returns a clean bill of health), and I did it several times just to be sure I know what I am doing. If I keep the master node as part of the cluster, all is fine (I did cordon it at the end).
The question is: am I hitting some kind of bug or misconfiguration, or should the master node simply never be removed from the cluster?
Just to say it again - I haven’t done anything beyond what the snap package does in terms of installation and configuration: I just installed it on three RPis, joined the two workers to the master, and removed the master from the cluster, which is when the logs started. I repeated the steps on a clean install just for confirmation.
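For completeness, the whole sequence was roughly this (a sketch from memory; the IP and token in the join line stand in for whatever add-node actually prints, and the hostname matches the log above):

# on all three RPis
sudo snap install microk8s --classic
# on the master; prints a join command of the form <ip>:25000/<token>
sudo microk8s add-node
# on each worker, using the command printed above (placeholders here)
sudo microk8s join 192.168.1.10:25000/<token>
# what I mean by "removing the master from the cluster"
sudo microk8s remove-node my-master-host-name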
Cluster information:
Kubernetes version: snap returns microk8s v1.18.6 1558 latest/stable canonical✓ classic
Cloud being used: bare-metal on RPis
Installation method: snapd
Host OS: Raspberry Pi OS (debian derived 64bit arm linux)
CNI and version:
CRI and version:
You are not supposed to stop the master node. A cluster without the control plane is not really a Kubernetes cluster. The fact that you do not want the master node running probably means you do not need Kubernetes; in that case you can consider deploying your workload by manually starting the Docker containers on each node.
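For example, something along these lines on each node would do (the image name and port here are just placeholders):

docker run -d --restart unless-stopped --name my-workload -p 8080:8080 myrepo/my-workload:latest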
Why do you want to stop the master?
The error message you see in the logs is the worker nodes complaining that they cannot find the master.
When you say remove the master I’m assuming it’s still there running the api server and etcd? Just not in the nodes list?
I can’t speak for microk8s as I haven’t used it, but technically a node in Kubernetes is something that runs a kubelet process and can be used to schedule workloads.
There are some K8s distros that don’t run kubelet on the control plane nodes, so those nodes don’t show up in kubectl get nodes and can’t be used to schedule workloads. So theoretically not part of the cluster from a scheduling perspective.
That’s fine, workers don’t care, they just need to communicate with the API server.
The master node runs the control plane. The others do not have the control plane components such as the apiserver.
The microk8s remove-node command actually removes the node from the cluster. Since that node is the master, it also means the cluster loses the apiserver, etcd, the controller manager and the scheduler.
@stephendotcarter remove the master means what balchua1 said: microk8s remove-node. What you said makes sense. I think in this case kubelet was put in a strange situation - it wasn’t part of the cluster while the node it runs on was still acting as a control plane node (with all the other services being just fine), so it didn’t really belong to anything. Or something like that…
@balchua1 - what you said would make sense, except that in reality there weren’t any issues: the cluster didn’t seem to lose the apiserver, etcd, the controller manager or the scheduler. As I said - aside from kubelet getting into some weird state (complaining that the node it runs on is not there) - the rest of the world just worked as if nothing had happened.
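Roughly the kind of checks I mean by "just worked" (a sketch, not exact output):

# the master no longer appears in the node list...
microk8s kubectl get nodes -o wide
# ...yet its services still report as running
microk8s inspect
# ...and the API server still answers requests
microk8s kubectl -n kube-system get pods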
I think I’ll just, for now, keep the ‘master’ node cordoned and let microk8s install on it whatever it insists on installing - while the worker nodes pick up the actual workloads…
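Concretely (hostname as in the log above; the taint line is just an alternative sketch, I’ve only actually used cordon):

microk8s kubectl cordon my-master-host-name
# alternative: keep workloads off it with the standard master taint
microk8s kubectl taint nodes my-master-host-name node-role.kubernetes.io/master=:NoSchedule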
I think I was right on both counts (to an extent) - it is a misconfiguration (getting kubelet into some odd state) and a matter of philosophy - microk8s was made to simplify things, not to cater for the situation I put that particular set of services into ¯\_(ツ)_/¯.