We have been busy adding a new and exciting feature to MicroK8s. You can now join two or more deployments to form a cluster! A preview release is available for you to test-drive and give us your feedback. Here is how to set up a MicroK8s cluster:
On two or more machines, install MicroK8s from the 1.15/edge/clustering channel:
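The install command for that channel would look something like this (a sketch based on the channel name above; the standard snap install flags are assumed):

```shell
# Install the MicroK8s clustering preview on every machine that will join the cluster
sudo snap install microk8s --classic --channel=1.15/edge/clustering
```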
One of these machines will act as the master, hosting the control plane. On that machine, run the following command for each node you want to add:
sudo microk8s.add-node
The microk8s.add-node command will generate a connection string in the form <master_ip>:<port>/<token> and will prompt you to run the microk8s.join command on the node joining the cluster. For instance, a join command instructing a node to form a cluster with the master may look like:
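For illustration, a join command following the <master_ip>:<port>/<token> format described above might look like this (the IP, port, and token here are placeholders, not real values):

```shell
# Run on the joining node; the address and token are hypothetical placeholders —
# use the exact connection string printed by microk8s.add-node on the master
microk8s.join 10.0.0.1:25000/DDOkUupkmaBezNnMheTB
```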
@kjackal do we need to use Calico or Flannel for pod networking? It seems to be mandatory for forming a cluster, especially when you create the master and workers using VMs.
Does MicroK8s handle this networking automatically?
And secondly, can we customise the default ingress addon in MicroK8s?
I have been playing with the feature, and if I understand the situation correctly, the master node will only run the control plane, and won’t be used for scheduling.
I understand this is a sensible setup, but is there an option to make the master node actually run a kubelet?
This is not correct. The master node runs kubelet and is able to host workloads. It is up to the administrator to shape scheduling using standard Kubernetes options such as taints.
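As one example of shaping the scheduling, an administrator who wants the master to stay workload-free could taint it (a sketch; the node name is illustrative, and the taint key is the conventional one used elsewhere in Kubernetes, not something MicroK8s applies for you):

```shell
# Keep new pods off the master node ("kmaster" is a hypothetical node name)
microk8s.kubectl taint nodes kmaster node-role.kubernetes.io/master=:NoSchedule

# Remove the taint later to let the master host workloads again
microk8s.kubectl taint nodes kmaster node-role.kubernetes.io/master=:NoSchedule-
```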
Apologies, there was a problem on my end. The master node had a computer name with illegal characters in it (an underscore). This caused kubelet to fail to launch, so the master node did not show up in the list of nodes.
I realised this when I reinstalled the OS and ran into the same issue on K3s. On K3s the error was:
Apr 01 18:10:36 nuci3_4th k3s[2136]: I0401 18:10:36.534710 2136 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Apr 01 18:10:36 nuci3_4th k3s[2136]: I0401 18:10:36.535947 2136 kubelet_node_status.go:70] Attempting to register node nuci3_4th
Apr 01 18:10:36 nuci3_4th k3s[2136]: E0401 18:10:36.541608 2136 kubelet_node_status.go:92] Unable to register node "nuci3_4th" with API server: Node "nuci3_4th" is invalid: metadata.name: Invalid value: "nuci3_4th": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex us
Apr 01 18:10:36 nuci3_4th k3s[2136]: E0401 18:10:36.605614 2136 kubelet.go:2263] node "nuci3_4th" not found
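The naming rule from that error can be checked up front before installing anything. A minimal shell sketch (the regex is transcribed from the DNS-1123 subdomain rules quoted in the kubelet error above; the helper name is my own):

```shell
#!/bin/sh
# Check whether a hostname is a valid DNS-1123 subdomain, as kubelet requires:
# lower-case alphanumerics, '-' or '.', starting and ending with an alphanumeric.
# Exit status 0 means valid, 1 means invalid.
is_dns1123() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$'
}

is_dns1123 "nuci3_4th" || echo "nuci3_4th is invalid (underscore)"
is_dns1123 "nuci3-4th" && echo "nuci3-4th is valid"
```

Renaming the host to something like nuci3-4th (and rebooting) before installing avoids the registration failure.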
What is a recommended setup for a clustered microk8s? Are there any best practices for creating such a cluster?
I am looking for a solution for creating my own public cloud based on the following scenario: failover IP -> 2-3 VPS/root servers. Could I achieve an HA clustered Kubernetes environment with MicroK8s?
Thanks for this feature! It makes it really easy to deploy a small cluster to learn Kubernetes.
I’ve created 3 VMs (Ubuntu 20.04 LTS) using multipass and installed MicroK8s (using snap) on each of them. One is designated as the master and 2 as workers. After the workers have joined, I see the following output:
ubuntu@kmaster:~$ kubectl get no
NAME       STATUS   ROLES    AGE   VERSION
kmaster    Ready    <none>   10h   v1.18.2-41+b5cdb79a4060a3
kworker1   Ready    <none>   10h   v1.18.2-41+b5cdb79a4060a3
kworker2   Ready    <none>   10h   v1.18.2-41+b5cdb79a4060a3
The role of the kmaster is <none>. Shouldn’t that be “master”?
I’m new to MicroK8s as well, but my understanding is that the master isn’t tainted (i.e., marked as master) so that it can still act as a worker and share the workload. Normally in a K8s cluster the master doesn’t run workloads.
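For what it’s worth, the ROLES column just reflects a node-role.kubernetes.io label, so you can set it yourself if you want kmaster to display as "master" (applying this label is my suggestion for cosmetics only; it is not something MicroK8s does for you, and it doesn’t change scheduling):

```shell
# Make "kubectl get nodes" show ROLES as "master" for the kmaster node
kubectl label node kmaster node-role.kubernetes.io/master=
```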
Late to this thread, but I have set up clustering on Raspberry Pi 4 and the system is very heavily loaded; I would like to understand why. As I am not a performance guru and don’t know what is going on with etcd and so on, perhaps we can discuss it here or in another thread?
Could you open an issue at https://github.com/ubuntu/microk8s/issues attaching the microk8s.inspect tarball of each of your nodes (or only the affected ones), and also describe how the cluster feels heavy? Is it high CPU, and if so from which process? Or is some other resource affected, e.g. memory usage?
Hi. Should MicroK8s clustering work across multiple machines all running Windows 10 WSL2 (Ubuntu 20.04) on the same network? Also, can I add machines running MicroK8s on macOS to this cluster?