Create a MicroK8s cluster

Although MicroK8s is designed as an ultra-lightweight implementation of Kubernetes, it is still possible, and useful, to create a MicroK8s cluster from two or more nodes. This page explains how to add and remove nodes and what is required to make the cluster highly available.

Note: Each node in a MicroK8s cluster requires its own environment to work in, whether that is a separate VM or container on a single machine, or a different machine on the same network. As with almost all networked services, it is also important that these instances have the correct time (e.g. synchronised from an NTP server) for inter-node communication to work.
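As a quick sanity check before joining nodes (a sketch, assuming systemd-based hosts; the exact output label can vary between systemd versions), you can confirm that each machine's clock is synchronised:

# Confirm the clock is synchronised on every node (systemd hosts)
timedatectl status | grep -i synchronized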

Adding a node

To create a cluster out of two or more already-running MicroK8s instances, use the microk8s add-node command. The MicroK8s instance on which this command is run will be the master of the cluster and will host the Kubernetes control plane:

microk8s add-node

This will return some joining instructions which should be executed on the MicroK8s instance that you wish to join to the cluster (NOT THE NODE YOU RAN add-node FROM):

From the node you wish to join to this cluster, run the following:
microk8s join 192.168.1.230:25000/92b2db237428470dc4fcfc4ebbd9dc81/2c0cb3284b05

Use the '--worker' flag to join a node as a worker not running the control plane, eg:
microk8s join 192.168.1.230:25000/92b2db237428470dc4fcfc4ebbd9dc81/2c0cb3284b05 --worker

If the node you are adding is not reachable through the default interface you can use one of the following:
microk8s join 192.168.1.230:25000/92b2db237428470dc4fcfc4ebbd9dc81/2c0cb3284b05
microk8s join 10.23.209.1:25000/92b2db237428470dc4fcfc4ebbd9dc81/2c0cb3284b05
microk8s join 172.17.0.1:25000/92b2db237428470dc4fcfc4ebbd9dc81/2c0cb3284b05

Joining a node to the cluster should only take a few seconds. Afterwards
you should be able to see the node has joined:

microk8s kubectl get no

…will return output similar to:

NAME               STATUS   ROLES    AGE   VERSION
10.22.254.79       Ready    <none>   27s   v1.15.3
ip-172-31-20-243   Ready    <none>   53s   v1.15.3

Removing a node

First, on the node you want to remove, run microk8s leave. MicroK8s on the departing node
will restart its own control plane and resume operations as a full single node cluster:

microk8s leave

To complete the node removal, call microk8s remove-node from one of the remaining nodes to
indicate that the departing (now unreachable) node should be removed permanently:

microk8s remove-node 10.22.254.79
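If the departing node is offline and never ran microk8s leave, remove-node may refuse to drop it; in that case the --force flag (available in recent MicroK8s releases) removes the node entry anyway:

microk8s remove-node 10.22.254.79 --force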

Storage

If you are using the simple storage provided by the hostpath storage add-on, note that this storage is local to each node: it is only available on the nodes it has been enabled on, and volumes are not shared across the cluster. For clustered storage, you should set up alternative storage. For example, see the guide on using NFS.
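For reference, the add-on is enabled per node with a single command (the add-on is named hostpath-storage in recent releases; older releases call it storage):

microk8s enable hostpath-storage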

High Availability

From the 1.19 release of MicroK8s, HA is enabled by default. If your cluster consists of three or more nodes, the datastore will be replicated across the nodes and it will be resilient to a single failure (if one node develops a problem, workloads will continue to run without interruption).

The output of microk8s status now includes information about the HA state. For example:

microk8s is running
high-availability: yes
  datastore master nodes: 10.128.63.86:19001 10.128.63.166:19001 10.128.63.43:19001
  datastore standby nodes: none

For more information about how HA works, and how to manage an HA cluster, please see the High Availability page.

Worker nodes

Starting from the 1.23 release, a node can join the cluster as a worker node. Worker nodes are able to host workloads, but they do not run the Kubernetes control plane and therefore do not add to the high availability (HA) of the cluster. Worker nodes are ideal for low-end devices as they consume fewer resources, and they also make sense in large clusters that already have enough control plane nodes to ensure HA. To add a worker node, use the --worker flag when running the microk8s join command:

microk8s join 192.168.1.230:25000/92b2db237428470dc4fcfc4ebbd9dc81/2c0cb3284b05 --worker

A worker node runs a local API server proxy that takes care of the communication between the local services (kubelet, kube-proxy) and the API servers running on multiple control plane nodes. When adding a worker node, MicroK8s attempts to detect all API server endpoints in the cluster and configure the new node accordingly. The list of API servers is stored in /var/snap/microk8s/current/args/traefik/provider.yaml.
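As a rough illustration only (MicroK8s generates and maintains this file, and its exact contents may differ between releases), provider.yaml is a Traefik file-provider configuration whose load-balancer section lists the known API server endpoints, along these lines (the service name and addresses below are illustrative):

tcp:
  services:
    kube-apiserver:
      loadBalancer:
        servers:
          - address: "192.168.1.230:16443"
          - address: "192.168.1.231:16443"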

The API server proxy will automatically check for updates when the control plane nodes of the cluster are changed (e.g. a new control plane node is added, an old one is removed) and update the list of known API server endpoints.

If you already have a load balancer in front of the API server, you can configure the load balancer address manually in /var/snap/microk8s/current/args/traefik/provider.yaml. In this case, make sure to also disable the automatic refresh of the control plane endpoints by setting --refresh-interval 0 in /var/snap/microk8s/current/args/apiserver-proxy.
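A minimal sketch of that manual setup (the steps assume the flag is not already present in the args file):

# Point the servers list in /var/snap/microk8s/current/args/traefik/provider.yaml
# at the load balancer address, then disable the automatic endpoint refresh:
echo '--refresh-interval 0' | sudo tee -a /var/snap/microk8s/current/args/apiserver-proxy
microk8s stop
microk8s start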

It would be worth adding to the docs the support for users providing their own token when using the add-node command, e.g.:
microk8s add-node --token [32 chars] --token-ttl [token time to live in seconds]
This greatly helps when using automation to bootstrap a multi-node MicroK8s cluster.
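A sketch of how that might look in a bootstrap script (the token value and TTL below are illustrative):

# Generate a 32-character token and pre-authorise it for one hour
TOKEN=$(openssl rand -hex 16)
microk8s add-node --token $TOKEN --token-ttl 3600
# The joining node then uses the same token in its join string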

Hi, I get the following output when I run microk8s status:
microk8s is running
high-availability: yes
datastore master nodes: 95.217.177.194:19001 10.44.0.3:19001 10.44.0.4:19001

I think that 95.217.177.194:19001 is wrong.
How can I change the master's IP to 10.44.0.1?
Thanks

Can we add a limitation notice about the storage add-on? I just wasted a lot of hours not realizing it only works on a single node.

The snippet:

This will return some joining instructions, such as:
microk8s add-node

is repeated twice.

I added a worker node from a machine on the same private network but the cluster IP is not recognized.

Hit error connecting to datastore - retry error=Get "https://10.152.183.1:443/api/v1/nodes/foo": dial tcp 10.152.183.1:443: i/o timeout

Perhaps it would be worth adding to this tutorial that the hosts that are part of the cluster should be reachable by their hostnames, which can be done by adding them to the /etc/hosts file.
Following the tutorial without any configuration in /etc/hosts leads to an error when trying to add nodes to the cluster.
I inserted a record in that file for every node in the cluster, and with that, nodes could be added to the cluster without any issue.
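For example, an entry for every node on every machine in the cluster (the hostnames and addresses below are made up):

192.168.1.230  node-1
192.168.1.231  node-2
192.168.1.232  node-3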
I know that this issue could be solved with a search in Google or another search engine, but I think this tutorial could be improved with this addition.
Anyway, thanks for this tutorial. I find it useful.
