Kubernetes pre-releases available with MicroK8s

If you take a look at MicroK8s’ channel information with snap info microk8s, you will notice that pre-releases are now available:

  stable:         v1.14.1         2019-04-18 (522) 214MB classic
  candidate:      v1.14.1         2019-04-15 (522) 214MB classic
  beta:           v1.14.1         2019-04-15 (522) 214MB classic
  edge:           v1.14.1         2019-05-10 (587) 217MB classic
  1.15/stable:    –                                      
  1.15/candidate: –                                      
  1.15/beta:      –                                      
  1.15/edge:      v1.15.0-alpha.3 2019-05-08 (578) 215MB classic

To test your work against the 1.15 alpha release, simply run:

sudo snap install microk8s --classic --channel=1.15/edge

We are committed to shipping MicroK8s with upstream pre-releases for tracks that do not yet have a stable release. The following scheme is followed:

  • The edge channel (e.g. 1.15/edge) holds the upstream alpha releases.
  • The beta channel (e.g. 1.15/beta) holds the upstream beta releases.
  • The candidate channel (e.g. 1.15/candidate) holds the upstream release candidates.

Pre-releases will be available the same day they are released upstream.
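If you already have MicroK8s installed, you do not need to reinstall to try a pre-release; snap can switch an existing installation between channels with snap refresh. A short sketch:

```shell
# Move an existing MicroK8s installation to the 1.15 pre-release channel
sudo snap refresh microk8s --channel=1.15/edge

# Switch back to the latest stable release later
sudo snap refresh microk8s --channel=stable
```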

How to develop K8s core services with MicroK8s
One of the purposes of shipping pre-releases is to assist developers of K8s core services. Let’s see how to hook a local build of kubelet into a MicroK8s deployment.

Following the Kubernetes build instructions, we run:

git clone https://github.com/kubernetes/kubernetes
cd kubernetes
build/run.sh make kubelet

The kubelet binary should soon be available under: _output/dockerized/bin/linux/amd64/kubelet
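Before wiring the binary into MicroK8s, it is worth a quick sanity check that the build produced what you expect:

```shell
# Print the version of the locally built kubelet
_output/dockerized/bin/linux/amd64/kubelet --version
```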

Let’s grab a MicroK8s deployment:

sudo snap install microk8s --classic --channel=1.15/edge

To see which arguments the kubelet runs with:

> ps -ef | grep kubelet
root     24184     1  2 17:28 ?        00:00:54 /snap/microk8s/578/kubelet --kubeconfig=/snap/microk8s/578/configs/kubelet.config --cert-dir=/var/snap/microk8s/578/certs --client-ca-file=/var/snap/microk8s/578/certs/ca.crt --anonymous-auth=false --network-plugin=kubenet --root-dir=/var/snap/microk8s/common/var/lib/kubelet --fail-swap-on=false --pod-cidr= --non-masquerade-cidr= --cni-bin-dir=/snap/microk8s/578/opt/cni/bin/ --feature-gates=DevicePlugins=true --eviction-hard=memory.available<100Mi,nodefs.available<1Gi,imagefs.available<1Gi --container-runtime=remote --container-runtime-endpoint=/var/snap/microk8s/common/run/containerd.sock --node-labels=microk8s.io/cluster=true
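Rather than copying the argument list by hand, you can capture it from the running process and reuse it. This is a small convenience sketch (the KUBELET_ARGS variable name is ours); it strips the leading binary path from the ps output:

```shell
# Capture the arguments of the running kubelet so they can be reused
# with a local build; sed drops the first token (the binary path).
KUBELET_ARGS=$(ps -o args= -C kubelet | sed 's|^[^ ]* ||')
echo "$KUBELET_ARGS"
```

Since none of the kubelet's arguments contain spaces, the unquoted variable can later be passed straight to your own binary, e.g. sudo _output/dockerized/bin/linux/amd64/kubelet $KUBELET_ARGS.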

We now need to stop the kubelet that comes with MicroK8s and start our own build:

> sudo systemctl stop snap.microk8s.daemon-kubelet.service
> sudo _output/dockerized/bin/linux/amd64/kubelet --kubeconfig=/snap/microk8s/578/configs/kubelet.config --cert-dir=/var/snap/microk8s/578/certs --client-ca-file=/var/snap/microk8s/578/certs/ca.crt --anonymous-auth=false --network-plugin=kubenet --root-dir=/var/snap/microk8s/common/var/lib/kubelet --fail-swap-on=false --pod-cidr= --container-runtime=remote --container-runtime-endpoint=/var/snap/microk8s/common/run/containerd.sock --node-labels=microk8s.io/cluster=true --eviction-hard='memory.available<100Mi,nodefs.available<1Gi,imagefs.available<1Gi'

That’s it! Your kubelet now runs in place of the one shipped with MicroK8s. It is as simple as it gets.

Be aware that some microk8s commands restart services through systemd. For example, microk8s.enable dns will trigger a restart of services, including the kubelet shipped with MicroK8s.
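When you are done testing, stop your local kubelet (Ctrl-C) and restore the one shipped with MicroK8s via its systemd service:

```shell
# Bring back the kubelet shipped with MicroK8s
sudo systemctl start snap.microk8s.daemon-kubelet.service

# Confirm the node comes back and reports Ready
microk8s.kubectl get nodes
```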

Happy coding!

Find out more at https://microk8s.io/ and drop us a line with any feedback and comments you may have.