In this document we describe the default MicroK8s CNI configuration, along with steps to follow in case a custom setup is required.
Common
Kubernetes requires a CNI to configure the network namespaces and interfaces of pods. A CNI configuration consists of two things:
- One (or more) `conflist` files (in JSON format) describing the CNI configuration. This configuration is typically found at `/etc/cni/net.d`. In a MicroK8s cluster, the configuration path is instead `/var/snap/microk8s/current/args/cni-network`.
- CNI binaries, which implement the CNI specification. These binaries are typically found at `/opt/cni/bin`. In a MicroK8s cluster, the binaries are instead installed under `/var/snap/microk8s/current/opt/cni/bin`.
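Since these are plain files and directories on the host, you can inspect them directly. For example:
# list the CNI configuration and the CNI binaries on a MicroK8s node
ls /var/snap/microk8s/current/args/cni-network
ls /var/snap/microk8s/current/opt/cni/bin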
The default CIDR for pods is `10.1.0.0/16`. All pods are assigned an IP address in that range.
The default service CIDR is `10.152.183.0/24`. `10.152.183.1` will typically be reserved for the Kubernetes API, and `10.152.183.10` will be used by CoreDNS.
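You can confirm these reserved addresses on a running cluster; the sketch below assumes the default service CIDR and that the dns addon is enabled:
# the kubernetes API service, typically 10.152.183.1
microk8s kubectl get service kubernetes
# the CoreDNS service, typically 10.152.183.10
microk8s kubectl get service kube-dns -n kube-system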
Calico
Starting from version 1.19, MicroK8s clusters use the Calico CNI by default, configured with the vxlan backend. Calico itself runs in pods inside the cluster. Two components are deployed:
- `daemonset/calico-node`, which is the Calico node process
- `deployment/calico-kube-controllers`, which is a supporting service
The manifest that is used to deploy Calico is placed under `/var/snap/microk8s/current/args/cni-network/cni.yaml`. You can see the source manifest for the latest MicroK8s version on GitHub, or for a specific version, e.g. 1.26, from the respective branch.
The manifest contains RBAC rules, ServiceAccounts, CustomResourceDefinitions, the `calico-node` daemonset and the `calico-kube-controllers` deployment.
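To verify that both components are running, you can query them directly; in MicroK8s they are deployed in the kube-system namespace:
# check the status of the two Calico components
microk8s kubectl get daemonset/calico-node deployment/calico-kube-controllers -n kube-system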
Configure Calico
For most of the cases below, the way to change the Calico configuration is to patch the deployed `cni.yaml` and then re-apply it to the cluster with:
microk8s kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml
For multi-node MicroK8s clusters, it is recommended that the file be updated on all cluster nodes.
Configure Calico IP autodetection method
Edit `/var/snap/microk8s/current/args/cni-network/cni.yaml` and change the following section:
# in daemonset/calico-node/containers[calico-node]/env
- name: IP_AUTODETECTION_METHOD
value: "first-found" # change "first-found" to desired value
Example values for the autodetection method include the following; refer to the Calico docs for more details:
- `interface=eth0`
- `kubernetes-internal-ip`
- `can-reach=10.10.10.10`
When you create a cluster with `microk8s add-node` and `microk8s join` and the IP autodetection method is `first-found`, MicroK8s will automatically change it to `can-reach=$ipaddress`, where `$ipaddress` is the IP address used in the `microk8s join` command.
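As an example, the following sketch pins detection to a specific interface (eth0 is a placeholder; adjust for your host) and re-applies the manifest:
# change the autodetection method from first-found to a specific interface
sudo sed -i 's/value: "first-found"/value: "interface=eth0"/' /var/snap/microk8s/current/args/cni-network/cni.yaml
microk8s kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml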
Configure Calico in BGP mode
Edit `/var/snap/microk8s/current/args/cni-network/cni.yaml` and change the following sections:
# in configmap/calico-config/data
calico_backend: "vxlan" # Change "vxlan" to "bird"
# in daemonset/calico-node/containers[calico-node]/env
- name: CALICO_IPV4POOL_VXLAN # Change "CALICO_IPV4POOL_VXLAN" to "CALICO_IPV4POOL_IPIP"
value: "Always"
# in daemonset/calico-node/livenessProbe/command
- -felix-live # Change "-felix-live" to "-bird-live"
# in daemonset/calico-node/readinessProbe/command
- -felix-ready # Change "-felix-ready" to "-bird-ready"
Then re-apply the Calico manifest. If Calico has already started and created a default IPPool, you might have to delete it with:
microk8s kubectl delete ippools default-ipv4-ippool
microk8s kubectl rollout restart daemonset/calico-node
See also the Calico docs for migrating to a different IP pool.
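Once the daemonset has restarted and the default IPPool has been re-created, a quick sketch to confirm the pool settings:
# IPIP should now be enabled and VXLAN disabled
microk8s kubectl get ippools default-ipv4-ippool -o yaml | grep -E 'ipipMode|vxlanMode'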
Configure Pod CIDR
The default pod CIDR is `10.1.0.0/16`. Assuming we want to change that to `10.100.0.0/16`, the following steps are needed.
Edit `/var/snap/microk8s/current/args/cni-network/cni.yaml` and change the following section:
# in daemonset/calico-node/containers[calico-node]/env
- name: CALICO_IPV4POOL_CIDR
value: "10.1.0.0/16" # Change "10.1.0.0/16" to "10.100.0.0/16"
Also, edit `/var/snap/microk8s/current/args/kube-proxy` and set the `--cluster-cidr` argument accordingly:
--cluster-cidr=10.1.0.0/16 # Change "10.1.0.0/16" to "10.100.0.0/16"
Then, re-apply Calico and restart MicroK8s with:
microk8s kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml
sudo snap restart microk8s
If Calico has already started and created a default IPPool, you might have to delete it with:
microk8s kubectl delete ippools default-ipv4-ippool
microk8s kubectl rollout restart daemonset/calico-node
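After the restart, newly created pods should receive addresses from the new range; you can verify with:
# pod IPs should now fall within 10.100.0.0/16
microk8s kubectl get pods -A -o wide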
Upgrade Calico
Upgrading a MicroK8s cluster will not upgrade the deployed version of Calico on the cluster. This is by design, to prevent potentially unwanted disruptions. The process to upgrade Calico manually is described below.
Note: Executing the commands below on one of the cluster nodes will suffice, but it is recommended that you update `cni.yaml` on all nodes to prevent configuration drift between the nodes.
# keep a backup of the existing cni.yaml
cp /var/snap/microk8s/current/args/cni-network/cni.yaml /var/snap/microk8s/current/args/cni-network/cni.yaml.backup
# copy calico manifest from snap
cp /snap/microk8s/current/upgrade-scripts/000-switch-to-calico/resources/calico.yaml /var/snap/microk8s/current/args/cni-network/cni.yaml
# (manual step) you can often skip this if you have not made any changes
# compare cni.yaml and cni.yaml.backup, and make sure to carry any changes from cni.yaml.backup (e.g. ip autodetection method)
vim /var/snap/microk8s/current/args/cni-network/cni.yaml
# upgrade to the new Calico version
microk8s kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml
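To confirm which Calico version is actually running after the upgrade, you can inspect the images used by the calico-node daemonset; a short sketch:
# print the container images (the tag carries the Calico version)
microk8s kubectl get daemonset/calico-node -n kube-system -o jsonpath='{.spec.template.spec.containers[*].image}'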
Flannel
Flannel is included in MicroK8s and is historically used for non-HA clusters. In this scenario, the flanneld daemon runs on each node as a system service. If you see that Calico has a negative impact on the performance or memory usage of your cluster, you can switch to flannel with the following command:
Warning: This is a destructive operation, as it will also delete all resources from your cluster. You will have to re-deploy any applications you were running and re-enable all addons you had previously configured. This action should be performed on new clusters only.
microk8s disable ha-cluster --force
In this setup, flannel uses the etcd data store of the single control plane node in the cluster.
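Because flanneld runs as a snap service in this mode, its status and logs are available through the usual snap and systemd tooling; for example:
# check the flanneld service and follow its logs
snap services microk8s.daemon-flanneld
sudo journalctl -u snap.microk8s.daemon-flanneld -f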
Flannel with Kubernetes store
Starting from MicroK8s 1.27, it is possible to configure the flanneld service with the Kubernetes data store instead (Kubernetes Subnet Manager). This makes it possible to run HA MicroK8s clusters that use the flannel CNI instead of Calico.
For this to work, the configuration of a number of services must be adjusted. The configuration you need on each node is shown below. Change `10.1.0.0/16` to the pod CIDR you want to use:
# configure kubernetes to allocate podCIDR per node
echo '
--cluster-cidr=10.1.0.0/16
--allocate-node-cidrs=true
' | sudo tee -a /var/snap/microk8s/current/args/kube-controller-manager
# restart kubelite for changes to take effect
sudo snap restart microk8s.daemon-kubelite
# configure Flannel arguments and environment
echo 'NODE_NAME=$(hostname)' | sudo tee /var/snap/microk8s/current/args/flanneld-env
echo '{"Network": "10.1.0.0/16", "Backend": {"Type": "vxlan"}}' | sudo tee /var/snap/microk8s/current/args/flannel-network-mgr-config
echo '
--iface=""
--subnet-file=$SNAP_COMMON/run/flannel/subnet.env
--ip-masq=true
--kube-subnet-mgr=true
--kubeconfig-file=$SNAP_DATA/credentials/kubelet.config
--net-config-path=$SNAP_DATA/args/flannel-network-mgr-config
' | sudo tee /var/snap/microk8s/current/args/flanneld
# remove calico
sudo touch /var/snap/microk8s/current/var/lock/cni-loaded
sudo microk8s kubectl delete -f /var/snap/microk8s/current/args/cni-network/cni.yaml
sudo rm /var/snap/microk8s/current/args/cni-network/*
# enable flanneld and restart
sudo rm /var/snap/microk8s/current/var/lock/no-flanneld
sudo snap restart microk8s.daemon-containerd microk8s.daemon-flanneld
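If everything worked, flanneld writes the subnet leased for the node to its subnet file, and each node object carries a podCIDR allocated by the controller manager; a sketch to verify (paths assume the default snap layout):
# the subnet allocated to this node
cat /var/snap/microk8s/common/run/flannel/subnet.env
# the podCIDR allocated for each node
microk8s kubectl get nodes -o custom-columns=NAME:.metadata.name,CIDR:.spec.podCIDR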
Upgrade Flannel
Flannel and flanneld are included in the MicroK8s snap and are automatically updated when the snap refreshes.
Kube OVN
Starting from MicroK8s 1.25, it is possible to replace Calico with KubeOVN using the `kube-ovn` addon:
microk8s enable kube-ovn --force
For more details, refer to the KubeOVN addon documentation.
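After enabling the addon, a quick hedged check that the KubeOVN components came up (pod names and namespaces are determined by the addon):
# list KubeOVN related pods
microk8s kubectl get pods -A | grep -i ovn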