The kube-ovn addon is available starting from MicroK8s version 1.25.
By default, MicroK8s comes with the Calico CNI installed. It is possible to change the CNI to KubeOVN, using the kube-ovn addon, in order to take advantage of advanced enterprise-grade features such as namespaced subnets, subnet isolation, VLAN support, dynamic QoS, multi-cluster networking, traffic mirroring and others.
The kube-ovn addon is standalone, which means that both the OVN control plane and the data plane services run inside the MicroK8s cluster.
The kube-ovn addon supports both single-node and multi-node MicroK8s clusters.
For a single-node MicroK8s, enable the KubeOVN addon with a single command:
sudo microk8s enable kube-ovn
This will remove the Calico CNI, deploy the OVN data plane services on the cluster, and then configure the KubeOVN CNI for the cluster.
Ensure that the KubeOVN services have been deployed correctly by running:
microk8s kubectl get pod -A
The list output should contain the following pods:
```
NAMESPACE     NAME                                   READY   STATUS    RESTARTS      AGE
kube-system   kube-ovn-monitor-58bf876d55-tkw2p      1/1     Running   1 (79s ago)   113s
kube-system   ovs-ovn-cw85s                          1/1     Running   1 (79s ago)   113s
kube-system   kube-ovn-controller-675c89db9d-88bf6   1/1     Running   1 (79s ago)   113s
kube-system   ovn-central-556dc7ff44-tccpt           1/1     Running   1 (79s ago)   113s
kube-system   kube-ovn-pinger-p7r2z                  1/1     Running   0             113s
kube-system   kube-ovn-cni-8qv7n                     1/1     Running   1 (81s ago)   113s
```
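Instead of eyeballing the list, the check can be scripted. The sketch below defines a small helper (the name `all_running` is our own) that reads `kubectl get pod -A` output and reports any pod that is not Running and fully Ready:

```shell
# all_running: reads `kubectl get pod -A` output on stdin and fails if any
# pod is not Running with all containers ready (READY column like "1/1").
all_running() {
  awk 'NR > 1 {
    split($3, r, "/")
    if ($4 != "Running" || r[1] != r[2]) { print "not ready: " $2; bad = 1 }
  }
  END { exit bad }'
}

# Against a live cluster this would be:
# microk8s kubectl get pod -A | all_running && echo "all pods ready"
```

This only inspects the tabular output shown above; for stronger guarantees you could instead use `kubectl wait` or `kubectl rollout status` on the individual Kube-OVN deployments and daemonsets.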
You can verify external connectivity by checking the logs of the kube-ovn-pinger pods with:

microk8s kubectl logs -n kube-system ds/kube-ovn-pinger

For example:
```
I0720 20:55:13.251446 2845059 ping.go:259] start to check apiserver connectivity
I0720 20:55:13.255030 2845059 ping.go:268] connect to apiserver success in 3.54ms
I0720 20:55:13.255114 2845059 ping.go:129] start to check pod connectivity
I0720 20:55:13.373137 2845059 ping.go:159] ping pod: kube-ovn-pinger-pxpx2 10.1.0.4, count: 3, loss count 0, average rtt 0.11ms
I0720 20:55:13.373278 2845059 ping.go:83] start to check node connectivity
I0720 20:55:13.681367 2845059 ping.go:108] ping node: dev 10.0.3.180, count: 3, loss count 0, average rtt 0.51ms
I0720 20:55:13.681478 2845059 ping.go:223] start to check dns connectivity
I0720 20:55:13.685835 2845059 ping.go:236] resolve dns kubernetes.default to [10.152.183.1] in 4.32ms
I0720 20:55:13.685881 2845059 ping.go:241] start to check dns connectivity
I0720 20:55:13.744725 2845059 ping.go:254] resolve dns canonical.com to [220.127.116.11 18.104.22.168 22.214.171.124 2620:2d:4000:1::27 2620:2d:4000:1::28 2620:2d:4000:1::26] in 58.81ms
I0720 20:55:13.744815 2845059 ping.go:192] start to check ping external to 126.96.36.199
I0720 20:55:13.850048 2845059 ping.go:205] ping external address: 188.8.131.52, total count: 3, loss count 0, average rtt 2.59ms
```
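If you want to scan these logs automatically rather than read them, a small filter can count ping checks that reported packet loss. This is a sketch that assumes the `loss count N` phrasing shown in the excerpt above; the function name `ping_loss_checks` is our own:

```shell
# ping_loss_checks: reads kube-ovn-pinger log text on stdin and prints the
# number of ping results whose "loss count" was non-zero (0 means healthy).
ping_loss_checks() {
  grep -o 'loss count [0-9]*' | awk '$3 != 0 { n++ } END { print n + 0 }'
}

# Against a live cluster this would be:
# microk8s kubectl logs -n kube-system ds/kube-ovn-pinger | ping_loss_checks
```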
Test that the KubeOVN CNI has been installed correctly by creating a simple Nginx deployment, and ensuring that the pod gets an IP address:
microk8s kubectl create deploy nginx --image nginx
Retrieve the list of pods with:
microk8s kubectl get pod -o wide
…and confirm that the nginx pod has been assigned an IP address from KubeOVN:
```
NAME                    READY   STATUS    RESTARTS   AGE   IP         NODE   NOMINATED NODE   READINESS GATES
nginx-8f458dc5b-8m644   1/1     Running   0          16s   10.1.0.5   dev    <none>           <none>
```
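To check programmatically that the pod IP came from the Kube-OVN pod network, you can test it against the pod CIDR. The sketch below assumes the default subnet is 10.1.0.0/16, as in the output above; the helper name `in_subnet` is our own:

```shell
# in_subnet: succeeds when an IPv4 address lies inside a CIDR block.
in_subnet() {
  addr=$1 cidr=$2
  net=${cidr%/*} bits=${cidr#*/}
  # ip_to_int: converts dotted-quad notation to a 32-bit integer.
  ip_to_int() {
    oldifs=$IFS; IFS=.
    set -- $1
    IFS=$oldifs
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
  }
  mask=$(( 0xFFFFFFFF << (32 - bits) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$addr") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

# Against a live cluster the pod IP would come from:
# microk8s kubectl get pod -l app=nginx -o jsonpath='{.items[0].status.podIP}'
in_subnet 10.1.0.5 10.1.0.0/16 && echo "pod IP is inside 10.1.0.0/16"
```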
For multi-node MicroK8s clusters, ensure that the kube-ovn addon is enabled on all nodes before joining them to the cluster:

microk8s enable kube-ovn
microk8s join ...
By default, the OVN central database only runs on the first control plane node of the cluster. It is possible to run the OVN database in HA mode instead. In this scenario, it is highly recommended to use an odd number of nodes (e.g. 3 nodes).
See also High availability for ovn db.
Let’s assume that we have the following cluster, where t1 currently runs the ovn-central database, and we want the database to also run on t2 and t3. First, list the nodes, their internal IPs and the existing ovn-central pod with the following commands:
```
# microk8s kubectl get pod -l app=ovn-central -A -o wide
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE   IP             NODE   NOMINATED NODE   READINESS GATES
kube-system   ovn-central-857797dd58-4slfm   1/1     Running   0          44s   10.75.170.40   t1     <none>           <none>

# microk8s kubectl get node -o wide
NAME   STATUS   ROLES    AGE     VERSION                    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION    CONTAINER-RUNTIME
t1     Ready    <none>   2m24s   v1.24.3-2+f5237103dbf48f   10.75.170.40    <none>        Ubuntu 22.04 LTS   5.15.0-1012-kvm   containerd://1.6.6
t2     Ready    <none>   95s     v1.24.3-2+f5237103dbf48f   10.75.170.186   <none>        Ubuntu 22.04 LTS   5.15.0-1012-kvm   containerd://1.6.6
t3     Ready    <none>   93s     v1.24.3-2+f5237103dbf48f   10.75.170.27    <none>        Ubuntu 22.04 LTS   5.15.0-1012-kvm   containerd://1.6.6
```
In the output above, we are interested in 2 things:
- The internal IPs of our nodes: 10.75.170.40, 10.75.170.186 and 10.75.170.27.
- ovn-central is currently running on node t1.
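The list of internal IPs will be needed later for the NODE_IPS environment variable, and can be extracted from the INTERNAL-IP column directly. This is a sketch that assumes the column layout shown above (INTERNAL-IP is the sixth field); the function name `nodes_to_ip_list` is our own:

```shell
# nodes_to_ip_list: reads `kubectl get node -o wide` output on stdin and
# joins the INTERNAL-IP column (field 6) into a comma-separated list,
# skipping the header row.
nodes_to_ip_list() {
  awk 'NR > 1 { ips = ips (ips ? "," : "") $6 } END { print ips }'
}

# Against a live cluster this would be:
# microk8s kubectl get node -o wide | nodes_to_ip_list
```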
Label the nodes that should also run the OVN central database:

microk8s kubectl label node t2 kube-ovn/role=master
microk8s kubectl label node t3 kube-ovn/role=master
Then, edit the ovn-central deployment with:

microk8s kubectl edit -n kube-system deploy ovn-central
…and configure it to have 3 replicas, setting the value of the NODE_IPS environment variable to the list of internal IP addresses of the nodes:
```yaml
# ...
spec:
  replicas: 3                   # change to 3 replicas
  template:
    spec:
      containers:
        - name: ovn-central
          env:
            - name: NODE_IPS
              value: 10.75.170.40,10.75.170.186,10.75.170.27   # internal IPs of the 3 nodes
```
Save and close the editor, which will update the deployment on the cluster. Shortly afterwards, ovn-central will be running in HA mode:
```
# microk8s kubectl get pod -l app=ovn-central -A -o wide
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
kube-system   ovn-central-699f4fd898-xl5q5   1/1     Running   0          5m33s   10.75.170.40    t1     <none>           <none>
kube-system   ovn-central-699f4fd898-fncq8   1/1     Running   0          5m42s   10.75.170.186   t2     <none>           <none>
kube-system   ovn-central-699f4fd898-pjcl4   1/1     Running   0          5m37s   10.75.170.27    t3     <none>           <none>
```
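As an alternative to the interactive edit, the same change can be applied non-interactively with `kubectl scale` and `kubectl set env`. This is a sketch using the node IPs from this example; the block is guarded so it is a no-op on machines without MicroK8s:

```shell
# Internal IPs of the 3 nodes, comma-separated as NODE_IPS expects.
NODE_IPS="10.75.170.40,10.75.170.186,10.75.170.27"

if command -v microk8s >/dev/null 2>&1; then
  # Scale ovn-central to 3 replicas and point it at all three nodes.
  microk8s kubectl -n kube-system scale deploy ovn-central --replicas=3
  microk8s kubectl -n kube-system set env deploy/ovn-central NODE_IPS="$NODE_IPS"
else
  echo "microk8s not found; commands shown for reference only"
fi
```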