Changing the pod CIDR in a MicroK8s cluster

By default, MicroK8s v1.19+ uses the 10.1.0.0/16 network for its pods.

To change the pod CIDR you need to reconfigure kube-proxy (edit /var/snap/microk8s/current/args/kube-proxy) and tell the Calico CNI what the new CIDR is (edit and re-apply /var/snap/microk8s/current/args/cni-network/cni.yaml).
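
Before changing anything, it can help to confirm what the cluster is currently using. A minimal check, assuming the default file locations mentioned above:

# Show the CIDR kube-proxy is currently configured with
grep cluster-cidr /var/snap/microk8s/current/args/kube-proxy

# Show the CIDR the Calico manifest assigns to pods
grep -A1 CALICO_IPV4POOL_CIDR /var/snap/microk8s/current/args/cni-network/cni.yaml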

Configuration steps

  • Remove the current CNI configuration with:
microk8s kubectl delete -f /var/snap/microk8s/current/args/cni-network/cni.yaml
  • Edit /var/snap/microk8s/current/args/kube-proxy and update the --cluster-cidr=10.1.0.0/16 argument to the new CIDR.

  • Restart MicroK8s with:

microk8s stop
microk8s start
  • Edit /var/snap/microk8s/current/args/cni-network/cni.yaml and put the new IP range in. For example, to switch to 10.2.0.0/16, update CALICO_IPV4POOL_CIDR to:
 - name: CALICO_IPV4POOL_CIDR
   value: "10.2.0.0/16"
  • Apply the new CNI manifest (the full sequence is also sketched as a script after this list):
microk8s kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml
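
For reference, the whole sequence above can be combined into a script. This is only a sketch, assuming a move from the default 10.1.0.0/16 to 10.2.0.0/16 and the default file locations; review the sed expressions against your own files before running anything:

#!/bin/bash
# Sketch only: switch the pod CIDR from 10.1.0.0/16 to 10.2.0.0/16
OLD_CIDR="10.1.0.0/16"
NEW_CIDR="10.2.0.0/16"

# Remove the current CNI configuration
microk8s kubectl delete -f /var/snap/microk8s/current/args/cni-network/cni.yaml

# Point kube-proxy at the new CIDR
sudo sed -i "s|--cluster-cidr=${OLD_CIDR}|--cluster-cidr=${NEW_CIDR}|" /var/snap/microk8s/current/args/kube-proxy

# Restart MicroK8s
microk8s stop
microk8s start

# Update CALICO_IPV4POOL_CIDR (assumed to be set to the old CIDR in quotes) and re-apply the manifest
sudo sed -i "s|\"${OLD_CIDR}\"|\"${NEW_CIDR}\"|" /var/snap/microk8s/current/args/cni-network/cni.yaml
microk8s kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml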

Verify the new configuration

At this point new pods are placed on the updated CIDR. To check that the update worked, try deploying some pods:

microk8s enable dns
microk8s enable dashboard

…then check the allocated IP addresses:

microk8s kubectl get po -A -o wide
NAMESPACE     NAME                                             READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
kube-system   pod/calico-node-rdkz6                            1/1     Running   0          4m34s   192.168.1.23   aurora   <none>           <none>
kube-system   pod/calico-kube-controllers-847c8c99d-rjfd4      1/1     Running   0          4m34s   10.2.180.193   aurora   <none>           <none>
kube-system   pod/metrics-server-8bbfb4bdb-wqjxs               1/1     Running   0          3m2s    10.2.180.195   aurora   <none>           <none>
kube-system   pod/coredns-86f78bb79c-cppgt                     1/1     Running   0          3m12s   10.2.180.194   aurora   <none>           <none>
kube-system   pod/kubernetes-dashboard-7ffd448895-2l7xn        1/1     Running   0          2m52s   10.2.180.196   aurora   <none>           <none>
kube-system   pod/dashboard-metrics-scraper-6c4568dc68-5nn7p   1/1     Running   0          2m52s   10.2.180.197   aurora   <none>           <none>
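
A quicker scripted check (a sketch, assuming the old range was 10.1.0.0/16) is to list only the pod IPs and confirm nothing is still on the old range:

# Print every pod IP; host-networked pods (e.g. calico-node) will show the node IP instead
microk8s kubectl get pods -A -o jsonpath='{range .items[*]}{.status.podIP}{"\n"}{end}' | sort -u

# Anything matching here is still on the old 10.1.0.0/16 range
microk8s kubectl get pods -A -o jsonpath='{range .items[*]}{.status.podIP}{"\n"}{end}' | grep '^10\.1\.'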

You can also check the iptables rules:

sudo iptables -t nat -nL | grep "10\.2\."
KUBE-MARK-MASQ  all  --  10.2.180.194         0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */ tcp to:10.2.180.194:53
KUBE-MARK-MASQ  all  --  10.2.180.194         0.0.0.0/0            /* kube-system/kube-dns:metrics */
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:metrics */ tcp to:10.2.180.194:9153
KUBE-MARK-MASQ  all  --  10.2.180.194         0.0.0.0/0            /* kube-system/kube-dns:dns */
DNAT       udp  --  0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns */ udp to:10.2.180.194:53
KUBE-MARK-MASQ  all  --  10.2.180.195         0.0.0.0/0            /* kube-system/metrics-server */
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            /* kube-system/metrics-server */ tcp to:10.2.180.195:4443
KUBE-MARK-MASQ  tcp  -- !10.2.0.0/16          10.152.183.178       /* kube-system/metrics-server cluster IP */ tcp dpt:443
KUBE-MARK-MASQ  tcp  -- !10.2.0.0/16          10.152.183.1         /* default/kubernetes:https cluster IP */ tcp dpt:443
KUBE-MARK-MASQ  udp  -- !10.2.0.0/16          10.152.183.10        /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
KUBE-MARK-MASQ  tcp  -- !10.2.0.0/16          10.152.183.10        /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
KUBE-MARK-MASQ  tcp  -- !10.2.0.0/16          10.152.183.10        /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153

Behind a proxy

Remember: if you are also setting up a proxy, you will need to update /var/snap/microk8s/current/args/containerd-env with the respective IP ranges.
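
For example, the NO_PROXY entry in that file should list the new pod CIDR (and the service CIDR) so that in-cluster traffic bypasses the proxy. A sketch, assuming 10.2.0.0/16 as the new pod CIDR, the default 10.152.183.0/24 service CIDR, and a hypothetical proxy address:

# /var/snap/microk8s/current/args/containerd-env (excerpt)
HTTPS_PROXY=http://proxy.example.com:3128        # hypothetical proxy address
NO_PROXY=10.2.0.0/16,10.152.183.0/24,127.0.0.1,localhost

After editing the file, restart MicroK8s (microk8s stop; microk8s start) so containerd picks up the change.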

I think that the application of the new CNI manifest should be with create/apply, not delete. Am I wrong?

microk8s.kubectl create -f /var/snap/microk8s/current/args/cni-network/cni.yaml

Yes, you are right. Thank you.

It works. Confirmed. Thanks

Instead of:

The instruction should say:
microk8s.enable dns dns


What is the equivalent of /var/snap/microk8s/ on macOS? I am unable to find any of the files mentioned in this document.

Will it work for other networks (e.g. 172.16.0.0/16)? There is a problem accessing the cluster at 10.152.183.1, DNS at 10.152.183.10, etc…

I was installing microk8s inside a customer datacenter which uses the 10.1 space internally and was running into conflicts accessing other services in the datacenter, so I wanted to move the pod CIDR to 10.2.

Followed these instructions without a hitch, but my pods lost all connectivity beyond the node. I found legacy and non-legacy iptables FORWARD rules which seem to have been installed with microk8s like this:

iptables -A FORWARD -s 10.1.0.0/16 -m comment --comment "generated for MicroK8s pods" -j ACCEPT
iptables -A FORWARD -d 10.1.0.0/16 -m comment --comment "generated for MicroK8s pods" -j ACCEPT
iptables-legacy -A FORWARD -s 10.1.0.0/16 -m comment --comment "generated for MicroK8s pods" -j ACCEPT
iptables-legacy -A FORWARD -d 10.1.0.0/16 -m comment --comment "generated for MicroK8s pods" -j ACCEPT

These were no longer needed (and were contributing to the conflicts), and 10.2 rules were needed instead to restore egress from the pods. I deleted the rules by rule number and added them back for 10.2 with my own comments, and got everything working fine, but I’m vaguely worried that whatever installed these will someday come back to reinstall them.
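
Roughly, that manual fix looked like this (a sketch using explicit rule specs rather than rule numbers; adjust the CIDRs to your own):

# Drop the stale 10.1 rules...
sudo iptables -D FORWARD -s 10.1.0.0/16 -m comment --comment "generated for MicroK8s pods" -j ACCEPT
sudo iptables -D FORWARD -d 10.1.0.0/16 -m comment --comment "generated for MicroK8s pods" -j ACCEPT

# ...and add equivalents for the new 10.2 range
sudo iptables -A FORWARD -s 10.2.0.0/16 -m comment --comment "generated for MicroK8s pods" -j ACCEPT
sudo iptables -A FORWARD -d 10.2.0.0/16 -m comment --comment "generated for MicroK8s pods" -j ACCEPT

# Repeat with iptables-legacy if the legacy copies are present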

I feel like this documentation page should at least include a heads up about needing to modify these rules to effect a CIDR change, if they came in with microk8s somewhere - and some instruction (which I do not know) as to how to avoid the issue or correct it at its root.

@kjackal how would one adjust the service CIDR? (10.152.183.0/24)?

Dear Admin
OS: CentOS 8
Version: MicroK8s 1.27.2
Environment: AWS EC2 VM
Comments:
I installed microk8s on an EC2 instance, not EKS.
I changed the pod CIDR to 10.222.0.0/16 instead of the default CIDR (10.1.0.0/16).
The pod IPs changed from 10.1.0.0/16 to 10.222.0.0/16.
I referred to this URL: https://microk8s.io/docs/change-cidr.

But coredns could not be reached when I executed nslookup kubernetes from a dnsutils pod in the default namespace.

[centos@ip-172-34-72-202 ~]$ microk8s kubectl exec -it dnsutils -- nslookup dnsutils
;; connection timed out; no servers could be reached

command terminated with exit code 1.

The pod does not seem to be able to reach coredns for name resolution.

When I rolled back to the default CIDR, the nslookup command resolved names fine.

Is there an extra configuration step?

I resolved it.
When you change the pod CIDR, check whether the iptables FORWARD policy is DROP.
If it is DROP, set it to ACCEPT with: sudo iptables -P FORWARD ACCEPT
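
For example (a sketch):

# Check the current FORWARD policy; it prints something like "Chain FORWARD (policy DROP)"
sudo iptables -L FORWARD -n | head -1

# If the policy is DROP, switch it to ACCEPT
sudo iptables -P FORWARD ACCEPT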


@kjackal @gerardo-garcia are you able to explain what this changes in the cluster (microk8s.enable dns dns), and why dns is given twice?

I’m asking because I had an issue with intermittent loss of external connectivity; after enabling dns that issue stopped.

EDIT: Never mind, it still happens, just less often. I’ll try rebooting all the nodes in the cluster to check if it helps.

❯ kubectl exec -it pod/api-7856d5f99d-nccpf -- curl -kvvvL www.google.com
*   Trying 216.58.210.164:80...
*   Trying [2a00:1450:4026:808::2004]:80...
* Immediate connect fail for 2a00:1450:4026:808::2004: Network is unreachable