How to configure network Dual-stack

To ensure that pods and services do not end up on unintended network subnets, it is crucial to perform network configuration at the early stages of cluster or node installation.

From version 1.28, MicroK8s includes launch configuration options that allow you to configure the network CIDRs for the cluster in both single- and dual-stack setups. Defining the desired network settings in this way prevents any inadvertent leakage of pods or services into inappropriate subnets. By addressing network configuration from the start, you can establish a secure and well-organised networking environment for your Kubernetes cluster.

Note: On pre-1.28 MicroK8s deployments, please follow the manual configuration steps described in the MicroK8s IPv6 DualStack HOW-TO.

Customise the network in your deployment

To configure your network, customise the following launch configuration to match your needs:

---
version: 0.1.0
extraCNIEnv:
  IPv4_SUPPORT: true
  IPv4_CLUSTER_CIDR: 10.3.0.0/16
  IPv4_SERVICE_CIDR: 10.153.183.0/24
  IPv6_SUPPORT: true
  IPv6_CLUSTER_CIDR: fd02::/64
  IPv6_SERVICE_CIDR: fd99::/108
extraSANs:
  - 10.153.183.1

Most of the fields are self-explanatory:
IPv4_SUPPORT and IPv6_SUPPORT enable support for IPv4 and IPv6 respectively. IPv4_CLUSTER_CIDR and IPv6_CLUSTER_CIDR are the CIDRs from which pods get their IP addresses.
IPv4_SERVICE_CIDR and IPv6_SERVICE_CIDR are the CIDRs from which services get their IP addresses.
In the extraSANs section you want the IP of the Kubernetes service itself, which is going to be the first IP in the IPv4_SERVICE_CIDR range (10.153.183.1 in this example); a quick check for this is shown below.
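
Once the cluster is up, you can confirm the address of the kubernetes service itself; it should be the first IP of the IPv4 service CIDR configured above and match the extraSANs entry. The output will look similar to:

microk8s kubectl get svc kubernetes
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.153.183.1   <none>        443/TCP   5m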

Place the launch configuration in a location where MicroK8s can pick it up during installation. Here we assume /var/tmp/lc.yaml contains your network customisation:

sudo mkdir -p /var/snap/microk8s/common/
sudo cp /var/tmp/lc.yaml /var/snap/microk8s/common/.microk8s.yaml

Install MicroK8s from a channel equal to or newer than 1.28:

sudo snap install microk8s --classic --channel 1.28
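
Once the installation completes, a quick way to check that the CIDRs were picked up is to inspect the arguments MicroK8s rendered for the API server and kube-proxy (assuming the default snap paths):

grep ip-range /var/snap/microk8s/current/args/kube-apiserver
grep cluster-cidr /var/snap/microk8s/current/args/kube-proxy

The --service-cluster-ip-range and --cluster-cidr values should contain both the IPv4 and IPv6 ranges from the launch configuration.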

Note: All nodes joining a cluster need to be pre-configured with the same network configuration, so the steps above must be repeated on every node before it joins the cluster.
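
For example, on each additional node you would repeat the same preparation before joining (same file and channel as above):

sudo mkdir -p /var/snap/microk8s/common/
sudo cp /var/tmp/lc.yaml /var/snap/microk8s/common/.microk8s.yaml
sudo snap install microk8s --classic --channel 1.28
microk8s join <ip>:25000/<token>   # join string as printed by 'microk8s add-node' on the first node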

Verify dual-stack is configured correctly

To test that the cluster is configured with dual-stack, apply the following manifest, which creates a deployment and a service with ipFamilyPolicy: RequireDualStack:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxdualstack
spec:
  selector:
    matchLabels:
      run: nginxdualstack
  replicas: 1
  template:
    metadata:
      labels:
        run: nginxdualstack
    spec:
      containers:
      - name: nginxdualstack
        image: rocks.canonical.com/cdk/diverdane/nginxdualstack:1.0.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx6
  labels:
    run: nginxdualstack
spec:
  type: NodePort
  ipFamilies:
  - IPv6
  ipFamilyPolicy: RequireDualStack
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: nginxdualstack
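
Save the manifest to a file (the name below is just an example) and apply it, then check the service:

microk8s kubectl apply -f nginx-dualstack.yaml
microk8s kubectl get svc nginx6

The CLUSTER-IP column should show an address from the fd99::/108 service range, since the service requests the IPv6 family.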

Take the IPv6 address of the nginx6 service from the microk8s kubectl get svc output and query it with something similar to:

curl http://[fd99::d4ce]/ 

The curl output should look like:

<!DOCTYPE html>
<html>
<head>
<title>Kubernetes IPv6 nginx</title> 
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx on <span style="color:  #C70039">IPv6</span> Kubernetes!</h1>
<p>Pod: nginxdualstack-56fccfd475-hccqq</p>
</body>
</html>
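
You can also verify that the pods themselves received both address families; the podIPs field should list one IPv4 and one IPv6 address:

microk8s kubectl get pod -l run=nginxdualstack -o jsonpath='{.items[*].status.podIPs}'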


How do I configure dual-stack on an existing microk8s cluster?

I added this yaml file to an existing microk8s cluster that I recently upgraded to version 1.28.1, but after a microk8s reset, and even a server reboot, IPv6 and dual-stack are still not enabled on this cluster.
I am running a single-node microk8s cluster on a raspberry pi.

If you already have a cluster, you may need to look into this how-to.

Changing the network setup in an already running cluster may have side-effects (e.g. pods that need to be restarted to see the second interface), which is why it is best to enable dual-stack at the very beginning of the k8s setup.

Since the method you described is based on launch configurations, I thought I would try applying this launch configuration via snap set as described on this page: MicroK8s - How to use launch configurations

It then says that “After a while, the configuration is applied to the local node.” But how long is “a while” in this context? It has now been more than 24 hours since I set this launch configuration (I can see with snap get that it has been set), but there is still no dual-stack.

Or do I have to wait until a new version of microk8s is installed?

I may be wrong, but I think the configuration has probably been changed but not acted on by the running pods. Depending on what has changed, it may be necessary to restart the pods. You might want to try just doing a ‘microk8s stop’ and ‘microk8s start’.

@evilnick all running pods have no IPv6 or dual-stack capabilities yet (they are IPv4 only), so for those nothing changes.
To test whether my cluster is IPv6 and dual-stack ready, I apply the nginxdualstack manifest from the article above.

All I get is:

The Service “nginx6” is invalid:

  • spec.ipFamilyPolicy: Invalid value: “RequireDualStack”: this cluster is not configured for dual-stack services
  • spec.ipFamilies[0]: Invalid value: “IPv6”: not configured on this cluster

Microk8s stop/start does not work, microk8s reset does not work, setting the launch configuration with snap set does not work, setting the configuration in /var/snap/microk8s/common/.microk8s.yaml does not work, and any combination of all this does not work either.

It seems it is just not possible to configure dual-stack and IPv6 on an already running cluster other than with the all-manual method.
This makes the use of launch configuration rather limited I think.

I set this configuration using snap set:

version: 0.1.0
extraCNIEnv:
  IPv4_SUPPORT: true
  IPv4_CLUSTER_CIDR: 10.3.0.0/16
  IPv4_SERVICE_CIDR: 10.153.183.0/24
  IPv6_SUPPORT: true
  IPv6_CLUSTER_CIDR: fd02::/64
  IPv6_SERVICE_CIDR: fd99::/108
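
For reference, the launch configurations how-to passes the whole document through snapd as a single key; assuming the YAML above is saved as lc.yaml and that the key is named config (as in that how-to), the commands look roughly like:

sudo snap set microk8s config="$(cat lc.yaml)"
sudo snap get microk8s config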

After restarting microk8s, when I tried to enable dns again using microk8s enable dns, I encountered this error.

root@homeubuntu22:~# microk8s enable dns
Traceback (most recent call last):
  File "/snap/microk8s/6750/scripts/wrappers/enable.py", line 41, in <module>
    enable(prog_name="microk8s enable")
  File "/snap/microk8s/6750/usr/lib/python3/dist-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/snap/microk8s/6750/usr/lib/python3/dist-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/snap/microk8s/6750/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/snap/microk8s/6750/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/snap/microk8s/6750/scripts/wrappers/enable.py", line 37, in enable
    xable("enable", addons)
  File "/snap/microk8s/6750/scripts/wrappers/common/utils.py", line 470, in xable
    protected_xable(action, addon_args)
  File "/snap/microk8s/6750/scripts/wrappers/common/utils.py", line 498, in protected_xable
    unprotected_xable(action, addon_args)
  File "/snap/microk8s/6750/scripts/wrappers/common/utils.py", line 514, in unprotected_xable
    enabled_addons_info, disabled_addons_info = get_status(available_addons_info, True)
  File "/snap/microk8s/6750/scripts/wrappers/common/utils.py", line 566, in get_status
    kube_output = kubectl_get("all,ingress")
  File "/snap/microk8s/6750/scripts/wrappers/common/utils.py", line 248, in kubectl_get
    return run(KUBECTL, "get", cmd, "--all-namespaces", die=False)
  File "/snap/microk8s/6750/scripts/wrappers/common/utils.py", line 69, in run
    result.check_returncode()
  File "/snap/microk8s/6750/usr/lib/python3.8/subprocess.py", line 448, in check_returncode
    raise CalledProcessError(self.returncode, self.args, self.stdout,
subprocess.CalledProcessError: Command '('/snap/microk8s/6750/microk8s-kubectl.wrapper', 'get', 'all,ingress', '--all-namespaces')' returned non-zero exit status 1.
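
The traceback only shows that the wrapped kubectl call failed; running the same command by hand usually surfaces the underlying error, and microk8s inspect collects logs that help pin it down:

microk8s kubectl get all,ingress --all-namespaces
microk8s inspect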