To ensure that pods and services do not end up on unintended network subnets, it is crucial to perform network configuration at the early stages of cluster or node installation.
Starting with version 1.28, MicroK8s provides launch configuration options that let you set the network CIDRs for the cluster in both single- and dual-stack modes. Defining the desired network settings in this way prevents any inadvertent leakage of pods or services into inappropriate subnets. By addressing network configuration from the start, you can establish a secure and well-organised networking environment for your Kubernetes cluster.
ⓘ Note: On pre-1.28 MicroK8s deployments, please follow the manual configuration steps described in the MicroK8s IPv6 DualStack HOW-TO.
Customise the network in your deployment
To configure your network, customise the following launch configuration to match your needs:
---
version: 0.1.0
extraCNIEnv:
  IPv4_SUPPORT: true
  IPv4_CLUSTER_CIDR: 10.3.0.0/16
  IPv4_SERVICE_CIDR: 10.153.183.0/24
  IPv6_SUPPORT: true
  IPv6_CLUSTER_CIDR: fd02::/64
  IPv6_SERVICE_CIDR: fd99::/108
extraSANs:
  - 10.153.183.1
Most of the fields are self-explanatory: IPv4_SUPPORT and IPv6_SUPPORT enable support for IPv4 and IPv6 respectively. IPv4_CLUSTER_CIDR and IPv6_CLUSTER_CIDR are the CIDRs from which pods get their IPs, while IPv4_SERVICE_CIDR and IPv6_SERVICE_CIDR are the CIDRs from which services get theirs. In the extraSANs section you want to have the IP of the Kubernetes service itself, which is going to be the first IP in the IPv4_SERVICE_CIDR range (10.153.183.1 in this example).
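Once the cluster is up, you can confirm that this extra SAN matches the IP the Kubernetes service actually received (plain kubectl; the expected address comes from the example CIDRs above):
microk8s kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}'
# Expected output for this example configuration: 10.153.183.1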
Place the launch configuration in a location where it can be picked up by MicroK8s during installation. Here we assume that /var/tmp/lc.yaml contains your network customisation:
sudo mkdir -p /var/snap/microk8s/common/
sudo cp /var/tmp/lc.yaml /var/snap/microk8s/common/.microk8s.yaml
Install MicroK8s from a channel equal to or newer than 1.28:
sudo snap install microk8s --classic --channel 1.28
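Once the snap is installed, you can check that the CIDRs were picked up; a quick sanity check, assuming the standard MicroK8s argument file locations under /var/snap/microk8s/current/args/:
microk8s status --wait-ready
# Pod CIDRs are passed to kube-proxy:
grep cluster-cidr /var/snap/microk8s/current/args/kube-proxy
# Service CIDRs are passed to the API server:
grep service-cluster-ip-range /var/snap/microk8s/current/args/kube-apiserver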
ⓘ Note: All nodes joining a cluster need to be pre-configured with the same network configuration. The network configuration process above needs to be repeated on every node joining the cluster.
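For example, the flow for adding a node might look like the following sketch; microk8s add-node prints the exact join command to run:
# On the joining node, before installing MicroK8s:
sudo mkdir -p /var/snap/microk8s/common/
sudo cp /var/tmp/lc.yaml /var/snap/microk8s/common/.microk8s.yaml
sudo snap install microk8s --classic --channel 1.28
# On an existing cluster node, generate a join token:
microk8s add-node
# ...then run the printed `microk8s join` command on the joining node.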
Verify dual-stack is configured correctly
To test that the cluster is configured with dual-stack, apply the following manifest, which creates a deployment and a service with ipFamilyPolicy: RequireDualStack:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxdualstack
spec:
  selector:
    matchLabels:
      run: nginxdualstack
  replicas: 1
  template:
    metadata:
      labels:
        run: nginxdualstack
    spec:
      containers:
      - name: nginxdualstack
        image: rocks.canonical.com/cdk/diverdane/nginxdualstack:1.0.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx6
  labels:
    run: nginxdualstack
spec:
  type: NodePort
  ipFamilies:
  - IPv6
  ipFamilyPolicy: RequireDualStack
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: nginxdualstack
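Assuming the manifest is saved as nginx-dualstack.yaml (the file name here is just an example), apply it with:
microk8s kubectl apply -f nginx-dualstack.yaml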
Get the nginx6 service IPv6 endpoint with microk8s kubectl get svc and query it with something similar to:
curl http://[fd99::d4ce]/
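Here fd99::d4ce is just an example address; to look up the actual IPv6 ClusterIP of the service you can use a jsonpath query (standard kubectl behaviour, nothing MicroK8s-specific):
microk8s kubectl get svc nginx6 -o jsonpath='{.spec.clusterIPs}'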
The curl output should look like:
<!DOCTYPE html>
<html>
<head>
<title>Kubernetes IPv6 nginx</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx on <span style="color: #C70039">IPv6</span> Kubernetes!</h1>
<p>Pod: nginxdualstack-56fccfd475-hccqq</p>
</body>
</html>
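If you also want to confirm the IPv4 side, you can expose the same deployment through an IPv4 service; the following is a sketch, with nginx4 as an illustrative name (the test image serves the same page over both address families):
apiVersion: v1
kind: Service
metadata:
  name: nginx4
  labels:
    run: nginxdualstack
spec:
  type: NodePort
  ipFamilies:
  - IPv4
  ipFamilyPolicy: RequireDualStack
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: nginxdualstack
Apply it and curl the IPv4 ClusterIP reported by microk8s kubectl get svc nginx4.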
Further Reading
- Upstream IPv4/IPv6 dual-stack documentation: IPv4/IPv6 dual-stack | Kubernetes
- MicroK8s IPv6 DualStack on pre-1.28 releases: How-to page