Using a non-default interface on a cluster node

Hello.

I have a 2-node cluster setup, where the master and secondary share a WireGuard VPN connection between them. If I do ‘add-node’ with the public IP address of the secondary, everything works. If I use the VPN address, some pods fail to deploy, with a message similar to:

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "83e0e9fc8d5e03b08a61c6441de8e21eae93c782f46576215baf288d73b82ceb": error getting ClusterInformation: Get https://[10.152.183.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 10.152.183.1:443: i/o timeout
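For clarity, this is the join flow I’m using (the address and token below are placeholders):

$ microk8s add-node                      # run on the master; prints a join command
$ microk8s join 10.0.0.1:25000/<token>   # run on the secondary, using the IP I want to cluster on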

I’ve added the hostnames with private IPs inside /etc/hosts, on both nodes. All communication between the private IPs is allowed through ufw with no restrictions.

From what I’ve seen, “microk8s config” reports the IP of the public interface. Is there any way to force clustering over the private IPs? I assume that’s the issue, since it worked with the public IPs.

What version of MicroK8s are you using? This may have something to do with WireGuard and Calico. As of v1.22 we have upgraded Calico to a version that supports WireGuard.

But I have no experience setting this up. :frowning:
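You can check the version with snap:

$ snap list microk8s
# the Version column shows the shipped Kubernetes release, e.g. v1.22.2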

It’s v1.4.4 on the master, v1.5.2 on the secondary:

$ microk8s ctr version
Client:
  Version:  v1.4.4
  Revision: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
  Go version: go1.15.15

Server:
  Version:  v1.4.4
  Revision: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
  UUID: 65e5fd87-c991-4672-8e29-1250795e7760
$ microk8s status
microk8s is running
high-availability: no
  datastore master nodes: [vpn ip]:19001
  datastore standby nodes: none
addons:
  enabled:
    dashboard            # The Kubernetes dashboard
    dns                  # CoreDNS
    ha-cluster           # Configure high availability on the current node
    ingress              # Ingress controller for external access
    istio                # Core Istio service mesh services
    metrics-server       # K8s Metrics Server for API access to service metrics
  disabled:
    ambassador           # Ambassador API Gateway and Ingress
    cilium               # SDN, fast with full network policy
    fluentd              # Elasticsearch-Fluentd-Kibana logging and monitoring
    gpu                  # Automatic enablement of Nvidia CUDA
    helm                 # Helm 2 - the package manager for Kubernetes
    helm3                # Helm 3 - Kubernetes package manager
    host-access          # Allow Pods connecting to Host services smoothly
    jaeger               # Kubernetes Jaeger operator with its simple config
    keda                 # Kubernetes-based Event Driven Autoscaling
    knative              # The Knative framework on Kubernetes.
    kubeflow             # Kubeflow for easy ML deployments
    linkerd              # Linkerd is a service mesh for Kubernetes and other frameworks
    metallb              # Loadbalancer for your Kubernetes cluster
    multus               # Multus CNI enables attaching multiple network interfaces to pods
    openebs              # OpenEBS is the open-source storage solution for Kubernetes
    openfaas             # openfaas serverless framework
    portainer            # Portainer UI for your Kubernetes cluster
    prometheus           # Prometheus operator for monitoring and logging
    rbac                 # Role-Based Access Control for authorisation
    registry             # Private image registry exposed on localhost:32000
    storage              # Storage class; allocates storage from host directory
    traefik              # traefik Ingress controller for external access
$ microk8s inspect
Inspecting Certificates
Inspecting services
  Service snap.microk8s.daemon-cluster-agent is running
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-apiserver-kicker is running
  Service snap.microk8s.daemon-kubelite is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy openSSL information to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster
Inspecting juju
  Inspect Juju
Inspecting kubeflow
  Inspect Kubeflow

WARNING:  Docker is installed.
...

All I’m trying to achieve is to encrypt the communication between the nodes; I thought there would be a simple way to do it. The alternative I’m trying is to use the VPN connection, since that is already set up anyway.

OK, so MicroK8s v1.21 ships containerd 1.4.4 while MicroK8s v1.22 ships containerd 1.5.2 (what you are seeing from “microk8s ctr version” is the containerd version, not the MicroK8s one).
I think the Calico shipped with MicroK8s 1.22 supports WireGuard to encrypt the connections between pods. This is what I understand you want to achieve.

Calico’s documentation may help you.

Remember, the Calico network is already part of MicroK8s.

You may also want to bring both nodes to version 1.22.
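If I understand the Calico docs correctly, once WireGuard is available on both hosts the encryption is turned on with a single Felix setting. I have not tested this myself, and the official docs use calicoctl rather than kubectl, but something like:

$ microk8s kubectl patch felixconfiguration default --type='merge' -p '{"spec":{"wireguardEnabled":true}}'
# each node should then get a projectcalico.org/WireguardPublicKey annotation
$ microk8s kubectl get node <node-name> -o yaml | grep -i wireguard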

Thank you for the reply.

I updated MicroK8s to 1.22, but it didn’t change the behavior. I suspect it has to do with the default interface, since it works when the node is added via the primary (public) IP address but not via the VPN one.
I’m using WireGuard at the system level, and so far it has been transparent to all software, just like a normal Ethernet connection.

Here’s another tell-tale sign that there is an issue with which interface MicroK8s uses: the node was added with a private IP address, but the master is still trying to contact it over the public one:

$ microk8s enable dashboard
Enabling Kubernetes Dashboard
Enabling Metrics-Server
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrolebinding.rbac.authorization.k8s.io/microk8s-admin created
Adding argument --authentication-token-webhook to nodes.
Configuring node [publicIP]
Failed to reach node.
HTTPSConnectionPool(host='[publicIP]', port=25000): Max retries exceeded with url: /cluster/api/v1.0/configure (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f69b3f69f28>: Failed to establish a new connection: [Errno 110] Connection timed out',))
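For what it’s worth, you can see which address each node registered with:

$ microk8s kubectl get nodes -o wide
# the INTERNAL-IP column shows the address Kubernetes is using for each node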

I will then try Calico with WireGuard over the public addresses; I just need to figure out which ports to open between the machines.
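This is my best guess at the ufw rules, with the port list pieced together from the MicroK8s and Calico docs (double-check for your version; <peer IP> is a placeholder for the other node’s address):

$ sudo ufw allow proto tcp from <peer IP> to any port 16443,10250,25000,19001
$ sudo ufw allow proto udp from <peer IP> to any port 4789,51820
# 16443 API server, 10250 kubelet, 25000 cluster agent, 19001 dqlite,
# 4789 Calico VXLAN, 51820 Calico WireGuard (default)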

Just as an update: I was not able to get the encryption running at the time. For example, I haven’t found any tutorial that tells me what I need to configure for MicroK8s and Calico. After installing the plugin, do I need to go into each individual pod and configure the WireGuard module?

It is very confusing for someone like me who doesn’t know the inner workings of Kubernetes and Calico, since it’s not clear how the two of them need to be configured to work together. For example, as of this moment, the configuration help page for calicoctl is completely empty: https://projectcalico.docs.tigera.io/getting-started/clis/calicoctl/configure/

It would have been much easier to be able to specify the clustering interface and to make sure that MicroK8s sticks to that interface and does not try to use the ‘default’ one. I would also say it’s a security risk if the nodes communicate with each other over whichever interfaces they can find.

So many people have this issue.

I solved it with this:

sudo vim.tiny /var/snap/microk8s/current/args/kubelet
Add this to the bottom (use the node’s VPN/private IP): --node-ip=<private IP>

sudo vim.tiny /var/snap/microk8s/current/args/kube-apiserver
Add this to the bottom: --advertise-address=<private IP>
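After editing both files, I believe the services have to be restarted for the new arguments to take effect; the node list should then show the private address:

$ microk8s stop
$ microk8s start
$ microk8s kubectl get nodes -o wide   # INTERNAL-IP should now be the VPN address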
