Install microk8s in an offline environment with registry mirrors

I’m testing an installation of microk8s in an offline environment. To set up microk8s, I used snap download (on a different, internet-connected client) to fetch the installation sources and moved them to my offline client. There I can successfully install microk8s with snap.
After editing containerd-template.toml with my mirrored registries (for docker.io, k8s.gcr.io, etc.), I restarted microk8s to apply the changes. During its first start, microk8s pulls some containers, such as k8s.gcr.io/pause:3.1 and Calico from docker.io. At that point, every registry mirror from containerd-template.toml is ignored.
Is there a way to make microk8s use the configured mirrors on startup as well, or is there another way to install and start microk8s on Ubuntu 20.04?

Did you change this in your containerd-template.toml?

sandbox_image = "k8s.gcr.io/pause:3.1"

Yes, I changed the sandbox image path to the full path of my k8s.gcr.io mirror.

sandbox_image = "mymirror/k8s.gcr.io/pause:3.1"

After restarting microk8s, I could not see any errors pulling the sandbox image, but pulling Calico does not work. If I try to pull the image manually with

microk8s.ctr image pull mymirror/calico/cni:v3.13.2

it works, but microk8s does not pull it through its own startup process.


I can’t recall if there is a way to override docker.io, but there is one thing you can try.

Go to the file /var/snap/microk8s/current/args/cni-network/cni.yaml, modify the image names, then apply the file to the cluster and see if that works.
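For example, the image lines could be rewritten in one pass with sed. This is only a sketch; mymirror is a placeholder, and the exact image: entries in your cni.yaml should be checked first:

```shell
MIRROR=mymirror   # placeholder for your mirror's host/prefix

# Demonstrated on a sample line; the same expression can be applied with
# sed -i to /var/snap/microk8s/current/args/cni-network/cni.yaml.
echo '          image: calico/node:v3.13.2' \
  | sed "s#image: calico/#image: ${MIRROR}/calico/#"
# -> image: mymirror/calico/node:v3.13.2 (indentation preserved)
```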

I tried replacing the image paths of calico-node, calico-kube-controllers, and flexvol-driver with my mirror, using the same procedure as for the sandbox_image in containerd-template.toml. I tested the pull with microk8s.ctr image pull mymirror/image:version and could pull all the images that way.
After restarting microk8s, however, it still tried to download the Calico images from https://registry-1.docker.io/v2/calico/cni/manifests/v3.13.2

You changed the file I mentioned above?

Yes, I replaced it in /var/snap/microk8s/current/args/cni-network/cni.yaml.

Did you apply it to the cluster, like kubectl apply -f cni.yaml?

No, I didn’t apply it to the cluster, because I thought it would be applied automatically when the cluster starts.

Now I have applied it, with the following output:

 microk8s kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml
configmap/calico-config unchanged
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org unchanged
error: error parsing /var/snap/microk8s/current/args/cni-network/cni.yaml: error converting YAML to JSON: yaml: line 6: did not find expected key

Yes!!!
I found the error in my cni.yaml: some blank spaces had been inserted into the file. Now the YAML can be applied.
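For anyone hitting the same “did not find expected key” error: YAML is whitespace-sensitive, so a stray tab or misplaced blank is a common cause. A quick way to flag suspect lines, shown here on a small sample (point grep at your cni.yaml instead):

```shell
# Create a sample manifest with a stray tab on line 2:
printf 'kind: ConfigMap\n\tname: bad-indent\n' > /tmp/sample.yaml

# Print the line numbers of any lines containing a tab character:
grep -n "$(printf '\t')" /tmp/sample.yaml
# -> 2:	name: bad-indent
```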

Now I can apply the cni.yaml, and microk8s status prints “running”. But if I look in syslog, there are several messages like:

microk8s.daemon-containerd[103584]: time="2021-07-14T14:13:22.801718540+02:00" level=error msg="PullImage \"mymirror/calico/node:v3.13.2\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"mymirror/calico/node:v3.13.2\": failed to resolve reference \"mymirror/calico/node:v3.13.2\": mymirror/calico/node:v3.13.2: not found"

If I use

microk8s.ctr image pull mymirror/calico/node:v3.13.2

it works, so the image can be pulled.

How does it look when you run kubectl get pods -A and kubectl get no? If calico is running, then it should be okay.

That’s my output:

root@mysystem:~# microk8s kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7fcfc7c4f-7l4gg   1/1     Running   1          175m
kube-system   calico-node-sd6tn                         1/1     Running   1          175m
root@mysystem:~# microk8s kubectl get no
NAME               STATUS   ROLES    AGE    VERSION
mysystem   Ready    <none>   2d2h   v1.21.1-3+ba118484dd39df

The first command’s output looks okay, so calico is running. What about the output of the second command?

One more question.

If I enable dns, storage, and ingress, the features show as enabled, but when I run “microk8s kubectl get pods -A” I get:

kube-system   hostpath-provisioner-5c65fbdb4f-p24wn     0/1     ImagePullBackOff   0          14m
kube-system   coredns-7f9c69c78c-4bhqz                  0/1     ImagePullBackOff   0          13m
ingress       nginx-ingress-microk8s-controller-fjvqj   0/1     ImagePullBackOff   0          12m

I think it is the same problem: microk8s wants to pull these images from docker.io. Do you know where I can edit the paths to these images?

Can you show your containerd-template.toml file?
Thanks
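In the meantime, one workaround that may help is to pull each image through the mirror by hand and retag it back to the upstream name the manifests expect (ctr has an image tag subcommand for this). Below is a sketch with a hypothetical helper; mymirror, the image name, and the -n k8s.io namespace flag are assumptions to adapt (microk8s.ctr may already default to the k8s.io namespace):

```shell
# Hypothetical helper: print the ctr commands that pull an image via the
# mirror and retag it to the upstream name kubelet asks for.
mirror_pull_cmds() {
  mirror="$1"; image="$2"
  echo "microk8s.ctr -n k8s.io image pull ${mirror}/${image}"
  echo "microk8s.ctr -n k8s.io image tag ${mirror}/${image} docker.io/${image}"
}

# Example: the coredns image from the ImagePullBackOff above.
mirror_pull_cmds mymirror coredns/coredns:1.8.0
```

Running the printed commands on the node makes the images available locally, so the pods can start even while a component ignores the mirror configuration.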

That’s my containerd-template.toml:

# Use config version 2 to enable new configuration fields.
version = 2
oom_score = 0

[grpc]
  uid = 0
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216

[debug]
  address = ""
  uid = 0
  gid = 0

[metrics]
  address = "127.0.0.1:1338"
  grpc_histogram = false

[cgroup]
  path = ""


# The 'plugins."io.containerd.grpc.v1.cri"' table contains all of the server options.
[plugins."io.containerd.grpc.v1.cri"]

  stream_server_address = "127.0.0.1"
  stream_server_port = "0"
  enable_selinux = false
  sandbox_image = "mymirror/k8s.gcr/pause:3.1"
  stats_collect_period = 10
  enable_tls_streaming = false
  max_container_log_line_size = 16384

  # 'plugins."io.containerd.grpc.v1.cri".containerd' contains config related to containerd
  [plugins."io.containerd.grpc.v1.cri".containerd]

    # snapshotter is the snapshotter used by containerd.
    snapshotter = "${SNAPSHOTTER}"

    # no_pivot disables pivot-root (linux only), required when running a container in a RamDisk with runc.
    # This only works for runtime type "io.containerd.runtime.v1.linux".
    no_pivot = false

    # default_runtime_name is the default runtime name to use.
    default_runtime_name = "${RUNTIME}"

    # 'plugins."io.containerd.grpc.v1.cri".containerd.runtimes' is a map from CRI RuntimeHandler strings, which specify types
    # of runtime configurations, to the matching configurations.
    # In this example, 'runc' is the RuntimeHandler string to match.
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      # runtime_type is the runtime type to use in containerd e.g. io.containerd.runtime.v1.linux
      runtime_type = "io.containerd.runc.v1"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia-container-runtime]
      # runtime_type is the runtime type to use in containerd e.g. io.containerd.runtime.v1.linux
      runtime_type = "io.containerd.runc.v1"

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia-container-runtime.options]
        BinaryName = "nvidia-container-runtime"

  # 'plugins."io.containerd.grpc.v1.cri".cni' contains config related to cni
  [plugins."io.containerd.grpc.v1.cri".cni]
    # bin_dir is the directory in which the binaries for the plugin is kept.
    bin_dir = "${SNAP_DATA}/opt/cni/bin"

    # conf_dir is the directory in which the admin places a CNI conf.
    conf_dir = "${SNAP_DATA}/args/cni-network"

  # 'plugins."io.containerd.grpc.v1.cri".registry' contains config related to the registry
  [plugins."io.containerd.grpc.v1.cri".registry]

    # 'plugins."io.containerd.grpc.v1.cri".registry.mirrors' are namespace to mirror mapping for all namespaces.
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        endpoint = ["https://mymirror/docker", ]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
        endpoint = ["https://mymirror/quay", ]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
        endpoint = ["https://mymirror/k8s.gcr", ]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
        endpoint = ["https://mymirror/gcr", ]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."*"]
        endpoint = ["https://mymirror/docker", ]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:32000"]
        endpoint = ["http://localhost:32000"]

I think I’ve seen before that changing docker.io doesn’t take effect. You could check with the containerd folks on how to configure it.
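For reference, newer containerd releases (1.5 and later; check yours with microk8s.ctr version) move registry configuration out of the inline mirrors table into per-registry hosts.toml files, which also covers docker.io. A sketch only; the certs.d path and mirror URL are placeholders:

```toml
# In containerd-template.toml, point the CRI plugin at a hosts.toml
# directory (this replaces the inline registry.mirrors table):
[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = "${SNAP_DATA}/args/certs.d"

# Then, in ${SNAP_DATA}/args/certs.d/docker.io/hosts.toml (one directory
# per upstream registry):
#
#   server = "https://registry-1.docker.io"
#
#   [host."https://mymirror/docker"]
#     capabilities = ["pull", "resolve"]
```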

Today I changed the log level to debug. My syslog now shows some requests to my mirror, so the configuration of my mirrors should be okay. But one entry in my syslog is a bit confusing:

Jul 16 14:53:52 mysystem microk8s.daemon-containerd[691]: time="2021-07-16T14:53:52.659835918+02:00" level=debug msg="fetch response received" host=mymirror.local response.header.connection=keep-alive response.header.content-length=162 response.header.content-type=text/html response.header.date="Fri, 16 Jul 2021 12:53:53 GMT" response.header.server=nginx response.status="404 Not Found" url="https://mymirror.local/docker/coredns/coredns/manifests/1.8.0?ns=docker.io"

Containerd gets an HTTP 404 as a result, but if I try

microk8s.ctr image pull https://mymirror.local/docker/coredns/coredns/manifests/1.8.0

it works. I noticed that the logged URL has the suffix “?ns=docker.io”. A second point is that my mirror server receives no requests from microk8s to pull the image, while a manual pull works fine.
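That “?ns=docker.io” suffix seems to be the key difference: when containerd pulls through a CRI-configured mirror, it apparently appends the source registry as an ns query parameter, while a direct ctr pull against the mirror sends no such parameter, so the mirror (nginx, in my case) has to tolerate that query string. The logged URL can be reconstructed like this (endpoint and image taken from my configuration):

```shell
ENDPOINT="https://mymirror.local/docker"   # mirror endpoint from containerd-template.toml
IMAGE="coredns/coredns"
TAG="1.8.0"

# The manifest URL containerd requests via the mirror; the ns parameter
# identifies the upstream registry the pull was originally aimed at:
echo "${ENDPOINT}/${IMAGE}/manifests/${TAG}?ns=docker.io"
# -> https://mymirror.local/docker/coredns/coredns/manifests/1.8.0?ns=docker.io
```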

Hi,

I still can’t make this work. I’ve edited /var/snap/microk8s/current/args/containerd-template.toml as follows, but when I stop and then start microk8s, it still tries to go to docker.io:

# Use config version 2 to enable new configuration fields.
version = 2
oom_score = 0

[grpc]
  uid = 0
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216

[debug]
  address = ""
  uid = 0
  gid = 0

[metrics]
  address = "127.0.0.1:1338"
  grpc_histogram = false

[cgroup]
  path = ""


# The 'plugins."io.containerd.grpc.v1.cri"' table contains all of the server options.
[plugins."io.containerd.grpc.v1.cri"]

  stream_server_address = "127.0.0.1"
  stream_server_port = "0"
  enable_selinux = false
  sandbox_image = "docker.intranet/pause:3.1"
  stats_collect_period = 10
  enable_tls_streaming = false
  max_container_log_line_size = 16384

  # 'plugins."io.containerd.grpc.v1.cri".containerd' contains config related to containerd
  [plugins."io.containerd.grpc.v1.cri".containerd]

    # snapshotter is the snapshotter used by containerd.
    snapshotter = "${SNAPSHOTTER}"

    # no_pivot disables pivot-root (linux only), required when running a container in a RamDisk with runc.
    # This only works for runtime type "io.containerd.runtime.v1.linux".
    no_pivot = false

    # default_runtime_name is the default runtime name to use.
    default_runtime_name = "${RUNTIME}"

    # 'plugins."io.containerd.grpc.v1.cri".containerd.runtimes' is a map from CRI RuntimeHandler strings, which specify types
    # of runtime configurations, to the matching configurations.
    # In this example, 'runc' is the RuntimeHandler string to match.
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      # runtime_type is the runtime type to use in containerd e.g. io.containerd.runtime.v1.linux
      runtime_type = "io.containerd.runc.v1"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia-container-runtime]
      # runtime_type is the runtime type to use in containerd e.g. io.containerd.runtime.v1.linux
      runtime_type = "io.containerd.runc.v1"

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia-container-runtime.options]
        BinaryName = "nvidia-container-runtime"

  # 'plugins."io.containerd.grpc.v1.cri".cni' contains config related to cni
  [plugins."io.containerd.grpc.v1.cri".cni]
    # bin_dir is the directory in which the binaries for the plugin is kept.
    bin_dir = "${SNAP_DATA}/opt/cni/bin"

    # conf_dir is the directory in which the admin places a CNI conf.
    conf_dir = "${SNAP_DATA}/args/cni-network"

  # 'plugins."io.containerd.grpc.v1.cri".registry' contains config related to the registry
  [plugins."io.containerd.grpc.v1.cri".registry]

    # 'plugins."io.containerd.grpc.v1.cri".registry.mirrors' are namespace to mirror mapping for all namespaces.
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        endpoint = ["https://docker.intranet", ]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:32000"]
        endpoint = ["http://localhost:32000"]

I see this in journalctl -xe:

Aug 20 13:00:39 myvm microk8s.daemon-kubelite[29592]: E0820 13:00:39.645270 29592 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"upgrade-ipam\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"docker.io/calico/cni:v3.13.2\\\": failed to copy: httpReaderSeeker: failed open: failed to do request: Get \\\"https://registry-1.docker.io/v2/calico/cni/blobs/sha256:a89faaa1676a9f32898aff8fc9882464d1f1d4f1471addb2b97847b4f2d4eab2\\\": dial tcp: lookup registry-1.docker.io on 10.1.1.4:53: no such host\"" pod="kube-system/calico-node-l6688" podUID=50285d19-dc2f-4533-ade7-6033ba83bf38