How to work with a private registry

Organisations often run their own private registry to assist collaboration and accelerate development. Kubernetes (and thus MicroK8s) needs to be aware of the registry endpoints before it can pull container images.

Insecure registry

Pushing from Docker

Let’s assume the private insecure registry is at 10.141.241.175 on port 32000. The images we build need to be tagged with the registry endpoint:

docker build . -t 10.141.241.175:32000/mynginx:registry

Pushing the mynginx image at this point will fail, because the local Docker daemon does not trust the private insecure registry. The Docker daemon used for building images must be configured to trust it. This is done by marking the registry endpoint in /etc/docker/daemon.json:

{
  "insecure-registries" : ["10.141.241.175:32000"]
}
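If you prefer to script this step, here is a minimal sketch. A scratch directory stands in for /etc/docker so the commands can be tried without root; note that writing the file wholesale would overwrite any existing daemon.json, so merge by hand if you already have one:

```shell
# Scratch stand-in for /etc/docker; in practice edit /etc/docker/daemon.json as root.
DOCKER_ETC="$(mktemp -d)"

# Mark the private registry as insecure so the daemon will talk plain HTTP to it.
cat > "$DOCKER_ETC/daemon.json" <<'EOF'
{
  "insecure-registries": ["10.141.241.175:32000"]
}
EOF

# Sanity-check the JSON before restarting the Docker daemon.
python3 -m json.tool "$DOCKER_ETC/daemon.json" >/dev/null && echo "daemon.json OK"
```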

Restart the Docker daemon on the host to load the new configuration:

sudo systemctl restart docker

Now running

docker push 10.141.241.175:32000/mynginx

…should succeed in uploading the image to the registry.

Configuring MicroK8s

Attempting to pull an image in MicroK8s at this point will result in an error like this:

  Warning  Failed             1s (x2 over 16s)  kubelet, jackal-vgn-fz11m  Failed to pull image "10.141.241.175:32000/mynginx:registry": rpc error: code = Unknown desc = failed to resolve image "10.141.241.175:32000/mynginx:registry": no available registry endpoint: failed to do request: Head https://10.141.241.175:32000/v2/mynginx/manifests/registry: http: server gave HTTP response to HTTPS client

For MicroK8s version 1.23 or newer

MicroK8s 1.23 and newer versions use separate hosts.toml files for each image registry. For registry http://10.141.241.175:32000, this would be at /var/snap/microk8s/current/args/certs.d/10.141.241.175:32000/hosts.toml. First, create the directory if it does not exist:

sudo mkdir -p /var/snap/microk8s/current/args/certs.d/10.141.241.175:32000
sudo touch /var/snap/microk8s/current/args/certs.d/10.141.241.175:32000/hosts.toml

Then, edit the file we just created and make sure the contents are as follows:

# /var/snap/microk8s/current/args/certs.d/10.141.241.175:32000/hosts.toml
server = "http://10.141.241.175:32000"

[host."http://10.141.241.175:32000"]
capabilities = ["pull", "resolve"]
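The directory and file creation above can be scripted in one go. This is only a sketch: it writes to a scratch directory standing in for /var/snap/microk8s/current/args/certs.d, so it can be tried without touching a real MicroK8s install:

```shell
# Scratch stand-in for /var/snap/microk8s/current/args/certs.d
CERTS_D="$(mktemp -d)"
REGISTRY="10.141.241.175:32000"

# One directory per registry, each holding its own hosts.toml.
mkdir -p "$CERTS_D/$REGISTRY"
cat > "$CERTS_D/$REGISTRY/hosts.toml" <<EOF
server = "http://$REGISTRY"

[host."http://$REGISTRY"]
capabilities = ["pull", "resolve"]
EOF
```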

Restart MicroK8s to have the new configuration loaded:

microk8s stop
microk8s start

For MicroK8s version 1.22 or older

We need to edit /var/snap/microk8s/current/args/containerd-template.toml and add the following under [plugins."io.containerd.grpc.v1.cri".registry.mirrors]:

[plugins."io.containerd.grpc.v1.cri".registry.mirrors."10.141.241.175:32000"]
endpoint = ["http://10.141.241.175:32000"]

See the full file here.
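As a sketch, the edit can be rehearsed on a copy of the file. The stand-in below contains only the bare mirrors table, which is assumed to already exist in the real containerd-template.toml (as it does by default), so nothing real is modified:

```shell
# Stand-in for /var/snap/microk8s/current/args/containerd-template.toml
TEMPLATE="$(mktemp)"
cat > "$TEMPLATE" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
EOF

# Append the mirror entry for the private registry under the mirrors table.
cat >> "$TEMPLATE" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."10.141.241.175:32000"]
endpoint = ["http://10.141.241.175:32000"]
EOF
```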

Restart MicroK8s to have the new configuration loaded:

microk8s stop
microk8s start

The image can now be deployed with:

microk8s kubectl create deployment nginx --image=10.141.241.175:32000/mynginx:registry

Note that the image is referenced with 10.141.241.175:32000/mynginx:registry.

Secure registry

There are many ways to set up a private secure registry, and each may slightly change the way you interact with it. Instead of diving into the specifics of each setup, we provide here two pointers on how you can approach the integration with Kubernetes.

  • In the official Kubernetes documentation a method is described for creating a secret from the Docker login credentials and using this to access the secure registry. To achieve this, imagePullSecrets is used as part of the container spec.

  • MicroK8s v1.14 and onwards uses containerd. As described here, users should be aware of the secure registry and the credentials needed to access it.

    It is possible to configure default credentials in the configuration of containerd, so that they are used automatically when pulling images from your private registry, without users having to specify an image pull secret manually for each container.

    To do this, you have to edit /var/snap/microk8s/current/args/containerd-template.toml. If the private registry at 10.141.241.175:32000 needs authentication with username my-secret-user and password my-safe-password, add the following section (the configuration is in TOML format, so indentation does not matter):

    # containerd-template.toml
    
    [plugins."io.containerd.grpc.v1.cri".registry.configs."10.141.241.175:32000".auth]
    username = "my-secret-user"
    password = "my-safe-password"
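To make the first bullet above concrete, here is a sketch of a pod spec that pulls via an image pull secret. The secret name regcred is a placeholder, and the create-secret command shown in the comment assumes the example registry and credentials from this page:

```shell
# The secret itself would be created on the cluster with something like:
#   microk8s kubectl create secret docker-registry regcred \
#     --docker-server=10.141.241.175:32000 \
#     --docker-username=my-secret-user --docker-password=my-safe-password
POD_SPEC="$(mktemp)"
cat > "$POD_SPEC" <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: mynginx
spec:
  containers:
  - name: mynginx
    image: 10.141.241.175:32000/mynginx:registry
  imagePullSecrets:
  - name: regcred
EOF
echo "wrote pod spec to $POD_SPEC"
```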
    

Configure registry mirrors

Under specific circumstances (e.g. geographical restrictions, network firewalls), certain image registries may not be available. For example, for Chinese mainland users k8s.gcr.io is not available, and a mirror such as registry.cn-hangzhou.aliyuncs.com/google_containers can be used instead.

In order to configure a registry mirror for registry.k8s.io and have it point to registry.cn-hangzhou.aliyuncs.com/google_containers, the following configuration is required:

# create a directory with the registry name
sudo mkdir -p /var/snap/microk8s/current/args/certs.d/registry.k8s.io

# create the hosts.toml file pointing to the mirror
echo '
server = "registry.k8s.io"

[host."https://registry.aliyuncs.com/v2/google_containers"]
  capabilities = ["pull", "resolve"]
  override_path = true
' | sudo tee -a /var/snap/microk8s/current/args/certs.d/registry.k8s.io/hosts.toml
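One caveat with the snippet above: tee -a appends, so running it twice leaves duplicate sections in hosts.toml. Here is a sketch that writes the same file in a scratch directory and checks that the mirror entry appears exactly once:

```shell
# Scratch stand-in for /var/snap/microk8s/current/args/certs.d
CERTS_D="$(mktemp -d)"
mkdir -p "$CERTS_D/registry.k8s.io"

# Plain tee (no -a) overwrites, so re-running stays idempotent.
tee "$CERTS_D/registry.k8s.io/hosts.toml" >/dev/null <<'EOF'
server = "registry.k8s.io"

[host."https://registry.aliyuncs.com/v2/google_containers"]
  capabilities = ["pull", "resolve"]
  override_path = true
EOF

# Guard against duplicated sections left behind by repeated appends.
MATCHES="$(grep -c 'registry.aliyuncs.com' "$CERTS_D/registry.k8s.io/hosts.toml")"
test "$MATCHES" -eq 1 && echo "hosts.toml looks sane"
```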

Changes to hosts.toml files should take effect immediately, without restarting containerd. If they do not, restart MicroK8s:

sudo snap restart microk8s

Using a custom CA

For internal registries where TLS with a custom CA is used (e.g. in enterprise environments), containerd will fail to fetch images unless the CA is explicitly specified.

In our previous example, if the registry was instead at https://10.141.241.175:32000, the configuration should be changed to the following:

# /var/snap/microk8s/current/args/certs.d/10.141.241.175:32000/hosts.toml
server = "https://10.141.241.175:32000"

[host."https://10.141.241.175:32000"]
capabilities = ["pull", "resolve"]
ca = "/var/snap/microk8s/current/args/certs.d/10.141.241.175:32000/ca.crt"

Also make sure to add the CA certificate under /var/snap/microk8s/current/args/certs.d/10.141.241.175:32000/ca.crt:

# /var/snap/microk8s/current/args/certs.d/10.141.241.175:32000/ca.crt
-----BEGIN CERTIFICATE-----
.....
-----END CERTIFICATE-----
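How you obtain ca.crt depends on who operates the registry; if it is not at hand, the served certificate chain can usually be inspected with openssl s_client (shown as a comment below). As a self-contained sketch, the following generates a throwaway certificate and drops it where the hosts.toml above expects it, again under a scratch directory:

```shell
# In practice, obtain the real CA from the registry operator, or inspect it with:
#   openssl s_client -showcerts -connect 10.141.241.175:32000 </dev/null
CERTS_DIR="$(mktemp -d)/10.141.241.175:32000"
mkdir -p "$CERTS_DIR"

# Throwaway self-signed certificate standing in for the registry's real CA.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example-registry-ca" \
  -keyout "$CERTS_DIR/ca.key" -out "$CERTS_DIR/ca.crt" 2>/dev/null

# PEM files start with the BEGIN CERTIFICATE marker.
head -n 1 "$CERTS_DIR/ca.crt"
```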

I am not following " Secure registry" section.

  • first bullet says official kubernetes does it this way.
    • How does this relate to microk8s’ kubernetes approach?
    • Can I follow instructions for any kubernetes when using microk8s? I tried and it did not work.
  • second bullet says internally microk8s uses containerd. Not sure how microk8s uses containerd.
    • how does dockerhub.com relate to the label plugin."io.containerd.grpc.v1.cri".registry.configs."gcr.io".auth?

I want to use public and private images stored in dockerhub. Is this a common use case?
If so, what do I need to configure in microk8s?

Hi @John_Grabner

In /var/snap/microk8s/current/args/containerd-template.toml you will have to add the registry you want to use. Have a look at [1] for entering the registry credentials. You will also want to run microk8s stop; microk8s start after editing the containerd-template.toml file.

[1] https://github.com/containerd/cri/blob/master/docs/registry.md#configure-registry-credentials

This solution doesn't work for me. I think this might be caused by the disabled "cri" plugin in /etc/containerd/.

even after adding these additional config options:

[plugin."io.containerd.grpc.v1.cri".registry.configs]
[plugin."io.containerd.grpc.v1.cri".registry.configs."custom.registry.com:8091".auth]
auth = "…"

I've solved the problem - this configuration has a bug - there should be:
[plugins."io.containerd.grpc.v1.cri".registry.configs]

The syntax for containerd config changed after 1.3 but yes, it should now be plugins. i don’t think the containerd doc is particularly clear on this.

Should it be possible to bootstrap a node doing this? My VM does not have access to docker or gcr.io; everything goes through a Nexus. But so far I have failed to bootstrap the node.

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://nexus.hub.cu", ]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
    endpoint = ["https://nexus.hub.cu", ]

I don’t think you need the https. Also I think you need a config section. I’ll follow up with an example later when I am at a computer.

Thank you, looking forward to it.

Looks like the https might be necessary. The 2nd part to this is to set insecure_skip_verify though. It’s in this section of the containerd github.

[plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.somesite.tld"]
    endpoint = ["http://registry.somesite.tld"]
[plugins."io.containerd.grpc.v1.cri".registry.configs."registry.somesite.tld".tls]
  insecure_skip_verify = true

No luck, this is how it looks.

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://nexus.hub.cu", ]
  [plugins."io.containerd.grpc.v1.cri".registry.configs]
    [plugins."io.containerd.grpc.v1.cri".registry.configs."docker.io".tls]
      insecure_skip_verify = true
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
    endpoint = ["https://nexus.hub.cu", ]
  [plugins."io.containerd.grpc.v1.cri".registry.configs]
    [plugins."io.containerd.grpc.v1.cri".registry.configs."gcr.io".tls]
      insecure_skip_verify = true

sorry @protosam there is a reply limit, so answering your question here:
nope, gcr.io and docker.io are both only reachable through the nexus.
Currently on my phone. Though I think gcr.io != nexus.hub.cu?

I think you probably need something like this.

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."nexus.hub.cu"]
    endpoint = ["https://nexus.hub.cu", ]
  [plugins."io.containerd.grpc.v1.cri".registry.configs]
    [plugins."io.containerd.grpc.v1.cri".registry.configs."nexus.hub.cu".tls]
      insecure_skip_verify = true
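One way to catch the duplicate-table mistake from the earlier paste is to count the bare registry.configs headers before restarting; containerd rejects a TOML file that declares the same table twice. A rough sketch, with the snippet above written to a temp file:

```shell
FRAGMENT="$(mktemp)"
cat > "$FRAGMENT" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."nexus.hub.cu"]
    endpoint = ["https://nexus.hub.cu", ]
  [plugins."io.containerd.grpc.v1.cri".registry.configs]
    [plugins."io.containerd.grpc.v1.cri".registry.configs."nexus.hub.cu".tls]
      insecure_skip_verify = true
EOF

# The bare ...registry.configs] table may be declared at most once per file.
COUNT="$(grep -c 'registry.configs\]' "$FRAGMENT")"
test "$COUNT" -eq 1 && echo "no duplicate configs table"
```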

Just mentioning this here as I don’t know where else to add it, and it isn’t mentioned anywhere yet: instead of k8s.gcr.io, this should now be registry.k8s.io.


Hello,

I am struggling to pull images from a private Harbor registry which is running in a container on my MicroK8s cluster (6 nodes).

Hardware: 6 Raspberry Pis
OS: Ubuntu 23.04 (lunar)
MicroK8s: 1.26.4

Here are my relevant config files:

#/var/snap/microk8s/current/args/containerd-template.toml:

# Use config version 2 to enable new configuration fields.

version = 2
oom_score = 0

[grpc]
uid = 0
gid = 0
max_recv_message_size = 16777216
max_send_message_size = 16777216

[debug]
address = ""
uid = 0
gid = 0

[metrics]
address = "127.0.0.1:1338"
grpc_histogram = false

[cgroup]
path = ""

# The 'plugins."io.containerd.grpc.v1.cri"' table contains all of the server options.

[plugins."io.containerd.grpc.v1.cri"]

stream_server_address = "127.0.0.1"
stream_server_port = "0"
enable_selinux = false
sandbox_image = "registry.k8s.io/pause:3.7"
stats_collect_period = 10
enable_tls_streaming = false
max_container_log_line_size = 16384

# 'plugins."io.containerd.grpc.v1.cri".containerd' contains config related to containerd

[plugins."io.containerd.grpc.v1.cri".containerd]

# snapshotter is the snapshotter used by containerd.
snapshotter = "${SNAPSHOTTER}"

# no_pivot disables pivot-root (linux only), required when running a container in a RamDisk with runc.
# This only works for runtime type "io.containerd.runtime.v1.linux".
no_pivot = false

# default_runtime_name is the default runtime name to use.
default_runtime_name = "${RUNTIME}"

# 'plugins."io.containerd.grpc.v1.cri".containerd.runtimes' is a map from CRI RuntimeHandler strings, which specify types
# of runtime configurations, to the matching configurations.
# In this example, 'runc' is the RuntimeHandler string to match.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  # runtime_type is the runtime type to use in containerd e.g. io.containerd.runtime.v1.linux
  runtime_type = "${RUNTIME_TYPE}"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia-container-runtime]
  # runtime_type is the runtime type to use in containerd e.g. io.containerd.runtime.v1.linux
  runtime_type = "${RUNTIME_TYPE}"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia-container-runtime.options]
    BinaryName = "nvidia-container-runtime"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
runtime_type = "io.containerd.kata.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata.options]
BinaryName = "kata-runtime"

# 'plugins."io.containerd.grpc.v1.cri".cni' contains config related to cni

[plugins."io.containerd.grpc.v1.cri".cni]
# bin_dir is the directory in which the binaries for the plugin is kept.
bin_dir = "${SNAP_DATA}/opt/cni/bin"

# conf_dir is the directory in which the admin places a CNI conf.
conf_dir = "${SNAP_DATA}/args/cni-network"

# 'plugins."io.containerd.grpc.v1.cri".registry' contains config related to the registry

[plugins."io.containerd.grpc.v1.cri".registry]
config_path = "${SNAP_DATA}/args/certs.d"

[plugins."io.containerd.grpc.v1.cri".registry.configs."core.harbor.alldcs.nl".auth]
username = "xxxxxxxxx"
password = "xxxxxxxxx"

#/var/snap/microk8s/current/args/certs.d/core.harbor.alldcs.nl/hosts.toml

server = "https://core.harbor.alldcs.nl"

[host."https://core.harbor.alldcs.nl"]
capabilities = ["pull", "resolve"]
ca = "/var/snap/microk8s/current/args/certs.d/core.harbor.alldcs.nl/ca.crt"

###################################################

When I try pulling an image from Harbor I still get the error:

kubectl run openliberty --image=core.harbor.alldcs.nl/library/openliberty:arm64v8
pod/openliberty created

ubuntu@pisvrwsv01:~/containers/kubernetes$ kubectl describe pod openliberty
Name: openliberty
Namespace: default
Priority: 0
Service Account: default
Node: pisvrwsv04/192.168.40.104
Start Time: Wed, 17 May 2023 13:32:26 +0200
Labels: run=openliberty
Annotations: cni.projectcalico.org/containerID: f2e3510fc4cb80a1a71ab75136adbd908ec0cd9bc8f4b3ac7c38d89f84147772
cni.projectcalico.org/podIP: 10.1.172.99/32
cni.projectcalico.org/podIPs: 10.1.172.99/32
Status: Pending
IP: 10.1.172.99
IPs:
IP: 10.1.172.99
Containers:
openliberty:
Container ID:
Image: core.harbor.alldcs.nl/library/openliberty:arm64v8
Image ID:
Port:
Host Port:
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tg2fw (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-tg2fw:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message


Normal Scheduled 28s default-scheduler Successfully assigned default/openliberty to pisvrwsv04
Normal Pulling 14s (x2 over 26s) kubelet Pulling image “core.harbor.alldcs.nl/library/openliberty:arm64v8”
Warning Failed 14s (x2 over 25s) kubelet Failed to pull image “core.harbor.alldcs.nl/library/openliberty:arm64v8”: rpc error: code = Unknown desc = failed to pull and unpack image “core.harbor.alldcs.nl/library/openliberty:arm64v8”: failed to resolve reference “core.harbor.alldcs.nl/library/openliberty:arm64v8”: failed to do request: Head “https://core.harbor.alldcs.nl/v2/library/openliberty/manifests/arm64v8”: x509: certificate signed by unknown authority (possibly because of “crypto/rsa: verification error” while trying to verify candidate authority certificate “harbor-ca”)
Warning Failed 14s (x2 over 25s) kubelet Error: ErrImagePull
Normal BackOff 3s (x2 over 25s) kubelet Back-off pulling image “core.harbor.alldcs.nl/library/openliberty:arm64v8”
Warning Failed 3s (x2 over 25s) kubelet Error: ImagePullBackOff

Does anybody have any tips for me?

Are there instructions on how to do this right?

Help appreciated!