Weave ImagePullBackOff timeout

Hello everyone,
When I pull an image with sudo I get a timeout error, but without sudo I can pull the image fine. On the other hand, kubectl commands don't work without sudo. So I don't know how to get past this sudo issue so the image pulls normally.

I hit this error when trying to run the weave-net DaemonSet, which shows an image pull error.

[root@kubcongadm ~]# sudo ctr i pull docker.io/weaveworks/weave-kube:latest
docker.io/weaveworks/weave-kube:latest: resolving      |--------------------------------------|
elapsed: 29.9s                          total:   0.0 B (0.0 B/s)
INFO[0030] trying next host                              error="failed to do request: Head \"https://registry-1.docker.io/v2/weaveworks/weave-kube/manifests/latest\": dial tcp 34.205.13.154:443: i/o timeout" host=registry-1.docker.io
ERRO[0030] active check failed                           error="context canceled"
ctr: failed to resolve reference "docker.io/weaveworks/weave-kube:latest": failed to do request: Head "https://registry-1.docker.io/v2/weaveworks/weave-kube/manifests/latest": dial tcp 34.205.13.154:443: i/o timeout
 
[root@kubcongadm ~]# ctr i pull docker.io/weaveworks/weave-kube:latest
docker.io/weaveworks/weave-kube:latest:                                           resolved       |++++++++++++++++++++++++++++++++++++++|
index-sha256:35827a9c549c095f0e9d1cf8b35d8f27ae2c76e31bc6f7f3c0bc95911d5accea:    exists         |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:c7e98ecdcaba3b116013e12e0cdfd9b28f5247fe6492bb85d04852b1896a7158: exists         |++++++++++++++++++++++++++++++++++++++|
layer-sha256:1df68628584ee3a72ff74c60f030893de92194f4582668a84583333b2f62bfd2:    done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:62fea85d605224a5222af10d8bf06670304985271610a7844fa5f17d92de69b5:   done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:21c83c5242199776c232920ddb58cfa2a46b17e42ed831ca9001c8dbc532d22d:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:02ec35b6f6277d197e41cd0912dc3cdbef3f56f8d53dcc6a6689fe6b8067b882:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:c40f141adde90ffd6c439914c3c879af0e8f5d250567c68db80fc92dc1ee3146:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:a63db11be47654e5fbb8f7f9f484c792461c4ebce67af3c87f270ecd061bc4f5:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:e8d3a1b4fb091e9523f94f086a68570861af6381e043d4b68eb86ee78cfcc7ea:    done           |++++++++++++++++++++++++++++++++++++++|
elapsed: 1.0 s                                                                    total:   0.0 B (0.0 B/s)
unpacking linux/amd64 sha256:35827a9c549c095f0e9d1cf8b35d8f27ae2c76e31bc6f7f3c0bc95911d5accea...
done: 14.122999ms

kubectl describe output for the weave pod:

Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  5m11s                  default-scheduler  Successfully assigned kube-system/weave-net-n6llx to kubcongadm
  Warning  Failed     4m41s                  kubelet            Failed to pull image "weaveworks/weave-kube:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/weaveworks/weave-kube:latest": failed to resolve reference "docker.io/weaveworks/weave-kube:latest": failed to do request: Head "https://registry-1.docker.io/v2/weaveworks/weave-kube/manifests/latest": dial tcp 44.205.64.79:443: i/o timeout
  Warning  Failed     3m59s                  kubelet            Failed to pull image "weaveworks/weave-kube:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/weaveworks/weave-kube:latest": failed to resolve reference "docker.io/weaveworks/weave-kube:latest": failed to do request: Head "https://registry-1.docker.io/v2/weaveworks/weave-kube/manifests/latest": dial tcp 3.216.34.172:443: i/o timeout
  Warning  Failed     3m3s                   kubelet            Failed to pull image "weaveworks/weave-kube:latest": rpc error: code = DeadlineExceeded desc = failed to pull and unpack image "docker.io/weaveworks/weave-kube:latest": failed to resolve reference "docker.io/weaveworks/weave-kube:latest": failed to do request: Head "https://registry-1.docker.io/v2/weaveworks/weave-kube/manifests/latest": dial tcp 44.205.64.79:443: i/o timeout
  Normal   Pulling    2m15s (x4 over 5m11s)  kubelet            Pulling image "weaveworks/weave-kube:latest"
  Warning  Failed     105s (x4 over 4m41s)   kubelet            Error: ErrImagePull
  Warning  Failed     105s                   kubelet            Failed to pull image "weaveworks/weave-kube:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/weaveworks/weave-kube:latest": failed to resolve reference "docker.io/weaveworks/weave-kube:latest": failed to do request: Head "https://registry-1.docker.io/v2/weaveworks/weave-kube/manifests/latest": dial tcp 34.205.13.154:443: i/o timeout
  Warning  Failed     92s (x6 over 4m40s)    kubelet            Error: ImagePullBackOff
  Normal   BackOff    79s (x7 over 4m40s)    kubelet            Back-off pulling image "weaveworks/weave-kube:latest"

I think, given how your ctr is configured, it's probably using an instance of Docker outside the microk8s containerd.

Thanks balchua1 for your reply.

I'm using containerd as the container runtime for Kubernetes.

config.toml:
[root@kubcongadm containerd]# cat config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

systemctl status containerd shows it active:
containerd.service - containerd container runtime
   Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2023-11-10 00:45:39 +01; 8min ago
     Docs: https://containerd.io
  Process: 70058 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
 Main PID: 70059 (containerd)
    Tasks: 124
   Memory: 1.0G
   CGroup: /system.slice/containerd.service
           ├─ 2742 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 369f445da5755d8d77bfb1f0967ac74c02d3fcd233a764a598b2cc1368cc8f5e -address /run/co>
           ├─ 2743 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id ce05892619228e7714f709bbac107d1b56b0837db1ab4c143584454229814b26 -address /run/co>
           ├─ 2745 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 432a724d267b739df6f68d0f999c6fef95fb65c6126e2956727bf170e34bc712 -address /run/co>
           ├─ 2784 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 99f27c0fc26c3850ba381b294b63937b4201ada84d5e3c08b77f144f68e7ab85 -address /run/co>
           ├─ 3045 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 19a9201a1ff0ff3e830fa5615fbc536a49192475755cc1899dc4d1249985349e -address /run/co>
           ├─70059 /usr/bin/containerd
           ├─kubepods-besteffort-pod63482a2d_417d_4474_aa4a_f5e3211e81d5.slice:cri-containerd:1289ac9d4bce0a0f97a2dccf5d1b34d52149ccaf13f15e759110f32c1ebe4>
           │ └─3094 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=kubcongadm
           ├─kubepods-besteffort-pod63482a2d_417d_4474_aa4a_f5e3211e81d5.slice:cri-containerd:19a9201a1ff0ff3e830fa5615fbc536a49192475755cc1899dc4d12499853>
           │ └─3066 /pause
           ├─kubepods-burstable-pod0999cc702176d87e11cd27437f1aad5d.slice:cri-containerd:10ee0ddbb0415c7d42590dbc3f5a4e91438264716f3648bd4e26b71c18750e70
           │ └─2947 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --b>
           ├─kubepods-burstable-pod0999cc702176d87e11cd27437f1aad5d.slice:cri-containerd:99f27c0fc26c3850ba381b294b63937b4201ada84d5e3c08b77f144f68e7ab85
           │ └─2849 /pause
           ├─kubepods-burstable-pod75374fab9a1466e174560c16497220cc.slice:cri-containerd:369f445da5755d8d77bfb1f0967ac74c02d3fcd233a764a598b2cc1368cc8f5e
           │ └─2842 /pause
           ├─kubepods-burstable-pod75374fab9a1466e174560c16497220cc.slice:cri-containerd:5baf83dc728a992128eaaf3148d0be865d52f9c725b817d019c542ed7459dbfa
           │ └─2956 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization->
           ├─kubepods-burstable-pod9f743bbe88720aca3ed4294f6754a857.slice:cri-containerd:20599d15b5885505cc81d853f4f27dfae4851c4251f01bcadcd2f2c050739cc0
           │ └─2967 etcd --advertise-client-urls=https://10.128.242.13:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data->
           ├─kubepods-burstable-pod9f743bbe88720aca3ed4294f6754a857.slice:cri-containerd:ce05892619228e7714f709bbac107d1b56b0837db1ab4c143584454229814b26
           │ └─2828 /pause
           ├─kubepods-burstable-podb4579a6f0f90a619da3eea41d4ee0c99.slice:cri-containerd:432a724d267b739df6f68d0f999c6fef95fb65c6126e2956727bf170e34bc712
           │ └─2832 /pause
           └─kubepods-burstable-podb4579a6f0f90a619da3eea41d4ee0c99.slice:cri-containerd:f8d8fee802f184d7e3350f2c54abf7da25c330a488b9c62cd6c7ea58dd5ab553
             └─2958 kube-apiserver --advertise-address=10.128.242.13 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernete>

Nov 10 00:45:39 kubcongadm containerd[70059]: time="2023-11-10T00:45:39.805705354+01:00" level=info msg="Start subscribing containerd event"
Nov 10 00:45:39 kubcongadm containerd[70059]: time="2023-11-10T00:45:39.805788562+01:00" level=info msg="Start recovering state"
Nov 10 00:45:39 kubcongadm containerd[70059]: time="2023-11-10T00:45:39.805914793+01:00" level=info msg=serving... address=/run/containerd/containerd.sock.>
Nov 10 00:45:39 kubcongadm containerd[70059]: time="2023-11-10T00:45:39.806020762+01:00" level=info msg=serving... address=/run/containerd/containerd.sock

Is there any config I can share with you to check? Thanks in advance.

To be sure, you can use microk8s ctr . . . to get the image.

With sudo it doesn't work, and without sudo it's OK:

[root@kubcongadm ~]# sudo ctr i  pull docker.io/weaveworks/weave-kube:latest
docker.io/weaveworks/weave-kube:latest: resolving      |--------------------------------------|
elapsed: 29.9s                          total:   0.0 B (0.0 B/s)
INFO[0030] trying next host                              error="failed to do request: Head \"https://registry-1.docker.io/v2/weaveworks/weave-kube/manifests/latest\": dial tcp 44.205.64.79:443: i/o timeout" host=registry-1.docker.io
ERRO[0030] active check failed                           error="context canceled"
ctr: failed to resolve reference "docker.io/weaveworks/weave-kube:latest": failed to do request: Head "https://registry-1.docker.io/v2/weaveworks/weave-kube/manifests/latest": dial tcp 44.205.64.79:443: i/o timeout
[root@kubcongadm ~]# ctr i  pull docker.io/weaveworks/weave-kube:latest
docker.io/weaveworks/weave-kube:latest:                                           resolved       |++++++++++++++++++++++++++++++++++++++|
index-sha256:35827a9c549c095f0e9d1cf8b35d8f27ae2c76e31bc6f7f3c0bc95911d5accea:    exists         |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:c7e98ecdcaba3b116013e12e0cdfd9b28f5247fe6492bb85d04852b1896a7158: exists         |++++++++++++++++++++++++++++++++++++++|
layer-sha256:1df68628584ee3a72ff74c60f030893de92194f4582668a84583333b2f62bfd2:    done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:62fea85d605224a5222af10d8bf06670304985271610a7844fa5f17d92de69b5:   done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:21c83c5242199776c232920ddb58cfa2a46b17e42ed831ca9001c8dbc532d22d:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:02ec35b6f6277d197e41cd0912dc3cdbef3f56f8d53dcc6a6689fe6b8067b882:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:c40f141adde90ffd6c439914c3c879af0e8f5d250567c68db80fc92dc1ee3146:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:a63db11be47654e5fbb8f7f9f484c792461c4ebce67af3c87f270ecd061bc4f5:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:e8d3a1b4fb091e9523f94f086a68570861af6381e043d4b68eb86ee78cfcc7ea:    done           |++++++++++++++++++++++++++++++++++++++|
elapsed: 1.4 s                                                                    total:   0.0 B (0.0 B/s)
unpacking linux/amd64 sha256:35827a9c549c095f0e9d1cf8b35d8f27ae2c76e31bc6f7f3c0bc95911d5accea...
done: 13.750179ms

Strange, do you happen to define proxies? Like environment variables such as HTTPS_PROXY, HTTP_PROXY and NO_PROXY, or their lowercase equivalents?

The lowercase ones: http_proxy and https_proxy.
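That would match the symptom: by default sudo resets the environment (the env_reset option in sudoers), so lowercase http_proxy/https_proxy variables set in your user shell never reach a command run under sudo, and the pull then tries to reach registry-1.docker.io directly and times out. A small sketch of the effect, using `env -i` to simulate a scrubbed environment (the proxy URL is a made-up placeholder):

```shell
# Set a proxy variable in the current shell (placeholder value).
export http_proxy="http://proxy.example:3128"

# A normal subshell inherits it:
bash -c 'echo "inherited: ${http_proxy:-unset}"'

# 'env -i' starts from an empty environment, similar to what
# sudo's env_reset does by default -- the variable is gone:
env -i bash -c 'echo "scrubbed: ${http_proxy:-unset}"'
```

You can check this on the box itself by comparing `env | grep -i proxy` with `sudo env | grep -i proxy`.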

Perhaps this document can help you.

https://microk8s.io/docs/install-proxy
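Since this looks like a kubeadm cluster rather than microk8s, the usual equivalent is to give the proxy variables to the containerd service itself via a systemd drop-in: kubelet delegates image pulls to containerd, which runs under systemd and never sees your shell's variables. A sketch, where the proxy address is a placeholder and the NO_PROXY CIDRs are assumed typical service/pod ranges that you should replace with your cluster's:

```
# /etc/systemd/system/containerd.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example:3128"
Environment="HTTPS_PROXY=http://proxy.example:3128"
Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,10.244.0.0/16"
```

Then reload and restart: `sudo systemctl daemon-reload && sudo systemctl restart containerd`.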