Nodes joined to master but not ready

Cluster information:

1 master and 2 worker nodes

Kubernetes version: v1.25.1
Cloud being used: bare-metal
Installation method: Ansible using kubeadm
Host OS: Rocky 8.7
CNI and version: Flannel 0.3.1
CRI and version: Containerd 1.6.20

I am pretty new to Kubernetes. I have built a cluster before in a cloud environment with internet access, but this is my first attempt in an air-gapped environment with no internet access. To work around that, I set up a private registry.
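In case it matters, here is roughly how I populated the registry. This is a sketch from memory; `registry.example.org:5000` is a placeholder for my actual registry hostname, and the transfer between networks happens via `docker save`/`docker load` on removable media:

```
# On an internet-connected staging machine: pull and retag for the private registry
docker pull registry.k8s.io/pause:3.6
docker tag registry.k8s.io/pause:3.6 registry.example.org:5000/pause:3.6
docker save -o pause-3.6.tar registry.example.org:5000/pause:3.6

# Inside the air-gapped network, after copying the tarball over:
docker load -i pause-3.6.tar
docker push registry.example.org:5000/pause:3.6
```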

I can join the nodes to the master without issue, but they never reach a Ready state. After some troubleshooting, I believe the problem is where the sandbox (pause) image is being pulled from: containerd is trying to pull it from the public registry.k8s.io, which is unreachable in my environment, while it needs to come from my private registry. How do I configure my nodes to pull the sandbox image from the private registry instead? The troubleshooting output I've gathered is below. It's also possible I'm going down the wrong path entirely; if so, set me straight! Thanks.
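For reference, the setting I think I need is containerd's `sandbox_image` option in `/etc/containerd/config.toml`; the snippet below is my best guess, with `registry.example.org:5000` standing in for my registry. Is this the right knob, and does it need to be changed on every node?

```
# /etc/containerd/config.toml (excerpt) -- point the pause/sandbox image
# at the private registry instead of registry.k8s.io
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.example.org:5000/pause:3.6"
```

My understanding is that containerd would then need a restart on each node (`systemctl restart containerd`) for this to take effect.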

[root@gsil-kube01 kubelet.service.d]# kubectl get nodes
NAME                        STATUS     ROLES           AGE    VERSION
gsil-kube01.idm.example.org   Ready      control-plane   34d    v1.25.1
gsil-kube02.idm.example.org   NotReady   <none>          3d2h   v1.25.1
gsil-kube03.idm.example.org   NotReady   <none>          3d2h   v1.25.1
[root@gsil-kube02 .kube]# kubectl get pod -o wide --all-namespaces
NAMESPACE      NAME                                                READY   STATUS              RESTARTS   AGE     IP           NODE                        NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-9q4bw                               0/1     Init:0/2            0          3d1h    x.x.8.34   gsil-kube03.idm.example.org   <none>           <none>
kube-flannel   kube-flannel-ds-r4znc                               1/1     Running             0          3d18h   x.x.8.32   gsil-kube01.idm.example.org   <none>           <none>
kube-flannel   kube-flannel-ds-v2xtq                               0/1     Init:0/2            0          3d2h    x.x.8.33   gsil-kube02.idm.example.org   <none>           <none>
kube-system    coredns-58789d7b88-gjf64                            1/1     Running             0          34d     10.244.0.2   gsil-kube01.idm.example.org   <none>           <none>
kube-system    coredns-58789d7b88-hf45c                            1/1     Running             0          34d     10.244.0.3   gsil-kube01.idm.example.org   <none>           <none>
kube-system    etcd-gsil-kube01.idm.example.org                      1/1     Running             2          34d     x.x.8.32   gsil-kube01.idm.example.org   <none>           <none>
kube-system    kube-apiserver-gsil-kube01.idm.example.org            1/1     Running             2          34d     x.x.8.32   gsil-kube01.idm.example.org   <none>           <none>
kube-system    kube-controller-manager-gsil-kube01.idm.example.org   1/1     Running             3          34d     x.x.8.32   gsil-kube01.idm.example.org   <none>           <none>
kube-system    kube-proxy-jps94                                    0/1     ContainerCreating   0          3d1h    x.x.8.34   gsil-kube03.idm.example.org   <none>           <none>
kube-system    kube-proxy-p2jnb                                    0/1     ContainerCreating   0          3d2h    x.x.8.33   gsil-kube02.idm.example.org   <none>           <none>
kube-system    kube-proxy-xjdsc                                    1/1     Running             0          34d     x.x.8.32   gsil-kube01.idm.example.org   <none>           <none>
kube-system    kube-scheduler-gsil-kube01.idm.example.org            1/1     Running             3          34d     x.x.8.32   gsil-kube01.idm.example.org   <none>           <none>

Here’s a sample of my kubelet logs:

-- Logs begin at Mon 2023-06-05 08:55:31 CDT, end at Mon 2023-06-05 09:30:00 CDT. --
Jun 05 09:28:27 gsil-kube02.idm.example.org systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit kubelet.service has finished starting up.
-- 
-- The start-up result is done.
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote'
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.254200   14271 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote'
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.262587   14271 server.go:413] "Kubelet version" kubeletVersion="v1.25.1"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.262611   14271 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.262820   14271 server.go:825] "Client rotation is on, will bootstrap in background"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.264763   14271 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.265837   14271 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.270855   14271 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.271185   14271 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.271244   14271 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.271271   14271 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.271283   14271 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.271323   14271 state_mem.go:36] "Initialized new in-memory state store"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.274034   14271 kubelet.go:381] "Attempting to sync node with API server"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.274052   14271 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.274077   14271 kubelet.go:281] "Adding apiserver pod source"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.274091   14271 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.274918   14271 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.21" apiVersion="v1"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.275387   14271 server.go:1175] "Started kubelet"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.276619   14271 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:27.277495   14271 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:27.277519   14271 kubelet.go:1317] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.277552   14271 volume_manager.go:293] "Starting Kubelet Volume Manager"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.277591   14271 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.277828   14271 server.go:155] "Starting to listen" address="0.0.0.0" port=10250
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.278803   14271 server.go:438] "Adding debug handlers to kubelet server"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:27.281552   14271 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.336835   14271 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.346575   14271 cpu_manager.go:213] "Starting CPU manager" policy="none"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.346594   14271 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.346613   14271 state_mem.go:36] "Initialized new in-memory state store"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.347165   14271 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.347186   14271 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.347197   14271 policy_none.go:49] "None policy: Start"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.347655   14271 memory_manager.go:168] "Starting memorymanager" policy="None"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.347686   14271 state_mem.go:35] "Initializing new in-memory state store"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.347905   14271 state_mem.go:75] "Updated machine memory state"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.357903   14271 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.358210   14271 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.378447   14271 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.244.1.0/24"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.379143   14271 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.1.0/24"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:27.379384   14271 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.380516   14271 kubelet_node_status.go:70] "Attempting to register node" node="gsil-kube02.idm.example.org"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.382311   14271 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.382336   14271 status_manager.go:161] "Starting to sync pod status with apiserver"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.382350   14271 kubelet.go:2010] "Starting kubelet main sync loop"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:27.382456   14271 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.387839   14271 kubelet_node_status.go:108] "Node was previously registered" node="gsil-kube02.idm.example.org"
Jun 05 09:28:27 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:27.387911   14271 kubelet_node_status.go:73] "Successfully registered node" node="gsil-kube02.idm.example.org"
Jun 05 09:28:28 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:28.274788   14271 apiserver.go:52] "Watching apiserver"
Jun 05 09:28:28 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:28.276608   14271 topology_manager.go:205] "Topology Admit Handler"
Jun 05 09:28:28 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:28.276700   14271 topology_manager.go:205] "Topology Admit Handler"
Jun 05 09:28:28 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:28.284076   14271 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d74320c-fed8-45fd-8e02-b24064ac45a1-xtables-lock\") pod \"kube-proxy-p2jnb\" (UID: \"0d74320c-fed8-45fd-8e02-b24064ac45a1\") " pod="kube-system/kube-proxy-p2jnb"
Jun 05 09:28:28 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:28.284110   14271 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/cccbf0ba-c10d-4442-bb71-4a4bace8f0ab-cni-plugin\") pod \"kube-flannel-ds-v2xtq\" (UID: \"cccbf0ba-c10d-4442-bb71-4a4bace8f0ab\") " pod="kube-flannel/kube-flannel-ds-v2xtq"
Jun 05 09:28:28 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:28.284132   14271 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/cccbf0ba-c10d-4442-bb71-4a4bace8f0ab-cni\") pod \"kube-flannel-ds-v2xtq\" (UID: \"cccbf0ba-c10d-4442-bb71-4a4bace8f0ab\") " pod="kube-flannel/kube-flannel-ds-v2xtq"
Jun 05 09:28:28 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:28.284154   14271 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgbcb\" (UniqueName: \"kubernetes.io/projected/cccbf0ba-c10d-4442-bb71-4a4bace8f0ab-kube-api-access-pgbcb\") pod \"kube-flannel-ds-v2xtq\" (UID: \"cccbf0ba-c10d-4442-bb71-4a4bace8f0ab\") " pod="kube-flannel/kube-flannel-ds-v2xtq"
Jun 05 09:28:28 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:28.284192   14271 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/cccbf0ba-c10d-4442-bb71-4a4bace8f0ab-flannel-cfg\") pod \"kube-flannel-ds-v2xtq\" (UID: \"cccbf0ba-c10d-4442-bb71-4a4bace8f0ab\") " pod="kube-flannel/kube-flannel-ds-v2xtq"
Jun 05 09:28:28 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:28.284229   14271 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cccbf0ba-c10d-4442-bb71-4a4bace8f0ab-xtables-lock\") pod \"kube-flannel-ds-v2xtq\" (UID: \"cccbf0ba-c10d-4442-bb71-4a4bace8f0ab\") " pod="kube-flannel/kube-flannel-ds-v2xtq"
Jun 05 09:28:28 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:28.284253   14271 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0d74320c-fed8-45fd-8e02-b24064ac45a1-kube-proxy\") pod \"kube-proxy-p2jnb\" (UID: \"0d74320c-fed8-45fd-8e02-b24064ac45a1\") " pod="kube-system/kube-proxy-p2jnb"
Jun 05 09:28:28 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:28.284273   14271 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d74320c-fed8-45fd-8e02-b24064ac45a1-lib-modules\") pod \"kube-proxy-p2jnb\" (UID: \"0d74320c-fed8-45fd-8e02-b24064ac45a1\") " pod="kube-system/kube-proxy-p2jnb"
Jun 05 09:28:28 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:28.284300   14271 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnqxb\" (UniqueName: \"kubernetes.io/projected/0d74320c-fed8-45fd-8e02-b24064ac45a1-kube-api-access-tnqxb\") pod \"kube-proxy-p2jnb\" (UID: \"0d74320c-fed8-45fd-8e02-b24064ac45a1\") " pod="kube-system/kube-proxy-p2jnb"
Jun 05 09:28:28 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:28.284336   14271 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cccbf0ba-c10d-4442-bb71-4a4bace8f0ab-run\") pod \"kube-flannel-ds-v2xtq\" (UID: \"cccbf0ba-c10d-4442-bb71-4a4bace8f0ab\") " pod="kube-flannel/kube-flannel-ds-v2xtq"
Jun 05 09:28:28 gsil-kube02.idm.example.org kubelet[14271]: I0605 09:28:28.284354   14271 reconciler.go:169] "Reconciler: start to sync state"
Jun 05 09:28:31 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:31.767169   14271 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.6\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: server misbehaving"
Jun 05 09:28:31 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:31.767229   14271 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.6\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: server misbehaving" pod="kube-flannel/kube-flannel-ds-v2xtq"
Jun 05 09:28:31 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:31.767251   14271 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.6\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: server misbehaving" pod="kube-flannel/kube-flannel-ds-v2xtq"
Jun 05 09:28:31 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:31.767310   14271 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-flannel-ds-v2xtq_kube-flannel(cccbf0ba-c10d-4442-bb71-4a4bace8f0ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-flannel-ds-v2xtq_kube-flannel(cccbf0ba-c10d-4442-bb71-4a4bace8f0ab)\\\": rpc error: code = Unknown desc = failed to get sandbox image \\\"registry.k8s.io/pause:3.6\\\": failed to pull image \\\"registry.k8s.io/pause:3.6\\\": failed to pull and unpack image \\\"registry.k8s.io/pause:3.6\\\": failed to resolve reference \\\"registry.k8s.io/pause:3.6\\\": failed to do request: Head \\\"https://registry.k8s.io/v2/pause/manifests/3.6\\\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: server misbehaving\"" pod="kube-flannel/kube-flannel-ds-v2xtq" podUID=cccbf0ba-c10d-4442-bb71-4a4bace8f0ab
Jun 05 09:28:31 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:31.768242   14271 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.6\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: server misbehaving"
Jun 05 09:28:31 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:31.768282   14271 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.6\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: server misbehaving" pod="kube-system/kube-proxy-p2jnb"
Jun 05 09:28:31 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:31.768313   14271 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.6\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: server misbehaving" pod="kube-system/kube-proxy-p2jnb"
Jun 05 09:28:31 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:31.768362   14271 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-p2jnb_kube-system(0d74320c-fed8-45fd-8e02-b24064ac45a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-p2jnb_kube-system(0d74320c-fed8-45fd-8e02-b24064ac45a1)\\\": rpc error: code = Unknown desc = failed to get sandbox image \\\"registry.k8s.io/pause:3.6\\\": failed to pull image \\\"registry.k8s.io/pause:3.6\\\": failed to pull and unpack image \\\"registry.k8s.io/pause:3.6\\\": failed to resolve reference \\\"registry.k8s.io/pause:3.6\\\": failed to do request: Head \\\"https://registry.k8s.io/v2/pause/manifests/3.6\\\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: server misbehaving\"" pod="kube-system/kube-proxy-p2jnb" podUID=0d74320c-fed8-45fd-8e02-b24064ac45a1
Jun 05 09:28:32 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:32.359401   14271 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 05 09:28:37 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:37.360454   14271 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 05 09:28:42 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:42.361949   14271 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 05 09:28:47 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:47.362432   14271 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 05 09:28:52 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:52.362953   14271 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 05 09:28:53 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:53.390190   14271 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.6\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: read udp x.x.8.33:34016->x.x.8.16:53: i/o timeout"
Jun 05 09:28:53 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:53.390238   14271 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.6\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: read udp x.x.8.33:34016->x.x.8.16:53: i/o timeout" pod="kube-flannel/kube-flannel-ds-v2xtq"
Jun 05 09:28:53 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:53.390281   14271 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.6\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: read udp x.x.8.33:34016->x.x.8.16:53: i/o timeout" pod="kube-flannel/kube-flannel-ds-v2xtq"
Jun 05 09:28:53 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:53.390339   14271 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-flannel-ds-v2xtq_kube-flannel(cccbf0ba-c10d-4442-bb71-4a4bace8f0ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-flannel-ds-v2xtq_kube-flannel(cccbf0ba-c10d-4442-bb71-4a4bace8f0ab)\\\": rpc error: code = Unknown desc = failed to get sandbox image \\\"registry.k8s.io/pause:3.6\\\": failed to pull image \\\"registry.k8s.io/pause:3.6\\\": failed to pull and unpack image \\\"registry.k8s.io/pause:3.6\\\": failed to resolve reference \\\"registry.k8s.io/pause:3.6\\\": failed to do request: Head \\\"https://registry.k8s.io/v2/pause/manifests/3.6\\\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: read udp x.x.8.33:34016->x.x.8.16:53: i/o timeout\"" pod="kube-flannel/kube-flannel-ds-v2xtq" podUID=cccbf0ba-c10d-4442-bb71-4a4bace8f0ab
Jun 05 09:28:53 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:53.391213   14271 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.6\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: read udp x.x.8.33:34016->x.x.8.16:53: i/o timeout"
Jun 05 09:28:53 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:53.391306   14271 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.6\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: read udp x.x.8.33:34016->x.x.8.16:53: i/o timeout" pod="kube-system/kube-proxy-p2jnb"
Jun 05 09:28:53 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:53.391337   14271 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.6\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: read udp x.x.8.33:34016->x.x.8.16:53: i/o timeout" pod="kube-system/kube-proxy-p2jnb"
Jun 05 09:28:53 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:53.391399   14271 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-p2jnb_kube-system(0d74320c-fed8-45fd-8e02-b24064ac45a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-p2jnb_kube-system(0d74320c-fed8-45fd-8e02-b24064ac45a1)\\\": rpc error: code = Unknown desc = failed to get sandbox image \\\"registry.k8s.io/pause:3.6\\\": failed to pull image \\\"registry.k8s.io/pause:3.6\\\": failed to pull and unpack image \\\"registry.k8s.io/pause:3.6\\\": failed to resolve reference \\\"registry.k8s.io/pause:3.6\\\": failed to do request: Head \\\"https://registry.k8s.io/v2/pause/manifests/3.6\\\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: read udp x.x.8.33:34016->x.x.8.16:53: i/o timeout\"" pod="kube-system/kube-proxy-p2jnb" podUID=0d74320c-fed8-45fd-8e02-b24064ac45a1
Jun 05 09:28:57 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:28:57.363552   14271 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 05 09:29:02 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:29:02.364990   14271 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 05 09:29:07 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:29:07.366307   14271 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 05 09:29:12 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:29:12.366781   14271 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 05 09:29:16 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:29:16.389503   14271 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.6\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: server misbehaving"
Jun 05 09:29:16 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:29:16.389557   14271 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.6\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: server misbehaving" pod="kube-system/kube-proxy-p2jnb"
Jun 05 09:29:16 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:29:16.389585   14271 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.6\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: server misbehaving" pod="kube-system/kube-proxy-p2jnb"
Jun 05 09:29:16 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:29:16.389641   14271 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-p2jnb_kube-system(0d74320c-fed8-45fd-8e02-b24064ac45a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-p2jnb_kube-system(0d74320c-fed8-45fd-8e02-b24064ac45a1)\\\": rpc error: code = Unknown desc = failed to get sandbox image \\\"registry.k8s.io/pause:3.6\\\": failed to pull image \\\"registry.k8s.io/pause:3.6\\\": failed to pull and unpack image \\\"registry.k8s.io/pause:3.6\\\": failed to resolve reference \\\"registry.k8s.io/pause:3.6\\\": failed to do request: Head \\\"https://registry.k8s.io/v2/pause/manifests/3.6\\\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: server misbehaving\"" pod="kube-system/kube-proxy-p2jnb" podUID=0d74320c-fed8-45fd-8e02-b24064ac45a1
Jun 05 09:29:16 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:29:16.390513   14271 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.6\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: server misbehaving"
Jun 05 09:29:16 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:29:16.390555   14271 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.6\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: server misbehaving" pod="kube-flannel/kube-flannel-ds-v2xtq"
Jun 05 09:29:16 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:29:16.390576   14271 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.6\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: server misbehaving" pod="kube-flannel/kube-flannel-ds-v2xtq"
Jun 05 09:29:16 gsil-kube02.idm.example.org kubelet[14271]: E0605 09:29:16.390621   14271 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-flannel-ds-v2xtq_kube-flannel(cccbf0ba-c10d-4442-bb71-4a4bace8f0ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-flannel-ds-v2xtq_kube-flannel(cccbf0ba-c10d-4442-bb71-4a4bace8f0ab)\\\": rpc error: code = Unknown desc = failed to get sandbox image \\\"registry.k8s.io/pause:3.6\\\": failed to pull image \\\"registry.k8s.io/pause:3.6\\\": failed to pull and unpack image \\\"registry.k8s.io/pause:3.6\\\": failed to resolve reference \\\"registry.k8s.io/pause:3.6\\\": failed to do request: Head \\\"https://registry.k8s.io/v2/pause/manifests/3.6\\\": dial tcp: lookup registry.k8s.io on x.x.8.16:53: server misbehaving\"" pod="kube-flannel/kube-flannel-ds-v2xtq" podUID=cccbf0ba-c10d-4442-bb71-4a4bace8f0ab
[the same "cni plugin not initialized" and registry.k8s.io/pause:3.6 pull errors repeat every few seconds for both the kube-proxy and kube-flannel pods]
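The logs point at the sandbox image: running `containerd config dump | grep sandbox_image` on the node shows the effective value. The redirect itself is a one-line change; below is a minimal, hedged sketch against a local copy of the file (the registry host/port and image tag are the ones from this post; verify before applying to the real /etc/containerd/config.toml):

```shell
# Work on a local copy; the real file lives at /etc/containerd/config.toml
CFG=./config.toml

# The NotReady nodes were using the upstream default:
echo 'sandbox_image = "registry.k8s.io/pause:3.6"' > "$CFG"

# Point it at the private registry instead (host/port from this post)
sed -i 's|sandbox_image = ".*"|sandbox_image = "gsil-docker1.idm.example.org:5001/k8s.gcr.io/pause:3.8"|' "$CFG"

# Confirm the change took effect
grep sandbox_image "$CFG"
```

After editing the real file, containerd must be restarted for the new sandbox image to take effect.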

Here is the config.toml that is working as expected on my master:

enabled_plugins = ["cri"]
 [plugins."io.containerd.grpc.vi.cri".containerd]
  endpoint = "unix:///var/run/containerd/containerd.sock"

disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_ca = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    device_ownership_from_security_context = false
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    enable_unprivileged_icmp = false
    enable_unprivileged_ports = false
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "gsil-docker1.idm.example.org:5001/k8s.gcr.io/pause:3.8"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
      ip_pref = ""
      max_conf_num = 1

    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      ignore_rdt_not_enabled_errors = false
      no_pivot = false
      snapshotter = "overlayfs"

      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          cni_conf_dir = ""
          cni_max_conf_num = 0
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_path = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = false

      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.internal.v1.tracing"]
    sampling_ratio = 1.0
    service_name = "containerd"

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
    sched_core = false

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.service.v1.tasks-service"]
    rdt_config_file = ""

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    discard_blocks = false
    fs_options = ""
    fs_type = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    root_path = ""
    upperdir_label = false

  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""

  [plugins."io.containerd.tracing.processor.v1.otlp"]
    endpoint = ""
    insecure = false
    protocol = ""

[proxy_plugins]

[stream_processors]

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.bolt.open" = "0s"
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0

It turns out my /etc/containerd/config.toml files didn't match across my systems. I also ran containerd config dump, which showed me that my nodes were still trying to use the pause 3.6 image. I took the following steps to correct the issue:

  1. correct config.toml
  2. stop containerd
  3. stop kubelet
  4. start kubelet
  5. start containerd
  6. rejoin the nodes. This time it worked and everything came up in a Ready status
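Since the root cause was config files that didn't match between the master and the nodes, a quick diff catches it early. A minimal sketch with stand-in copies (file names are illustrative; in practice, compare each node's /etc/containerd/config.toml against the master's):

```shell
# Stand-in copies holding the mismatched lines from this post
echo 'sandbox_image = "gsil-docker1.idm.example.org:5001/k8s.gcr.io/pause:3.8"' > master.toml
echo 'sandbox_image = "registry.k8s.io/pause:3.6"' > node.toml

# diff exits non-zero when the files differ
diff master.toml node.toml || echo "configs differ"
```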

Here is my corrected config.toml

enabled_plugins = ["cri"]

disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_ca = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    device_ownership_from_security_context = false
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    enable_unprivileged_icmp = false
    enable_unprivileged_ports = false
    endpoint = "unix:///var/run/containerd/containerd.sock"
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "gsil-docker1.idm.example.org:5001/k8s.gcr.io/pause:3.8"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
      ip_pref = ""
      max_conf_num = 1

    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      ignore_rdt_not_enabled_errors = false
      no_pivot = false
      snapshotter = "overlayfs"

      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          cni_conf_dir = ""
          cni_max_conf_num = 0
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_path = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = false

      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.internal.v1.tracing"]
    sampling_ratio = 1.0
    service_name = "containerd"

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
    sched_core = false

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.service.v1.tasks-service"]
    rdt_config_file = ""

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    discard_blocks = false
    fs_options = ""
    fs_type = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    root_path = ""
    upperdir_label = false

  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""

  [plugins."io.containerd.tracing.processor.v1.otlp"]
    endpoint = ""
    insecure = false
    protocol = ""

[proxy_plugins]

[stream_processors]

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.bolt.open" = "0s"
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0
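
Based on the empty `[plugins."io.containerd.grpc.v1.cri".registry]` section above (`config_path = ""`, no mirrors), I believe the fix would look something like the sketch below. Note that `registry.example.org:5000` is a stand-in for my private registry, not a value from my actual config, and the `pause:3.8` tag is what kubeadm expects for v1.25. In `/etc/containerd/config.toml` on each worker node:

```toml
[plugins."io.containerd.grpc.v1.cri"]
  # Pull the sandbox (pause) image from the private registry instead of registry.k8s.io
  sandbox_image = "registry.example.org:5000/pause:3.8"

[plugins."io.containerd.grpc.v1.cri".registry]
  # Delegate registry host config to drop-in files under certs.d
  config_path = "/etc/containerd/certs.d"
```

Then a `hosts.toml` per upstream registry redirects pulls to the mirror, e.g.:

```toml
# /etc/containerd/certs.d/registry.k8s.io/hosts.toml
server = "https://registry.k8s.io"

[host."https://registry.example.org:5000"]
  capabilities = ["pull", "resolve"]
```

followed by `systemctl restart containerd` on each node. Does this look like the right path, or is there a better way to do this with kubeadm?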