Kubeadm init: everything crashes after repeated CrashLoopBackOffs

What keywords did you search in kubeadm issues before filing this one?

CrashLoopBackOff right after kubeadm init
rpc error: code = Unknown desc = malformed header: missing HTTP content-type

Is this a BUG REPORT or FEATURE REQUEST?

Bug report

Versions

  • kubeadm version:

kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:55:36Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}

  • kubernetes version:

WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:57:26Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7

  • hardware: Core i3, 64 GB RAM → tried on a physical machine and in an LXD nested container with no restrictions: same results

  • filesystem: btrfs

  • os: Ubuntu 22.04.1

  • kernel: Linux kube1 5.15.0-50-generic #56-Ubuntu SMP Tue Sep 20 13:23:26 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

  • container runtime: containerd

  • Container networking plugin (CNI) (e.g. Calico, Cilium):

Calico: but things are even worse when it is installed, so I am reporting an installation without it

What happened?

I have tried to install dozens of times; it always ends the same way.

Tried 1.25.1, 1.25.2, 1.25.3: same results.

Some pods go into CrashLoopBackOff and then everything crashes. Below are several snapshots of the watch output:

Every 10.0s: echo; kubectl cluster-info; echo; kubectl get all --all-namespaces                                                                                                     kube1: Sun Oct 23 21:26:47 2022


Kubernetes control plane is running at https://172.16.99.56:6443
CoreDNS is running at https://172.16.99.56:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   pod/kube-apiserver-kube1   0/1     Pending   0          0s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  38s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   35s

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kube-proxy   0         0         0       0            0           kubernetes.io/os=linux   35s

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   0/2     0            0           35s

Every 10.0s: echo; kubectl cluster-info; echo; kubectl get all --all-namespaces                                                                                                     kube1: Sun Oct 23 21:27:19 2022


Kubernetes control plane is running at https://172.16.99.56:6443
CoreDNS is running at https://172.16.99.56:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

NAMESPACE     NAME                           READY   STATUS    RESTARTS      AGE
kube-system   pod/coredns-565d847f94-jnzxv   0/1     Pending   0             1s
kube-system   pod/coredns-565d847f94-qw55b   0/1     Pending   0             1s
kube-system   pod/kube-apiserver-kube1       1/1     Running   1 (35s ago)   30s
kube-system   pod/kube-proxy-lfjmh           1/1     Running   0             1s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  68s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   65s

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   65s

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   0/2     2            0           65s

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-565d847f94   2         2         0       1s



Every 10.0s: echo; kubectl cluster-info; echo; kubectl get all --all-namespaces                                                                                                     kube1: Sun Oct 23 21:28:00 2022


Kubernetes control plane is running at https://172.16.99.56:6443
CoreDNS is running at https://172.16.99.56:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

NAMESPACE     NAME                                READY   STATUS             RESTARTS      AGE
kube-system   pod/coredns-565d847f94-jnzxv        0/1     Pending            0             42s
kube-system   pod/coredns-565d847f94-qw55b        0/1     Pending            0             42s
kube-system   pod/etcd-kube1                      0/1     Pending            0             3s
kube-system   pod/kube-apiserver-kube1            1/1     Running            1 (76s ago)   71s
kube-system   pod/kube-controller-manager-kube1   0/1     CrashLoopBackOff   1 (22s ago)   22s
kube-system   pod/kube-proxy-lfjmh                1/1     Running            1 (41s ago)   42s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  109s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   106s

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   106s

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   0/2     2            0           106s

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-565d847f94   2         2         0       42s


Every 10.0s: echo; kubectl cluster-info; echo; kubectl get all --all-namespaces                                                                                                     kube1: Sun Oct 23 21:29:47 2022


Kubernetes control plane is running at https://172.16.99.56:6443
CoreDNS is running at https://172.16.99.56:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

NAMESPACE     NAME                                READY   STATUS    RESTARTS       AGE
kube-system   pod/coredns-565d847f94-jnzxv        0/1     Pending   0              2m30s
kube-system   pod/coredns-565d847f94-qw55b        0/1     Pending   0              2m30s
kube-system   pod/etcd-kube1                      1/1     Running   2 (50s ago)    111s
kube-system   pod/kube-apiserver-kube1            1/1     Running   1 (3m4s ago)   2m59s
kube-system   pod/kube-controller-manager-kube1   0/1     Running   4 (50s ago)    2m10s
kube-system   pod/kube-proxy-lfjmh                1/1     Running   2 (59s ago)    2m30s
kube-system   pod/kube-scheduler-kube1            1/1     Running   2 (40s ago)    106s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  3m37s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   3m34s

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   3m34s

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   0/2     2            0           3m34s

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-565d847f94   2         2         0       2m30s


Every 10.0s: echo; kubectl cluster-info; echo; kubectl get all --all-namespaces                                                                                                     kube1: Sun Oct 23 21:30:18 2022


Kubernetes control plane is running at https://172.16.99.56:6443
CoreDNS is running at https://172.16.99.56:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

NAMESPACE     NAME                                READY   STATUS             RESTARTS        AGE
kube-system   pod/coredns-565d847f94-jnzxv        0/1     Pending            0               3m
kube-system   pod/coredns-565d847f94-qw55b        0/1     Pending            0               3m
kube-system   pod/etcd-kube1                      1/1     Running            2 (80s ago)     2m21s
kube-system   pod/kube-apiserver-kube1            1/1     Running            1 (3m34s ago)   3m29s
kube-system   pod/kube-controller-manager-kube1   1/1     Running            4 (80s ago)     2m40s
kube-system   pod/kube-proxy-lfjmh                0/1     CrashLoopBackOff   2 (7s ago)      3m
kube-system   pod/kube-scheduler-kube1            1/1     Running            2 (70s ago)     2m16s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  4m7s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   4m4s

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kube-proxy   1         1         0       1            0           kubernetes.io/os=linux   4m4s

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   0/2     2            0           4m4s

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-565d847f94   2         2         0       3m


Every 10.0s: echo; kubectl cluster-info; echo; kubectl get all --all-namespaces                                                                                                     kube1: Sun Oct 23 21:31:20 2022


Kubernetes control plane is running at https://172.16.99.56:6443
CoreDNS is running at https://172.16.99.56:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

NAMESPACE     NAME                                READY   STATUS             RESTARTS        AGE
kube-system   pod/coredns-565d847f94-jnzxv        0/1     Pending            0               4m2s
kube-system   pod/coredns-565d847f94-qw55b        0/1     Pending            0               4m2s
kube-system   pod/etcd-kube1                      1/1     Running            2 (2m22s ago)   3m23s
kube-system   pod/kube-apiserver-kube1            1/1     Running            1 (4m36s ago)   4m31s
kube-system   pod/kube-controller-manager-kube1   0/1     CrashLoopBackOff   4 (5s ago)      3m42s
kube-system   pod/kube-proxy-lfjmh                1/1     Running            3 (69s ago)     4m2s
kube-system   pod/kube-scheduler-kube1            0/1     Running            3 (35s ago)     3m18s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  5m9s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   5m6s

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   5m6s

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   0/2     2            0           5m6s

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-565d847f94   2         2         0       4m2s


Every 10.0s: echo; kubectl cluster-info; echo; kubectl get all --all-namespaces                                                                                                     kube1: Sun Oct 23 21:32:11 2022



To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 172.16.99.56:6443 was refused - did you specify the right host or port?

The connection to the server 172.16.99.56:6443 was refused - did you specify the right host or port?
The connection to the server 172.16.99.56:6443 was refused - did you specify the right host or port?
The connection to the server 172.16.99.56:6443 was refused - did you specify the right host or port?
The connection to the server 172.16.99.56:6443 was refused - did you specify the right host or port?
The connection to the server 172.16.99.56:6443 was refused - did you specify the right host or port?
The connection to the server 172.16.99.56:6443 was refused - did you specify the right host or port?
The connection to the server 172.16.99.56:6443 was refused - did you specify the right host or port?
The connection to the server 172.16.99.56:6443 was refused - did you specify the right host or port?
The connection to the server 172.16.99.56:6443 was refused - did you specify the right host or port?
The connection to the server 172.16.99.56:6443 was refused - did you specify the right host or port?

What you expected to happen?

All pods are running; the CoreDNS pods stay in Pending state until the network (CNI) pods are installed.

How to reproduce it (as minimally and precisely as possible)?

  1. Script to prepare the apt database, pull images, and install tools:
#!/bin/bash


echo "remove old docker"

sudo apt-get remove docker docker-engine docker.io containerd runc -y

echo "update apt database"
sudo apt-get update

echo "install certif, curl, gnupg, lsb-release"
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release -y



echo "installing containerd"
sudo apt install containerd -y

echo "add google key and repo"
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

echo
echo "update apt database"
sudo apt-get update

echo
echo "installing kube tools"
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl


echo "Pulling k8s images"
sudo kubeadm config images pull

  2. Script to run kubeadm init:
#!/bin/bash

CIDR="172.16.50.0/16"
DEBUG="--v=5"
SLEEP_TIME=120


echo "Creating pod"

sudo kubeadm init   --pod-network-cidr=$CIDR  $DEBUG


export KUBECONFIG=/etc/kubernetes/admin.conf

echo "sleep $SLEEP_TIME"
sleep $SLEEP_TIME

#echo "installing calico tigera operator"
#kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.2/manifests/tigera-operator.yaml

#echo "sleep $SLEEP_TIME"
#sleep $SLEEP_TIME


#echo "installing calico custom conf"
#kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.2/manifests/custom-resources.yaml

  3. Script to watch the cluster:
#!/bin/bash

export KUBECONFIG=/etc/kubernetes/admin.conf
watch -n 10 -c "echo; kubectl cluster-info; echo; kubectl get all --all-namespaces"
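A note on the scripts above: the containerd package from the Ubuntu archive is installed with its defaults, and the containerd log further down shows the CRI plugin running with SystemdCgroup:false while the kubelet reports CgroupDriver:systemd. On Ubuntu 22.04 (cgroup v2) that mismatch is a common reason for control-plane containers being killed and restarted in a loop. A hedged sketch of aligning the two before running kubeadm init (assuming the stock containerd 1.5/1.6 default config):

#!/bin/bash
# Sketch: make containerd use the systemd cgroup driver, matching the kubelet.

# Write out a full default config, then flip runc's SystemdCgroup setting.
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

sudo systemctl restart containerd

# If kubeadm init already ran with the old setting, reset first:
# sudo kubeadm reset -f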

Anything else we need to know?

journalctl logs:

2022-10-23T21:21:49.369200+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.365641457Z" level=info msg="starting containerd" revision= version=1.5.9-0ubuntu3
2022-10-23T21:21:49.675744+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.675629144Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
2022-10-23T21:21:49.675918+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.675754561Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
2022-10-23T21:21:49.679272+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.679163273Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.0-50-generic\\n\"): skip plugin" type=io.containerd.snapshotter.v1
2022-10-23T21:21:49.679416+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.679220441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
2022-10-23T21:21:49.679836+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.679761651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
2022-10-23T21:21:49.679972+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.679839983Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
2022-10-23T21:21:49.680063+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.679869536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
2022-10-23T21:21:49.680143+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.679941512Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
2022-10-23T21:21:49.680251+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.680139223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
2022-10-23T21:21:49.680637+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.680569719Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
2022-10-23T21:21:49.680752+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.680607596Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
2022-10-23T21:21:49.680870+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.680653731Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
2022-10-23T21:21:49.680966+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.680673535Z" level=info msg="metadata content store policy set" policy=shared
2022-10-23T21:21:49.683262+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.683163384Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
2022-10-23T21:21:49.683549+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.683485769Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
2022-10-23T21:21:49.683673+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.683586585Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
2022-10-23T21:21:49.683817+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.683754112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
2022-10-23T21:21:49.683933+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.683803951Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
2022-10-23T21:21:49.684012+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.683832590Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
2022-10-23T21:21:49.684104+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.683860710Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
2022-10-23T21:21:49.684208+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.683883800Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
2022-10-23T21:21:49.684314+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.683914468Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
2022-10-23T21:21:49.684407+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.683944461Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
2022-10-23T21:21:49.684496+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.683972614Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
2022-10-23T21:21:49.684570+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.684077288Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
2022-10-23T21:21:49.684661+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.684181206Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
2022-10-23T21:21:49.697503+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.697441308Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
2022-10-23T21:21:49.697593+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.697497622Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
2022-10-23T21:21:49.697672+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.697570135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
2022-10-23T21:21:49.697746+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.697596229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
2022-10-23T21:21:49.697822+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.697619580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
2022-10-23T21:21:49.697915+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.697641428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
2022-10-23T21:21:49.697996+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.697673199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
2022-10-23T21:21:49.698129+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.697696599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
2022-10-23T21:21:49.698206+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.697718084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
2022-10-23T21:21:49.698269+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.697742383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
2022-10-23T21:21:49.701661+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.697763938Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
2022-10-23T21:21:49.701798+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.697885320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
2022-10-23T21:21:49.701963+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.697918643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
2022-10-23T21:21:49.702037+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.697941842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
2022-10-23T21:21:49.702090+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.697962626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
2022-10-23T21:21:49.702384+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.698115362Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[BinaryName: CriuImagePath: CriuPath: CriuWorkPath: IoGid:0 IoUid:0 NoNewKeyring:false NoPivotRoot:false Root: ShimCgroup: SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec:}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:k8s.gcr.io/pause:3.5 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
2022-10-23T21:21:49.702631+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.698261345Z" level=info msg="Connect containerd service"
2022-10-23T21:21:49.702854+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.699817864Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
2022-10-23T21:21:49.702984+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.700496769Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
2022-10-23T21:21:49.703374+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.700558606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
2022-10-23T21:21:49.703454+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.700906025Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
2022-10-23T21:21:49.703544+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.700977420Z" level=info msg=serving... address=/run/containerd/containerd.sock
2022-10-23T21:21:49.703678+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.701045776Z" level=info msg="containerd successfully booted in 0.337628s"
2022-10-23T21:21:49.711416+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.706512265Z" level=info msg="Start subscribing containerd event"
2022-10-23T21:21:49.711839+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.708348036Z" level=info msg="Start recovering state"
2022-10-23T21:21:49.855490+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.854854169Z" level=info msg="Start event monitor"
2022-10-23T21:21:49.855633+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.854925340Z" level=info msg="Start snapshots syncer"
2022-10-23T21:21:49.855713+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.854946395Z" level=info msg="Start cni network conf syncer"
2022-10-23T21:21:49.855783+00:00 kube1 containerd[281]: time="2022-10-23T21:21:49.854958746Z" level=info msg="Start streaming server"
2022-10-23T21:21:49.903515+00:00 kube1 kubelet[271]: E1023 21:21:49.903077     271 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"

[...]

2022-10-23T21:25:56.389578+00:00 kube1 kubelet[761]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote'
2022-10-23T21:25:56.389752+00:00 kube1 kubelet[761]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
2022-10-23T21:25:56.389836+00:00 kube1 kubelet[761]: I1023 21:25:56.389322     761 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
2022-10-23T21:25:56.391256+00:00 kube1 kubelet[761]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote'
2022-10-23T21:25:56.391335+00:00 kube1 kubelet[761]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
2022-10-23T21:25:56.834094+00:00 kube1 kubelet[761]: I1023 21:25:56.834016     761 server.go:413] "Kubelet version" kubeletVersion="v1.25.3"
2022-10-23T21:25:56.834206+00:00 kube1 kubelet[761]: I1023 21:25:56.834052     761 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
2022-10-23T21:25:56.834495+00:00 kube1 kubelet[761]: I1023 21:25:56.834416     761 server.go:825] "Client rotation is on, will bootstrap in background"
2022-10-23T21:25:56.840023+00:00 kube1 kubelet[761]: I1023 21:25:56.839940     761 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
2022-10-23T21:25:56.840734+00:00 kube1 kubelet[761]: E1023 21:25:56.840668     761 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.16.99.56:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.16.99.56:6443: connect: connection refused
2022-10-23T21:25:56.903860+00:00 kube1 kubelet[761]: I1023 21:25:56.903743     761 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
2022-10-23T21:25:56.905162+00:00 kube1 kubelet[761]: I1023 21:25:56.905052     761 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
2022-10-23T21:25:56.905364+00:00 kube1 kubelet[761]: I1023 21:25:56.905179     761 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
2022-10-23T21:25:56.906515+00:00 kube1 kubelet[761]: I1023 21:25:56.906417     761 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
2022-10-23T21:25:56.906649+00:00 kube1 kubelet[761]: I1023 21:25:56.906462     761 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true
2022-10-23T21:25:56.906766+00:00 kube1 kubelet[761]: I1023 21:25:56.906618     761 state_mem.go:36] "Initialized new in-memory state store"
2022-10-23T21:25:56.911753+00:00 kube1 kubelet[761]: I1023 21:25:56.911678     761 kubelet.go:381] "Attempting to sync node with API server"
2022-10-23T21:25:56.911921+00:00 kube1 kubelet[761]: I1023 21:25:56.911711     761 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests"
2022-10-23T21:25:56.912046+00:00 kube1 kubelet[761]: I1023 21:25:56.911744     761 kubelet.go:281] "Adding apiserver pod source"
2022-10-23T21:25:56.912129+00:00 kube1 kubelet[761]: I1023 21:25:56.911774     761 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
2022-10-23T21:25:56.912681+00:00 kube1 kubelet[761]: W1023 21:25:56.912570     761 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://172.16.99.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkube1&limit=500&resourceVersion=0": dial tcp 172.16.99.56:6443: connect: connection refused
2022-10-23T21:25:56.912812+00:00 kube1 kubelet[761]: E1023 21:25:56.912674     761 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.16.99.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkube1&limit=500&resourceVersion=0": dial tcp 172.16.99.56:6443: connect: connection refused
2022-10-23T21:25:56.914392+00:00 kube1 kubelet[761]: W1023 21:25:56.914258     761 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://172.16.99.56:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.16.99.56:6443: connect: connection refused
2022-10-23T21:25:56.914572+00:00 kube1 kubelet[761]: E1023 21:25:56.914354     761 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.16.99.56:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.16.99.56:6443: connect: connection refused
2022-10-23T21:25:56.914713+00:00 kube1 kubelet[761]: I1023 21:25:56.914616     761 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.5.9-0ubuntu3" apiVersion="v1alpha2"
2022-10-23T21:25:56.915079+00:00 kube1 kubelet[761]: W1023 21:25:56.915022     761 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
2022-10-23T21:25:56.916082+00:00 kube1 kubelet[761]: I1023 21:25:56.916024     761 server.go:1175] "Started kubelet"
2022-10-23T21:25:56.918068+00:00 kube1 kubelet[761]: E1023 21:25:56.917868     761 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube1.1720cfd096b97fc5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kube1", UID:"kube1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"kube1"}, FirstTimestamp:time.Date(2022, time.October, 23, 21, 25, 56, 915969989, time.Local), LastTimestamp:time.Date(2022, time.October, 23, 21, 25, 56, 915969989, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.16.99.56:6443/api/v1/namespaces/default/events": dial tcp 172.16.99.56:6443: connect: connection refused'(may retry after sleeping)
2022-10-23T21:25:56.918549+00:00 kube1 kubelet[761]: E1023 21:25:56.918491     761 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
2022-10-23T21:25:56.918666+00:00 kube1 kubelet[761]: E1023 21:25:56.918538     761 kubelet.go:1317] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
2022-10-23T21:25:56.919201+00:00 kube1 kubelet[761]: I1023 21:25:56.919136     761 server.go:155] "Starting to listen" address="0.0.0.0" port=10250
2022-10-23T21:25:56.926558+00:00 kube1 kubelet[761]: I1023 21:25:56.926479     761 server.go:438] "Adding debug handlers to kubelet server"
2022-10-23T21:25:56.928274+00:00 kube1 kubelet[761]: I1023 21:25:56.928202     761 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
2022-10-23T21:25:56.928536+00:00 kube1 kubelet[761]: I1023 21:25:56.928474     761 volume_manager.go:293] "Starting Kubelet Volume Manager"
2022-10-23T21:25:56.928737+00:00 kube1 kubelet[761]: I1023 21:25:56.928687     761 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
2022-10-23T21:25:56.930017+00:00 kube1 kubelet[761]: W1023 21:25:56.929893     761 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://172.16.99.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.16.99.56:6443: connect: connection refused
2022-10-23T21:25:56.930199+00:00 kube1 kubelet[761]: E1023 21:25:56.929987     761 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.16.99.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.16.99.56:6443: connect: connection refused
2022-10-23T21:25:56.931052+00:00 kube1 kubelet[761]: E1023 21:25:56.930608     761 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
2022-10-23T21:25:56.931583+00:00 kube1 kubelet[761]: E1023 21:25:56.931403     761 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.16.99.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kube1?timeout=10s": dial tcp 172.16.99.56:6443: connect: connection refused
2022-10-23T21:25:56.964610+00:00 kube1 kubelet[761]: I1023 21:25:56.964538     761 cpu_manager.go:213] "Starting CPU manager" policy="none"
2022-10-23T21:25:56.964778+00:00 kube1 kubelet[761]: I1023 21:25:56.964568     761 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s"
2022-10-23T21:25:56.965040+00:00 kube1 kubelet[761]: I1023 21:25:56.964622     761 state_mem.go:36] "Initialized new in-memory state store"
2022-10-23T21:25:56.965993+00:00 kube1 kubelet[761]: I1023 21:25:56.965929     761 policy_none.go:49] "None policy: Start"
2022-10-23T21:25:56.966703+00:00 kube1 kubelet[761]: I1023 21:25:56.966629     761 memory_manager.go:168] "Starting memorymanager" policy="None"
2022-10-23T21:25:56.966984+00:00 kube1 kubelet[761]: I1023 21:25:56.966674     761 state_mem.go:35] "Initializing new in-memory state store"
2022-10-23T21:25:56.992343+00:00 kube1 kubelet[761]: I1023 21:25:56.992267     761 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
2022-10-23T21:25:57.025979+00:00 kube1 kubelet[761]: I1023 21:25:57.025900     761 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
2022-10-23T21:25:57.026338+00:00 kube1 kubelet[761]: I1023 21:25:57.026280     761 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
2022-10-23T21:25:57.027750+00:00 kube1 kubelet[761]: E1023 21:25:57.027686     761 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"kube1\" not found"
2022-10-23T21:25:57.028091+00:00 kube1 kubelet[761]: I1023 21:25:57.028039     761 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
2022-10-23T21:25:57.028238+00:00 kube1 kubelet[761]: I1023 21:25:57.028066     761 status_manager.go:161] "Starting to sync pod status with apiserver"
2022-10-23T21:25:57.028343+00:00 kube1 kubelet[761]: I1023 21:25:57.028088     761 kubelet.go:2010] "Starting kubelet main sync loop"
2022-10-23T21:25:57.028490+00:00 kube1 kubelet[761]: E1023 21:25:57.028183     761 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
2022-10-23T21:25:57.029334+00:00 kube1 kubelet[761]: W1023 21:25:57.029252     761 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: Get "https://172.16.99.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.16.99.56:6443: connect: connection refused
2022-10-23T21:25:57.029743+00:00 kube1 kubelet[761]: E1023 21:25:57.029320     761 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.16.99.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.16.99.56:6443: connect: connection refused
2022-10-23T21:25:57.029956+00:00 kube1 kubelet[761]: E1023 21:25:57.029853     761 kubelet.go:2448] "Error getting node" err="node \"kube1\" not found"
2022-10-23T21:25:57.031428+00:00 kube1 kubelet[761]: I1023 21:25:57.031357     761 kubelet_node_status.go:70] "Attempting to register node" node="kube1"
2022-10-23T21:25:57.031864+00:00 kube1 kubelet[761]: E1023 21:25:57.031802     761 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.16.99.56:6443/api/v1/nodes\": dial tcp 172.16.99.56:6443: connect: connection refused" node="kube1"
2022-10-23T21:25:57.129270+00:00 kube1 kubelet[761]: I1023 21:25:57.128998     761 topology_manager.go:205] "Topology Admit Handler"
2022-10-23T21:25:57.130982+00:00 kube1 kubelet[761]: E1023 21:25:57.130361     761 kubelet.go:2448] "Error getting node" err="node \"kube1\" not found"
2022-10-23T21:25:57.131416+00:00 kube1 kubelet[761]: I1023 21:25:57.130558     761 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d575e8b9e62a08d0e3eb5977328fd45a-kubeconfig\") pod \"kube-scheduler-kube1\" (UID: \"d575e8b9e62a08d0e3eb5977328fd45a\") " pod="kube-system/kube-scheduler-kube1"
2022-10-23T21:25:57.133202+00:00 kube1 kubelet[761]: E1023 21:25:57.132787     761 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.16.99.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kube1?timeout=10s": dial tcp 172.16.99.56:6443: connect: connection refused
2022-10-23T21:25:57.135509+00:00 kube1 kubelet[761]: I1023 21:25:57.135349     761 topology_manager.go:205] "Topology Admit Handler"
2022-10-23T21:25:57.140937+00:00 kube1 kubelet[761]: I1023 21:25:57.140666     761 topology_manager.go:205] "Topology Admit Handler"
2022-10-23T21:25:57.146256+00:00 kube1 kubelet[761]: I1023 21:25:57.146028     761 status_manager.go:667] "Failed to get status for pod" podUID=d575e8b9e62a08d0e3eb5977328fd45a pod="kube-system/kube-scheduler-kube1" err="Get \"https://172.16.99.56:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-kube1\": dial tcp 172.16.99.56:6443: connect: connection refused"
2022-10-23T21:25:57.147769+00:00 kube1 kubelet[761]: I1023 21:25:57.147611     761 topology_manager.go:205] "Topology Admit Handler"
2022-10-23T21:25:57.148286+00:00 kube1 kubelet[761]: I1023 21:25:57.148103     761 status_manager.go:667] "Failed to get status for pod" podUID=50d9d00ed98a81d0cbe26b3c69773461 pod="kube-system/etcd-kube1" err="Get \"https://172.16.99.56:6443/api/v1/namespaces/kube-system/pods/etcd-kube1\": dial tcp 172.16.99.56:6443: connect: connection refused"
2022-10-23T21:25:57.158260+00:00 kube1 kubelet[761]: I1023 21:25:57.155388     761 status_manager.go:667] "Failed to get status for pod" podUID=a78085afce602b237cb3e3a8cc1f3c6d pod="kube-system/kube-apiserver-kube1" err="Get \"https://172.16.99.56:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-kube1\": dial tcp 172.16.99.56:6443: connect: connection refused"

[...]
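When the API server stops answering like this, the static-pod containers can still be inspected directly through containerd. A sketch of commands for collecting more detail (crictl comes from the cri-tools package pulled in by kubeadm; the container ID is a placeholder):

# List all containers known to the CRI plugin, including exited ones
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a

# Dump the logs of a crashing control-plane container (ID from the list above)
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs <container-id>

# Kubelet and containerd logs around the failure
sudo journalctl -u kubelet -u containerd --since "15 min ago" --no-pager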

Regards,


Hi, any update? I have the same issue with containerd.

Hello there,

I am having the same issue. I tried multiple OS reloads, containerd configurations, and Docker configs, but Kubernetes becomes unresponsive after installation.

kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:35:06Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/amd64"}

Ubuntu 22.04?

Yes, Ubuntu 22.04.

The issue seems to be related to networking, specifically the CNI plugin. You can check this section for troubleshooting it.

2022-10-23T21:25:56.931052+00:00 kube1 kubelet[761]: E1023 21:25:56.930608 761 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
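Worth noting: that particular "cni plugin not initialized" message is expected until a CNI plugin has been installed; the question is whether a network config ever shows up. A quick check, assuming containerd's default paths:

# A CNI plugin writes its config here once installed
ls -l /etc/cni/net.d

# CNI binaries the CRI plugin looks for
ls -l /opt/cni/bin

# The node's Ready condition should flip once a CNI is in place
kubectl describe node kube1 | grep -A 5 Conditions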

I am having the same issue here; I have run this 5 times in a row and the result is the same.

Dec 04 00:32:12 master-node kubelet[29565]: E1204 00:32:12.633281   29565 kubelet.go:1712] "Failed creating a mirror pod for" err="Post \"https://10.211.55.3:6443/api/v1/namespaces/k>
Dec 04 00:32:12 master-node kubelet[29565]: E1204 00:32:12.633566   29565 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been om>
Dec 04 00:32:13 master-node kubelet[29565]: E1204 00:32:13.328242   29565 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://10.211.55.3:6443/api>
Dec 04 00:32:14 master-node kubelet[29565]: E1204 00:32:14.362613   29565 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPlugin>
Dec 04 00:32:14 master-node kubelet[29565]: I1204 00:32:14.746161   29565 scope.go:115] "RemoveContainer" containerID="c069f7921eebf5d5a38cabc0e65f7095bbc8cd9dfa1f34c3943b33797265860>
Dec 04 00:32:14 master-node kubelet[29565]: E1204 00:32:14.746306   29565 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been om>
Dec 04 00:32:14 master-node kubelet[29565]: E1204 00:32:14.746916   29565 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" w>
Dec 04 00:32:15 master-node kubelet[29565]: I1204 00:32:15.784694   29565 scope.go:115] "RemoveContainer" containerID="97b404af969156532e2d4457e120e0df522af87fee1e8fff35de636c51a0bb8>
Dec 04 00:32:15 master-node kubelet[29565]: E1204 00:32:15.784839   29565 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been om>
Dec 04 00:32:15 master-node kubelet[29565]: E1204 00:32:15.785130   29565 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with

I got the same issue.

coredns-6d4b75cb6d-pjll4         0/1     CrashLoopBackOff   19 (4m44s ago)   159m
kube-proxy-2fzdv                 0/1     CrashLoopBackOff   18 (2m20s ago)   106m

and the Calico node pod is also crashing:

calico-system      calico-node-4rdlm                         0/1     CrashLoopBackOff   21 (2m57s ago)   108m

It seems like the wrong nameserver is set in /etc/resolv.conf:

nameserver 127.0.0.53
options edns0 trust-ad
search localdomain

Check the netplan config, add the nameservers, and apply it.
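For example, a minimal netplan sketch (the interface name and upstream DNS addresses here are placeholders; adjust them for your network):

# Hypothetical netplan drop-in with explicit upstream nameservers
sudo tee /etc/netplan/01-dns.yaml >/dev/null <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      nameservers:
        addresses: [1.1.1.1, 8.8.8.8]
EOF
sudo netplan apply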
Then delete the CoreDNS pod so it is recreated:

root@node-0:~# kubectl delete pod  coredns-6d4b75cb6d-kblz4 -n kube-system

The pod will be recreated and come up Running:

kubectl get pod  -n kube-system
NAME                             READY   STATUS    RESTARTS       AGE
coredns-6d4b75cb6d-cb6ln         1/1     Running   0              15m
coredns-6d4b75cb6d-r9vhc         1/1     Running   0              14m
etcd-node-0                      1/1     Running   0              174m
kube-apiserver-node-0            1/1     Running   0              174m
kube-controller-manager-node-0   1/1     Running   0              174m
kube-proxy-2fzdv                 1/1     Running   21 (20m ago)   142m
kube-proxy-8d5xk                 1/1     Running   0              174m
kube-scheduler-node-0            1/1     Running   0              174m
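
Once CoreDNS is Running, cluster DNS can be sanity-checked with a throwaway pod, for example (the image and pod name are arbitrary):

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local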

I had a similar problem. I am new to the Kubernetes world. Lab tests worked fine, so I decided to move to a real cloud provider; I selected netcup.eu.

  • 2 VPS: one control plane, one worker node
  • Debian 11
  • Additional VLAN. It appears as the "eth1" NIC.

At first, I followed the official K8s instructions and set up the control plane with the containerd container runtime. I tried different things and reinstalled the system from scratch; nothing worked. Finally, I decided to change the container runtime to CRI-O and followed a guide on how to create a cluster with CRI-O. Some sections of that article may differ for you; for example, I prefer to use firewalld instead of ufw.
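If you go the CRI-O route, kubeadm can be pointed explicitly at its socket. A minimal sketch (the pod CIDR is only an example and has to match your CNI configuration):

sudo kubeadm init \
  --cri-socket unix:///var/run/crio/crio.sock \
  --pod-network-cidr=192.168.0.0/16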

My final stack:

  • 2 VPS
  • Additional VLAN presented as eth1; setting up communication between the nodes over it was my responsibility.
  • Debian 11
  • firewalld
  • CRI-O
  • Calico