kubeadm init failed. Maybe it's a containerd issue?

Hi,
I ran into a problem while installing Kubernetes. The first kubeadm init succeeded, but the Cilium network I installed afterwards used an address range that conflicted with my host's network. I then ran kubeadm reset --force, and after recovering the host I re-ran kubeadm init, which now fails.
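
For reference, the sequence between the two attempts was roughly this (a sketch, not an exact transcript of every command I ran):

kubeadm reset --force
reboot
kubeadm init --config Kubernetes-cluster.yaml

The second init produced the following output: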

root@kmaster1:~# kubeadm init --config Kubernetes-cluster.yaml 
[init] Using Kubernetes version: v1.28.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0602 14:28:02.393550    7815 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.21]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kmaster1 localhost] and IPs [10.0.0.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kmaster1 localhost] and IPs [10.0.0.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
root@kmaster1:~# 

I checked containerd: it pulled the required images, but it never created any containers.

root@kmaster1:~# ctr -n k8s.io c ls  
CONTAINER    IMAGE    RUNTIME    
root@kmaster1:~# ctr -n k8s.io i ls  
REF                                                                                                                                     TYPE                                                      DIGEST                                                                  SIZE      PLATFORMS                                                                     LABELS                          
registry.aliyuncs.com/google_containers/coredns:v1.10.1                                                                                 application/vnd.docker.distribution.manifest.list.v2+json sha256:90d3eeb2e2108a14fe2ecbef1bc1b5607834335d99c842a377f338aade9da028 15.4 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/mips64le,linux/ppc64le,linux/s390x io.cri-containerd.image=managed 
registry.aliyuncs.com/google_containers/coredns@sha256:90d3eeb2e2108a14fe2ecbef1bc1b5607834335d99c842a377f338aade9da028                 application/vnd.docker.distribution.manifest.list.v2+json sha256:90d3eeb2e2108a14fe2ecbef1bc1b5607834335d99c842a377f338aade9da028 15.4 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/mips64le,linux/ppc64le,linux/s390x io.cri-containerd.image=managed 
registry.aliyuncs.com/google_containers/etcd:3.5.9-0                                                                                    application/vnd.docker.distribution.manifest.list.v2+json sha256:b124583790d2407fa140c01f42166e3292cc8191ef5d37034fe5a89032081b90 98.1 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64  io.cri-containerd.image=managed 
registry.aliyuncs.com/google_containers/etcd@sha256:b124583790d2407fa140c01f42166e3292cc8191ef5d37034fe5a89032081b90                    application/vnd.docker.distribution.manifest.list.v2+json sha256:b124583790d2407fa140c01f42166e3292cc8191ef5d37034fe5a89032081b90 98.1 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64  io.cri-containerd.image=managed 
registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.0                                                                          application/vnd.docker.distribution.manifest.list.v2+json sha256:42ebad4ff7b72c7aa6988a6d4674f9391a80f2e0ba4bf20d705a1f844ba0a5c3 33.0 MiB  linux/amd64,linux/arm64,linux/ppc64le,linux/s390x                             io.cri-containerd.image=managed 
registry.aliyuncs.com/google_containers/kube-apiserver@sha256:42ebad4ff7b72c7aa6988a6d4674f9391a80f2e0ba4bf20d705a1f844ba0a5c3          application/vnd.docker.distribution.manifest.list.v2+json sha256:42ebad4ff7b72c7aa6988a6d4674f9391a80f2e0ba4bf20d705a1f844ba0a5c3 33.0 MiB  linux/amd64,linux/arm64,linux/ppc64le,linux/s390x                             io.cri-containerd.image=managed 
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.0                                                                 application/vnd.docker.distribution.manifest.list.v2+json sha256:71da477a3f5ae3be6d6b2d6dd23862036aa30346d0fe7660342a6fb54890232b 31.8 MiB  linux/amd64,linux/arm64,linux/ppc64le,linux/s390x                             io.cri-containerd.image=managed 
registry.aliyuncs.com/google_containers/kube-controller-manager@sha256:71da477a3f5ae3be6d6b2d6dd23862036aa30346d0fe7660342a6fb54890232b application/vnd.docker.distribution.manifest.list.v2+json sha256:71da477a3f5ae3be6d6b2d6dd23862036aa30346d0fe7660342a6fb54890232b 31.8 MiB  linux/amd64,linux/arm64,linux/ppc64le,linux/s390x                             io.cri-containerd.image=managed 
registry.aliyuncs.com/google_containers/kube-proxy:v1.28.0                                                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:a61eeb2562dc22fb158f7e00aff4343da2f67b4899a879b76002ce394d94b886 23.4 MiB  linux/amd64,linux/arm64,linux/ppc64le,linux/s390x                             io.cri-containerd.image=managed 
registry.aliyuncs.com/google_containers/kube-proxy@sha256:a61eeb2562dc22fb158f7e00aff4343da2f67b4899a879b76002ce394d94b886              application/vnd.docker.distribution.manifest.list.v2+json sha256:a61eeb2562dc22fb158f7e00aff4343da2f67b4899a879b76002ce394d94b886 23.4 MiB  linux/amd64,linux/arm64,linux/ppc64le,linux/s390x                             io.cri-containerd.image=managed 
registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.0                                                                          application/vnd.docker.distribution.manifest.list.v2+json sha256:cd2275aed550dca60fbccb136fdc407a8e9dd045a015762d7a769e4dee36b6c1 17.9 MiB  linux/amd64,linux/arm64,linux/ppc64le,linux/s390x                             io.cri-containerd.image=managed 
registry.aliyuncs.com/google_containers/kube-scheduler@sha256:cd2275aed550dca60fbccb136fdc407a8e9dd045a015762d7a769e4dee36b6c1          application/vnd.docker.distribution.manifest.list.v2+json sha256:cd2275aed550dca60fbccb136fdc407a8e9dd045a015762d7a769e4dee36b6c1 17.9 MiB  linux/amd64,linux/arm64,linux/ppc64le,linux/s390x                             io.cri-containerd.image=managed 
registry.aliyuncs.com/google_containers/pause:3.9                                                                                       application/vnd.docker.distribution.manifest.list.v2+json sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 314.0 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64  io.cri-containerd.image=managed 
registry.aliyuncs.com/google_containers/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097                   application/vnd.docker.distribution.manifest.list.v2+json sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 314.0 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64  io.cri-containerd.image=managed 
sha256:4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                                                 application/vnd.docker.distribution.manifest.list.v2+json sha256:71da477a3f5ae3be6d6b2d6dd23862036aa30346d0fe7660342a6fb54890232b 31.8 MiB  linux/amd64,linux/arm64,linux/ppc64le,linux/s390x                             io.cri-containerd.image=managed 
sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                                 application/vnd.docker.distribution.manifest.list.v2+json sha256:b124583790d2407fa140c01f42166e3292cc8191ef5d37034fe5a89032081b90 98.1 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64  io.cri-containerd.image=managed 
sha256:bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                                                 application/vnd.docker.distribution.manifest.list.v2+json sha256:42ebad4ff7b72c7aa6988a6d4674f9391a80f2e0ba4bf20d705a1f844ba0a5c3 33.0 MiB  linux/amd64,linux/arm64,linux/ppc64le,linux/s390x                             io.cri-containerd.image=managed 
sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c                                                                 application/vnd.docker.distribution.manifest.list.v2+json sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 314.0 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64  io.cri-containerd.image=managed 
sha256:ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                                                 application/vnd.docker.distribution.manifest.list.v2+json sha256:a61eeb2562dc22fb158f7e00aff4343da2f67b4899a879b76002ce394d94b886 23.4 MiB  linux/amd64,linux/arm64,linux/ppc64le,linux/s390x                             io.cri-containerd.image=managed 
sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                                 application/vnd.docker.distribution.manifest.list.v2+json sha256:90d3eeb2e2108a14fe2ecbef1bc1b5607834335d99c842a377f338aade9da028 15.4 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/mips64le,linux/ppc64le,linux/s390x io.cri-containerd.image=managed 
sha256:f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                                                 application/vnd.docker.distribution.manifest.list.v2+json sha256:cd2275aed550dca60fbccb136fdc407a8e9dd045a015762d7a769e4dee36b6c1 17.9 MiB  linux/amd64,linux/arm64,linux/ppc64le,linux/s390x                             io.cri-containerd.image=managed 
root@kmaster1:~# 
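
If it helps, I can also post the same check from the CRI side; these crictl commands (assuming the default containerd socket) list pod sandboxes and containers as the kubelet sees them:

crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a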

My environment information:

root@kmaster1:~# uname -r 
6.1.0-13-amd64
root@kmaster1:~# uname -a
Linux kmaster1 6.1.0-13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.55-1 (2023-09-29) x86_64 GNU/Linux
root@kmaster1:~# cat /etc/debian_version 
12.5
root@kmaster1:~# kubelet version
E0602 14:36:22.471996    7980 run.go:74] "command failed" err="unknown command version"
root@kmaster1:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.2", GitCommit:"89a4ea3e1e4ddd7f7572286090359983e0387b2f", GitTreeState:"clean", BuildDate:"2023-09-13T09:34:32Z", GoVersion:"go1.20.8", Compiler:"gc", Platform:"linux/amd64"}
root@kmaster1:~# kubectl version
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
The connection to the server 10.0.0.21:6443 was refused - did you specify the right host or port?
root@kmaster1:~# runc -v
runc version 1.1.12
commit: v1.1.12-0-g51d5e946
spec: 1.0.2-dev
go: go1.20.13
libseccomp: 2.5.4
root@kmaster1:~# containerd -v 
containerd github.com/containerd/containerd v1.7.13 7c3aca7a610df76212171d200ca3811ff6096eb8
root@kmaster1:~# 
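
The "unknown command version" error above is only because kubelet takes a flag rather than a subcommand; the version can be printed with:

kubelet --version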

Here are some error messages I found in the kubelet logs (this is filtered output, not the complete log):

Jun 02 13:46:00 kmaster1 kubelet[658]: I0602 13:46:00.418850     658 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 02 13:46:00 kmaster1 kubelet[658]: I0602 13:46:00.439107     658 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jun 02 13:46:00 kmaster1 kubelet[658]: E0602 13:46:00.453782     658 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jun 02 13:46:00 kmaster1 kubelet[658]: E0602 13:46:00.453866     658 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 02 13:46:00 kmaster1 kubelet[658]: I0602 13:46:00.506290     658 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 02 13:46:00 kmaster1 kubelet[658]: E0602 13:46:00.513551     658 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jun 02 14:03:39 kmaster1 kubelet[1120]: I0602 14:03:39.312424    1120 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 02 14:03:39 kmaster1 kubelet[1120]: I0602 14:03:39.316470    1120 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jun 02 14:03:39 kmaster1 kubelet[1120]: E0602 14:03:39.319191    1120 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jun 02 14:03:39 kmaster1 kubelet[1120]: E0602 14:03:39.319211    1120 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 02 14:03:39 kmaster1 kubelet[1120]: E0602 14:03:39.335951    1120 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 02 14:03:39 kmaster1 kubelet[1120]: I0602 14:03:39.346537    1120 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 02 14:26:58 kmaster1 kubelet[3511]: I0602 14:26:58.865189    3511 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 02 14:26:58 kmaster1 kubelet[3511]: I0602 14:26:58.868954    3511 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jun 02 14:26:58 kmaster1 kubelet[3511]: E0602 14:26:58.871007    3511 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jun 02 14:26:58 kmaster1 kubelet[3511]: E0602 14:26:58.871131    3511 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 02 14:26:58 kmaster1 kubelet[3511]: I0602 14:26:58.883078    3511 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 02 14:26:58 kmaster1 kubelet[3511]: E0602 14:26:58.887446    3511 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jun 02 14:26:59 kmaster1 kubelet[3555]: I0602 14:26:59.084609    3555 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 02 14:26:59 kmaster1 kubelet[3555]: I0602 14:26:59.090511    3555 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jun 02 14:26:59 kmaster1 kubelet[3555]: E0602 14:26:59.091925    3555 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jun 02 14:26:59 kmaster1 kubelet[3555]: E0602 14:26:59.091981    3555 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 02 14:26:59 kmaster1 kubelet[3555]: E0602 14:26:59.102172    3555 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 02 14:26:59 kmaster1 kubelet[3555]: I0602 14:26:59.103552    3555 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 02 14:28:03 kmaster1 kubelet[7909]: I0602 14:28:03.967366    7909 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 02 14:28:03 kmaster1 kubelet[7909]: I0602 14:28:03.973456    7909 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jun 02 14:28:03 kmaster1 kubelet[7909]: E0602 14:28:03.974820    7909 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jun 02 14:28:03 kmaster1 kubelet[7909]: E0602 14:28:03.974835    7909 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 02 14:28:03 kmaster1 kubelet[7909]: E0602 14:28:03.979881    7909 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 02 14:28:03 kmaster1 kubelet[7909]: I0602 14:28:03.987221    7909 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
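
These lines were taken from journalctl; a filter along these lines reproduces roughly this view (a sketch, the exact patterns I used may have differed):

journalctl -u kubelet --no-pager | grep -E 'cri_stats_provider|garbage collection|PLEG|checkpoint|Golang settings|Container Manager'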

Contents of the configuration file:

root@kmaster1:~# cat Kubernetes-cluster.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.21
  bindPort: 6443
nodeRegistration:
  #criSocket: unix:///var/run/containerd/containerd.sock
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: kmaster1
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
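
Since the KubeletConfiguration above sets cgroupDriver: systemd, containerd's runc runtime has to use the systemd cgroup driver as well, and the pause-image warning from kubeadm also points at the containerd config. A quick way to check both settings (assuming the default /etc/containerd/config.toml layout) is:

grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml

SystemdCgroup should be true when the kubelet uses the systemd cgroup driver, and sandbox_image is where the pause:3.8 vs pause:3.9 mismatch comes from.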

I had successfully installed the cluster on these machines once before.

But when I deployed the Cilium network, I did not change its default address range, so it conflicted with my host's network segment.

I then ran kubeadm reset --force and rebooted the machine.

After the reboot I confirmed the host network was back to normal and re-ran kubeadm init, which is when I hit the current problem.

I checked my DNS, and this is the result:

root@kmaster1:~# nslookup localhost 
Server:		223.5.5.5
Address:	223.5.5.5#53

Name:	localhost
Address: 127.0.0.1
Name:	localhost
Address: ::1

I have also added the corresponding entries to the /etc/hosts file:

root@kmaster1:~# cat /etc/hosts
10.0.0.5	debian

# The following lines are desirable for IPv6 capable hosts
#::1     localhost ip6-localhost ip6-loopback
#ff02::1 ip6-allnodes
#ff02::2 ip6-allrouters
10.0.0.21 kmaster1
10.0.0.22 knode1
10.0.0.23 knode2
127.0.0.1	localhost
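
Name resolution for these entries can also be verified through the hosts file directly; nslookup only queries the DNS server, while getent goes through nsswitch and therefore /etc/hosts:

getent hosts localhost kmaster1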

But kubeadm init still fails with:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
	cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:108
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	cmd/kubeadm/app/cmd/init.go:111
github.com/spf13/cobra.(*Command).execute
	vendor/github.com/spf13/cobra/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
	vendor/github.com/spf13/cobra/command.go:1068
github.com/spf13/cobra.(*Command).Execute
	vendor/github.com/spf13/cobra/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
	cmd/kubeadm/app/kubeadm.go:50
main.main
	cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:250
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1598
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	cmd/kubeadm/app/cmd/init.go:111
github.com/spf13/cobra.(*Command).execute
	vendor/github.com/spf13/cobra/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
	vendor/github.com/spf13/cobra/command.go:1068
github.com/spf13/cobra.(*Command).Execute
	vendor/github.com/spf13/cobra/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
	cmd/kubeadm/app/kubeadm.go:50
main.main
	cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:250
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1598

I also checked the kubelet health endpoint:

root@kmaster1:~# curl http://localhost:10248/healthz 
okroot@kmaster1:~# 
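
If more information from the CRI side would help, I can also share the runtime status that containerd exposes to the kubelet (assuming the default socket):

crictl --runtime-endpoint unix:///run/containerd/containerd.sock info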

I hope to get your advice. Thank you!