Trying to initialize a k8s cluster through kubeadm and facing the following issue; the full output is attached below.
W0501 18:13:13.516865 1896313 version.go:104] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": dial tcp 34.107.204.206:443: connect: connection refused
W0501 18:13:13.516937 1896313 version.go:105] falling back to the local client version: v1.30.0
[init] Using Kubernetes version: v1.30.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0501 18:13:13.654828 1896313 checks.go:844] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended to use "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-node] and IPs [10.96.0.1 X.X.X.X]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-node] and IPs [X.X.X.X 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-node] and IPs [10.159.108.25 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001742882s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is not healthy after 4m0.001168712s
Unfortunately, an error has occurred:
context deadline exceeded
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
This is most likely because you are in a region with limited or slow Internet access; consider using a mirror/accelerator, or hosting the required images in a local mirror registry.
For Docker users, you can configure the Docker daemon to pull through a registry mirror (an image accelerator).
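A minimal sketch of that daemon change, assuming a hypothetical mirror URL that you would replace with one actually reachable from your network (and merging carefully if /etc/docker/daemon.json already has other settings):

# Point the Docker daemon at a registry mirror, then restart it.
# The mirror URL below is a placeholder, not a recommendation.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["https://your-mirror.example.com"]
}
EOF
sudo systemctl restart docker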
For containerd users, you can modify /etc/containerd/config.toml to configure a mirror registry (and, if needed, the sandbox image).
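For example, a sketch against the containerd 1.6/1.7 config layout (the mirror endpoint is a placeholder; newer containerd releases prefer the config_path/hosts.toml registry mechanism instead). This also shows the sandbox_image override that addresses the pause:3.8 vs pause:3.9 warning in the log above:

# In /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri"]
  # Match the sandbox image kubeadm expects (see the preflight warning).
  sandbox_image = "registry.k8s.io/pause:3.9"

[plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]
  # Placeholder endpoint; use a mirror reachable from your network.
  endpoint = ["https://your-mirror.example.com"]

Then apply the change with: sudo systemctl restart containerd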
As a temporary workaround for Docker users: manually pull the required pause image from a reachable repository, re-tag it, and then either push it to your private registry or use it locally.
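For instance (a sketch; registry.aliyuncs.com/google_containers is just one commonly used mirror path, substitute any registry you can reach):

# Pull the pause image from a reachable mirror, then re-tag it so it is
# found locally under the name kubeadm expects.
docker pull registry.aliyuncs.com/google_containers/pause:3.9
docker tag registry.aliyuncs.com/google_containers/pause:3.9 registry.k8s.io/pause:3.9

Note that when containerd is the CRI (as in the logs above), images pulled by Docker are not visible to it; the containerd equivalent is ctr -n k8s.io images pull followed by ctr -n k8s.io images tag.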
Thank you for the tips. I had the same issue, and it is solved now.
By the way, is there any other way to change the sandbox image to a local registry?
I am also wondering why k8s does not use the local image that is already downloaded.
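For what it's worth, kubeadm itself can also be pointed at a different image repository (this covers the control-plane images; containerd's own sandbox_image setting still governs which pause image the runtime uses). A sketch, with the aliyun path as an example mirror only:

# Pre-pull the control-plane images from a mirror repository...
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
# ...or pass the same repository to the init itself:
kubeadm init --image-repository registry.aliyuncs.com/google_containers

As for reusing local images: kubeadm pulls with policy IfNotPresent (visible later in this thread's logs), so an already-downloaded image is only reused when its full reference, registry prefix and tag included, exactly matches what kubeadm asks for.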
I am using:
OS: RHEL 8
Docker 24.0.0
Containerd
Kubernetes 1.29.7
I0820 01:20:15.455571 3178467 initconfiguration.go:122] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
I0820 01:20:15.455766 3178467 interface.go:432] Looking for default routes with IPv4 addresses
I0820 01:20:15.455773 3178467 interface.go:437] Default route transits interface "ens192"
I0820 01:20:15.455893 3178467 interface.go:209] Interface ens192 is up
I0820 01:20:15.455930 3178467 interface.go:257] Interface "ens192" has 1 addresses :[10.179.193.75/24].
I0820 01:20:15.455938 3178467 interface.go:224] Checking addr 10.179.193.75/24.
I0820 01:20:15.455944 3178467 interface.go:231] IP found 10.179.193.75
I0820 01:20:15.455957 3178467 interface.go:263] Found valid IPv4 address 10.179.193.75 for interface "ens192".
I0820 01:20:15.455963 3178467 interface.go:443] Found active IP 10.179.193.75
I0820 01:20:15.455987 3178467 kubelet.go:196] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
I0820 01:20:15.466329 3178467 checks.go:563] validating Kubernetes and kubeadm version
I0820 01:20:15.466365 3178467 checks.go:168] validating if the firewall is enabled and active
I0820 01:20:15.489790 3178467 checks.go:203] validating availability of port 6443
I0820 01:20:15.490423 3178467 checks.go:203] validating availability of port 10259
I0820 01:20:15.490522 3178467 checks.go:203] validating availability of port 10257
I0820 01:20:15.490630 3178467 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0820 01:20:15.490675 3178467 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0820 01:20:15.490698 3178467 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0820 01:20:15.490714 3178467 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0820 01:20:15.490733 3178467 checks.go:430] validating if the connectivity type is via proxy or direct
I0820 01:20:15.490838 3178467 checks.go:469] validating http connectivity to first IP address in the CIDR
I0820 01:20:15.490881 3178467 checks.go:469] validating http connectivity to first IP address in the CIDR
I0820 01:20:15.490909 3178467 checks.go:104] validating the container runtime
I0820 01:20:15.553517 3178467 checks.go:639] validating whether swap is enabled or not
I0820 01:20:15.553828 3178467 checks.go:370] validating the presence of executable crictl
I0820 01:20:15.553902 3178467 checks.go:370] validating the presence of executable conntrack
I0820 01:20:15.554071 3178467 checks.go:370] validating the presence of executable ip
I0820 01:20:15.554108 3178467 checks.go:370] validating the presence of executable iptables
I0820 01:20:15.554154 3178467 checks.go:370] validating the presence of executable mount
I0820 01:20:15.554327 3178467 checks.go:370] validating the presence of executable nsenter
I0820 01:20:15.554373 3178467 checks.go:370] validating the presence of executable ebtables
I0820 01:20:15.554405 3178467 checks.go:370] validating the presence of executable ethtool
I0820 01:20:15.554429 3178467 checks.go:370] validating the presence of executable socat
I0820 01:20:15.554461 3178467 checks.go:370] validating the presence of executable tc
I0820 01:20:15.554486 3178467 checks.go:370] validating the presence of executable touch
I0820 01:20:15.554520 3178467 checks.go:516] running all checks
I0820 01:20:15.568320 3178467 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
I0820 01:20:15.568659 3178467 checks.go:605] validating kubelet version
I0820 01:20:15.629837 3178467 checks.go:130] validating if the "kubelet" service is enabled and active
I0820 01:20:15.656501 3178467 checks.go:203] validating availability of port 10250
I0820 01:20:15.656905 3178467 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0820 01:20:15.656957 3178467 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0820 01:20:15.657011 3178467 checks.go:203] validating availability of port 2379
I0820 01:20:15.657060 3178467 checks.go:203] validating availability of port 2380
I0820 01:20:15.657095 3178467 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
I0820 01:20:15.657378 3178467 checks.go:828] using image pull policy: IfNotPresent
I0820 01:20:15.688177 3178467 checks.go:854] pulling: registry.k8s.io/kube-apiserver:v1.29.7
I0820 01:21:16.321413 3178467 checks.go:854] pulling: registry.k8s.io/kube-controller-manager:v1.29.7
I0820 01:22:12.879993 3178467 checks.go:854] pulling: registry.k8s.io/kube-scheduler:v1.29.7
I0820 01:22:48.931985 3178467 checks.go:854] pulling: registry.k8s.io/kube-proxy:v1.29.7
I0820 01:23:24.002036 3178467 checks.go:854] pulling: registry.k8s.io/coredns/coredns:v1.11.1
I0820 01:24:01.065061 3178467 checks.go:854] pulling: registry.k8s.io/pause:3.9
I0820 01:24:10.196047 3178467 checks.go:854] pulling: registry.k8s.io/etcd:3.5.12-0
I0820 01:25:25.392961 3178467 certs.go:112] creating a new certificate authority for ca
I0820 01:25:25.978215 3178467 certs.go:519] validating certificate period for ca certificate
I0820 01:25:26.745643 3178467 certs.go:112] creating a new certificate authority for front-proxy-ca
I0820 01:25:27.064819 3178467 certs.go:519] validating certificate period for front-proxy-ca certificate
I0820 01:25:27.483012 3178467 certs.go:112] creating a new certificate authority for etcd-ca
I0820 01:25:27.668619 3178467 certs.go:519] validating certificate period for etcd/ca certificate
I0820 01:25:29.552095 3178467 certs.go:78] creating new public/private key files for signing service account users
I0820 01:25:29.687441 3178467 kubeconfig.go:112] creating kubeconfig file for admin.conf
I0820 01:25:29.870492 3178467 kubeconfig.go:112] creating kubeconfig file for super-admin.conf
I0820 01:25:30.068247 3178467 kubeconfig.go:112] creating kubeconfig file for kubelet.conf
I0820 01:25:30.261966 3178467 kubeconfig.go:112] creating kubeconfig file for controller-manager.conf
I0820 01:25:30.538109 3178467 kubeconfig.go:112] creating kubeconfig file for scheduler.conf
I0820 01:25:30.725911 3178467 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0820 01:25:30.725940 3178467 manifests.go:102] [control-plane] getting StaticPodSpecs
I0820 01:25:30.726182 3178467 certs.go:519] validating certificate period for CA certificate
I0820 01:25:30.726240 3178467 manifests.go:128] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0820 01:25:30.726246 3178467 manifests.go:128] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0820 01:25:30.726250 3178467 manifests.go:128] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0820 01:25:30.727049 3178467 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I0820 01:25:30.727065 3178467 manifests.go:102] [control-plane] getting StaticPodSpecs
I0820 01:25:30.727239 3178467 manifests.go:128] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0820 01:25:30.727244 3178467 manifests.go:128] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0820 01:25:30.727247 3178467 manifests.go:128] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0820 01:25:30.727251 3178467 manifests.go:128] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0820 01:25:30.727254 3178467 manifests.go:128] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0820 01:25:30.729532 3178467 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0820 01:25:30.729559 3178467 manifests.go:102] [control-plane] getting StaticPodSpecs
I0820 01:25:30.729791 3178467 manifests.go:128] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0820 01:25:30.730956 3178467 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0820 01:25:30.730970 3178467 kubelet.go:68] Stopping the kubelet
I0820 01:25:30.949012 3178467 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:109
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:124
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra@v1.7.0/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra@v1.7.0/command.go:1068
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra@v1.7.0/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
runtime/proc.go:271
runtime.goexit
runtime/asm_amd64.s:1695
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:124
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra@v1.7.0/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra@v1.7.0/command.go:1068
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra@v1.7.0/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
runtime/proc.go:271
runtime.goexit
runtime/asm_amd64.s:1695
Error Logs from kubelet
I0820 01:25:31.562757 3179613 kubelet.go:402] "Kubelet is running in standalone mode, will skip"
I0820 01:25:31.564391 3179613 kubelet.go:1618] "No API server defined - no node status update"
I0820 01:25:31.563669 3179613 volume_host.go:77] "KubeClient is nil. Skip initialization of CSID"
E0820 01:25:31.639179 3179613 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down]"
I0820 01:25:31.574185 3179613 factory.go:219] Registration of the crio container factory failed:
E0820 01:25:31.571963 3179613 kubelet.go:1462] "Image garbage collection failed once. Stats init"
Hi @andersonvm, I am new to K8s (just started learning, writing from China) and hit an issue during kubeadm init; I was not sure how else to contact you, so I hope it is okay to ask here. My setup: Ubuntu 22.04, Keepalived + VIP active, haproxy active, firewall disabled. The kubelet reports:
"Unable to register node with API server" err="Post \"https://192.168.31.99:16443/api/v1/nodes\": dial tcp 192.168.31.99:16443: connect: connection refused" node="master1"
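"Connection refused" on the VIP usually means nothing is listening on 16443 at that address when the kubelet tries to register. A few hedged checks, assuming the VIP and port exactly as posted:

# Is haproxy actually listening on the VIP port?
sudo ss -lntp | grep 16443
# Is the VIP currently bound on this node?
ip addr show | grep 192.168.31.99
# Does the apiserver answer through the VIP?
curl -k https://192.168.31.99:16443/healthz
# Is the kube-apiserver container running at all?
sudo crictl ps -a | grep kube-apiserver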