Cluster information:

Kubernetes version: v1.28.4 (kubeadm v1.28.2)
Cloud being used:
Installation method: kubeadm
Host OS: Ubuntu 22.04
CNI and version:
CRI and version: containerd (socket: unix:///var/run/containerd/containerd.sock)
During initialization of the very first control-plane node (the cluster will have 3 control-plane nodes + 3 worker nodes), I'm getting these errors:
root@k8s-eu-1-control-plane-node-1:~# sudo kubeadm init --control-plane-endpoint k82-eu-1-load-balancer-dns-1:53 --upload-certs --v=8 --ignore-preflight-errors=Port-6443
I1128 17:42:29.255247 31193 initconfiguration.go:117] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
I1128 17:42:29.256565 31193 interface.go:432] Looking for default routes with IPv4 addresses
I1128 17:42:29.256574 31193 interface.go:437] Default route transits interface "eth0"
I1128 17:42:29.256762 31193 interface.go:209] Interface eth0 is up
I1128 17:42:29.256983 31193 interface.go:257] Interface "eth0" has 2 addresses :[aa.aaa.aaa.aa/19 10.0.0.30/32].
I1128 17:42:29.257001 31193 interface.go:224] Checking addr aa.aaa.aaa.aa/19.
I1128 17:42:29.257012 31193 interface.go:231] IP found aa.aaa.aaa.aa
I1128 17:42:29.257029 31193 interface.go:263] Found valid IPv4 address aa.aaa.aaa.aa for interface "eth0".
I1128 17:42:29.257036 31193 interface.go:443] Found active IP aa.aaa.aaa.aa
I1128 17:42:29.257163 31193 kubelet.go:196] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
I1128 17:42:29.267107 31193 version.go:187] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
[init] Using Kubernetes version: v1.28.4
[preflight] Running pre-flight checks
I1128 17:42:29.559049 31193 checks.go:563] validating Kubernetes and kubeadm version
I1128 17:42:29.559082 31193 checks.go:168] validating if the firewall is enabled and active
I1128 17:42:29.568760 31193 checks.go:203] validating availability of port 6443
[WARNING Port-6443]: Port 6443 is in use
I1128 17:42:29.569105 31193 checks.go:203] validating availability of port 10259
I1128 17:42:29.569133 31193 checks.go:203] validating availability of port 10257
I1128 17:42:29.569152 31193 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I1128 17:42:29.569164 31193 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I1128 17:42:29.569171 31193 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I1128 17:42:29.569179 31193 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I1128 17:42:29.569186 31193 checks.go:430] validating if the connectivity type is via proxy or direct
I1128 17:42:29.569212 31193 checks.go:469] validating http connectivity to first IP address in the CIDR
I1128 17:42:29.569229 31193 checks.go:469] validating http connectivity to first IP address in the CIDR
I1128 17:42:29.569235 31193 checks.go:104] validating the container runtime
I1128 17:42:29.627509 31193 checks.go:639] validating whether swap is enabled or not
I1128 17:42:29.628210 31193 checks.go:370] validating the presence of executable crictl
I1128 17:42:29.628302 31193 checks.go:370] validating the presence of executable conntrack
I1128 17:42:29.628330 31193 checks.go:370] validating the presence of executable ip
I1128 17:42:29.628358 31193 checks.go:370] validating the presence of executable iptables
I1128 17:42:29.628393 31193 checks.go:370] validating the presence of executable mount
I1128 17:42:29.628433 31193 checks.go:370] validating the presence of executable nsenter
I1128 17:42:29.628453 31193 checks.go:370] validating the presence of executable ebtables
I1128 17:42:29.628482 31193 checks.go:370] validating the presence of executable ethtool
I1128 17:42:29.628514 31193 checks.go:370] validating the presence of executable socat
I1128 17:42:29.628539 31193 checks.go:370] validating the presence of executable tc
I1128 17:42:29.628557 31193 checks.go:370] validating the presence of executable touch
I1128 17:42:29.628579 31193 checks.go:516] running all checks
I1128 17:42:29.645127 31193 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
I1128 17:42:29.645200 31193 checks.go:605] validating kubelet version
I1128 17:42:29.719616 31193 checks.go:130] validating if the "kubelet" service is enabled and active
I1128 17:42:29.732450 31193 checks.go:203] validating availability of port 10250
I1128 17:42:29.732572 31193 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I1128 17:42:29.732629 31193 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1128 17:42:29.732650 31193 checks.go:203] validating availability of port 2379
I1128 17:42:29.732675 31193 checks.go:203] validating availability of port 2380
I1128 17:42:29.732694 31193 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1128 17:42:29.732832 31193 checks.go:828] using image pull policy: IfNotPresent
I1128 17:42:29.797031 31193 checks.go:846] image exists: registry.k8s.io/kube-apiserver:v1.28.4
I1128 17:42:29.841337 31193 checks.go:846] image exists: registry.k8s.io/kube-controller-manager:v1.28.4
I1128 17:42:29.886504 31193 checks.go:846] image exists: registry.k8s.io/kube-scheduler:v1.28.4
I1128 17:42:29.927376 31193 checks.go:846] image exists: registry.k8s.io/kube-proxy:v1.28.4
W1128 17:42:29.971812 31193 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
I1128 17:42:30.009450 31193 checks.go:846] image exists: registry.k8s.io/pause:3.9
I1128 17:42:30.043123 31193 checks.go:846] image exists: registry.k8s.io/etcd:3.5.9-0
I1128 17:42:30.072814 31193 checks.go:846] image exists: registry.k8s.io/coredns/coredns:v1.10.1
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I1128 17:42:30.072992 31193 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I1128 17:42:30.335319 31193 certs.go:519] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k82-eu-1-load-balancer-dns-1 k8s-eu-1-control-plane-node-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 38.242.249.60]
[certs] Generating "apiserver-kubelet-client" certificate and key
I1128 17:42:30.661803 31193 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I1128 17:42:30.777524 31193 certs.go:519] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I1128 17:42:31.015301 31193 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I1128 17:42:31.154876 31193 certs.go:519] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-eu-1-control-plane-node-1 localhost] and IPs [38.242.249.60 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-eu-1-control-plane-node-1 localhost] and IPs [38.242.249.60 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I1128 17:42:31.674814 31193 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1128 17:42:31.927500 31193 kubeconfig.go:103] creating kubeconfig file for admin.conf
W1128 17:42:31.927953 31193 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
I1128 17:42:32.224910 31193 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
W1128 17:42:32.225431 31193 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I1128 17:42:32.304014 31193 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
W1128 17:42:32.304371 31193 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1128 17:42:32.491972 31193 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
W1128 17:42:32.492400 31193 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1128 17:42:32.674599 31193 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1128 17:42:32.674664 31193 manifests.go:102] [control-plane] getting StaticPodSpecs
I1128 17:42:32.675000 31193 certs.go:519] validating certificate period for CA certificate
I1128 17:42:32.675079 31193 manifests.go:128] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I1128 17:42:32.675093 31193 manifests.go:128] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I1128 17:42:32.675107 31193 manifests.go:128] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I1128 17:42:32.675113 31193 manifests.go:128] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I1128 17:42:32.675121 31193 manifests.go:128] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I1128 17:42:32.676061 31193 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1128 17:42:32.676086 31193 manifests.go:102] [control-plane] getting StaticPodSpecs
I1128 17:42:32.676281 31193 manifests.go:128] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I1128 17:42:32.676296 31193 manifests.go:128] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I1128 17:42:32.676302 31193 manifests.go:128] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I1128 17:42:32.676310 31193 manifests.go:128] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I1128 17:42:32.676321 31193 manifests.go:128] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I1128 17:42:32.676330 31193 manifests.go:128] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I1128 17:42:32.676339 31193 manifests.go:128] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I1128 17:42:32.677157 31193 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1128 17:42:32.677177 31193 manifests.go:102] [control-plane] getting StaticPodSpecs
I1128 17:42:32.677340 31193 manifests.go:128] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I1128 17:42:32.677808 31193 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I1128 17:42:32.677840 31193 kubelet.go:67] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I1128 17:42:32.917314 31193 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
I1128 17:42:32.918088 31193 loader.go:395] Config loaded from file: /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1128 17:42:32.930405 31193 round_trippers.go:463] GET https://k82-eu-1-load-balancer-dns-1:53/healthz?timeout=10s
I1128 17:42:32.930428 31193 round_trippers.go:469] Request Headers:
I1128 17:42:32.930465 31193 round_trippers.go:473] Accept: application/json, */*
I1128 17:42:32.930480 31193 round_trippers.go:473] User-Agent: kubeadm/v1.28.2 (linux/amd64) kubernetes/89a4ea3
I1128 17:42:42.939995 31193 round_trippers.go:574] Response Status: in 10009 milliseconds
I1128 17:42:42.940042 31193 round_trippers.go:577] Response Headers:
[kubelet-check] Initial timeout of 40s passed.
[... the same GET https://k82-eu-1-load-balancer-dns-1:53/healthz?timeout=10s request is retried roughly every 10 seconds, and every attempt times out after ~10000 ms with an empty Response Status, until 17:47:02 (repeated log lines omitted) ...]
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:108
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
cmd/kubeadm/app/cmd/init.go:111
github.com/spf13/cobra.(*Command).execute
vendor/github.com/spf13/cobra/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
vendor/github.com/spf13/cobra/command.go:1068
github.com/spf13/cobra.(*Command).Execute
vendor/github.com/spf13/cobra/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
cmd/kubeadm/app/kubeadm.go:50
main.main
cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:250
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1598
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
cmd/kubeadm/app/cmd/init.go:111
github.com/spf13/cobra.(*Command).execute
vendor/github.com/spf13/cobra/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
vendor/github.com/spf13/cobra/command.go:1068
github.com/spf13/cobra.(*Command).Execute
vendor/github.com/spf13/cobra/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
cmd/kubeadm/app/kubeadm.go:50
main.main
cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:250
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1598
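For reference, the endpoint kubeadm keeps polling above can also be probed by hand to see whether anything answers there (a quick manual check, not part of the guide; -k skips TLS verification since the cluster CA is self-signed):

curl -kv https://k82-eu-1-load-balancer-dns-1:53/healthz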
Output of journalctl -xeu kubelet: attached as KubeletJournalCtlOutput.txt (Google Drive link).
root@k8s-eu-1-control-plane-node-1:~# crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause
579c2b9e5d17a 7fe0e6f37db33 41 seconds ago Exited kube-apiserver 50 7d52f351045d2 kube-apiserver-k8s-eu-1-control-plane-node-1
9db9a2fe179e3 e3db313c6dbc0 16 minutes ago Running kube-scheduler 25 d55a5e9d9be56 kube-scheduler-k8s-eu-1-control-plane-node-1
d3887c919854f d058aa5ab969c 16 minutes ago Running kube-controller-manager 18 e61c1eb6a8700 kube-controller-manager-k8s-eu-1-control-plane-node-1
root@k8s-eu-1-control-plane-node-1:~#
root@k8s-eu-1-control-plane-node-1:~# crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs 579c2b9e5d17a
I1128 16:58:28.080267 1 options.go:220] external host was not specified, using 38.242.249.60
I1128 16:58:28.081342 1 server.go:148] Version: v1.28.4
I1128 16:58:28.081365 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
E1128 16:58:28.081652 1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use"
root@k8s-eu-1-control-plane-node-1:~# ps xa | grep 6443
33348 pts/0 R+ 0:00 grep --color=auto 6443
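Note that grepping ps only matches command lines containing "6443", not the socket itself; a port-based check would be something like:

# Show the process actually bound to TCP 6443, if any
# (ss also sees listeners inside host-network containers):
sudo ss -tlnp | grep ':6443'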
If I instead run a plain kubeadm init --pod-network-cidr=192.168.0.0/16 (without the --control-plane-endpoint flag), the initialization process completes fine.
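For completeness, between attempts the node presumably needs a teardown along these lines before re-running init (standard kubeadm cleanup; adjust as needed):

# Tear down the previous attempt before re-running kubeadm init.
# Note: reset also clears /etc/kubernetes/manifests, so the static pod
# manifests for haproxy/keepalived shown below must be restored afterwards.
sudo kubeadm reset -f
sudo rm -rf /etc/cni/net.d   # CNI config is not removed by reset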
Based on what is described here: https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md#keepalived-configuration
I defined the following files.

/etc/haproxy/haproxy.cfg:
# https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md#haproxy-configuration
# /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 1
    timeout http-request 10s
    timeout queue 20s
    timeout connect 5s
    timeout client 20s
    timeout server 20s
    timeout http-keep-alive 10s
    timeout check 10s

#---------------------------------------------------------------------
# apiserver frontend which proxies to the control plane nodes
#---------------------------------------------------------------------
frontend apiserver
    bind *:6445
    mode tcp
    option tcplog
    default_backend apiserverbackend

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
# https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md#bootstrap-the-cluster
backend apiserverbackend
    #option httpchk GET /healthz
    option httpchk GET /livez
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance roundrobin
    server k82-eu-1-load-balancer-dns-1 ppp.pp.ppp.pp:53
    server k82-eu-1-load-balancer-dns-2 yyy.yy.yyy.yy:53
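For reference, HAProxy can syntax-check this file before a restart (a quick sanity check, not part of the guide):

# Validate the config; haproxy exits non-zero on errors:
haproxy -c -f /etc/haproxy/haproxy.cfg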
/etc/keepalived/keepalived.conf:

# https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md#keepalived-configuration
# https://www.server-world.info/en/note?os=Ubuntu_22.04&p=keepalived&f=1
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
    enable_script_security
}

vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    authentication {
        auth_type PASS
        auth_pass 42
    }
    virtual_ipaddress {
        10.0.0.30
    }
    track_script {
        check_apiserver
    }
}
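Similarly, the keepalived 2.x builds shipped with Ubuntu 22.04 can test this config, and the VIP assignment can be verified on the interface:

# Syntax-check the config (supported by keepalived 2.x):
sudo keepalived --config-test -f /etc/keepalived/keepalived.conf
# Once keepalived runs, verify the VIP lands on eth0:
ip addr show dev eth0 | grep 10.0.0.30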
/etc/keepalived/check_apiserver.sh:

#!/bin/sh
# https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md#keepalived-configuration
# https://www.server-world.info/en/note?os=Ubuntu_22.04&p=keepalived&f=1

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

APISERVER_DEST_PORT=6445
APISERVER_VIP=10.0.0.30

curl --silent --max-time 2 --insecure https://localhost:${APISERVER_DEST_PORT}/ -o /dev/null \
    || errorExit "Error GET https://localhost:${APISERVER_DEST_PORT}/"
if ip addr | grep -q ${APISERVER_VIP}; then
    curl --silent --max-time 2 --insecure https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/ -o /dev/null \
        || errorExit "Error GET https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/"
fi
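One detail worth noting: with enable_script_security set in keepalived.conf, keepalived refuses to run check scripts that are not root-owned and non-world-writable, so the script needs permissions along these lines:

# Required by enable_script_security: root-owned, executable, not writable by others
chown root:root /etc/keepalived/check_apiserver.sh
chmod 700 /etc/keepalived/check_apiserver.sh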
/etc/kubernetes/manifests/haproxy.yaml:

# https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md#option-2-run-the-services-as-static-pods
apiVersion: v1
kind: Pod
metadata:
  name: haproxy
  namespace: kube-system
spec:
  containers:
  - image: haproxy:2.1.4
    name: haproxy
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: localhost
        path: /healthz
        port: 6445
        scheme: HTTPS
    volumeMounts:
    - mountPath: /usr/local/etc/haproxy/haproxy.cfg
      name: haproxyconf
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/haproxy/haproxy.cfg
      type: FileOrCreate
    name: haproxyconf
status: {}
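The liveness probe target of this pod can also be exercised by hand (assuming HAProxy is up locally and forwarding to a serving apiserver):

# What the kubelet's liveness probe does, manually:
curl -k https://localhost:6445/healthz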
/etc/kubernetes/manifests/keepalived.yaml:

# https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md#option-2-run-the-services-as-static-pods
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: keepalived
  namespace: kube-system
spec:
  containers:
  - image: osixia/keepalived:2.0.17
    name: keepalived
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_BROADCAST
        - NET_RAW
    volumeMounts:
    - mountPath: /usr/local/etc/keepalived/keepalived.conf
      name: config
    - mountPath: /etc/keepalived/check_apiserver.sh
      name: check
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/keepalived/keepalived.conf
    name: config
  - hostPath:
      path: /etc/keepalived/check_apiserver.sh
    name: check
status: {}
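Once the kubelet is running, a quick way to confirm it actually launched these two static pods:

crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep -E 'haproxy|keepalived'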
What am I doing wrong? How can I make the initialization process work?