Kubeadm cannot validate component configs for API groups

Cluster information:

Kubernetes version: 1.18.2
Cloud being used: bare-metal (master), Azure VMs (workers)
Installation method: kubeadm
Host OS: Ubuntu 20.04 Desktop

Hi All,

I have an error while connecting to node1 and node2 from the master. I’m new to Kubernetes; I just want to install a cluster with worker nodes, and I don’t know where the YAML is located.

Below is the issue I am facing.

Note: kubectl get nodes and kubectl get pods --all-namespaces both work fine on the master.

Kubernetes Master Node IP address: 192.168.0.13 Hostname: k8-master
Kubernetes Slave Node 1 IP address: 52.172.193.169 Hostname: k8-slave1
Kubernetes Slave Node 2 IP address: 138.91.249.9 Hostname: k8-slave2

k8-master installed in Ubuntu 20.04 desktop
k8-slave1 installed in Azure VM
k8-slave2 installed in Azure VM

When I ping from k8-master to k8-slave1 and k8-slave2, both respond.
When I ping from k8-slave1 or k8-slave2 to k8-master, there is no response.
When I ping between k8-slave1 and k8-slave2, both directions work.
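The one-way ping suggests the Azure workers cannot reach the master’s 192.168.0.13 address, which is a private LAN address that public-cloud VMs generally cannot route to. A quick check you can run from a worker (plain bash, no nc required; the IP and port are taken from the join command in this thread):

```shell
# Try to open a TCP connection to the master's API server endpoint.
# 192.168.0.13:6443 comes from the kubeadm join command below; from an
# Azure VM this private address is typically unreachable.
result=$(timeout 3 bash -c 'exec 3<>/dev/tcp/192.168.0.13/6443' 2>/dev/null \
  && echo reachable || echo unreachable)
echo "API server is $result"
```

If this prints “unreachable”, no kubeadm flag will fix the join; the workers need a network path to the master (VPN, public endpoint, or moving all nodes onto one network).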

ERROR ON MASTER NODES

  1. When I create a new token on the master, I get a “cannot validate” warning:
    kubeadm token create
    W0513 16:41:57.132489 68541 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    ft9nrq.ulwkvq7rdb006qu4
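The warning here is harmless; the last line is the new token. kubeadm tokens always have the form <6 chars>.<16 chars> drawn from [a-z0-9], which you can sanity-check in the shell (using the token printed above):

```shell
# Validate the kubeadm token format: 6 lowercase alphanumerics, a dot,
# then 16 lowercase alphanumerics.
token="ft9nrq.ulwkvq7rdb006qu4"   # token from the output above
if echo "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "valid token format"
else
  echo "invalid token format"
fi
```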

ERROR ON SLAVE 1 and SLAVE 2 NODES
2. On both workers I get the same error below when trying to join the master:

sudo kubeadm join 192.168.0.13:6443 --token mdhfj9.lu8iehx6zx3rzb1w --discovery-token-ca-cert-hash sha256:876cdf59b55ca5ef104e180c10fc56332d6a11ce3d3eeefbe21a9ff6b0f278e5
W0513 11:13:12.191904 126238 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: couldn't validate the identity of the API Server: Get https://192.168.0.13:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
To see the stack trace of this error execute with --v=5 or higher
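For reference, the --discovery-token-ca-cert-hash value in the join command is just the SHA-256 digest of the cluster CA’s DER-encoded public key (on the master that CA is /etc/kubernetes/pki/ca.crt). A self-contained sketch using a throwaway certificate, since the real CA isn’t available here:

```shell
# Generate a throwaway CA cert as a stand-in for /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null
# Hash the DER-encoded public key, which is what kubeadm compares.
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

Running the same pipeline against the real ca.crt on the master reproduces the hash that kubeadm join expects, which is useful if you suspect the hash in your join command is stale.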

I don’t know how to solve this issue; I searched Google but failed to solve it, and I don’t know how the master can communicate with slave1 and slave2.

Hi! What do you have as the container runtime?
Is the API server listening on port 6443?

 nc -z k8-master 6443 && echo 'ok'

If not, it looks like kubeadm init failed. Run it again:

 sudo kubeadm init

And look for what went wrong… e.g. I had installed containerd as the CRI and got:

Found multiple CRI sockets, please use --cri-socket to select one: /var/run/dockershim.sock, /var/run/crio/crio.sock

Selecting --cri-socket /var/run/containerd/containerd.sock returned

    [preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Port-10259]: Port 10259 is in use
	[ERROR Port-10257]: Port 10257 is in use
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists

So, reset first 🙂 which may take a couple of minutes:

 sudo kubeadm reset --kubeconfig /etc/kubernetes/admin.conf --cri-socket /var/run/containerd/containerd.sock

And finally reinitialize kubernetes:

sudo kubeadm init --config /etc/kubernetes/kubeadm-config.yaml 
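A hypothetical kubeadm-config.yaml to pass here (the endpoint, version, and CRI socket are taken from this thread; the podSubnet is an assumption, pick the one your CNI expects):

```yaml
# Hypothetical /etc/kubernetes/kubeadm-config.yaml; adjust to your cluster.
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  criSocket: /var/run/containerd/containerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
controlPlaneEndpoint: "192.168.0.13:6443"
networking:
  podSubnet: "10.244.0.0/16"   # assumption: Flannel-style subnet
```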

You’ll eventually join the nodes as explained in the kubernetes.io docs about cluster setup.

To join: on the master node:

 sudo kubeadm token create --print-join-command

Back on the worker node, copy and run the join command printed above, or try with a config file as follows:

 sudo kubeadm join --config /etc/kubernetes/kubeadm-client.conf --ignore-preflight-errors=all
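The kubeadm-client.conf referenced above would hold a JoinConfiguration. A sketch using the endpoint, token, and CA hash that appear earlier in this thread (substitute your own current values, since tokens expire):

```yaml
# Sketch of a JoinConfiguration for kubeadm join --config; values are the
# ones from this thread and will differ on your cluster.
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "192.168.0.13:6443"
    token: "mdhfj9.lu8iehx6zx3rzb1w"
    caCertHashes:
      - "sha256:876cdf59b55ca5ef104e180c10fc56332d6a11ce3d3eeefbe21a9ff6b0f278e5"
nodeRegistration:
  criSocket: /var/run/containerd/containerd.sock
```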

Related issue resolution on kubernetes/kubeadm

I am having a network configuration issue in minikube when I upgrade kubeadm and choose new configuration settings. Do I have to use --cri-socket to resolve this networking issue?

Yes, you do. Do you see the following message in the output of kubeadm reset? Incorrect DNS settings cause a non-functional cluster:

The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
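That message means the kubelet component config carries a stale clusterDNS address. The fix is to set it to the recommended value in the KubeletConfiguration section of your kubeadm config, for example:

```yaml
# Sketch of the KubeletConfiguration component config; 10.233.0.10 is the
# recommended value from the message above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
  - 10.233.0.10
```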