New installation issues

I’m new to Kubernetes. I just deployed my first node following the tutorial at https://kubernetes.io/docs/setup.
Everything goes well until systemctl restart kubelet, which fails to start. The steps I followed are described here: Installing kubeadm | Kubernetes

In /var/log/messages I’m seeing the following (the full log is at the end of the post):

systemd: Started kubelet: The Kubernetes Node Agent.
systemd: Started Kubernetes systemd probe.
kubelet: I1006 19:45:34.666064   17903 server.go:411] Version: v1.19.2
F1006 19:45:34.666565   17903 server.go:265] failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory

What did I miss?
Thank you for your help!

Cluster information:

Kubernetes version: 1.19.2-0
Cloud being used: bare-metal
Installation method: yum
Host OS: CentOS Linux release 7.8.2003 (Core), sestatus - permissive
CNI and version: I don’t recall configuring any
CRI and version: 1.13.0-0

Oct  6 19:45:34 hostname systemd: Started kubelet: The Kubernetes Node Agent.
Oct  6 19:45:34 hostname systemd: Started Kubernetes systemd probe.
Oct  6 19:45:34 hostname kubelet: I1006 19:45:34.666064   17903 server.go:411] Version: v1.19.2
Oct  6 19:45:34 hostname kubelet: F1006 19:45:34.666565   17903 server.go:265] failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory
Oct  6 19:45:34 hostname kubelet: goroutine 1 [running]:
Oct  6 19:45:34 hostname kubelet: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000010001, 0xc0001f6360, 0xb0, 0x102)
Oct  6 19:45:34 hostname kubelet: /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
Oct  6 19:45:34 hostname kubelet: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x6cf6140, 0xc000000003, 0x0, 0x0, 0xc0002748c0, 0x6b49c19, 0x9, 0x109, 0x0)
Oct  6 19:45:34 hostname kubelet: /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:945 +0x191
Oct  6 19:45:34 hostname kubelet: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x6cf6140, 0xc000000003, 0x0, 0x0, 0x1, 0xc00113fc80, 0x1, 0x1)
Oct  6 19:45:34 hostname kubelet: /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:718 +0x165
Oct  6 19:45:34 hostname kubelet: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
Oct  6 19:45:34 hostname kubelet: /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:703
Oct  6 19:45:34 hostname kubelet: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
Oct  6 19:45:34 hostname kubelet: /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1436
Oct  6 19:45:34 hostname kubelet: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc000206b00, 0xc00004e090, 0x3, 0x3)
Oct  6 19:45:34 hostname kubelet: /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:265 +0x63e
Oct  6 19:45:34 hostname kubelet: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000206b00, 0xc00004e090, 0x3, 0x3, 0xc000206b00, 0xc00004e090)
Oct  6 19:45:34 hostname kubelet: /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:846 +0x2c2
Oct  6 19:45:34 hostname kubelet: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000206b00, 0x163b7ed2699daf34, 0x6cf5c60, 0x409b05)
Oct  6 19:45:34 hostname systemd: kubelet.service: main process exited, code=exited, status=255/n/a
Oct  6 19:45:34 hostname kubelet: /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:950 +0x375
Oct  6 19:45:34 hostname kubelet: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
Oct  6 19:45:34 hostname kubelet: /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:887
Oct  6 19:45:34 hostname kubelet: main.main()
Oct  6 19:45:34 hostname kubelet: _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
Oct  6 19:45:34 hostname kubelet: goroutine 6 [chan receive]:
Oct  6 19:45:34 hostname kubelet: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x6cf6140)
Oct  6 19:45:34 hostname kubelet: /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
Oct  6 19:45:34 hostname kubelet: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
Oct  6 19:45:34 hostname kubelet: /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:416 +0xd8
Oct  6 19:45:34 hostname kubelet: goroutine 194 [select]:
Oct  6 19:45:34 hostname kubelet: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc00046a050)
Oct  6 19:45:34 hostname kubelet: /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
Oct  6 19:45:34 hostname kubelet: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
Oct  6 19:45:34 hostname kubelet: /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
Oct  6 19:45:34 hostname kubelet: goroutine 226 [select]:
Oct  6 19:45:34 hostname kubelet: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4794628, 0x4becee0, 0xc000b9c360, 0x1, 0xc0001480c0)
Oct  6 19:45:34 hostname kubelet: /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
Oct  6 19:45:34 hostname kubelet: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4794628, 0x12a05f200, 0x0, 0xc0002b5601, 0xc0001480c0)
Oct  6 19:45:34 hostname systemd: Unit kubelet.service entered failed state.
Oct  6 19:45:34 hostname kubelet: /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
Oct  6 19:45:34 hostname kubelet: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
Oct  6 19:45:34 hostname kubelet: /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
Oct  6 19:45:34 hostname kubelet: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4794628, 0x12a05f200)
Oct  6 19:45:34 hostname kubelet: /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
Oct  6 19:45:34 hostname kubelet: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
Oct  6 19:45:34 hostname kubelet: /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
Oct  6 19:45:34 hostname kubelet: goroutine 228 [syscall]:
Oct  6 19:45:34 hostname kubelet: os/signal.signal_recv(0x0)
Oct  6 19:45:34 hostname kubelet: /usr/local/go/src/runtime/sigqueue.go:147 +0x9d
Oct  6 19:45:34 hostname kubelet: os/signal.loop()
Oct  6 19:45:34 hostname kubelet: /usr/local/go/src/os/signal/signal_unix.go:23 +0x25
Oct  6 19:45:34 hostname kubelet: created by os/signal.Notify.func1.1
Oct  6 19:45:34 hostname kubelet: /usr/local/go/src/os/signal/signal.go:150 +0x45
Oct  6 19:45:34 hostname kubelet: goroutine 229 [chan receive]:
Oct  6 19:45:34 hostname kubelet: k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.SetupSignalContext.func1(0xc0002b0f50)
Oct  6 19:45:34 hostname kubelet: /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/signal.go:48 +0x36
Oct  6 19:45:34 hostname kubelet: created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.SetupSignalContext
Oct  6 19:45:34 hostname kubelet: /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/signal.go:47 +0xf3
Oct  6 19:45:34 hostname systemd: kubelet.service failed.

After running the following commands, kubelet started, but…

% sudo systemctl stop crio
% sudo kubeadm init phase certs all
% sudo kubeadm init phase kubeconfig all
% sudo kubeadm init
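
For anyone hitting the same wall: whether kubelet actually stays up after this can be checked with the usual systemd commands (nothing kubeadm-specific here):

% sudo systemctl status kubelet
% sudo journalctl -u kubelet -f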

/var/log/messages is complaining about:

Oct  6 20:38:48 hostname kubelet: W1006 20:38:48.853158   30933 cni.go:239] Unable to update cni config: no valid networks found in /etc/cni/net.d
Oct  6 20:38:50 hostname kubelet: E1006 20:38:50.670201   30933 kubelet.go:2103] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Oct  6 20:38:53 hostname kubelet: W1006 20:38:53.853773   30933 cni.go:204] Error validating CNI config list {"cniVersion":"0.3.1","name":"crio","plugins":[{"bridge":"cni0","cniVersion":"0.3.1","hairpinMode":true,"ipMasq":true,"ipam":{"ranges":[[{"subnet":"10.85.0.0/16"}],[{"subnet":"1100:200::/24"}]],"routes":[{"dst":"0.0.0.0/0"},{"dst":"1100:200::1/24"}],"type":"host-local"},"isGateway":true,"name":"crio","type":"bridge"}]}: [failed to find plugin "bridge" in path [/opt/cni/bin]]
Oct  6 20:38:53 hostname kubelet: W1006 20:38:53.853976   30933 cni.go:204] Error validating CNI config list {"cniVersion":"0.3.1","name":"","plugins":[{"cniVersion":"0.3.1","type":"loopback"}]}: [failed to find plugin "loopback" in path [/opt/cni/bin]]
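
Those two “failed to find plugin … in path [/opt/cni/bin]” errors mean the standard CNI plugin binaries (bridge, loopback, etc.) are missing on the host. A sketch of one way to install them, assuming an x86_64 host and the containernetworking plugins release that was current at the time (v0.8.7; check for a newer one):

% curl -LO https://github.com/containernetworking/plugins/releases/download/v0.8.7/cni-plugins-linux-amd64-v0.8.7.tgz
% sudo mkdir -p /opt/cni/bin
% sudo tar -xzf cni-plugins-linux-amd64-v0.8.7.tgz -C /opt/cni/bin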

Hello, @Sparky
Did you deploy any network plugin add-on in the cluster?
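
If not, deploying one usually clears the “cni config uninitialized” message. A sketch with Calico, assuming the manifest URL from the Calico docs of the time and a pod CIDR that matches what you pass to kubeadm init:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml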

Yes, it is looking better now.
I found these docs, which are better than the original tutorial.

This was solved by re-deploying the servers using: Installing kubeadm | Kubernetes
Some steps:

yum install yum-utils device-mapper-persistent-data lvm2
yum install containerd docker-ce docker-ce-cli
yum install kubelet kubeadm kubectl
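
(The kubelet, kubeadm and kubectl packages come from the Kubernetes yum repo, so /etc/yum.repos.d/kubernetes.repo has to exist first. Paraphrased from the official install page of the time; verify the exact contents against the current docs:)

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg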

create /etc/sysctl.d/k8s.conf and add
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

then sysctl --system
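
Those sysctls only take effect if the br_netfilter module is loaded; the kubeadm install docs have you load and verify it first:

modprobe br_netfilter
lsmod | grep br_netfilter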

enable firewalld and allow all connections between the master and the nodes
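
For reference, the kubeadm docs list the required ports. A firewalld sketch for the control-plane node (worker nodes instead need 10250/tcp and 30000-32767/tcp):

firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250-10252/tcp
firewall-cmd --reload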

mkdir /etc/docker

create /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}

systemctl daemon-reload
systemctl restart docker
systemctl enable docker
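
To confirm Docker actually picked up the systemd cgroup driver from daemon.json (the point of that file, since kubelet and the container runtime must agree on the driver):

docker info | grep -i cgroup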

The last step:
kubeadm init --pod-network-cidr=192.168.0.0/16
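
Once init completes, kubeadm itself prints the follow-up steps; the standard kubeconfig setup for a non-root user is:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then the network add-on (see the Calico example above) goes in before the node reports Ready.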