Kubernetes (K8s) Cluster setup - Ubuntu 22.04 : Network plugin returns error: cni plugin not initialized

Hi Team,

I am currently facing an issue with a Kubernetes cluster (1 master node, 1 worker node) set up on Ubuntu 22.04.

I am following the exact steps from this blog (How to Install Kubernetes (K8s) on Ubuntu 24.04 - HostnExtra) to set up the k8s cluster.

Step 1: Update and Upgrade the System (all nodes)

sudo apt update && sudo apt upgrade -y

Step 2: Install Docker (all nodes)

sudo apt install -y docker.io

Note: Docker is installed and status is active.
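One prerequisite worth checking at this point: with Kubernetes 1.30 the kubelet talks to containerd (pulled in as a dependency of docker.io) over CRI, not to Docker directly. Depending on the package source, the shipped `/etc/containerd/config.toml` may have the `cri` plugin disabled, and it usually has `SystemdCgroup = false`, which mismatches the kubelet's systemd cgroup driver. A hedged sketch of the usual fix, to be run on all nodes before `kubeadm init`:

```shell
# Regenerate a complete default containerd config (overwrites the packaged
# one, which depending on the source may ship with the "cri" plugin disabled)
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null

# Switch runc to the systemd cgroup driver, matching the kubelet's default
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

sudo systemctl restart containerd
```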

Step 3: Install Kubernetes Components (all nodes)

sudo mkdir -p -m 755 /etc/apt/keyrings   # this directory may not exist yet on Ubuntu 22.04
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Install the Kubernetes components:

sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Step 4: Disable Swap (all nodes)

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
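A related prerequisite that is easy to miss (these are the standard kubeadm networking prerequisites; flannel also relies on bridged traffic being visible to iptables and on IP forwarding), to be applied on all nodes:

```shell
# Load the kernel modules container networking needs, now and on every boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Let iptables see bridged traffic and enable IP forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```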

Step 5: Initialize the Master Node (Master node)

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Output:

kubeadm join 10.x.x.x:6443 --token z8gd9n.abccd \
        --discovery-token-ca-cert-hash sha256:9967fc499fca61705d206axxx

Step 6: Configure kubectl for the Master Node (Master node - Non root user)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 7: Install a Pod Network Add-on on Master node (Master node)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
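After applying the manifest, the flannel DaemonSet pods should reach Running in the kube-flannel namespace, and flannel writes a CNI config file on each node once its init containers succeed. A quick check (assumes kubectl is configured as in Step 6):

```shell
# The flannel pods should go Running on every node
kubectl -n kube-flannel get pods -o wide

# If the CNI was initialized, this directory is no longer empty
ls -l /etc/cni/net.d/
```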

Step 8: Join Worker Nodes to the Cluster (Worker node)

kubeadm join 10.x.x.x:6443 --token z8gd9n.abccd \
        --discovery-token-ca-cert-hash sha256:9967fc499fca61705d206axxx

Step 9: Verify the Cluster (Master node)

kubectl get nodes

Output:

NAME     STATUS     ROLES           AGE   VERSION
pocmst   NotReady   control-plane   86m   v1.30.6
pocwrk   NotReady   <none>          82m   v1.30.6

Note: both nodes are in NotReady state.

To troubleshoot the NotReady state, I followed the procedure below.

Step 1: kubectl describe node pocmst

Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Tue, 19 Nov 2024 13:20:26 +0000   Tue, 19 Nov 2024 11:48:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 19 Nov 2024 13:20:26 +0000   Tue, 19 Nov 2024 11:48:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 19 Nov 2024 13:20:26 +0000   Tue, 19 Nov 2024 11:48:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Tue, 19 Nov 2024 13:20:26 +0000   Tue, 19 Nov 2024 11:48:04 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized

The key line is: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized

Step 2: journalctl -u kubelet

 E1119 13:27:42.392914    2139 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.6.0-flannel1\\\"\"" pod="kube-flannel/kube-flannel-ds-hlsgr" podUID="b3d8f1d3-2a77-448f-b2df-3ee5ebe629d4"
Nov 19 13:27:46 wso2dmigpocmstr kubelet[2139]: E1119 13:27:46.078441    2139 kubelet.go:2920] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"

Step 3:

kubectl get pod -A

NAMESPACE      NAME                                      READY   STATUS                  RESTARTS       AGE
kube-flannel   kube-flannel-ds-hlsgr                     0/1     Init:ImagePullBackOff   0              100m
kube-flannel   kube-flannel-ds-mvdc2                     0/1     Init:ImagePullBackOff   0              98m
kube-system    coredns-55cb58b774-7l6gm                  0/1     Pending                 0              102m
kube-system    coredns-55cb58b774-7lxtq                  0/1     Pending                 0              102m
kube-system    etcd-pocmstr                              1/1     Running                 39 (54m ago)   102m
kube-system    kube-apiserver-pocmstr                    1/1     Running                 1 (54m ago)    102m
kube-system    kube-controller-manager-pocmstr           1/1     Running                 11 (54m ago)   102m
kube-system    kube-proxy-k5x2x                          1/1     Running                 1 (54m ago)    102m
kube-system    kube-proxy-w2qwz                          0/1     ImagePullBackOff        0              98m

Are there any steps I am missing while setting up the k8s cluster? Any suggestions to resolve this issue, so that all nodes in the cluster reach the Ready state?

Flannel is your network CNI, and from the pod status above the image pulls are failing in the flannel pods (Init:ImagePullBackOff).

  1. Check whether the image registry is correct: kubectl describe pod kube-flannel-ds-hlsgr -n kube-flannel
    (note the pods are in the kube-flannel namespace, not kube-system). Then try pulling the image manually on the node, using the image reference shown by kubectl describe.
  2. Check the PROXY and NO_PROXY environment variables, and make sure image pull requests are forwarded to the correct network:
    export HTTP_PROXY=…
    export HTTPS_PROXY=…
    export NO_PROXY=…
  3. Your coredns pods are Pending; check those with kubectl describe as well. CoreDNS normally stays Pending until the CNI is working, so fix flannel first.
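One caveat on pulling manually: since the runtime here is containerd, `docker pull` puts the image into Docker's own store, which the kubelet never sees. If you pre-pull on a node, do it in containerd's `k8s.io` namespace with `ctr`, or via the CRI socket with `crictl` (the image tag below is taken from the kubelet log above):

```shell
# Pull directly into the containerd image store the kubelet uses
sudo ctr -n k8s.io images pull docker.io/flannel/flannel-cni-plugin:v1.6.0-flannel1

# Or with crictl, which goes through the CRI socket
sudo crictl pull docker.io/flannel/flannel-cni-plugin:v1.6.0-flannel1
```

If these pulls also fail, that confirms a network/proxy problem on the node rather than a Kubernetes misconfiguration.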