Building a Kubernetes 1.24 Cluster with kubeadm

This lab walks you through the process of building a new Kubernetes cluster with three nodes (1 control plane node and 2 worker nodes).

Cluster information:

Kubernetes version: 1.24
Cloud being used: public cloud (AWS)
Installation method: kubeadm
Host OS: Ubuntu 20.04.5 LTS (Focal Fossa)


Log in to the lab server using the credentials provided:
ssh <username>@<PUBLIC_IP_ADDRESS>

Install Packages

  1. Log in to the control plane node. (Note: The following steps in this section must be performed on all three nodes.)
  2. Create the configuration file for containerd:

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

  3. Load the modules:

sudo modprobe overlay
sudo modprobe br_netfilter

  4. Set system configurations for Kubernetes networking:

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

  5. Apply the new settings:

sudo sysctl --system
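To confirm the settings took effect, you can read one of them back from /proc (on the lab nodes it should print 1; the two bridge settings appear under /proc/sys/net/bridge/ once the br_netfilter module is loaded):

```shell
# Read back one of the settings applied above; on the lab nodes
# this should print 1.
cat /proc/sys/net/ipv4/ip_forward
```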

  6. Install containerd:

sudo apt-get update && sudo apt-get install -y containerd

  7. Create a directory for the containerd configuration file:

sudo mkdir -p /etc/containerd

  8. Generate the default containerd configuration and save it to the newly created file:

sudo containerd config default | sudo tee /etc/containerd/config.toml
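One tweak often made at this point (it is not part of the original lab steps) is switching containerd's runc runtime to the systemd cgroup driver, which matches the cgroup driver kubelet defaults to on Ubuntu. A sketch, demonstrated against a scratch copy of the file so it can be run anywhere; on the lab nodes you would run the sed against /etc/containerd/config.toml with sudo and then restart containerd:

```shell
# Work on a scratch copy; fall back to a minimal stand-in line if
# /etc/containerd/config.toml does not exist on this machine.
cp /etc/containerd/config.toml /tmp/config.toml 2>/dev/null ||
  echo '            SystemdCgroup = false' > /tmp/config.toml
# Flip the runc SystemdCgroup option from false to true.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /tmp/config.toml
grep SystemdCgroup /tmp/config.toml
```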

  9. Restart containerd to ensure it uses the new configuration:

sudo systemctl restart containerd

  10. Verify that containerd is running:

sudo systemctl status containerd

  11. Disable swap:

sudo swapoff -a
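Note that swapoff -a only disables swap until the next reboot; to keep it off permanently you would also comment out any swap entries in /etc/fstab. A sketch of that edit, demonstrated on a scratch copy (the sample fstab lines below are illustrative); on the lab nodes the sed would target /etc/fstab and be run with sudo:

```shell
# Create an illustrative fstab with one swap entry.
printf '/dev/sda1 / ext4 defaults 0 1\n/swap.img none swap sw 0 0\n' > /tmp/fstab
# Comment out any line whose filesystem type is swap.
sed -ri '/\sswap\s/ s/^/#/' /tmp/fstab
cat /tmp/fstab
```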

  12. Install dependency packages:

sudo apt-get update && sudo apt-get install -y apt-transport-https curl

  13. Download and add the GPG key:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

  14. Add Kubernetes to the repository list:

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

  15. Update package listings:

sudo apt-get update

  16. Install the Kubernetes packages (Note: If you get a dpkg lock message, just wait a minute or two and try the command again):

sudo apt-get install -y kubelet=1.24.0-00 kubeadm=1.24.0-00 kubectl=1.24.0-00

  17. Turn off automatic updates:

sudo apt-mark hold kubelet kubeadm kubectl

  18. Log in to both worker nodes and perform the previous steps.

Initialize the Cluster

  1. Initialize the Kubernetes cluster on the control plane node using kubeadm (Note: This step is performed only on the control plane node):

sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version 1.24.0

  2. Set kubectl access:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

  3. Test access to the cluster (Note: The node will show a NotReady status until a network add-on is installed):

kubectl get nodes

Install the Calico Network Add-On

  1. On the control plane node, install Calico networking:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

  2. Check the status of the control plane node:

kubectl get nodes

Join the Worker Nodes to the Cluster

  1. On the control plane node, create the token and copy the kubeadm join command (Note: The join command can also be found in the output of the kubeadm init command):

kubeadm token create --print-join-command

  2. On both worker nodes, paste the kubeadm join command to join the cluster. Use sudo to run it as root:

sudo kubeadm join ...
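The full command printed in the previous step has this general shape (the values below are placeholders, not actual lab values; use the exact command printed by kubeadm token create --print-join-command):

```shell
sudo kubeadm join <CONTROL_PLANE_IP>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```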

  3. On the control plane node, view the cluster status (Note: You may have to wait a few moments for all nodes to become ready):

kubectl get nodes
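When everything has joined successfully, the output should show all three nodes with a Ready status. The snippet below is one way to count Ready nodes from that output (the sample output and node names are illustrative, not actual lab values):

```shell
# Illustrative `kubectl get nodes` output; real node names will differ.
sample='k8s-control   Ready   control-plane   10m   v1.24.0
k8s-worker1   Ready   <none>          2m    v1.24.0
k8s-worker2   Ready   <none>          2m    v1.24.0'
# Count rows whose STATUS column reads Ready; expect 3.
echo "$sample" | awk '$2 == "Ready"' | wc -l
```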


Congratulations — you’ve completed this hands-on lab!