Crashing cluster after kubeadm on VPS


Whenever I create my cluster and deploy Flannel, it works for maybe ~5 minutes. When I check the cluster with kubectl cluster-info, it shows the correct information. After a while, it simply shows:

The connection to the server <server_ip>:6443 was refused - did you specify the right host or port?
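When that happens, it helps to tell apart "API server process is gone" from "network path is broken". A quick sketch, run on the control-plane node itself (assumes ss from iproute2 is installed):

```shell
# If nothing is listening on 6443 locally, the apiserver container itself
# has died; if the port is open but kubectl is refused remotely, the
# problem is on the network path instead.
if ss -tln | grep -q ':6443'; then
  echo "apiserver port open"
else
  echo "apiserver port closed"
fi
```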

Cluster information:

Kubernetes version: 1.27.2
Cloud being used: VPS
Installation method: kubeadm init, directly from the Installing kubeadm | Kubernetes page
Host OS: Ubuntu 22.04.2 LTS
CNI and version: Flannel 0.22.0
CRI and version: containerd 1.6.21

What I’ve tried

To start off, I followed this tutorial, but I didn’t disable swap (step 3). The CRI part of my containerd config looks like this:

    stream_server_address = ""
    stream_server_port = "0"
    systemd_cgroup = true

      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
      ip_pref = ""
      max_conf_num = 1
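One thing worth double-checking here: on containerd 1.6 the systemd cgroup driver is normally enabled not with the top-level systemd_cgroup key (which is deprecated for the runc v2 runtime), but with SystemdCgroup under the runc runtime options. A minimal sketch of that section, assuming the default /etc/containerd/config.toml layout:

```toml
# /etc/containerd/config.toml (sketch; only the relevant section shown)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
```

If the kubelet uses the systemd cgroup driver (the kubeadm default since 1.22) while containerd effectively runs with cgroupfs, control-plane pods can die after a few minutes, which matches the symptom described above.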

When everything was ready, I initialized my cluster with sudo kubeadm init --pod-network-cidr= --apiserver-advertise-address=<server_ip>. This didn’t give any errors.
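Since swap was left on: by default the kubelet runs with failSwapOn=true and exits when swap is active, which can surface as exactly this kind of delayed node failure. A sketch of the usual way to disable it (the sed pattern assumes a standard /etc/fstab swap entry):

```shell
# Turn swap off now, and comment out the fstab entry so it stays off
# after a reboot
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```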


  • What causes this? Is my network configuration off?
  • What step am I missing?
  • For checking logs, which pod could be causing this? Mostly it’s the kube-proxy pod that gets an error, but I see no concrete information other than dial tcp <server_ip>:6443: connect: connection refused at the end of every statement.