I installed the required packages via apt:
apt install kubelet kubeadm kubectl containerd docker.io cri-tools
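(This assumes the upstream Kubernetes apt repository, pkgs.k8s.io, was set up first, roughly as the official install docs describe; keyring path may differ on your system:)
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' > /etc/apt/sources.list.d/kubernetes.list
apt update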
Then I turned swap off via swapoff -a
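(To keep swap off across reboots, the swap entry in /etc/fstab can also be commented out, along these lines; the sed pattern depends on the fstab layout:)
sed -i '/\sswap\s/ s/^/#/' /etc/fstab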
Then I copied admin.conf to $HOME/.kube/config.
By the way, after installation kubeadm init finishes with the familiar success message:
root@debian:/etc/kubernetes# kubeadm init --apiserver-advertise-address=192.168.1.106
[init] Using Kubernetes version: v1.31.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W1202 15:45:46.780702 120912 checks.go:846] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.k8s.io/pause:3.10" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.72812ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 4.003655474s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node debian as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node debian as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: zbhqh6.ocis2anl7lo26jzy
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.106:6443 --token zbhqh6.ocis2anl7lo26jzy \
--discovery-token-ca-cert-hash sha256:450de4fb22a2dab526949e904dad8353f945b09164923278117ee88cdb4eb730
root@debian:/etc/kubernetes#
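About the sandbox-image warning in the output above: containerd pins its own pause image in /etc/containerd/config.toml, so aligning it with what kubeadm expects would look roughly like this (a sketch for containerd 1.x-style config; the plugin key names changed in containerd 2.x). I am also showing SystemdCgroup because a kubelet/containerd cgroup-driver mismatch is a commonly cited cause of control-plane containers restarting:
# /etc/containerd/config.toml (relevant excerpt only)
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.10"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
Then restart the runtime:
systemctl restart containerd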
I have the following images:
root@debian:~# crictl images
WARN[0000] image connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
IMAGE TAG IMAGE ID SIZE
registry.k8s.io/coredns/coredns v1.11.3 c69fa2e9cbf5f 18.6MB
registry.k8s.io/etcd 3.5.15-0 2e96e5913fc06 56.9MB
registry.k8s.io/kube-apiserver v1.31.3 f48c085d70203 28MB
registry.k8s.io/kube-controller-manager v1.31.3 b2a5ab7b1d92e 26.1MB
registry.k8s.io/kube-proxy v1.31.3 9c4bd20bd3676 30.2MB
registry.k8s.io/kube-scheduler v1.31.3 bab83bb0895ef 20.1MB
registry.k8s.io/pause 3.10 873ed75102791 320kB
registry.k8s.io/pause 3.8 4873874c08efc 311kB
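As an aside, the repeated endpoint warnings can be silenced by pinning crictl to the containerd socket in /etc/crictl.yaml (assuming containerd is the runtime in use):
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock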
I also stopped and disabled AppArmor via:
systemctl stop apparmor
systemctl disable apparmor
But here is my problem. See the following repeated runs of crictl ps and how the set of running containers changes between them:
root@debian:~# crictl ps
WARN[0000] runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
WARN[0000] image connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
d772e204b7393 f48c085d70203 5 seconds ago Running kube-apiserver 74 979de99851915 kube-apiserver-debian
836675095007f b2a5ab7b1d92e 49 seconds ago Running kube-controller-manager 79 6093de386dcdb kube-controller-manager-debian
b49d5a2e4f6db 9c4bd20bd3676 About a minute ago Running kube-proxy 4 7a9a3a6acf57c kube-proxy-c6xcx
410f91f480175 2e96e5913fc06 3 minutes ago Running etcd 79 138ae42bfe66f etcd-debian
root@debian:~# crictl ps
WARN[0000] runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
WARN[0000] image connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
d772e204b7393 f48c085d70203 13 seconds ago Running kube-apiserver 74 979de99851915 kube-apiserver-debian
836675095007f b2a5ab7b1d92e 57 seconds ago Running kube-controller-manager 79 6093de386dcdb kube-controller-manager-debian
root@debian:~# crictl ps
WARN[0000] runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
WARN[0000] image connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
1fd76dfe743c5 b2a5ab7b1d92e 2 minutes ago Running kube-controller-manager 96 314ae87bde2a1 kube-controller-manager-debian
root@debian:~# crictl ps
WARN[0000] runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
WARN[0000] image connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
1fd76dfe743c5 b2a5ab7b1d92e 2 minutes ago Running kube-controller-manager 96 314ae87bde2a1 kube-controller-manager-debian
root@debian:~#
Unfortunately, containers sometimes come up and then go down again with no discernible pattern (note the climbing ATTEMPT counts in the output above).
I don't know how to solve this problem.
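What should I check next? So far I have mainly looked at crictl ps; I assume the kubelet journal and the logs of the exited containers are the place to start (the container ID below is just one taken from the output above):
journalctl -u kubelet --no-pager | tail -n 100
crictl ps -a
crictl logs 1fd76dfe743c5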
Cluster information:
root@debian:~# kubectl cluster-info
Kubernetes control plane is running at https://192.168.1.106:6443
CoreDNS is running at https://192.168.1.106:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Kubernetes version:
1.31
Installation method:
kubeadm
Host OS:
Debian SID