Containers keep going down and coming back up without any manual intervention

I installed the packages via apt:
apt install kubelet kubeadm kubectl containerd docker.io cri-tools
Then I turned off swap with swapoff -a.
Then I copied admin.conf to $HOME/.kube/config.
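
For reference, this is the host preparation I believe I did, condensed into one place. The fstab edit is only my assumption about how swap is normally kept off across reboots; the kubeconfig commands are the ones recommended by kubeadm itself:

apt install kubelet kubeadm kubectl containerd docker.io cri-tools
swapoff -a                              # turn swap off for the current boot
sed -i '/\sswap\s/ s/^/#/' /etc/fstab   # assumption: comment out swap entries so it stays off after a reboot
mkdir -p $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
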
By the way, after installation I got the following familiar output from kubeadm init:

root@debian:/etc/kubernetes# kubeadm init --apiserver-advertise-address=192.168.1.106
[init] Using Kubernetes version: v1.31.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W1202 15:45:46.780702  120912 checks.go:846] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.k8s.io/pause:3.10" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.72812ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 4.003655474s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node debian as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node debian as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: zbhqh6.ocis2anl7lo26jzy
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.106:6443 --token zbhqh6.ocis2anl7lo26jzy \
	--discovery-token-ca-cert-hash sha256:450de4fb22a2dab526949e904dad8353f945b09164923278117ee88cdb4eb730 
root@debian:/etc/kubernetes# 
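
For what it is worth, my understanding of the "deploy a pod network" step above is that it means applying one CNI manifest from the linked addons page, for example Flannel (the URL is the one from the Flannel project README; I have not confirmed which CNI is the right choice for my setup):

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml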

I have the following images:

root@debian:~# crictl images
WARN[0000] image connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
IMAGE                                     TAG                 IMAGE ID            SIZE
registry.k8s.io/coredns/coredns           v1.11.3             c69fa2e9cbf5f       18.6MB
registry.k8s.io/etcd                      3.5.15-0            2e96e5913fc06       56.9MB
registry.k8s.io/kube-apiserver            v1.31.3             f48c085d70203       28MB
registry.k8s.io/kube-controller-manager   v1.31.3             b2a5ab7b1d92e       26.1MB
registry.k8s.io/kube-proxy                v1.31.3             9c4bd20bd3676       30.2MB
registry.k8s.io/kube-scheduler            v1.31.3             bab83bb0895ef       20.1MB
registry.k8s.io/pause                     3.10                873ed75102791       320kB
registry.k8s.io/pause                     3.8                 4873874c08efc       311kB
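
As a side note, I assume the deprecation warnings about default endpoints can be avoided by pointing crictl at the containerd socket explicitly, roughly like this (the socket path is the containerd default and may differ):

cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF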

I stopped and disabled AppArmor via:

systemctl stop apparmor
systemctl disable apparmor
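
To double-check the state after a reboot, I assume the following is enough (aa-status comes from the apparmor package):

systemctl is-active apparmor   # expected to print "inactive" once the unit is stopped
aa-status                      # lists any AppArmor profiles still loaded in the kernel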

But here is my problem:

See the following commands and their outputs:

root@debian:~# crictl ps
WARN[0000] runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
WARN[0000] image connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
d772e204b7393       f48c085d70203       5 seconds ago        Running             kube-apiserver            74                  979de99851915       kube-apiserver-debian
836675095007f       b2a5ab7b1d92e       49 seconds ago       Running             kube-controller-manager   79                  6093de386dcdb       kube-controller-manager-debian
b49d5a2e4f6db       9c4bd20bd3676       About a minute ago   Running             kube-proxy                4                   7a9a3a6acf57c       kube-proxy-c6xcx
410f91f480175       2e96e5913fc06       3 minutes ago        Running             etcd                      79                  138ae42bfe66f       etcd-debian
root@debian:~# crictl ps
WARN[0000] runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
WARN[0000] image connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
d772e204b7393       f48c085d70203       13 seconds ago      Running             kube-apiserver            74                  979de99851915       kube-apiserver-debian
836675095007f       b2a5ab7b1d92e       57 seconds ago      Running             kube-controller-manager   79                  6093de386dcdb       kube-controller-manager-debian
root@debian:~# crictl ps
WARN[0000] runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
WARN[0000] image connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
1fd76dfe743c5       b2a5ab7b1d92e       2 minutes ago       Running             kube-controller-manager   96                  314ae87bde2a1       kube-controller-manager-debian
root@debian:~# crictl ps
WARN[0000] runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
WARN[0000] image connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
1fd76dfe743c5       b2a5ab7b1d92e       2 minutes ago       Running             kube-controller-manager   96                  314ae87bde2a1       kube-controller-manager-debian
root@debian:~#

Unfortunately, a container is sometimes up and then goes down again, with no apparent difference between runs,
and I don't know how to solve this problem.
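
The only extra detail I know how to collect is the history of exited containers and their exit codes, roughly like this (the container ID is a placeholder):

crictl ps -a --state exited            # also shows containers that have already died, with their ATTEMPT counts
crictl inspect <container-id>          # placeholder ID; the output includes exitCode and reason
crictl logs --tail 50 <container-id>   # last log lines of that container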

Cluster information:

root@debian:~# kubectl cluster-info 
Kubernetes control plane is running at https://192.168.1.106:6443
CoreDNS is running at https://192.168.1.106:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Kubernetes version:

1.31

Installation method:

kubeadm

Host OS:

Debian SID

Can you check the container logs, which might contain information about the issue?

Since the API server is down, every command that tries to get information from it fails with connection refused.

Here is my syslog:

2024-12-03T07:47:03.647884+03:30 debian kubelet[129072]: E1203 07:47:03.647634  129072 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
2024-12-03T07:47:05.048081+03:30 debian rtkit-daemon[1885]: Supervising 8 threads of 5 processes of 1 users.
2024-12-03T07:47:05.048484+03:30 debian rtkit-daemon[1885]: Supervising 8 threads of 5 processes of 1 users.
2024-12-03T07:47:05.194731+03:30 debian kubelet[129072]: E1203 07:47:05.193561  129072 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.1.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/debian?timeout=10s\": dial tcp 192.168.1.106:6443: connect: connection refused" interval="7s"
2024-12-03T07:47:05.442510+03:30 debian kubelet[129072]: I1203 07:47:05.442221  129072 scope.go:117] "RemoveContainer" containerID="b562a728bb7276e8a30e80e985d3ea2e4b6d9510ef3cf3ab3b92e81d4544b151"
2024-12-03T07:47:05.442873+03:30 debian kubelet[129072]: E1203 07:47:05.442704  129072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=etcd pod=etcd-debian_kube-system(5c9eb61296e3d3bcba41ed75ca1e42de)\"" pod="kube-system/etcd-debian" podUID="5c9eb61296e3d3bcba41ed75ca1e42de"
2024-12-03T07:47:05.451767+03:30 debian kubelet[129072]: I1203 07:47:05.451592  129072 scope.go:117] "RemoveContainer" containerID="995ccf22b813b294ea51d31abde964d6e5e38ca3931ddfa2f0a490abc282859b"
2024-12-03T07:47:05.452325+03:30 debian kubelet[129072]: E1203 07:47:05.452060  129072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-c6xcx_kube-system(fc5aa814-67b5-4c7f-a888-95502f831eaa)\"" pod="kube-system/kube-proxy-c6xcx" podUID="fc5aa814-67b5-4c7f-a888-95502f831eaa"
2024-12-03T07:47:06.423919+03:30 debian kubelet[129072]: I1203 07:47:06.422599  129072 scope.go:117] "RemoveContainer" containerID="ebbb1f43b05af258721235acf5ea09f028e84837b0f49d58ea8bd5832cc6416e"
2024-12-03T07:47:06.424187+03:30 debian kubelet[129072]: E1203 07:47:06.422982  129072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-debian_kube-system(8fe25ee6e12bfd0a219531254c7bbe9f)\"" pod="kube-system/kube-apiserver-debian" podUID="8fe25ee6e12bfd0a219531254c7bbe9f"
2024-12-03T07:47:08.651005+03:30 debian kubelet[129072]: E1203 07:47:08.649852  129072 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
2024-12-03T07:47:11.380278+03:30 debian kubelet[129072]: I1203 07:47:11.378835  129072 status_manager.go:851] "Failed to get status for pod" podUID="5c9eb61296e3d3bcba41ed75ca1e42de" pod="kube-system/etcd-debian" err="Get \"https://192.168.1.106:6443/api/v1/namespaces/kube-system/pods/etcd-debian\": dial tcp 192.168.1.106:6443: connect: connection refused"
2024-12-03T07:47:11.380665+03:30 debian kubelet[129072]: I1203 07:47:11.379596  129072 status_manager.go:851] "Failed to get status for pod" podUID="8fe25ee6e12bfd0a219531254c7bbe9f" pod="kube-system/kube-apiserver-debian" err="Get \"https://192.168.1.106:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-debian\": dial tcp 192.168.1.106:6443: connect: connection refused"
2024-12-03T07:47:11.380959+03:30 debian kubelet[129072]: I1203 07:47:11.380224  129072 status_manager.go:851] "Failed to get status for pod" podUID="04054150d846e1a0398c77bf4254644d" pod="kube-system/kube-controller-manager-debian" err="Get \"https://192.168.1.106:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-debian\": dial tcp 192.168.1.106:6443: connect: connection refused"
2024-12-03T07:47:11.381163+03:30 debian kubelet[129072]: I1203 07:47:11.380888  129072 status_manager.go:851] "Failed to get status for pod" podUID="44c0fcdfd3d72e33ef564e2cb876a4e7" pod="kube-system/kube-scheduler-debian" err="Get \"https://192.168.1.106:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-debian\": dial tcp 192.168.1.106:6443: connect: connection refused"
2024-12-03T07:47:11.381957+03:30 debian kubelet[129072]: I1203 07:47:11.381667  129072 status_manager.go:851] "Failed to get status for pod" podUID="fc5aa814-67b5-4c7f-a888-95502f831eaa" pod="kube-system/kube-proxy-c6xcx" err="Get \"https://192.168.1.106:6443/api/v1/namespaces/kube-system/pods/kube-proxy-c6xcx\": dial tcp 192.168.1.106:6443: connect: connection refused"
2024-12-03T07:47:12.195054+03:30 debian kubelet[129072]: E1203 07:47:12.194807  129072 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.1.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/debian?timeout=10s\": dial tcp 192.168.1.106:6443: connect: connection refused" interval="7s"
2024-12-03T07:47:13.652541+03:30 debian kubelet[129072]: E1203 07:47:13.651427  129072 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
2024-12-03T07:47:18.378972+03:30 debian kubelet[129072]: I1203 07:47:18.377828  129072 scope.go:117] "RemoveContainer" containerID="ebbb1f43b05af258721235acf5ea09f028e84837b0f49d58ea8bd5832cc6416e"
2024-12-03T07:47:18.379546+03:30 debian kubelet[129072]: E1203 07:47:18.378201  129072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-debian_kube-system(8fe25ee6e12bfd0a219531254c7bbe9f)\"" pod="kube-system/kube-apiserver-debian" podUID="8fe25ee6e12bfd0a219531254c7bbe9f"
2024-12-03T07:47:18.653998+03:30 debian kubelet[129072]: E1203 07:47:18.653621  129072 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
2024-12-03T07:47:19.196733+03:30 debian kubelet[129072]: E1203 07:47:19.196479  129072 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.1.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/debian?timeout=10s\": dial tcp 192.168.1.106:6443: connect: connection refused" interval="7s"
2024-12-03T07:47:19.380093+03:30 debian kubelet[129072]: I1203 07:47:19.378762  129072 scope.go:117] "RemoveContainer" containerID="b562a728bb7276e8a30e80e985d3ea2e4b6d9510ef3cf3ab3b92e81d4544b151"
2024-12-03T07:47:19.380474+03:30 debian kubelet[129072]: E1203 07:47:19.379320  129072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=etcd pod=etcd-debian_kube-system(5c9eb61296e3d3bcba41ed75ca1e42de)\"" pod="kube-system/etcd-debian" podUID="5c9eb61296e3d3bcba41ed75ca1e42de"
2024-12-03T07:47:20.418750+03:30 debian kubelet[129072]: I1203 07:47:20.417532  129072 scope.go:117] "RemoveContainer" containerID="995ccf22b813b294ea51d31abde964d6e5e38ca3931ddfa2f0a490abc282859b"
2024-12-03T07:47:20.419055+03:30 debian kubelet[129072]: E1203 07:47:20.417965  129072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-c6xcx_kube-system(fc5aa814-67b5-4c7f-a888-95502f831eaa)\"" pod="kube-system/kube-proxy-c6xcx" podUID="fc5aa814-67b5-4c7f-a888-95502f831eaa"
2024-12-03T07:47:21.378935+03:30 debian kubelet[129072]: I1203 07:47:21.378650  129072 status_manager.go:851] "Failed to get status for pod" podUID="8fe25ee6e12bfd0a219531254c7bbe9f" pod="kube-system/kube-apiserver-debian" err="Get \"https://192.168.1.106:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-debian\": dial tcp 192.168.1.106:6443: connect: connection refused"
2024-12-03T07:47:21.379675+03:30 debian kubelet[129072]: I1203 07:47:21.379494  129072 status_manager.go:851] "Failed to get status for pod" podUID="04054150d846e1a0398c77bf4254644d" pod="kube-system/kube-controller-manager-debian" err="Get \"https://192.168.1.106:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-debian\": dial tcp 192.168.1.106:6443: connect: connection refused"
2024-12-03T07:47:21.380338+03:30 debian kubelet[129072]: I1203 07:47:21.380165  129072 status_manager.go:851] "Failed to get status for pod" podUID="44c0fcdfd3d72e33ef564e2cb876a4e7" pod="kube-system/kube-scheduler-debian" err="Get \"https://192.168.1.106:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-debian\": dial tcp 192.168.1.106:6443: connect: connection refused"
2024-12-03T07:47:21.381164+03:30 debian kubelet[129072]: I1203 07:47:21.380958  129072 status_manager.go:851] "Failed to get status for pod" podUID="fc5aa814-67b5-4c7f-a888-95502f831eaa" pod="kube-system/kube-proxy-c6xcx" err="Get \"https://192.168.1.106:6443/api/v1/namespaces/kube-system/pods/kube-proxy-c6xcx\": dial tcp 192.168.1.106:6443: connect: connection refused"
2024-12-03T07:47:21.381782+03:30 debian kubelet[129072]: I1203 07:47:21.381620  129072 status_manager.go:851] "Failed to get status for pod" podUID="5c9eb61296e3d3bcba41ed75ca1e42de" pod="kube-system/etcd-debian" err="Get \"https://192.168.1.106:6443/api/v1/namespaces/kube-system/pods/etcd-debian\": dial tcp 192.168.1.106:6443: connect: connection refused"
2024-12-03T07:47:23.657268+03:30 debian kubelet[129072]: E1203 07:47:23.655582  129072 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
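
If more detail is needed, this is how I would try to pull the last logs of the crash-looping etcd and kube-apiserver containers referenced in the syslog above, plus the kubelet and containerd journals (assuming the containerd endpoint from earlier; the time window is arbitrary):

crictl ps -a --name etcd -q | head -n 1 | xargs -r crictl logs --tail 50
crictl ps -a --name kube-apiserver -q | head -n 1 | xargs -r crictl logs --tail 50
journalctl -u kubelet -u containerd --since "15 minutes ago" --no-pager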