Kube-apiserver container is down

I initialized a cluster like this:

kubeadm init --control-plane-endpoint "192.168.0.120:6443" --upload-certs --apiserver-cert-extra-sans=192.168.0.120  --apiserver-advertise-address=192.168.0.120  --pod-network-cidr=192.168.0.0/16  --v=5  --cri-socket=/run/containerd/containerd.sock
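For reference, this is how I check whether the control plane actually came up right after init (assuming kubeadm's default static pod manifest directory):

ls /etc/kubernetes/manifests/
curl -k https://192.168.0.120:6443/healthz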

My iptables rules:

# iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  anywhere            !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
MASQUERADE  all  --  172.17.0.0/16        anywhere            
MASQUERADE  all  --  172.23.0.0/16        anywhere            
KUBE-POSTROUTING  all  --  anywhere             anywhere             /* kubernetes postrouting rules */

Chain DOCKER (2 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain KUBE-KUBELET-CANARY (0 references)
target     prot opt source               destination         

Chain KUBE-MARK-DROP (0 references)
target     prot opt source               destination         
MARK       all  --  anywhere             anywhere             MARK or 0x8000

Chain KUBE-MARK-MASQ (0 references)
target     prot opt source               destination         
MARK       all  --  anywhere             anywhere             MARK or 0x4000

Chain KUBE-POSTROUTING (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere             mark match ! 0x4000/0x4000
MARK       all  --  anywhere             anywhere             MARK xor 0x4000
MASQUERADE  all  --  anywhere             anywhere             /* kubernetes service traffic requiring SNAT */ random-fully
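For a raw dump of the same table (iptables -L hides interface and port matches), something like this should show the full rules:

iptables-save -t nat | grep -i kube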

My images are:

# crictl images
WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
ERRO[0000] unable to determine image API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory" 
IMAGE                                                      TAG                 IMAGE ID            SIZE
docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin   v1.1.0              fcecffc7ad4af       3.82MB
docker.io/rancher/mirrored-flannelcni-flannel              v0.19.2             8b675dda11bb1       20.5MB
k8s.gcr.io/pause                                           3.6                 6270bb605e12e       302kB
registry.k8s.io/coredns/coredns                            v1.9.3              5185b96f0becf       14.8MB
registry.k8s.io/etcd                                       3.5.4-0             a8a176a5d5d69       102MB
registry.k8s.io/kube-apiserver                             v1.25.0             4d2edfd10d3e3       34.2MB
registry.k8s.io/kube-controller-manager                    v1.25.0             1a54c86c03a67       31.3MB
registry.k8s.io/kube-proxy                                 v1.25.0             58a9a0c6d96f2       20.3MB
registry.k8s.io/kube-scheduler                             v1.25.0             bef2cf3115095       15.8MB
registry.k8s.io/pause                                      3.8                 4873874c08efc       311kB
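The crictl warning above about default endpoints can be silenced by pinning the endpoints explicitly (this writes /etc/crictl.yaml):

crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock
crictl config --set image-endpoint=unix:///run/containerd/containerd.sock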

My problem is illustrated by the two ps commands below:

root@debian:~> ps ax |grep kube-apiserver
  17217 ?        Ssl    0:07 kube-apiserver --advertise-address=192.168.0.120 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=192.168.0.0/16 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
  17799 pts/0    S+     0:00 grep kube-apiserver
root@debian:~> ps ax |grep kube-apiserver
  17863 pts/0    S+     0:00 grep kube-apiserver

Most of the time the kube-apiserver container is down. When I restart kubelet it comes up, but after a few seconds it goes down again.
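To see why the container exits, rather than just that it is gone, its last logs can be pulled through the CRI; a sketch (the container ID is whatever crictl reports):

crictl ps -a | grep kube-apiserver
crictl logs <container-id>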

How can I solve this problem?

Kubernetes version: 1.25.0
Cloud being used: bare-metal
Installation method: packages
Host OS: Debian Unstable

Does it work if you disable the firewalls?

Unfortunately, even when I flush the rules via iptables -F -t nat ; iptables -F, the kube-apiserver container goes down after a few seconds. Oh, and the kube-controller-manager container goes down as well.

After restarting kubelet via systemctl restart kubelet, the output of tail -f /var/log/syslog | grep kube-apiserver is:

tail -f /var/log/syslog |grep kube-apiserver
Sep 20 05:06:12 debian kubelet[105206]: I0920 05:06:12.600166  105206 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20e594b5a0991d49a542000cc0edaa0-ca-certs\") pod \"kube-apiserver-debian\" (UID: \"b20e594b5a0991d49a542000cc0edaa0\") " pod="kube-system/kube-apiserver-debian"
Sep 20 05:06:12 debian kubelet[105206]: I0920 05:06:12.600203  105206 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20e594b5a0991d49a542000cc0edaa0-usr-share-ca-certificates\") pod \"kube-apiserver-debian\" (UID: \"b20e594b5a0991d49a542000cc0edaa0\") " pod="kube-system/kube-apiserver-debian"
Sep 20 05:06:12 debian kubelet[105206]: I0920 05:06:12.600388  105206 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-pki\" (UniqueName: \"kubernetes.io/host-path/b20e594b5a0991d49a542000cc0edaa0-etc-pki\") pod \"kube-apiserver-debian\" (UID: \"b20e594b5a0991d49a542000cc0edaa0\") " pod="kube-system/kube-apiserver-debian"
Sep 20 05:06:12 debian kubelet[105206]: I0920 05:06:12.600423  105206 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20e594b5a0991d49a542000cc0edaa0-usr-local-share-ca-certificates\") pod \"kube-apiserver-debian\" (UID: \"b20e594b5a0991d49a542000cc0edaa0\") " pod="kube-system/kube-apiserver-debian"
Sep 20 05:06:12 debian kubelet[105206]: I0920 05:06:12.600689  105206 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20e594b5a0991d49a542000cc0edaa0-etc-ca-certificates\") pod \"kube-apiserver-debian\" (UID: \"b20e594b5a0991d49a542000cc0edaa0\") " pod="kube-system/kube-apiserver-debian"
Sep 20 05:06:12 debian kubelet[105206]: I0920 05:06:12.600735  105206 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20e594b5a0991d49a542000cc0edaa0-k8s-certs\") pod \"kube-apiserver-debian\" (UID: \"b20e594b5a0991d49a542000cc0edaa0\") " pod="kube-system/kube-apiserver-debian"
Sep 20 05:06:42 debian kubelet[105206]: E0920 05:06:42.841487  105206 kubelet.go:1712] "Failed creating a mirror pod for" err="Post \"https://192.168.0.120:6443/api/v1/namespaces/kube-system/pods\": dial tcp 192.168.0.120:6443: connect: connection refused" pod="kube-system/kube-apiserver-debian"
Sep 20 05:06:43 debian kubelet[105206]: I0920 05:06:43.462915  105206 status_manager.go:667] "Failed to get status for pod" podUID=b20e594b5a0991d49a542000cc0edaa0 pod="kube-system/kube-apiserver-debian" err="Get \"https://192.168.0.120:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-debian\": dial tcp 192.168.0.120:6443: connect: connection refused"
Sep 20 05:06:43 debian kubelet[105206]: E0920 05:06:43.463023  105206 kubelet.go:1712] "Failed creating a mirror pod for" err="Post \"https://192.168.0.120:6443/api/v1/namespaces/kube-system/pods\": dial tcp 192.168.0.120:6443: connect: connection refused" pod="kube-system/kube-apiserver-debian"
Sep 20 05:06:43 debian containerd[906]: time="2022-09-20T05:06:43.464252351+04:30" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-debian,Uid:b20e594b5a0991d49a542000cc0edaa0,Namespace:kube-system,Attempt:61,}"
Sep 20 05:06:43 debian containerd[906]: time="2022-09-20T05:06:43.634998707+04:30" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-debian,Uid:b20e594b5a0991d49a542000cc0edaa0,Namespace:kube-system,Attempt:61,} returns sandbox id \"1f631e254d53c5409298e6cc6dc2de0d05a9ac174d4fe93976aba41b9d5e3300\""
Sep 20 05:06:43 debian containerd[906]: time="2022-09-20T05:06:43.637676237+04:30" level=info msg="CreateContainer within sandbox \"1f631e254d53c5409298e6cc6dc2de0d05a9ac174d4fe93976aba41b9d5e3300\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:87,}"
Sep 20 05:06:43 debian containerd[906]: time="2022-09-20T05:06:43.652068369+04:30" level=info msg="CreateContainer within sandbox \"1f631e254d53c5409298e6cc6dc2de0d05a9ac174d4fe93976aba41b9d5e3300\" for &ContainerMetadata{Name:kube-apiserver,Attempt:87,} returns container id \"3d8e6d1f054f80853a849b21cccbe8e85e866f24e1469882b7f63bb9174c860d\""
Sep 20 05:06:48 debian kubelet[105206]: E0920 05:06:48.178509  105206 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-apiserver-debian\" already exists" pod="kube-system/kube-apiserver-debian"
Sep 20 05:06:48 debian kubelet[105206]: E0920 05:06:48.484907  105206 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-apiserver-debian\" already exists" pod="kube-system/kube-apiserver-debian"

I think containerd has a problem, because etcd, kube-proxy, and kube-controller-manager go down as well.
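One way to check that theory is to watch containerd itself while the containers die (assuming it runs as a systemd unit):

systemctl status containerd
journalctl -u containerd -f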

Debian/Ubuntu nodes need swap disabled for kubelet to work:

sudo swapoff -a

Turn it off on all cluster nodes.
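To keep it off across reboots you can comment out the swap entry instead of deleting it; an untested one-liner, check your fstab first:

sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab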

Not only did I run swapoff -a, but I also removed the swap line from fstab.
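To double-check, swapon --show should print nothing and free should report zero swap:

swapon --show
free -h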