CoreDNS pods stuck in ContainerCreating

Cluster information:

Kubernetes version: 1.22.2
Cloud being used: bare-metal
Installation method: yum repo
Host OS: Oracle Linux Server 7.9
CNI and version: weave 0.3.0


I am still new to Kubernetes and I was trying to set up a cluster on bare-metal servers according to the official documentation.

Right now I am running a one-master, one-worker configuration, but I am struggling to get all the pods running once the cluster initializes. The main problem is the CoreDNS pods, which are stuck in the ContainerCreating state.

NAMESPACE     NAME                                     READY   STATUS              RESTARTS   AGE
kube-system   coredns-78fcd69978-4vtsp                 0/1     ContainerCreating   0          5s
kube-system   coredns-78fcd69978-wtn2c                 0/1     ContainerCreating   0          12h
kube-system   etcd-dcpoth24213118                      1/1     Running             4          12h
kube-system   kube-apiserver-dcpoth24213118            1/1     Running             0          12h
kube-system   kube-controller-manager-dcpoth24213118   1/1     Running             0          12h
kube-system   kube-proxy-8282p                         1/1     Running             0          12h
kube-system   kube-scheduler-dcpoth24213118            1/1     Running             0          12h
kube-system   weave-net-6zz2j                          2/2     Running             0          12h
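For reference, the listing above is just the pod list across all namespaces, i.e. something like:

kubectl get pods -A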

After checking the logs I noticed this error. The problem is that I don't really know what the error is referring to.

Events:
  Type     Reason                  Age                From               Message
  ----     ------                  ----               ----               -------
  Normal   Scheduled               19s                default-scheduler  Successfully assigned kube-system/coredns-78fcd69978-4vtsp to dcpoth24213118
  Warning  FailedCreatePodSandBox  13s                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "2521c9dd723f3fc50b3510791a8c35cbc9ec19768468eb3da3367274a4dfcbba" network for pod "coredns-78fcd69978-4vtsp": networkPlugin cni failed to set up pod "coredns-78fcd69978-4vtsp_kube-system" network: error getting ClusterInformation: Get "https://[10.43.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.43.0.1:443: connect: no route to host, failed to clean up sandbox container "2521c9dd723f3fc50b3510791a8c35cbc9ec19768468eb3da3367274a4dfcbba" network for pod "coredns-78fcd69978-4vtsp": networkPlugin cni failed to teardown pod "coredns-78fcd69978-4vtsp_kube-system" network: error getting ClusterInformation: Get "https://[10.43.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.43.0.1:443: connect: no route to host]
  Normal   SandboxChanged          10s (x2 over 12s)  kubelet            Pod sandbox changed, it will be killed and re-created.
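The events above are from describing the failing pod, i.e. something like:

kubectl -n kube-system describe pod coredns-78fcd69978-4vtsp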

I'm running the Kubernetes cluster behind a corporate proxy. I've set the environment variables as follows.

export https_proxy=http://proxyIP:PORT
export http_proxy=http://proxyIP:PORT
export HTTP_PROXY="${http_proxy}"
export HTTPS_PROXY="${https_proxy}"
export NO_PROXY=localhost,127.0.0.1,master_node_IP,worker_node_IP,10.0.0.0/8,10.96.0.0/16

[root@dcpoth24213118 ~]# kubectl get svc -A
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  12h
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   12h
[root@dcpoth24213118 ~]# ip r s
default via 6.48.248.129 dev eth1
6.48.248.128/26 dev eth1 proto kernel scope link src 6.48.248.145
10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1
10.155.0.0/24 via 6.48.248.129 dev eth1
10.228.0.0/24 via 6.48.248.129 dev eth1
10.229.0.0/24 via 6.48.248.129 dev eth1
10.250.0.0/24 via 6.48.248.129 dev eth1
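One thing I'm not sure about is whether these shell exports are enough, since kubelet and the container runtime run as systemd services and don't inherit variables exported in a shell. Assuming Docker is the runtime here (that part is a guess), the proxy would normally go into a systemd drop-in roughly like this, with the proxy address and NO_PROXY list as placeholders:

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxyIP:PORT"
Environment="HTTPS_PROXY=http://proxyIP:PORT"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,10.96.0.0/16"

# apply it:
systemctl daemon-reload
systemctl restart docker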

I've got the Weave network plugin installed. The issue is that I cannot create any other pods either; they all get stuck in the ContainerCreating state.

I've run out of ideas on how to fix it. Can someone give me a hint?

@Martin_C were you able to resolve your issue?
I ran into the same issue and am looking for a solution.

Hi.

Check the logs either in Kubernetes or in Docker for the failed containers/pods.

When I checked, the logs said that a route was overlapping with an existing one. I deleted the route mentioned in the logs and the problem was solved.
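For reference, removing the conflicting route was just an ip route del on whatever CIDR the error message mentioned; the CIDR below is a placeholder, not the actual value from my logs:

ip route show
ip route del <overlapping-CIDR>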

Hi
@Martin_C what logs? I have the same problem, but I can't seem to find where the logs are. All the useful information I can get comes from kubectl describe.