CoreDNS stuck in ContainerCreating and not reaching Ready state

Cluster information:

Kubernetes version: v1.23.7
Cloud being used: (put bare-metal if not on a public cloud)
Installation method:
Host OS: CentOS Linux 7 (Core)
CNI and version:
CRI and version:
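The blank CNI and CRI fields above can usually be filled in from the cluster itself; a minimal sketch, assuming typical kubeadm default paths (which may differ on your nodes):

```shell
# Identify the CNI plugin from its config directory (typical kubeadm path).
ls /etc/cni/net.d/ 2>/dev/null || echo "no CNI config found"

# The container runtime and its version are reported per node in the
# CONTAINER-RUNTIME column.
kubectl get nodes -o wide 2>/dev/null || echo "kubectl not available here"
```

Based on the Events below, the CNI in this cluster is Calico, but the version would still need to be confirmed this way.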

Below is the pod description (kubectl describe output):

Name:                 coredns-64897985d-jr2xx
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Start Time:           Fri, 28 Apr 2023 23:39:30 +0530
Labels:               k8s-app=kube-dns
                      pod-template-hash=64897985d
Annotations:
Status:               Pending
IP:
IPs:
Controlled By:        ReplicaSet/coredns-64897985d
Containers:
  coredns:
    Container ID:
    Image:          k8s.gcr.io/coredns/coredns:v1.8.6
    Image ID:
    Ports:          53/UDP, 53/TCP, 9153/TCP
    Host Ports:     0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:     100m
      memory:  70Mi
    Liveness:   http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c8sfd (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  kube-api-access-c8sfd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                    From               Message
  ----     ------                  ----                   ----               -------
  Warning  FailedScheduling        38m                    default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
  Normal   Scheduled               38m                    default-scheduler  Successfully assigned kube-system/coredns-64897985d-jr2xx to
  Warning  FailedCreatePodSandBox  38m                    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "6796448dc98547b1e813b4412e106aa338ec2fe2e832797b08d480785b3545ae" network for pod "coredns-64897985d-jr2xx": networkPlugin cni failed to set up pod "coredns-64897985d-jr2xx_kube-system" network: error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"), failed to clean up sandbox container "6796448dc98547b1e813b4412e106aa338ec2fe2e832797b08d480785b3545ae" network for pod "coredns-64897985d-jr2xx": networkPlugin cni failed to teardown pod "coredns-64897985d-jr2xx_kube-system" network: error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
  Normal   SandboxChanged          3m24s (x165 over 38m)  kubelet            Pod sandbox changed, it will be killed and re-created.
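The x509 "certificate signed by unknown authority" error in FailedCreatePodSandBox typically means the CA embedded in Calico's CNI kubeconfig on the node no longer matches the cluster CA (a common aftermath of kubeadm certs renew or a kubeadm reset/re-init). A way to check this is to compare fingerprints; ca_fingerprint below is a hypothetical helper, and the paths in the comments are the usual kubeadm/Calico defaults, which may differ on your nodes:

```shell
# ca_fingerprint: print the SHA-256 fingerprint of the CA certificate embedded
# in a kubeconfig file (hypothetical helper for comparing CAs).
ca_fingerprint() {
  # Pull the base64-encoded certificate-authority-data, decode it, and
  # fingerprint the resulting PEM certificate.
  grep certificate-authority-data "$1" | head -n1 | awk '{print $2}' \
    | base64 -d | openssl x509 -noout -fingerprint -sha256
}

# On the affected node (assumed default paths), compare:
#   ca_fingerprint /etc/cni/net.d/calico-kubeconfig
#   openssl x509 -noout -fingerprint -sha256 -in /etc/kubernetes/pki/ca.crt
```

If the two fingerprints differ, the CNI kubeconfig is stale. A common fix is to delete the calico-node pod on that node so its install-cni init container regenerates the file, e.g. kubectl -n kube-system delete pod -l k8s-app=calico-node; the CoreDNS pod's sandbox should then be recreated successfully.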