Deployment CrashLoopBackOff

Hi experts,
I am trying to deploy my own image in my home lab cluster, but the pods always end up in a “CrashLoopBackOff” state.

Cluster information:

Kubernetes version: v1.23.6
Cloud being used: none (home lab)
Installation method: hosts are VMs on VMware Workstation
Host OS: Ubuntu
CNI and version: Flannel, 0.3.1
CRI and version: docker://20.10.12
Running environment: one master and two worker nodes

Here is my deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: squid-proxy
spec:
  replicas: 2
  selector:
    matchLabels:
      name: squid-proxy
  template:
    metadata:
      labels:
        name: squid-proxy
    spec:
      containers:
      - name: squid-proxy
        image: yananthan/proxy-squid:2.0
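
A client-side dry run validates the manifest before it is applied (the filename here is illustrative):

kubectl apply --dry-run=client -f squid-proxy.yaml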

root@kubemaster:~/PROXY-SQUID# kubectl get pod
NAME                           READY   STATUS             RESTARTS        AGE
hello-world-687c894465-5kfjq   1/1     Running            9 (4h28m ago)   50d
hello-world-687c894465-f55jj   1/1     Running            7 (4h28m ago)   50d
hello-world-687c894465-hmg8s   1/1     Running            1 (4h28m ago)   4h34m
hello-world-687c894465-q8zdf   1/1     Running            1 (4h28m ago)   4h34m
hello-world-687c894465-rh67s   1/1     Running            1 (4h28m ago)   4h34m
hello-world-687c894465-vzzhn   1/1     Running            7 (4h28m ago)   50d
squid-proxy-5f45d6df6b-p49h5   0/1     CrashLoopBackOff   1 (9s ago)      11s
squid-proxy-5f45d6df6b-pncbv   0/1     CrashLoopBackOff   1 (9s ago)      11s
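
Since both squid pods restart in a loop, the previous container's logs are the first place the exit reason should show up, e.g. for one of the pods above:

kubectl logs squid-proxy-5f45d6df6b-pncbv --previous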

root@kubemaster:~/PROXY-SQUID# kubectl describe deployment squid-proxy
Name:                   squid-proxy
Namespace:              default
CreationTimestamp:      Sat, 23 Jul 2022 06:51:32 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               name=squid-proxy
Replicas:               2 desired | 2 updated | 2 total | 0 available | 2 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  name=squid-proxy
  Containers:
   squid-proxy:
    Image:        yananthan/proxy-squid:2.0
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  <none>
NewReplicaSet:   squid-proxy-5f45d6df6b (2/2 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  36s   deployment-controller  Scaled up replica set squid-proxy-5f45d6df6b to 2

root@kubemaster:~/PROXY-SQUID# kubectl describe pod squid-proxy-5f45d6df6b-pncbv
Name:         squid-proxy-5f45d6df6b-pncbv
Namespace:    default
Priority:     0
Node:         kubeworker2/192.168.48.167
Start Time:   Sat, 23 Jul 2022 06:51:32 +0000
Labels:       name=squid-proxy
              pod-template-hash=5f45d6df6b
Annotations:  <none>
Status:       Running
IP:           10.244.2.92
IPs:
  IP:           10.244.2.92
Controlled By:  ReplicaSet/squid-proxy-5f45d6df6b
Containers:
  squid-proxy:
    Container ID:   docker://04505368ba8b815fd5a37c46213a76c2ee2f3c004c72d0c5349db7330af28bfa
    Image:          yananthan/proxy-squid:2.0
    Image ID:       docker-pullable://yananthan/proxy-squid@sha256:a32e5e05ac85437b31e3d9ad168b5bb4392a8a865f51d6e5a1b1cbef50c00fc9
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sat, 23 Jul 2022 06:53:00 +0000
      Finished:     Sat, 23 Jul 2022 06:53:00 +0000
    Ready:          False
    Restart Count:  4
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z5vlj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-z5vlj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  103s                default-scheduler  Successfully assigned default/squid-proxy-5f45d6df6b-pncbv to kubeworker2
  Normal   Pulled     15s (x5 over 102s)  kubelet            Container image "yananthan/proxy-squid:2.0" already present on machine
  Normal   Created    15s (x5 over 102s)  kubelet            Created container squid-proxy
  Normal   Started    15s (x5 over 102s)  kubelet            Started container squid-proxy
  Warning  BackOff    1s (x9 over 101s)   kubelet            Back-off restarting failed container
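
The Last State above (Terminated, Reason: Completed, Exit Code: 0) means the container's main process exits successfully right after startup rather than crashing; the kubelet then restarts it and backs off. A common cause is an entrypoint that starts squid as a background daemon and returns. A minimal sketch of a workaround, assuming the image has the squid binary on its PATH (-N is squid's standard no-daemon flag, keeping it in the foreground as PID 1):

    spec:
      containers:
      - name: squid-proxy
        image: yananthan/proxy-squid:2.0
        # -N keeps squid in the foreground so the container does not exit;
        # -d 1 sends debug output to stderr, visible via kubectl logs
        command: ["squid", "-N", "-d", "1"]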

PS: My container image works fine in Docker.
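
For comparison (a sketch; the exact docker command used before isn't shown, and a detached or interactive run can hide an immediate exit), running the image non-interactively the way the kubelet does shows whether it also exits outside Kubernetes:

docker run --rm yananthan/proxy-squid:2.0
echo $?    # 0 here would match the Completed / Exit Code 0 seen in Kubernetes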