RabbitMQ pod fails to deploy

Hello everyone, I need your help.

I have deployed AWX on my Kubernetes cluster using a Helm chart.
All pods are running except the RabbitMQ pod, which shows “CrashLoopBackOff” in its status.
The output of the command “kubectl describe pod awx-rabbitmq-0” is below:

             node.kubernetes.io/unreachable:NoExecute for 300s

Type Reason Age From Message

Warning FailedScheduling default-scheduler running “VolumeBinding” filter plugin for pod “awx-rabbitmq-0”: pod has unbound immediate PersistentVolumeClaims
Normal Scheduled default-scheduler Successfully assigned default/awx-rabbitmq-0 to k8s-worker-1
Normal Pulling 26m kubelet, k8s-worker-1 Pulling image “docker.io/bitnami/rabbitmq:3.7.17-debian-9-r0”
Normal Pulled 19m kubelet, k8s-worker-1 Successfully pulled image “docker.io/bitnami/rabbitmq:3.7.17-debian-9-r0”
Normal Created 18m (x5 over 19m) kubelet, k8s-worker-1 Created container rabbitmq
Normal Pulled 18m (x4 over 19m) kubelet, k8s-worker-1 Container image “docker.io/bitnami/rabbitmq:3.7.17-debian-9-r0” already present on machine
Normal Started 18m (x5 over 19m) kubelet, k8s-worker-1 Started container rabbitmq
Warning BackOff 4m45s (x74 over 19m) kubelet, k8s-worker-1 Back-off restarting failed container

Please help me resolve this issue.


It seems to be failing because no persistent volume is being created?
Where are you running the K8s cluster?
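You can confirm whether the claim is actually bound with something like this (the PVC name is an assumption based on the usual chart naming; check `kubectl get pvc` for the real one):

```shell
# List persistent volumes and claims with their binding status.
# A healthy setup shows the claim as "Bound" to a PV.
kubectl get pv
kubectl get pvc

# Inspect the RabbitMQ claim in detail (name is an example; it may
# differ depending on the chart release name):
kubectl describe pvc data-awx-rabbitmq-0
```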

Kind regards,

Hello Stephen
I am running the k8s cluster on my laptop with one master node and one worker node.
The PVC of RabbitMQ is bound to the PV. I have attached a snapshot of the PV and PVC for reference,
along with the pod's exit code, which is 1.

Oh my bad. I got the ordering of your output backwards.
This is actually the line of interest:

 Warning BackOff 4m45s (x74 over 19m) kubelet, k8s-worker-1 Back-off restarting failed container

This means the container was probably started successfully by Kubernetes but the process inside the container then crashed. Can you check the logs for the pod?
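For a pod in CrashLoopBackOff, the logs of the previous (crashed) container instance are usually the most informative:

```shell
# Logs of the current container instance:
kubectl logs awx-rabbitmq-0

# If the container keeps restarting, the logs of the previous,
# crashed instance often contain the actual error:
kubectl logs awx-rabbitmq-0 --previous
```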

I tried to fetch the logs using kubectl logs, but it shows this.

That’s the actual log output, and it is probably the reason the pod crashes on startup.

Is that folder path on the persistent volume?
It seems you may have a permissions issue with the pod writing to the persistent volume.
Does the user the pod runs as have write permission at the file system level?
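One way to check, assuming the PV is a hostPath volume on k8s-worker-1 (the path below is only an example, substitute your actual PV path):

```shell
# On the worker node, show numeric owner/group of the volume directory
# (path is hypothetical; use the path from your PV definition):
ls -ln /data/rabbitmq

# The bitnami/rabbitmq image runs as a non-root user (UID 1001 by
# default), so the directory generally needs to be writable by that
# UID, e.g.:
sudo chown -R 1001:1001 /data/rabbitmq
```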

Kind regards,

I was finally able to fetch the pod's logs; a snapshot is below. Please let me know what the problem could be.


RabbitMQ is starting but then failing to query Kubernetes for a list of nodes.
This isn’t a Kubernetes issue as such; it’s specific to the RabbitMQ Helm chart you are using.
I know a bit about RabbitMQ, but not how it performs service discovery when running on K8s.

I think you will get a better response if you post this on the GitHub repository where you got the Helm chart.
Even a Google search for this error turns up plenty of results that might be relevant to your issue:

Failed to get nodes from K8s
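One common cause reported for that error is missing RBAC permissions: the RabbitMQ Kubernetes peer-discovery plugin queries the Kubernetes API for the cluster's endpoints, and fails if the pod's service account isn't allowed to. A quick way to check (the service account name here is an assumption, read the real one from the pod spec first):

```shell
# Find out which service account the pod actually uses:
kubectl get pod awx-rabbitmq-0 -o jsonpath='{.spec.serviceAccountName}'

# Then check whether that account may read endpoints
# ("awx-rabbitmq" below is an example name):
kubectl auth can-i get endpoints \
  --as=system:serviceaccount:default:awx-rabbitmq
```

If this prints "no", the chart's RBAC resources (Role/RoleBinding for the peer-discovery plugin) are likely missing or misconfigured.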

Kind regards,