Deployment

Hello,

I am getting the following errors when deploying services on the Kubernetes master:
- ImagePullBackOff
- CrashLoopBackOff

I am able to deploy only one Docker image on the Kubernetes master, i.e. nginx, but I start to get the above errors when I try to deploy other services.

Kindly help me with these errors. Thanks in advance!

Are you able to take a look at the kubelet logs? There may be more clues in there as to what is going on.

It seems very unlikely to be related to deploying new services.
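
In case it helps, a minimal sketch of how to look at the kubelet logs, assuming the kubelet runs as a systemd service on the node (as it does with a typical kubeadm setup):

# On the node where the failing pod is scheduled
journalctl -u kubelet --since "10 minutes ago"
# or follow the log live while the pod restarts
journalctl -u kubelet -f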

Can you check the pod logs and the output of kubectl describe?

1. kubectl get pod -n <your-namespace> -o wide → to find the node the pod is scheduled on.
2. Connect to that node and use docker pull to pull the image you want. If you still can't pull the image, then your Docker setup has some problem, e.g. a network problem (see the sketch below).
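
For example, something along these lines (the namespace, node name, and image are placeholders for your own values):

kubectl get pod -n <your-namespace> -o wide   # the NODE column shows where the pod is scheduled
ssh <node>                                    # or however you normally reach that node
docker pull <image>                           # try pulling the failing image manually on the node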

Thanks for your reply! I will surely look at the kubelet logs.


I used the following command to describe a pod on which the CrashLoopBackOff error occurred:
kubectl -n kube-system describe pod app-ubuntu-7f68b59b46-6824b

The following came in the event log:

Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  4m29s                   default-scheduler  Successfully assigned kube-system/app-ubuntu-7f68b59b46-6824b to knode
Normal   Created    2m59s (x4 over 4m12s)   kubelet, knode     Created container app-ubuntu
Normal   Started    2m59s (x4 over 4m12s)   kubelet, knode     Started container app-ubuntu
Warning  BackOff    2m28s (x7 over 3m59s)   kubelet, knode     Back-off restarting failed container
Normal   Pulling    2m13s (x5 over 4m26s)   kubelet, knode     Pulling image "ubuntu"
Normal   Pulled     2m7s (x5 over 4m13s)    kubelet, knode     Successfully pulled image "ubuntu"

All I found in the above event log was the message "Back-off restarting failed container", but there are no further details as to why it happened.

I deployed the nginx image in a similar way but no issue occurred.

What command is the Ubuntu container running?

You don't see logs from the pod itself when doing kubectl logs?
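
For example, something along these lines, using the pod you described earlier (the --previous flag shows the logs of the last crashed container, which is often what you need with CrashLoopBackOff):

kubectl -n kube-system logs app-ubuntu-7f68b59b46-6824b
kubectl -n kube-system logs app-ubuntu-7f68b59b46-6824b --previous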

Yes, when I ran the following command to get the logs of the pod on which CrashLoopBackOff occurred:
kubectl -n kubernetes-dashboard logs app-ubuntu3-55f4898cb6-j9zwz
where namespace: kubernetes-dashboard
pod name: app-ubuntu3-55f4898cb6-j9zwz

I didn't get anything in the logs. I even tried to save the output of the above command to a file, but the file comes out empty.

Is the above command correct?
Is there any other command to check the logs of a pod?

Hi,

What exactly are you asking?

Just wondering what that Ubuntu container is supposed to be doing: launching nginx, apache, or bash? If that command is failing, it might explain why it's going into a CrashLoopBackOff and not producing any logs. If you can provide the pod spec, that would be great as well.

I was trying to do a small deployment test, so I chose to deploy two Docker images, nginx and ubuntu, separately. The nginx image was deployed successfully with no issues, but an issue (CrashLoopBackOff) occurred when I tried to deploy the ubuntu image.

Below is the output of the following command:
kubectl -n kubernetes-dashboard describe pod app-ubuntu3-55f4898cb6-j9zwz
where
kubernetes-dashboard is the namespace
app-ubuntu3-55f4898cb6-j9zwz is the pod name

OUTPUT:

Name: app-ubuntu3-55f4898cb6-j9zwz
Namespace: kubernetes-dashboard
Priority: 0
PriorityClassName:
Node: knode/172.20.10.5
Start Time: Mon, 15 Jul 2019 12:21:05 +0530
Labels: k8s-app=app-ubuntu3
pod-template-hash=55f4898cb6
Annotations: cni.projectcalico.org/podIP: 192.168.177.208/32
Status: Running
IP: 192.168.177.208
Controlled By: ReplicaSet/app-ubuntu3-55f4898cb6
Containers:
app-ubuntu3:
Container ID: docker://83aa8d5cb9ee6c78b2b9b099ed04d40aa70ffcd275acb60d2f3fe200fb381733
Image: ubuntu:latest
Image ID: docker-pullable://ubuntu@sha256:9b1702dcfe32c873a770a32cfd306dd7fc1c4fd134adfb783db68defc8894b3c
Port:
Host Port:
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 16 Jul 2019 11:33:23 +0530
Finished: Tue, 16 Jul 2019 11:33:23 +0530
Ready: False
Restart Count: 50
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5xps2 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-5xps2:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5xps2
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type     Reason          Age                  From            Message
----     ------          ----                 ----            -------
Normal   Pulled          18h (x28 over 20h)   kubelet, knode  Successfully pulled image "ubuntu:latest"
Normal   Pulling         18h (x31 over 20h)   kubelet, knode  Pulling image "ubuntu:latest"
Warning  BackOff         17h (x674 over 20h)  kubelet, knode  Back-off restarting failed container
Normal   SandboxChanged  18m (x7 over 20m)    kubelet, knode  Pod sandbox changed, it will be killed and re-created.
Normal   Pulling         17m (x3 over 18m)    kubelet, knode  Pulling image "ubuntu:latest"
Normal   Pulled          16m (x3 over 18m)    kubelet, knode  Successfully pulled image "ubuntu:latest"
Normal   Created         16m (x3 over 18m)    kubelet, knode  Created container app-ubuntu3
Normal   Started         16m (x3 over 17m)    kubelet, knode  Started container app-ubuntu3
Warning  BackOff         0s (x82 over 17m)    kubelet, knode  Back-off restarting failed container

I think there is no problem with Ubuntu, just that it runs something and stops, so it is restarted. See this part of what you pasted:

Reason: Completed
Exit Code: 0

That is, the container exit status was 0 (no error) and the reason is that it ran until completion (i.e. no problem).

Let me explain why this happens in this case.

The idea of a Deployment is that it is something that needs to be running constantly, like a web server. When you run the nginx image, the nginx process is started and the container is alive as long as that process is alive. If that process stops running, the container is not alive anymore.

In the case of a Deployment, as the assumption is that the containers should be running all the time, they are restarted again and again. After a while, an exponential back-off is applied so the restarts become more spaced out in time.

I think that is what you are seeing. If, for example, you add a command for the Ubuntu container in the Kubernetes YAML that runs for a long time, the container will run just fine.

For example, you can add something like:

command: [ "sleep", "30m" ]

And the Ubuntu container will run fine for 30 mins.

Of course, when you run your application this won’t be a problem because you will be starting your application :slight_smile:

Thanks for the detailed explanation!

I tried to add the following code to the YAML file, but the error isn't going away. I added it to the 'containers' section of the YAML file.

livenessProbe:
  exec:
    command:
    - "sleep"
    - "30m"

Any insights on what changes need to be made in the YAML file to make the container run for a specific period of time in this case?

No problem!

Yes, but not in a livenessProbe; those are for another purpose. Sorry I wasn't clear, but this is separate from the livenessProbe.

The thing is: the Ubuntu image probably doesn't run any command, or runs a command that exits. When the command exits, the process dies and so does the container. When the container is detected as dead, it is automatically restarted.

The livenessProbe is there to detect issues while the container hasn't crashed.

So, if I understand correctly, the deployment will work if you add a "command" field with the sleep, not a livenessProbe section. Sorry I wasn't clear before :slight_smile:
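
For example, a minimal sketch of the containers section of your Deployment with such a command (the container name and image are taken from your describe output above):

containers:
- name: app-ubuntu3
  image: ubuntu:latest
  # keep a long-running process in the foreground so the container doesn't exit right away
  command: [ "sleep", "30m" ]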


No worries. You have provided a good explanation. At least I am now able to understand why the ubuntu image was not running and that this is not an error; it is working as designed. Today, I tried the postgres Docker image and it has been deployed successfully.

Now I am trying to deploy the Spring Boot services that I have created. To start with, I have created a simple hello-world Spring Boot application and am trying to deploy it in Kubernetes.

However, the following error comes up after it is deployed:
Failed to pull image "hello-world-4:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for hello-world-4, repository does not exist or may require 'docker login'

I will probably have to push this image to Docker Hub and then do the deployment.
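
Something like this should work for Docker Hub, assuming <your-dockerhub-user> is my Docker Hub account (just a placeholder here):

# tag the locally built image and push it to Docker Hub
docker tag hello-world-4:latest <your-dockerhub-user>/hello-world-4:latest
docker push <your-dockerhub-user>/hello-world-4:latest

and then reference <your-dockerhub-user>/hello-world-4:latest in the image field of the Deployment so every node can pull it.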

Great, it is working!

And yes, for the other deployment you will need to push the image to a registry. That should make it reachable for the workers :slight_smile:

This will help if you’re using a private registry. Pull an Image from a Private Registry - Kubernetes
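
Roughly, following that page, it comes down to creating a registry secret and referencing it from the pod spec (the secret name regcred and the credential placeholders below are just examples):

kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password>

and in the Deployment's pod spec:

  imagePullSecrets:
  - name: regcred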

I have tried to deploy a small Spring Boot service (the hello-world example) on Kubernetes and created two pods for it. One pod of this service is running fine on the master, but the other pod is giving me the error below:

Failed to pull image "172.20.10.3:30123/hello-world": rpc error: code = Unknown desc = Error response from daemon: Get https://172.20.10.3:30123/v2/: http: server gave HTTP response to HTTPS client

I have tried marking the Docker registry as insecure on both the master and the node.

Any idea why the same service is not running on the Node?
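
In case it helps, a plain-HTTP registry has to be explicitly allowed in Docker on every node that pulls from it, not just the master. A minimal sketch, assuming 172.20.10.3:30123 really is served over plain HTTP:

# /etc/docker/daemon.json on each node that pulls from the registry
{
  "insecure-registries": ["172.20.10.3:30123"]
}

followed by a restart of Docker on that node, e.g. sudo systemctl restart docker.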