Laravel App + Nginx CrashLoopBackOff


Cluster information:

Kubernetes version: 1.18.3
Cloud being used: none (local minikube)
Installation method: Homebrew
Host OS: macOS
CNI and version: no idea
CRI and version: no idea
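As an aside, kubectl can look most of this up: kubectl get nodes -o wide prints a CONTAINER-RUNTIME column (the CRI), and kubectl version --short confirms client and server versions. Output varies by minikube driver:

kubectl version --short
kubectl get nodes -o wide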

To deploy, I ran kubectl apply -f web_deployment.yml with this file:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: laravel
          image: smart48/smt-laravel:latest
          ports:
            - containerPort: 9000
          resources:
            requests:
              cpu: 250m
            limits:
              cpu: 500m
        - name: nginx
          image: smart48/smt-nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 20
  targetCPUUtilizationPercentage: 50
---
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: web
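Before applying, the file can also be sanity-checked client-side, which catches YAML and schema errors without touching the cluster (assuming kubectl 1.18+, where --dry-run=client is available):

kubectl apply --dry-run=client -f web_deployment.yml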

Then I checked the pods:

kubectl get po --namespace default 
NAME                   READY   STATUS             RESTARTS   AGE
web-848fb4c7dc-5m2fp   1/2     CrashLoopBackOff   8          21m
web-848fb4c7dc-ffv7n   1/2     CrashLoopBackOff   8          21m
web-848fb4c7dc-mg65j   1/2     CrashLoopBackOff   8          21m

and kubectl describe po web-848fb4c7dc-5m2fp showed me

Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  22m                   default-scheduler  Successfully assigned default/web-848fb4c7dc-5m2fp to minikube
  Normal   Pulling    22m                   kubelet, minikube  Pulling image "smart48/smt-laravel:latest"
  Normal   Pulled     21m                   kubelet, minikube  Successfully pulled image "smart48/smt-laravel:latest"
  Normal   Created    21m                   kubelet, minikube  Created container laravel
  Normal   Started    21m                   kubelet, minikube  Started container laravel
  Normal   Pulling    20m (x4 over 21m)     kubelet, minikube  Pulling image "smart48/smt-nginx:latest"
  Normal   Pulled     20m (x4 over 21m)     kubelet, minikube  Successfully pulled image "smart48/smt-nginx:latest"
  Normal   Created    20m (x4 over 21m)     kubelet, minikube  Created container nginx
  Normal   Started    20m (x4 over 21m)     kubelet, minikube  Started container nginx
  Warning  BackOff    2m23s (x86 over 21m)  kubelet, minikube  Back-off restarting failed container

Now I think there may be something wrong with the YAML or the images, but how can I figure that out? Does anyone see issues with the YAML here? The images are Laradock-based PHP-FPM and Nginx images; they built fine and are publicly accessible on Docker Hub, so I have no idea what the issue is here…
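The usual first step to figure that out is pulling the failing container's logs - the events above show the nginx container is the one being restarted - including the previous, crashed instance:

kubectl logs web-848fb4c7dc-5m2fp -c nginx
kubectl logs web-848fb4c7dc-5m2fp -c nginx --previous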

I did read this:

This message says that it is in a Back-off restarting failed container . This most likely means that Kubernetes started your container, then the container subsequently exited. As we all know, the Docker container should hold and keep pid 1 running or the container exits. When the container exits, Kubernetes will try to restart it. After restarting it a few times, it will declare this BackOff state. However, Kubernetes will keep on trying to restart it.

(source: Managed Kube)

So perhaps it is the image then?

I think I either need to roll my own PHP-FPM and Nginx images or make these two Laradock images work together. They seem to be running separately, so perhaps that is the issue. Laradock needs several images to work with PHP-FPM, so perhaps Nginx cannot reach PHP-FPM?
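Worth noting: containers in the same pod share one network namespace, so Nginx can reach PHP-FPM at 127.0.0.1:9000 with no service discovery at all. If the Nginx config points fastcgi_pass at a hostname instead (such as an upstream container name carried over from docker-compose), Nginx exits at startup because the name does not resolve, which would match the CrashLoopBackOff. Once a container stays up, the effective config can be dumped like this (assuming the image ships the standard nginx binary):

kubectl exec web-848fb4c7dc-5m2fp -c nginx -- nginx -T | grep fastcgi_pass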

So I rebuilt the images using different specs, did a new rollout, and when I checked the pods all were running again:

kubectl rollout restart deployments
deployment.apps/web restarted
kubectl get po --namespace default 
NAME                  READY   STATUS    RESTARTS   AGE
web-fbd58c4c7-4rdxw   2/2     Running   0          38s
web-fbd58c4c7-rmmm2   2/2     Running   0          38s
web-fbd58c4c7-snzdx   2/2     Running   0          21s
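For later rollouts, kubectl rollout status blocks until the new ReplicaSet is fully available, which beats polling get po:

kubectl rollout status deployment/web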

Only when I check the running containers now, I see just the Minikube one:

docker container ls
CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS              PORTS                                                                                                      NAMES
1cd35ad94423        gcr.io/k8s-minikube/kicbase:v0.0.10   "/usr/local/bin/entr…"   5 hours ago         Up 2 hours          127.0.0.1:32771->22/tcp, 127.0.0.1:32770->2376/tcp, 127.0.0.1:32769->5000/tcp, 127.0.0.1:32768->8443/tcp   minikube
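Apparently that is expected with the Docker driver: the host Docker daemon only knows about the minikube container itself, and the workload containers run on a nested Docker daemon inside it. They can be listed from in there (a sketch, assuming the Docker driver):

minikube ssh
docker ps --filter name=web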

Also, the load balancer does not provision properly, not even when I use port 3000 instead:

---
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
spec:
  type: LoadBalancer
  ports:
    - port: 3000
  selector:
    app: web

I cannot reach it at 10.105.166.110, and in the Minikube dashboard's Services section it shows as not fully provisioned. Checking the services, the external IP is still pending:

kubectl get svc                                                                               
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes     ClusterIP      10.96.0.1        <none>        443/TCP        5h42m
loadbalancer   LoadBalancer   10.105.166.110   <pending>     80:31931/TCP   5h40m

Still confused here.

I deleted the deployment and service and then applied both anew:

kubectl delete deployment web
deployment.apps "web" deleted
kubectl get svc              
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes     ClusterIP      10.96.0.1        <none>        443/TCP        5h44m
loadbalancer   LoadBalancer   10.105.166.110   <pending>     80:31931/TCP   5h42m
kubectl delete service loadbalancer
service "loadbalancer" deleted
kubectl apply -f web_deployment.yml
deployment.apps/web created
horizontalpodautoscaler.autoscaling/web unchanged
service/loadbalancer created
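In hindsight, a single command would have removed everything the file created:

kubectl delete -f web_deployment.yml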

With the load balancer port changed to 8080 - in case port 80 conflicted with Laravel Valet's Nginx - I still have a pending external IP!

kubectl get svc                    
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP      10.96.0.1       <none>        443/TCP          5h47m
loadbalancer   LoadBalancer   10.97.102.160   <pending>     8080:31326/TCP   117s
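It turns out that on minikube (and bare metal generally) a LoadBalancer never gets an external IP by itself; there is no cloud controller to assign one. Either run minikube tunnel in a second terminal, which assigns the service an address, or use minikube service as below:

minikube tunnel
kubectl get svc loadbalancer   # EXTERNAL-IP should now be populated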

Then I read

On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command. Run the following command: minikube service hello-node

at this Kubernetes docs source.

So I ran the command and was told the terminal needed to stay open:

➜  smt-deploy git:(master) ✗ minikube service loadbalancer
|-----------|--------------|-------------|-------------------------|
| NAMESPACE |     NAME     | TARGET PORT |           URL           |
|-----------|--------------|-------------|-------------------------|
| default   | loadbalancer |        8080 | http://172.17.0.2:31326 |
|-----------|--------------|-------------|-------------------------|
🏃  Starting tunnel for service loadbalancer.
|-----------|--------------|-------------|------------------------|
| NAMESPACE |     NAME     | TARGET PORT |          URL           |
|-----------|--------------|-------------|------------------------|
| default   | loadbalancer |             | http://127.0.0.1:53536 |
|-----------|--------------|-------------|------------------------|
🎉  Opening service default/loadbalancer in default browser...
❗  Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

It opened at http://127.0.0.1:53536/ rather than localhost:8080, and the connection was reset.

So I changed the port back to 80, removed the deployment and service, and tried again - and got a page saying "File not found" :slight_smile:

kubectl delete deployment web      
kubectl apply -f web_deployment.yml
deployment.apps/web created
horizontalpodautoscaler.autoscaling/web unchanged
service/loadbalancer created
minikube service loadbalancer      
|-----------|--------------|-------------|-------------------------|
| NAMESPACE |     NAME     | TARGET PORT |           URL           |
|-----------|--------------|-------------|-------------------------|
| default   | loadbalancer |          80 | http://172.17.0.2:31297 |
|-----------|--------------|-------------|-------------------------|
🏃  Starting tunnel for service loadbalancer.
|-----------|--------------|-------------|------------------------|
| NAMESPACE |     NAME     | TARGET PORT |          URL           |
|-----------|--------------|-------------|------------------------|
| default   | loadbalancer |             | http://127.0.0.1:53607 |
|-----------|--------------|-------------|------------------------|
🎉  Opening service default/loadbalancer in default browser...
❗  Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

So that leaves only the containers themselves to run and access.

Now, I can start a container manually using

docker run -p 4000:80 --name laravel smart48/smt-nginx 

and then I get blocked accessing it in the browser… I guess because access should happen via the load balancer… but why would I have to start the containers separately using docker run? Should they not start inside the three pods?
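The containers do start inside the pods - they are just hidden from the host docker CLI, as noted above - so docker run is not needed. To hit a pod directly from the Mac without going through the Service, port-forwarding works (a sketch; 8080 is an arbitrary local port):

kubectl port-forward deployment/web 8080:80
curl -I http://127.0.0.1:8080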

Funnily enough, when I load the load balancer via minikube service loadbalancer at http://127.0.0.1:54489, I do see these response headers:

Connection keep-alive
Content-Type text/html; charset=UTF-8
Date Wed, 03 Jun 2020 08:34:30 GMT
Server nginx/1.17.10
Transfer-Encoding chunked
X-Powered-By PHP/7.4.6

So there must be some knowledge I am lacking on Pods and Containers here…?
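One thing I pieced together: those headers mean Nginx did reach PHP-FPM (hence X-Powered-By: PHP/7.4.6), so "File not found" is PHP-FPM failing to find the script at the path Nginx passed. Each container has its own filesystem, so both need the application code at the same path. A minimal sketch with a shared volume (the volume name app-code is arbitrary, /var/www/html is an assumed document root, and something - the laravel image's entrypoint or an init container - still has to copy the code into it):

    spec:
      volumes:
        - name: app-code
          emptyDir: {}       # shared scratch volume, lives as long as the pod
      containers:
        - name: laravel
          image: smart48/smt-laravel:latest
          volumeMounts:
            - name: app-code
              mountPath: /var/www/html   # PHP-FPM executes scripts from here
        - name: nginx
          image: smart48/smt-nginx:latest
          volumeMounts:
            - name: app-code
              mountPath: /var/www/html   # Nginx resolves the same paths here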

Okay, I worked out how to check the containers. First, list all pods:

kubectl get po --namespace default
NAME                   READY   STATUS    RESTARTS   AGE
web-84c8f5c8df-5bb7t   2/2     Running   0          7m26s
web-84c8f5c8df-b9hhd   2/2     Running   0          7m26s
web-84c8f5c8df-t82x2   2/2     Running   0          7m26s

then exec into the first pod:

kubectl exec -it web-84c8f5c8df-5bb7t -- /bin/bash
Defaulting container name to laravel.
Use 'kubectl describe pod/web-84c8f5c8df-5bb7t -n default' to see all of the containers in this pod.
root@web-84c8f5c8df-5bb7t:/var/www# ls
html
root@web-84c8f5c8df-5bb7t:/var/www# exit
exit

And to list all the containers so I could choose one, I did:

kubectl describe pod/web-84c8f5c8df-5bb7t -n default
Name:         web-84c8f5c8df-5bb7t
Namespace:    default
Priority:     0
Node:         minikube/172.17.0.2
Start Time:   Wed, 03 Jun 2020 15:33:36 +0700
Labels:       app=web
              pod-template-hash=84c8f5c8df
Annotations:  <none>
Status:       Running
IP:           172.18.0.3
IPs:
  IP:           172.18.0.3
Controlled By:  ReplicaSet/web-84c8f5c8df
Containers:
  laravel:
    Container ID:   docker://f440b3b4fe8b3f1721a83b547e09d577d39e51d32aee2d73389723c867dc3bd2
    Image:          smart48/smt-laravel:latest
    Image ID:       docker-pullable://smart48/smt-laravel@sha256:35202976150b7d80dc84124bdc6753e2c88b954ce6d0ae4e1eb47145f822bb03
    Port:           9000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 03 Jun 2020 15:33:41 +0700
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:  500m
    Requests:
      cpu:        250m
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7lxgn (ro)
  nginx:
    Container ID:   docker://b4377cfff6113051a134ce1832b337680bc067bb06e920a29ebd37288e6a923a
    Image:          smart48/smt-nginx:latest
    Image ID:       docker-pullable://smart48/smt-nginx@sha256:68d5e204bb05a91f8e1dadbd2f995ee0ea92516d89d59e23931869a2aa59bc89
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 03 Jun 2020 15:33:51 +0700
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7lxgn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/www
    HostPathType:
  default-token-7lxgn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-7lxgn
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned default/web-84c8f5c8df-5bb7t to minikube
  Normal  Pulling    9m26s      kubelet, minikube  Pulling image "smart48/smt-laravel:latest"
  Normal  Pulled     9m23s      kubelet, minikube  Successfully pulled image "smart48/smt-laravel:latest"
  Normal  Created    9m22s      kubelet, minikube  Created container laravel
  Normal  Started    9m22s      kubelet, minikube  Started container laravel
  Normal  Pulling    9m22s      kubelet, minikube  Pulling image "smart48/smt-nginx:latest"
  Normal  Pulled     9m12s      kubelet, minikube  Successfully pulled image "smart48/smt-nginx:latest"
  Normal  Created    9m12s      kubelet, minikube  Created container nginx
  Normal  Started    9m12s      kubelet, minikube  Started container nginx

And we can use kubectl exec -it web-84c8f5c8df-b9hhd -c nginx -- /bin/bash to pick a specific container inside a pod, for example.
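The same -c flag works for logs, which is usually the quickest way to see why one container in a pod is failing:

kubectl logs web-84c8f5c8df-b9hhd -c nginx --tail=50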