Ingress Nginx SSH access and forwarding to Workspace container/pod

How can I access a workspace container inside a pod from the Minikube Ingress Nginx or any Ingress Nginx for that matter?

For basic port 80 access we use

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource
  namespace: smt-local
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    # host can be set in /etc/hosts or with Laravel Valet `valet link`
  - host: smart48k8.local
    http:
      paths:
        # https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types 
        # `/` is all paths using Prefix
      - pathType: Prefix
        path: /
        backend:
          service:
            # refers to our app service exposed to the cluster
            name: nginx 
            port:
              number: 80
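
Applied and tested roughly like this (the manifest filename and the curl check are examples, not from the original setup):

kubectl apply -f ingress.yaml
# with smart48k8.local resolving to the minikube IP (e.g. via /etc/hosts):
curl -H 'Host: smart48k8.local' http://$(minikube ip)/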


But I need to access the workspace container to deploy code, via port 22 on the Ingress and then port 222 on the container.

Workspace service:

apiVersion: v1
kind: Service
metadata:
  name: workspace-service
  namespace: smt-local
spec:
  # type not chosen so ClusterIP
  # type: NodePort
  selector:
    app: workspace
  ports:
    - protocol: TCP
      port: 22
      # targetport is the internal port where traffic is sent to
      # By default and for convenience, the targetPort is set to the same value
      # as the port field.
      # targetPort: 22
      # minikube service --url workspace-service -n smt-local
      # nodePort: 30007

Endpoints

kubectl get endpoints                                                     
NAME                ENDPOINTS         AGE
nginx               172.17.0.4:80     22h
php                 172.17.0.7:9000   22h
workspace-service   172.17.0.5:22     22h

workspace deployment

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workspace
  labels:
    tier: backend
  namespace: smt-local
spec:
  # replicas: 2
  selector:
    matchLabels:
      app: workspace
      tier: backend
  template:
    metadata:
      labels:
        app: workspace
        tier: backend
    spec:
      containers:
        - name: workspace
          image: smart48/smt-workspace:1.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 22
          volumeMounts:
            - name: code-storage
              mountPath: /code
      volumes:
        - name: code-storage
          persistentVolumeClaim:
            claimName: code-pv-claim

The ingress service needs to be exposed via a NodePort or LoadBalancer service. Otherwise traffic remains within the cluster (ClusterIP). There are a few exceptions to this in cloud or bare metal deployments - e.g. using BGP.

The default port range for NodePort services is 30000-32767. If you need to expose a port outside that range (e.g. 80 or 443), then you want to use a LoadBalancer service.

LoadBalancer service types work with an external entity (often a cloud provider) that will provision an external IP and your desired port. That will then direct traffic to your service (in your example, an ingress).

Minikube has a built-in load balancer provider that mimics how it can be used in a real Kubernetes deployment. They cover this (and NodePort) in their docs:
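
A hedged sketch of what that looks like in practice (both commands are from the minikube handbook, https://minikube.sigs.k8s.io/docs/handbook/accessing/, using the service/namespace names from this thread):

# NodePort: get a reachable URL for the service
minikube service --url workspace-service -n smt-local

# LoadBalancer: run the tunnel in a separate terminal so the service gets an external IP
minikube tunnel
kubectl get svc workspace-service -n smt-local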


Changed the service to NodePort

apiVersion: v1
kind: Service
metadata:
  name: workspace-service
  namespace: smt-local
spec:
  # type not chosen so ClusterIP
  type: NodePort
  selector:
    app: workspace
  ports:
    - protocol: TCP
      port: 22
      # targetport is the internal port where traffic is sent to
      # By default and for convenience, the targetPort is set to the same value
      # as the port field.
      # targetPort: 22
      # https://minikube.sigs.k8s.io/docs/handbook/accessing/#getting-the-nodeport-using-kubectl
      # minikube service --url workspace-service -n smt-local
      nodePort: 30007

and could verify it:

kubectl get service workspace-service --output='jsonpath="{.spec.ports[0].nodePort}"' 
"30007"%                   

Then I tried to access the container from my macOS host

minikube ip                                                                          
192.168.64.21
ssh laradock@192.168.64.21 -p 30007
ssh: connect to host 192.168.64.21 port 30007: Connection refused

I checked the Minikube access point using nmap:

nmap 192.168.64.21    
Starting Nmap 7.91 ( https://nmap.org ) at 2020-12-24 08:28 +07
Nmap scan report for smart48k8.local (192.168.64.21)
Host is up (0.10s latency).
Not shown: 994 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
111/tcp  open  rpcbind
443/tcp  open  https
2049/tcp open  nfs
8443/tcp open  https-alt

and yes, port 22 is listed. It prompts for a password, but the one stored for the workspace container fails. NodePort 30007, however, is not listed and did not work. I did run

nmap 192.168.64.21 -p 30007
Starting Nmap 7.91 ( https://nmap.org ) at 2020-12-24 08:33 +07
Nmap scan report for smart48k8.local (192.168.64.21)
Host is up (0.00062s latency).

PORT      STATE    SERVICE
30007/tcp filtered unknown

Nmap done: 1 IP address (1 host up) scanned in 0.31 seconds

and it states filtered, and on a “deeper” scan (-sT) closed:

nmap -sT 192.168.64.21 -p 30007
Starting Nmap 7.91 ( https://nmap.org ) at 2020-12-24 08:35 +07
Nmap scan report for smart48k8.local (192.168.64.21)
Host is up (0.013s latency).

PORT      STATE  SERVICE
30007/tcp closed unknown

Nmap done: 1 IP address (1 host up) scanned in 0.30 seconds

Is the access issue perhaps because I used minikube addons enable ingress?

fyi:

kubectl get all -n kube-system 
NAME                                            READY   STATUS      RESTARTS   AGE
pod/coredns-74ff55c5b-7lfth                     1/1     Running     3          43h
pod/etcd-minikube                               1/1     Running     3          43h
pod/ingress-nginx-admission-create-w9rfg        0/1     Completed   0          43h
pod/ingress-nginx-admission-patch-fwr4s         0/1     Completed   2          43h
pod/ingress-nginx-controller-558664778f-gdkps   1/1     Running     4          43h
pod/kube-apiserver-minikube                     1/1     Running     3          43h
pod/kube-controller-manager-minikube            1/1     Running     3          43h
pod/kube-proxy-fjc4v                            1/1     Running     3          43h
pod/kube-scheduler-minikube                     1/1     Running     3          43h
pod/storage-provisioner                         1/1     Running     5          43h

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
service/ingress-nginx-controller-admission   ClusterIP   10.109.85.175   <none>        443/TCP                  43h
service/kube-dns                             ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   43h

NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   43h

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns                    1/1     1            1           43h
deployment.apps/ingress-nginx-controller   1/1     1            1           43h

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-74ff55c5b                     1         1         1       43h
replicaset.apps/ingress-nginx-controller-558664778f   1         1         1       43h

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           27s        43h
job.batch/ingress-nginx-admission-patch    1/1           31s        43h

I patched access for port 22

kubectl patch configmap tcp-services -n kube-system --patch '{"data":{"22":"smt-local/workspace-service:22"}}'
configmap/tcp-services patched

Then I checked if it had been applied:

kubectl get configmap tcp-services -n kube-system -o yaml                                                     
apiVersion: v1
data:
  "22": smt-local/workspace-service:22
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"tcp-services","namespace":"kube-system"}}
  creationTimestamp: "2020-12-22T06:39:45Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:addonmanager.kubernetes.io/mode: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2020-12-22T06:39:45Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:22: {}
    manager: kubectl-patch
    operation: Update
    time: "2020-12-24T01:55:08Z"
  name: tcp-services
  namespace: kube-system
  resourceVersion: "55902"
  uid: 2a1051ed-ee8f-40dd-9060-6322054de00f
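
In hindsight the ConfigMap entry is probably only half of the story: for ingress-nginx TCP forwarding the controller itself also has to expose the extra port. A sketch along the lines of the minikube TCP/UDP ingress tutorial (the container name controller is an assumption, and 2222 is used here instead of 22 to avoid clashing with the VM's own sshd):

kubectl patch configmap tcp-services -n kube-system \
  --patch '{"data":{"2222":"smt-local/workspace-service:22"}}'

# the addon's controller uses hostPorts, so expose the same port on the controller pod
kubectl patch deployment ingress-nginx-controller -n kube-system --patch '
spec:
  template:
    spec:
      containers:
      - name: controller
        ports:
        - containerPort: 2222
          hostPort: 2222'

After that, ssh -p 2222 laradock@$(minikube ip) should in principle reach the workspace container rather than the VM's own sshd.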

And yes… ssh port 22 TCP still open:

nmap -sT 192.168.64.21                                                                                        
Starting Nmap 7.91 ( https://nmap.org ) at 2020-12-24 08:56 +07
Nmap scan report for smart48k8.local (192.168.64.21)
Host is up (0.11s latency).
Not shown: 994 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
111/tcp  open  rpcbind
443/tcp  open  https
2049/tcp open  nfs
8443/tcp open  https-alt

Nmap done: 1 IP address (1 host up) scanned in 7.07 seconds

but it was open before the patch as well…

and still the same issue on SSH port 22: a password is asked for, but the password given is not accepted:

ssh laradock@192.168.64.21
The authenticity of host '192.168.64.21 (192.168.64.21)' can't be established.
ECDSA key fingerprint is SHA256:0fHgPZ+gJoihIKZ/T0Ic1ZGi/zMNgaAasBhavZ3zjUo.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.64.21' (ECDSA) to the list of known hosts.
laradock@192.168.64.21's password: 
Permission denied, please try again.
laradock@192.168.64.21's password: 
laradock@192.168.64.21: Permission denied (publickey,password,keyboard-interactive).

Perhaps this whole setup still does not work, whereas ssh -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip) does, because the latter is the standard way to access Minikube?

Update

Well, I found out that with my current setup (Ingress enabled) I can get SSH access using the user docker and the key stored by Minikube, and try to deploy with:

host('192.168.64.23')
  ->user('docker')
  ->identityFile('~/.minikube/machines/minikube/id_rsa')
  ->set('deploy_path', '/tmp/hostpath-provisioner/smt-local/code-pv-claim/code')
  ->set('bin/php', 'kubectl exec -t deployments/workspace -- php')
  ->set('bin/composer', 'kubectl exec -t deployments/workspace -- composer -d={{release_path}}');

Only, once I am in, the kubectl commands do not work… because of course they need to reach the container, and kubectl is not available inside the VM.

I do hope I can use a NodePort once I have Ingress Nginx up and running on DigitalOcean and access the port via forwarding, and I was hoping to mimic that here as well. But for now I am just SSH-ing into the VM to add a release, which works:

$ pwd
/tmp/hostpath-provisioner/smt-local/code-pv-claim/code
$ ls -la
total 20
drwxr-xr-x 5 docker docker 4096 Dec 24 02:35 .
drwxrwxrwx 3 root   root   4096 Dec 24 02:35 ..
drwxr-xr-x 2 docker docker 4096 Dec 24 02:35 .dep
lrwxrwxrwx 1 docker docker   10 Dec 24 02:35 release -> releases/1
drwxr-xr-x 3 docker docker 4096 Dec 24 02:35 releases
drwxr-xr-x 4 docker docker 4096 Dec 24 02:35 shared

but then I need to access the workspace container to run commands, and that does not yet work this way:

...
[192.168.64.23] > cd /tmp/hostpath-provisioner/smt-local/code-pv-claim/code/releases/1 && kubectl exec -t deployments/workspace -- composer -d=/tmp/hostpath-provisioner/smt-local/code-pv-claim/code/releases/1 install --verbose --prefer-dist --no-progress --no-interaction --no-dev --optimize-autoloader --no-suggest
[192.168.64.23] < bash: line 1: kubectl: command not found
➤ Executing task deploy:failed
• done on [192.168.64.23]
✔ Ok [0ms]
In Client.php line 103:
[Deployer\Exception\RuntimeException (127)]                                                                                                      
  The command "cd /tmp/hostpath-provisioner/smt-local/code-pv-claim/code/releases/1 && kubectl exec -t deployments/workspace -- composer -d=/tmp/  
  hostpath-provisioner/smt-local/code-pv-claim/code/releases/1 install --verbose --prefer-dist --no-progress --no-interaction --no-dev --optimize  
  -autoloader --no-suggest" failed.                                                                                                                
                                                                                                                                                   
  Exit Code: 127 (Command not found)                                                                                                               
                                                                                                                                                   
  Host Name: 192.168.64.23                                                                                                                         
                                                                                                                                                   
  ================                                                                                                                                 
  bash: line 1: kubectl: command not found      
...

Perhaps I can just run the kubectl commands without being SSH-ed in, as you normally would… still thinking about this.

  1. With access to Kubernetes on DigitalOcean via LoadBalancer or NodePort this command kubectl .... should work, right?

  2. Here in Minikube it does not, however, so it needs to run outside of the VM (see the sketch below)… How to do that with PHP Deployer?
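
kubectl itself runs on the macOS host and talks to the cluster's API server, so a command like the one below does not need an SSH session into the VM at all; a deploy tool would have to invoke it locally rather than over SSH. A sketch using the names from this thread (the composer flags are only an example):

kubectl exec -n smt-local -t deployments/workspace -- composer install --prefer-dist --no-dev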

Update II

Perhaps I should use docker commands instead, which do work in Minikube… Will test.

This block

host('192.168.64.23')
  ->user('docker')
  ->identityFile('~/.minikube/machines/minikube/id_rsa')
  ->set('deploy_path', '/tmp/hostpath-provisioner/smt-local/code-pv-claim/code')
  // docker exec -it $(docker ps | grep smart48/smt-workspace | awk '{print $1}') /bin/bash
  ->set('bin/php', "docker exec -t $(docker ps | grep smart48/smt-workspace | awk '{print $1}')  php")
  ->set('bin/composer', "docker exec -t $(docker ps | grep smart48/smt-workspace | awk '{print $1}') composer -d={{release_path}}");

now almost works. Commands like docker exec -t $(docker ps | grep smart48/smt-workspace | awk '{print $1}') php do run in the Hyperkit Minikube VM. I am only getting this path error:

In Client.php line 103:
                                                                                                                                                                                                          
  [Deployer\Exception\RuntimeException (1)]                                                                                                                                                               
  The command "cd /tmp/hostpath-provisioner/smt-local/code-pv-claim/code/releases/1 && docker exec -t $(docker ps | grep smart48/smt-workspace | awk '{print $1}') composer -d=/tmp/hostpath-provisioner  
  /smt-local/code-pv-claim/code/releases/1 install --verbose --prefer-dist --no-progress --no-interaction --no-dev --optimize-autoloader --no-suggest" failed.                                            
                                                                                                                                                                                                          
  Exit Code: 1 (General error)                                                                                                                                                                            
                                                                                                                                                                                                          
  Host Name: 192.168.64.23                                                                                                                                                                                
                                                                                                                                                                                                          
  ================                                                                                                                                                                                                                                                                                                                                                                                   
                                                                                                                                                                                          
    [RuntimeException]                                                                                                                                                                    
    Invalid working directory specified, =/tmp/hostpath-provisioner/smt-local/code-pv-claim/code/releases/1 does not exist.   

Weirdly enough I do have the release:

$ pwd
/tmp/hostpath-provisioner/smt-local/code-pv-claim/code/releases/1
$ ls -la
total 920
drwxr-xr-x 15 docker docker   4096 Dec 24 04:04 .
drwxr-xr-x  3 docker docker   4096 Dec 24 04:04 ..
drwxr-xr-x  2 docker docker   4096 Dec 24 04:04 .circleci
-rw-r--r--  1 docker docker    171 Dec 24 04:04 .dockerignore
-rw-r--r--  1 docker docker    220 Dec 24 04:04 .editorconfig
lrwxrwxrwx  1 docker docker     17 Dec 24 04:04 .env -> ../../shared/.env
-rw-r--r--  1 docker docker    777 Dec 24 04:04 .env.example
-rw-r--r--  1 docker docker    935 Dec 24 04:04 .env.smt.docker.example
...

Update

Ah wait. In the container it now has code as a directory twice:

docker exec -it $(docker ps | grep smart48/smt-workspace | awk '{print $1}') /bin/bash
root@workspace-566b747498-rsfqz:/code# ls -la
total 12
drwxrwxrwx 3 root     root     4096 Dec 24 02:35 .
drwxr-xr-x 1 root     root     4096 Dec 24 02:19 ..
drwxr-xr-x 5 laradock laradock 4096 Dec 24 04:04 code
root@workspace-566b747498-rsfqz:/code# cd code/
root@workspace-566b747498-rsfqz:/code/code# ll
total 20
drwxr-xr-x 5 laradock laradock 4096 Dec 24 04:04 ./
drwxrwxrwx 3 root     root     4096 Dec 24 02:35 ../
drwxr-xr-x 2 laradock laradock 4096 Dec 24 04:04 .dep/
lrwxrwxrwx 1 laradock laradock   10 Dec 24 04:04 release -> releases/1/
drwxr-xr-x 3 laradock laradock 4096 Dec 24 04:04 releases/
drwxr-xr-x 4 laradock laradock 4096 Dec 24 02:35 shared/

NB For the production environment on DigitalOcean I will probably need a LoadBalancer, or Ingress Nginx as LoadBalancer, and/or NodePort. Will look into that as soon as this local Minikube testing works.

Using

host('192.168.64.23')
  ->user('docker')
  ->identityFile('~/.minikube/machines/minikube/id_rsa')
  ->set('deploy_path', '/tmp/hostpath-provisioner/smt-local/code-pv-claim')
  // docker exec -it $(docker ps | grep smart48/smt-workspace | awk '{print $1}') /bin/bash
  ->set('bin/php', "docker exec -t $(docker ps | grep smart48/smt-workspace | awk '{print $1}')  bash -c 'cd release && php'")
  ->set('bin/composer', "docker exec -t $(docker ps | grep smart48/smt-workspace | awk '{print $1}') bash -c 'cd release && composer'");

runs all in the proper locations. I had to do some bash -c 'chained commands'. But… it now hangs at

• done on [192.168.64.23]
✔ Ok [490ms]
➤ Executing task artisan:storage:link
[192.168.64.23] > cd /tmp/hostpath-provisioner/smt-local/code-pv-claim/releases/1 && docker exec -t $(docker ps | grep smart48/smt-workspace | awk '{print $1}')  bash -c 'cd release && php' artisan --version

Perhaps a RAM issue or something… Not sure why it would hang at php artisan --version now…

Ah, timeout exceeded as well

[Symfony\Component\Process\Exception\ProcessTimedOutException]                                                                                                                                          
  The process "ssh -A -i ~/.minikube/machines/minikube/id_rsa -o ControlMaster=auto -o ControlPersist=60 -o ControlPath=/Users/jasper/.ssh/deployer_docker@192.168.64.23 docker@192.168.64.23  'bash -s;  
   printf "[exit_code:%s]" $?;'" exceeded the timeout of 300 seconds.   

What is it you’re trying to achieve? It looks like you’re wanting to SSH into a pod to then get at the host?

Yes - this is how it is intended if you want ssh access to the host with minikube.

You shouldn’t be SSH’ing into the Minikube VM / docker exec’ing at all unless it’s for troubleshooting purposes.

Yes I wanted to ssh into the Minikube, which I successfully did now.

You shouldn’t be SSH’ing into the Minikube VM / docker exec’ing at all unless it’s for troubleshooting purposes.

To use PHP Deployer, SSH access is needed, so I need it for that purpose. Otherwise I do use it for troubleshooting only. I did try a NodePort solution, as you saw, but it did not work with Ingress enabled.

Decided to keep thread open as you may have other tips to have NodePort work or another way to have PHP Deployer ssh in to add a code release @mrbobbytables . Thanks for all the help so far and Merry Christmas!

I would strongly advise against that type of workflow for containers/k8s. It really goes against the general model of container-based applications. You build a container with your application and dependencies (often tied to a specific git commit) and push that out. After that, it’s incrementing the container image, and a deployment will take care of rolling it out.
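
A minimal sketch of that flow with the names used in this thread (the tag is just an example):

docker build -t smart48/smt-laravel:1.3 .
docker push smart48/smt-laravel:1.3
kubectl -n smt-local set image deployment/php php=smart48/smt-laravel:1.3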

Yeah, did some more thinking and am now trying to “just” use Docker image tags for versions so I can use deployments more properly. Working on a Laravel PHP-FPM image in which I now include the demo code:

FROM php:7.4-fpm

WORKDIR /code

# https://learnk8s.io/blog/kubernetes-deploy-laravel-the-easy-way
# COPY . app to copy all laravel app files to this working directory
# Only use this option when this container is in a private repository
# Composer is added to https://github.com/smart48/smt-workspace 

RUN apt-get update && apt-get install -y libmcrypt-dev zip unzip git \
    libmagickwand-dev --no-install-recommends \
    && pecl install imagick \
    && docker-php-ext-enable imagick \
    && docker-php-ext-install pdo_mysql pcntl bcmath \
    && docker-php-ext-install opcache

# Configure non-root user.
ARG PUID=1000
ENV PUID ${PUID}
ARG PGID=1000
ENV PGID ${PGID}

RUN groupmod -o -g ${PGID} www-data && \
    usermod -o -u ${PUID} -g www-data www-data

COPY ./laravel.ini /usr/local/etc/php/conf.d
COPY ./opcache.ini /usr/local/etc/php/conf.d
COPY ./xlaravel.pool.conf /usr/local/etc/php-fpm.d/

COPY laravel /code

# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

RUN composer install \
&&  php artisan optimize \
&&  php artisan route:cache

This build works well locally (outside of Minikube, using Docker for Mac) and when I enter the container I can see the code added. I tagged the image and pushed it to Docker Hub:

docker tag smart48/smt-laravel smart48/smt-laravel:1.2
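
The push itself would then be:

docker push smart48/smt-laravel:1.2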

and it is on Docker Hub here.

and locally I see this in the container on my Mac:

docker run --name smt-laravel -d smart48/smt-laravel:1.2   
53e4d1fc52042b6923be79d4590f5babe29b82efcb2cf04cfba7b6e95dd4b062
➜  smt-laravel git:(master) ✗ docker exec -it smt-laravel bash
root@53e4d1fc5204:/code# ll
bash: ll: command not found
root@53e4d1fc5204:/code# ls -la
total 916
drwxr-xr-x  1 root root   4096 Dec 28 08:20 .
drwxr-xr-x  1 root root   4096 Dec 28 08:24 ..
drwxr-xr-x  2 root root   4096 Dec 28 05:46 .circleci
-rw-r--r--  1 root root    171 Dec 28 05:46 .dockerignore
-rw-r--r--  1 root root    220 Dec 28 05:46 .editorconfig
-rw-r--r--  1 root root    777 Dec 28 05:46 .env.example
-rw-r--r--  1 root root    935 Dec 28 05:46 .env.smt.docker.example
...

However, when I bring up Minikube with all the deployments, the PHP deployment

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
  namespace: smt-local
  labels:
    tier: backend
spec:
  # autoscale using `kubectl autoscale deployment x --cpu-percent=50 --min=1 --max=10` instead of setting replicas
  # https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
  # replicas: 2 
  selector:
    matchLabels:
      app: php
      tier: backend
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: php
        tier: backend
    spec:
      containers:
        - name: php
          # image: php:7-fpm
          image: smart48/smt-laravel:1.2
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9000
          resources:
            requests:
              cpu: 250m
            limits:
              cpu: 500m
          volumeMounts:
          - name: code-storage
          # may use /var/www/html at a later stage as this is php container base path 
          # and logical web root
            mountPath: /code
        # - name: laravel-horizon
        #   image: smart48/smt-horizon:latest
        #   # not sure if we need the port and volume here
        #   imagePullPolicy: IfNotPresent
        #   ports:
        #     - containerPort: 9377
        #   volumeMounts:
        #     - name: code-storage
        #       mountPath: /code
        #   command: ["/usr/local/bin/php", "artisan", "horizon"]
        #   lifecycle:
        #     preStop:
        #       exec:
        #         command: ["/usr/local/bin/php", "artisan", "horizon:terminate"]
      # Laravel Code Download so we can run Horizon without issues 
      # and have a codebase to start with For private repo better to add code to laravel image 
      # https://codepre.com/how-to-perform-git-clone-in-kubernetes-pod-deployment.html
      # commented out as we only need to fire it on first deployment
      # initContainers:
      #   - name: git-cloner
      #     image: alpine/git
      #     args:
      #         - clone
      #         - --single-branch
      #         - --
      #         - https://github.com/smart48/smt-demo
      #         - /data
      #     volumeMounts:
      #     - mountPath: /data
      #       name: code-storage
      volumes:
        - name: code-storage
          persistentVolumeClaim:
            claimName: code

does not load with the code:

kubectl exec -it php-5bb756d65-rxp4h -- /bin/bash
root@php-5bb756d65-rxp4h:/code# ll
bash: ll: command not found
root@php-5bb756d65-rxp4h:/code# ls -la
total 8
drwxrwxrwx 2 root root 4096 Dec 28 07:41 .
drwxr-xr-x 1 root root 4096 Dec 28 08:23 ..

and

docker inspect --format='{{json .Config}}' d3cb7bf6eb60:

{"Hostname":"php-5bb756d65-v6482","Domainname":"","User":"0","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"ExposedPorts":{"9000/tcp":{}},"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":["PHP_PORT_9000_TCP_PROTO=tcp","KUBERNETES_SERVICE_PORT=443","NGINX_SERVICE_HOST=10.111.105.227","NGINX_PORT_80_TCP_PROTO=tcp","PHP_PORT_9000_TCP=tcp://10.100.113.2:9000","WORKSPACE_SERVICE_PORT_2222_TCP_PROTO=tcp","KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1","PHP_PORT_9000_TCP_PORT=9000","NGINX_SERVICE_PORT=80","WORKSPACE_SERVICE_PORT_2222_TCP=tcp://10.102.125.50:2222","WORKSPACE_SERVICE_PORT=tcp://10.102.125.50:2222","KUBERNETES_SERVICE_PORT_HTTPS=443","PHP_SERVICE_HOST=10.100.113.2","PHP_SERVICE_PORT=9000","WORKSPACE_SERVICE_SERVICE_HOST=10.102.125.50","KUBERNETES_PORT=tcp://10.96.0.1:443","KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443","NGINX_PORT_80_TCP_ADDR=10.111.105.227","WORKSPACE_SERVICE_SERVICE_PORT=2222","KUBERNETES_PORT_443_TCP_PORT=443","WORKSPACE_SERVICE_PORT_2222_TCP_ADDR=10.102.125.50","PHP_PORT=tcp://10.100.113.2:9000","NGINX_PORT_80_TCP=tcp://10.111.105.227:80","NGINX_PORT_80_TCP_PORT=80","WORKSPACE_SERVICE_PORT_2222_TCP_PORT=2222","KUBERNETES_PORT_443_TCP_PROTO=tcp","PHP_PORT_9000_TCP_ADDR=10.100.113.2","NGINX_PORT=tcp://10.111.105.227:80","KUBERNETES_SERVICE_HOST=10.96.0.1","PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","PHPIZE_DEPS=autoconf \t\tdpkg-dev \t\tfile \t\tg++ \t\tgcc \t\tlibc-dev \t\tmake \t\tpkg-config \t\tre2c","PHP_INI_DIR=/usr/local/etc/php","PHP_EXTRA_CONFIGURE_ARGS=--enable-fpm --with-fpm-user=www-data --with-fpm-group=www-data --disable-cgi","PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64","PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64","PHP_LDFLAGS=-Wl,-O1 -pie","GPG_KEYS=42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312","PHP_VERSION=7.4.13","PHP_URL=https://www.php.net/distributions/php-7.4.13.tar.xz","PHP_ASC_URL=https://www.php.net/distributions/php-7.4.13.tar.xz.asc","PHP_SHA256=aead303e3abac23106529560547baebbedba0bb2943b91d5aa08fff1f41680f4","PUID=1000","PGID=1000"],"Cmd":["php-fpm"],"Healthcheck":{"Test":["NONE"]},"Image":"smart48/smt-laravel@sha256:882fa1297a8680f1d2c6d600999aa5523f76b2b6b1f2ab512bf57fc3fda72f66","Volumes":null,"WorkingDir":"/code","Entrypoint":["docker-php-entrypoint"],"OnBuild":null,"Labels":{"annotation.io.kubernetes.container.hash":"22738de7","annotation.io.kubernetes.container.ports":"[{\"containerPort\":9000,\"protocol\":\"TCP\"}]","annotation.io.kubernetes.container.restartCount":"0","annotation.io.kubernetes.container.terminationMessagePath":"/dev/termination-log","annotation.io.kubernetes.container.terminationMessagePolicy":"File","annotation.io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.container.logpath":"/var/log/pods/smt-local_php-5bb756d65-v6482_7529674a-7c8f-482d-9777-d9aa9f1f918f/php/0.log","io.kubernetes.container.name":"php","io.kubernetes.docker.type":"container","io.kubernetes.pod.name":"php-5bb756d65-v6482","io.kubernetes.pod.namespace":"smt-local","io.kubernetes.pod.uid":"7529674a-7c8f-482d-9777-d9aa9f1f918f","io.kubernetes.sandbox.id":"b5301fb5bc20692e3a93bf05574f780b95374731d96850bff9384162f0a2b5fb"},"StopSignal":"SIGQUIT"}

Any idea what I am missing?

Perhaps the issue is that I now add the code to the image on build using

COPY laravel /code

# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

RUN composer install \
&&  php artisan optimize \
&&  php artisan route:cache

It does load in the container without issue outside of Minikube, but when I run a deployment with a storage class, new PVCs and mount points, that /code directory is probably being overwritten… or at least /code in the deployment is no longer the same as the code in the container. (A volume mounted at /code shadows whatever the image had at that path; unlike Docker named volumes, Kubernetes does not copy the image's content into an empty volume.)

So how do I do this then? How do I add code to an image and load that code in the volume (Minikube storageClass PVC/PV or DO object storage)? Or should I add the code in the deployment instead? The latter seems more likely. Not sure yet how one normally does this, so pointers would be great.

I do see I can add code to a temporary location in the image

# Add code to temporary location on image

COPY laravel /var/www

and then use a Kubernetes BusyBox initContainer to copy the code to the persistent volume:

...
initContainers:
- name: install
  image: busybox
  volumeMounts:
  - name: dir
    mountPath: /code
  command:
  - cp
  - "-r"
  - "/var/www/."
  - "/code"

NB the code above does not work yet in the updated deployment:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
  namespace: smt-local
  labels:
    tier: backend
spec:
  # autoscale using `kubectl autoscale deployment x --cpu-percent=50 --min=1 --max=10` instead of setting replicas
  # https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
  # replicas: 2 
  selector:
    matchLabels:
      app: php
      tier: backend
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: php
        tier: backend
    spec:
      containers:
        - name: php
          # image: php:7-fpm
          image: smart48/smt-laravel:1.6
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9000
          resources:
            requests:
              cpu: 250m
            limits:
              cpu: 500m
          volumeMounts:
          - name: code-storage
          # may use /var/www/html at a later stage as this is php container base path 
          # and logical web root
            mountPath: /code
      initContainers:
      - name: install
        image: busybox
        volumeMounts:
        - name: code-storage
          mountPath: /code
        command:
        - cp
        - "-r"
        - "/var/www/."
        - "/code"
        # - name: laravel-horizon
        #   image: smart48/smt-horizon:latest
        #   # not sure if we need the port and volume here
        #   imagePullPolicy: IfNotPresent
        #   ports:
        #     - containerPort: 9377
        #   volumeMounts:
        #     - name: code-storage
        #       mountPath: /code
        #   command: ["/usr/local/bin/php", "artisan", "horizon"]
        #   lifecycle:
        #     preStop:
        #       exec:
        #         command: ["/usr/local/bin/php", "artisan", "horizon:terminate"]
      # Laravel Code Download so we can run Horizon without issues 
      # and have a codebase to start with For private repo better to add code to laravel image 
      # https://codepre.com/how-to-perform-git-clone-in-kubernetes-pod-deployment.html
      # commented out as we only need to fire it on first deployment
      # initContainers:
      #   - name: git-cloner
      #     image: alpine/git
      #     args:
      #         - clone
      #         - --single-branch
      #         - --
      #         - https://github.com/smart48/smt-demo
      #         - /data
      #     volumeMounts:
      #     - mountPath: /data
      #       name: code-storage
      volumes:
        - name: code-storage
          persistentVolumeClaim:
            claimName: code

Code is in /var/www/ but does not get copied over yet…

So I only need to work out how I can

  • copy code from /var/www/ to /code in php deployment initContainer
  • run composer install in image from /var/www and run some commands - done
  • how I can deploy a new version every time - perhaps not an issue with image tags?

Well this code did work in the end:

initContainers:
- name: install
  image: busybox
  volumeMounts:
  - name: code-storage
    mountPath: /code
  command:
  - cp
  - "-RT"
  - "/var/www/."
  - "/code/"

does seem to run now. Only the ownership is not 1000:1000 or docker:docker. No solution for that yet. Also not sure why there were issues earlier. I do however need this to run not just once, but on every deployment. Tried adding a command to the PHP-FPM container, but it seemed to stop once the command was done.

  1. how can I run this copy command simply on every deployment? Do I perhaps need a Job I can run instead?
  2. how can I use this command and copy over as user 1000 or Docker?
  3. will a new run and new copy overwrite all the old code in /code?

On point one I am perhaps overthinking it: when I change the image tag, the initContainer may run anew. But I do not know yet.

Update initContainer trigger

initContainers do run again on a tag change, so that would mean that if I add a new Laravel image with a new tag and run kubectl apply -f deployments/php.yml, the initContainer will copy over the code. Will need to test this to see if it overwrites properly though.

Information based on:

What triggers init container to be run?

Basically initContainers are run every time a Pod, which has such containers in its definition, is created, and the reasons for creation of a Pod can be quite different. As you can read in the official documentation, init containers run before app containers in a Pod and they always run to completion. If a Pod’s init container fails, Kubernetes repeatedly restarts the Pod until the init container succeeds. So one of the things that trigger starting an initContainer is, among others, a previous failed attempt of starting it.

Will editing deployment descriptor (or updating it with helm), for example, changing the image tag, trigger the init container?

Yes, basically every change to the Deployment definition that triggers creation/re-creation of the Pods managed by it also triggers their initContainers to be run. It doesn’t matter if you manage it with helm or manually. Some slight changes, like adding a new set of labels to your Deployment, don’t make it re-create its Pods, but changing the container image for sure causes the controller (Deployment, ReplicationController or ReplicaSet) to re-create its Pods.

With a newly added element in initContainers I managed to add the files AND change the user and group settings:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
  namespace: smt-local
  labels:
    tier: backend
spec:
  # autoscale using `kubectl autoscale deployment x --cpu-percent=50 --min=1 --max=10` instead of setting replicas
  # https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
  # replicas: 2 
  selector:
    matchLabels:
      app: php
      tier: backend
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: php
        tier: backend
    spec:
      containers:
        - name: php
          # image: php:7-fpm
          image: smart48/smt-laravel:1.6
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9000
          resources:
            requests:
              cpu: 250m
            limits:
              cpu: 500m
          volumeMounts:
          - name: code-storage
          # may use /var/www/html at a later stage as this is php container base path 
          # and logical web root
            mountPath: /code
      initContainers:
      - name: install
        image: busybox
        volumeMounts:
        - name: code-storage
          mountPath: /code
        command:
        - cp
        - "-RT"
        - "/var/www/."
        - "/code/"
      - name: chown
        image: busybox
        volumeMounts:
        - name: code-storage
          mountPath: /code
        command:
        - chown
        - "-R"
        - "1000:1000"
        - "/code/"
        # - name: laravel-horizon
        #   image: smart48/smt-horizon:latest
        #   # not sure if we need the port and volume here
        #   imagePullPolicy: IfNotPresent
        #   ports:
        #     - containerPort: 9377
        #   volumeMounts:
        #     - name: code-storage
        #       mountPath: /code
        #   command: ["/usr/local/bin/php", "artisan", "horizon"]
        #   lifecycle:
        #     preStop:
        #       exec:
        #         command: ["/usr/local/bin/php", "artisan", "horizon:terminate"]
      # Laravel Code Download so we can run Horizon without issues 
      # and have a codebase to start with For private repo better to add code to laravel image 
      # https://codepre.com/how-to-perform-git-clone-in-kubernetes-pod-deployment.html
      # commented out as we only need to fire it on first deployment
      # initContainers:
      #   - name: git-cloner
      #     image: alpine/git
      #     args:
      #         - clone
      #         - --single-branch
      #         - --
      #         - https://github.com/smart48/smt-demo
      #         - /data
      #     volumeMounts:
      #     - mountPath: /data
      #       name: code-storage
      volumes:
        - name: code-storage
          persistentVolumeClaim:
            claimName: code

User and group 1000 is www-data here due to the image setup, whereas in the workspace container user 1000 is laradock. Perhaps I need to do this for other containers as well. Will look into this.

Whether the code gets neatly overwritten on each new image tag triggering the initContainer cp -RT command I am not sure of yet.

Just ran a new deployment after an image tag change. I did see /var/www/ showing the latest Composer vendor rebuild. This is the directory the image build adds the code to.

But in the PHP deployment container’s /code base, from where the site should load, the vendor folder was from the day before:

root@php-5bcd4fc88d-xcdkc:/code# ls -la |grep vendor
drwxr-xr-x 53 www-data www-data   4096 Dec 29 07:21 vendor
root@php-5bcd4fc88d-xcdkc:/code# ls -la /var/www/ |grep vendor
drwxr-xr-x 53 root     root   4096 Dec 30 02:33 vendor
root@php-5bcd4fc88d-xcdkc:/code# date

current date on Minikube VM on checking this was:

Wed Dec 30 02:49:45 UTC 2020

So… it seems the cp command is not overwriting the code. The deployment of the PHP pod and container did run again; I saw all this, and the new image was added / used.

The cp command did not run, however. This would contradict the statement that the initContainer runs when the main container image tag is updated in the deployment.

When I check the pod it does seem the initContainer ran:

kubectl describe pod php-5bcd4fc88d-xcdkc
Name:         php-5bcd4fc88d-xcdkc
Namespace:    smt-local
Priority:     0
Node:         minikube/192.168.64.27
Start Time:   Wed, 30 Dec 2020 09:43:29 +0700
Labels:       app=php
              pod-template-hash=5bcd4fc88d
              tier=backend
Annotations:  <none>
Status:       Running
IP:           172.17.0.6
IPs:
  IP:           172.17.0.6
Controlled By:  ReplicaSet/php-5bcd4fc88d
Init Containers:
  install:
    Container ID:  docker://42aa4fd44f2c22bc981435bb970bb7d08fc878d2f2584b6c2ae61d9d7e012412
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:49dae530fd5fee674a6b0d3da89a380fc93746095e7eca0f1b70188a95fd5d71
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
      -RT
      /var/www/.
      /code/
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 30 Dec 2020 09:43:41 +0700
      Finished:     Wed, 30 Dec 2020 09:43:41 +0700
    Ready:          True
    Restart Count:  0
    Environment:    <none>
..

so why no new vendor directory then?
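
One way to check what the busybox init image itself ships at /var/www (which is what the cp above actually reads, since an init container only sees its own image plus the mounted volumes) would be a throwaway pod along these lines:

kubectl run bb-check --rm -it --restart=Never --image=busybox -- ls -la /var/www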

I do need the initContainer to run the copy command on a deployment update:

...
initContainers:
- name: install
  image: busybox
  volumeMounts:
  - name: code-storage
    mountPath: /code
  command:
  - cp
  - "-RT"
  - "/var/www/."
  - "/code/"
- name: chown
  image: busybox
  volumeMounts:
  - name: code-storage
    mountPath: /code
  command:
  - chown
  - "-R"
  - "1000:1000"
  - "/code/"
..

And it seems it does run, though the start and completion times appear to be the same…? I do however also need the code to be overwritten. How can I achieve this?

I decided to use lifecycle to run the command for the container post start:

...
lifecycle:
  postStart:
    exec:
      command: ["/bin/bash", "-c", "cp -rt /var/www/. /code/"]
...

But that did not run. There was a new deployment, but the /code base was again not overwritten. I guess because a post-start was not triggered?

This hook is executed immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler.

The above would mean that it should run after a new container tag has been added, right?

Well, initContainers only run on Pod creation it seems,

A Pod can restart, causing re-execution of init containers, for the following reasons:

* A user updates the Pod specification, causing the init container image to change. Any changes to the init container image restarts the Pod. App container image changes only restart the app container.
* The Pod infrastructure container is restarted. This is uncommon and would have to be done by someone with root access to nodes.
* All containers in a Pod are terminated while restartPolicy is set to Always, forcing a restart, and the init container completion record has been lost due to garbage collection.

not when I deploy anew, which is correct then. The lifecycle seems to also only run after a container start.

So how can I run a command to copy the latest to /code after I updated the image in the php deployment?

Seems I had to solve the code copying at the Docker image level:

FROM php:7.4-fpm

# Install Composer
COPY --from=composer /usr/bin/composer /usr/bin/composer 

WORKDIR /code

# Add Necessary PHP Packages

RUN apt-get update && apt-get install -y libmcrypt-dev zip unzip git \
    libmagickwand-dev --no-install-recommends \
    && pecl install imagick \
    && docker-php-ext-enable imagick \
    && docker-php-ext-install pdo_mysql pcntl bcmath \
    && docker-php-ext-install opcache

# Configure non-root user.
ARG PUID=1000
ENV PUID ${PUID}
ARG PGID=1000
ENV PGID ${PGID}

# WORKDIR www-data
RUN groupmod -o -g ${PGID} www-data && \
    usermod -o -u ${PUID} -g www-data www-data

# Copy PHP config files
COPY ./laravel.ini /usr/local/etc/php/conf.d
COPY ./opcache.ini /usr/local/etc/php/conf.d
COPY ./xlaravel.pool.conf /usr/local/etc/php-fpm.d/

# Add code to temporary location on image
COPY laravel /code

# Install Composer Packages
RUN cd /code && composer install \
&& chown -R 1000:1000 /code

and then add a new tag to deployment:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
  namespace: smt-local
  labels:
    tier: backend
spec:
  # autoscale using `kubectl autoscale deployment x --cpu-percent=50 --min=1 --max=10` instead of setting replicas
  # https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
  # replicas: 2 
  selector:
    matchLabels:
      app: php
      tier: backend
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: php
        tier: backend
    spec:
      containers:
        - name: php
          # image: php:7-fpm
          image: smart48/smt-laravel:2.0.2
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9000
          resources:
            requests:
              cpu: 250m
            limits:
              cpu: 500m
          # lifecycle:
          #   postStart:
          #     exec:
          #       command: ["/bin/bash", "-c", "cp -rt /var/www/. /code/"]
          volumeMounts:
          - name: code-storage
          # may use /var/www/html at a later stage as this is php container base path 
          # and logical web root
            mountPath: /code
      # initContainers:
      # - name: install
      #   image: busybox
      #   volumeMounts:
      #   - name: code-storage
      #     mountPath: /code
      #   command:
      #   - cp
      #   - "-RT"
      #   - "/var/www/."
      #   - "/code/"
      # - name: chown
      #   image: busybox
      #   volumeMounts:
      #   - name: code-storage
      #     mountPath: /code
      #   command:
      #   - chown
      #   - "-R"
      #   - "1000:1000"
      #   - "/code/"
        # - name: laravel-horizon
        #   image: smart48/smt-horizon:latest
        #   # not sure if we need the port and volume here
        #   imagePullPolicy: IfNotPresent
        #   ports:
        #     - containerPort: 9377
        #   volumeMounts:
        #     - name: code-storage
        #       mountPath: /code
        #   command: ["/usr/local/bin/php", "artisan", "horizon"]
        #   lifecycle:
        #     preStop:
        #       exec:
        #         command: ["/usr/local/bin/php", "artisan", "horizon:terminate"]
      # Laravel Code Download so we can run Horizon without issues 
      # and have a codebase to start with For private repo better to add code to laravel image 
      # https://codepre.com/how-to-perform-git-clone-in-kubernetes-pod-deployment.html
      # commented out as we only need to fire it on first deployment
      # initContainers:
      #   - name: git-cloner
      #     image: alpine/git
      #     args:
      #         - clone
      #         - --single-branch
      #         - --
      #         - https://github.com/smart48/smt-demo
      #         - /data
      #     volumeMounts:
      #     - mountPath: /data
      #       name: code-storage
      volumes:
        - name: code-storage
          persistentVolumeClaim:
            claimName: code

without the initContainer or lifecycle. This did work now. Not sure why it did not work a week or so ago. Lost a lot of time just getting the code to be copied over in Kubernetes. It seems I am better off doing the cp, chown and php artisan commands at container build time. Closing this thread now as I have mainly achieved what I first tried to do the hard way. What I have now is:

  • Copying of code in container
  • Code preparation commands in container
  • Deployment via image with code tagged with a version and no PHP Deployer via SSH port
  • K8 Scaling and general Pod setup
  • K8 including Laravel PHP image
  • K8 Mounting of data
  • K8 port setup
  • K8 for Persistent Volume Claim and setup

Will need to check a second deployment with a code change which should load new code in mounted path / PVC. But I do hope it will.

No, I can add it to the container at /var/www, but if I add it to the mounted volume /code, which is also the container WORKDIR, and then try to load it in the deployment, I get ZERO code. So

FROM php:7.4-fpm

# Install Composer
COPY --from=composer /usr/bin/composer /usr/bin/composer 

WORKDIR /code

# Add Necessary PHP Packages

RUN apt-get update && apt-get install -y libmcrypt-dev zip unzip git \
    libmagickwand-dev --no-install-recommends \
    && pecl install imagick \
    && docker-php-ext-enable imagick \
    && docker-php-ext-install pdo_mysql pcntl bcmath \
    && docker-php-ext-install opcache

# Configure non-root user.
ARG PUID=1000
ENV PUID ${PUID}
ARG PGID=1000
ENV PGID ${PGID}

# WORKDIR www-data
RUN groupmod -o -g ${PGID} www-data && \
    usermod -o -u ${PUID} -g www-data www-data

# Copy PHP config files
COPY ./laravel.ini /usr/local/etc/php/conf.d
COPY ./opcache.ini /usr/local/etc/php/conf.d
COPY ./xlaravel.pool.conf /usr/local/etc/php-fpm.d/

# Add code to temporary location on image
COPY laravel /var/www

# Install Composer Packages
RUN cd /var/www && composer install \
&& chown -R 1000:1000 /var/www

is needed. But copying to the PVC-backed /code volume in Kubernetes has so far only succeeded on container start with an initContainer, and I need it done each time a new image tag is added. How?

Update

Looking into rolling updates to load new code in a new pod with a new container. Perhaps with kubectl set image … I can update to the latest image and trigger new pods that will then run the initContainer and replace the old pods. Never worked with this before. Also reading Interactive Tutorial - Updating Your App | Kubernetes and Deploying and Updating Apps with Kubernetes - Manning.

I ran an image update and the command was accepted:

kubectl set image deployments/php php=smart48/smat-laravel:2.0.5
deployment.apps/php image updated

However I hit an ImagePullBackOff initially:

kubectl get pods
NAME                         READY   STATUS             RESTARTS   AGE
mysql-686f78b8dd-mhzfs       1/1     Running            5          47h
nginx-5648d7d44b-dcwc8       1/1     Running            13         47h
php-6858d569b8-59qwf         0/1     ImagePullBackOff   0          65s
workspace-598b5ff496-bk8wk   1/1     Running            5          47h

and

kubectl logs -f php-6858d569b8-59qwf     
Error from server (BadRequest): container "php" in pod "php-6858d569b8-59qwf" is waiting to start: trying and failing to pull image

tag 2.0.5 does exist on Docker Hub… odd. The events also showed the issue:

kubectl get events
LAST SEEN   TYPE      REASON              OBJECT                      MESSAGE
5m5s        Normal    Scheduled           pod/php-6858d569b8-59qwf    Successfully assigned smt-local/php-6858d569b8-59qwf to minikube
3m28s       Normal    Pulling             pod/php-6858d569b8-59qwf    Pulling image "smart48/smat-laravel:2.0.5"
3m23s       Warning   Failed              pod/php-6858d569b8-59qwf    Failed to pull image "smart48/smat-laravel:2.0.5": rpc error: code = Unknown desc = Error response from daemon: pull
...

And the pod did not start anew:

 kubectl get pods
NAME                         READY   STATUS             RESTARTS   AGE
mysql-686f78b8dd-mhzfs       1/1     Running            5          47h
nginx-5648d7d44b-dcwc8       1/1     Running            13         47h
php-6858d569b8-59qwf         0/1     ImagePullBackOff   0          6m12s
workspace-598b5ff496-bk8wk   1/1     Running            5          47h

Update

My bad, I did not spell the image correctly:

kubectl set image deployments/php php=smart48/smt-laravel:2.0.5
deployment.apps/php image updated
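
To watch the rollout (and roll back if needed), the standard commands would be (namespace as used in this thread):

kubectl rollout status deployment/php -n smt-local
kubectl rollout undo deployment/php -n smt-local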

Although the new image was pulled

Normal  Scheduled  2m10s  default-scheduler  Successfully assigned smt-local/php-5f798c7847-h6kp7 to minikube
  Normal  Pulling    2m9s   kubelet            Pulling image "smart48/smt-laravel:2.0.5"
  Normal  Pulled     2m6s   kubelet            Successfully pulled image "smart48/smt-laravel:2.0.5" in 3.75711883s
  Normal  Created    2m6s   kubelet            Created container php
  Normal  Started    2m5s   kubelet            Started container php

the new code was not copied over to /code. Perhaps because the previously applied deployment did not have the initContainer that the new one has now…?

Update 2

Applied the latest YAML with the initContainer and pushed a new container version. Then set the new image to roll out a new pod:

smt-larakube git:(main) ✗ kubectl set image deployments/php php=smart48/smt-laravel:2.0.6
deployment.apps/php image updated
➜  smt-larakube git:(main) ✗ kubectl get pods
NAME                         READY   STATUS        RESTARTS   AGE
mysql-686f78b8dd-mhzfs       1/1     Running       5          2d
nginx-5648d7d44b-dcwc8       1/1     Running       13         2d
php-5f595c846b-bw4c6         0/1     Init:0/2      0          4s
php-7b868d5f59-l4w9p         0/1     Terminating   0          7m53s
workspace-598b5ff496-bk8wk   1/1     Running       5          2d

but again the initContainer did run, yet started and finished in the same second, saying all was done:

Init Containers:
  install:
    Container ID:  docker://82c25c5e57e2be98352381bfe5f6c85e59d1b27786d9ea9084813e847832e159
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:49dae530fd5fee674a6b0d3da89a380fc93746095e7eca0f1b70188a95fd5d71
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
      -RT
      /var/www/.
      /code/
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 31 Dec 2020 08:59:45 +0700
      Finished:     Thu, 31 Dec 2020 08:59:45 +0700

… but no code was copied over to /code:

kubectl exec -it workspace-598b5ff496-bk8wk -- /bin/bash
root@workspace-598b5ff496-bk8wk:/code# ll
total 8
drwxr-xr-x 2 laradock laradock 4096 Dec 30 08:26 ./
drwxr-xr-x 1 root     root     4096 Dec 30 06:42 ../

It was still in /var/www:

kubectl exec -it php-5f595c846b-bw4c6 -- /bin/bash 
root@php-5f595c846b-bw4c6:/code# ls
root@php-5f595c846b-bw4c6:/code# cd /var/www/
root@php-5f595c846b-bw4c6:/var/www# ls -la
total 912
drwxr-xr-x  1 www-data www-data   4096 Dec 30 08:38 .
drwxr-xr-x  1 root     root       4096 Dec 11 07:16 ..
drwxr-xr-x  1 www-data www-data   4096 Dec 28 05:46 .circleci
-rw-r--r--  1 www-data www-data    171 Dec 28 05:46 .dockerignore
-rw-r--r--  1 www-data www-data    220 Dec 28 05:46 .editorconfig
-rw-r--r--  1 www-data www-data    777 Dec 28 05:46 .env.example
-rw-r--r--  1 www-data www-data    935 Dec 28 05:46 .env.smt.docker.example
-rw-r--r--  1 www-data www-data    944 Dec 28 05:46 .env.smt.example
-rw-r--r--  1 www-data www-data    111 Dec 28 05:46 .gitattributes
-rw-r--r--  1 www-data www-data    197 Dec 28 05:46 .gitignore
-rw-r--r--  1 www-data www-data    144 Dec 30 01:51 .gitmodules
...

I tested the copy command and it does work in the container:

root@php-5f595c846b-bw4c6:/code# cp -RT /var/www/. /code/
root@php-5f595c846b-bw4c6:/code# ls -la
total 912
drwxr-xr-x 14 www-data www-data   4096 Dec 31 02:07 .
drwxr-xr-x  1 root     root       4096 Dec 31 01:59 ..
drwxr-xr-x  2 root     root       4096 Dec 31 02:07 .circleci
-rw-r--r--  1 root     root        171 Dec 31 02:07 .dockerignore
-rw-r--r--  1 root     root        220 Dec 31 02:07 .editorconfig
-rw-r--r--  1 root     root        777 Dec 31 02:07 .env.example
-rw-r--r--  1 root     root        935 Dec 31 02:07 .env.smt.docker.example
...

So why is it not working??

NB Reading https://itnext.io/scaling-your-symfony-application-and-preparing-it-for-deployment-on-kubernetes-c102bf246a93 and Running Dockerized Laravel Applications On Top Of Kubernetes, but they seem to use a similar method…

Found the solution, I think. I needed my own image, not Busybox, to copy the code: the busybox image does not contain the /var/www code from the Laravel image, so there was nothing to copy over.

initContainers:
- name: install
  image: smart48/smt-laravel:2.0.9
  volumeMounts:
  - name: code-storage
    mountPath: /code
  # https://itnext.io/scaling-your-symfony-application-and-preparing-it-for-deployment-on-kubernetes-c102bf246a93
  # https://www.magalix.com/blog/running-dockerized-laravel-applications-on-top-of-kubernetes
  command: ["cp", "-r", "/var/www/.", "/code/"]

This seems to finally work.
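
If the ownership fix is needed in the same step, both commands could presumably be combined in this one initContainer, since the Laravel image has a shell (a sketch, same image and volume as above):

command: ["sh", "-c", "cp -r /var/www/. /code/ && chown -R 1000:1000 /code/"]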