Pod recreation rules

Cluster information:

Kubernetes version: v1.28.2
Cloud being used: bare-metal
Installation method: os packages from kubernetes repo
Host OS: Centos 8 Stream

I have GitLab CI/CD deploying to my Kubernetes cluster, using its integrations.
In the deploy stage of my pipeline I run:

- ./kubectl apply -f ./kubernetes/deployment.yml

It's a regular Deployment whose pod template uses the :latest image from my own private Docker registry.
Everything is fine when the pods don't exist yet: they are created from the manifest properly.
But!
When I update my code repo and the CI/CD pipeline runs while pods with the older version are already running, the pods are not recreated. I keep getting the same old version.
kubectl prints an "unchanged" status in the pipeline output.
I think it's not a pull problem; the pods are simply never recreated.

What's the best practice to make this work? Maybe some field in the manifest? I can't find anything about this in the docs, or I'm a bad searcher.

Please help!
Thanks!

My Deployment section:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: accesses
  labels:
    app.kubernetes.io/version: "0.2.003"
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: accesses
  template:
    metadata:
      labels:
        app: accesses
    spec:
      volumes:
      - name: storage-for-static
        nfs:
          server: nfs.ekord.loc
          path: /storage
      containers:
      - image: IMAGE # placeholder; replaced by the GitLab pipeline via sed
        name: accesses
        ports:
        - containerPort: 5000
        volumeMounts:
        - name: storage-for-static
          mountPath: /app/app/static/store
        envFrom:
          - configMapRef:
              name: database-config
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 5000
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        startupProbe:
          failureThreshold: 3
          periodSeconds: 10
          httpGet:
            path: /
            port: 5000
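
A quick way to confirm it is not a pull problem is to compare the image reference in the pod spec with the image ID the pods are actually running (deployment name and label taken from the manifest above):

kubectl get pods -l app=accesses -o jsonpath='{.items[*].spec.containers[0].image}'
kubectl get pods -l app=accesses -o jsonpath='{.items[*].status.containerStatuses[0].imageID}'

The second command resolves :latest to a concrete digest, so you can see whether the running pods still point at the old build.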

Hi,

Can you please describe the CI/CD process in more detail? What is the actual image tag that ends up in the deployment manifest? Do you have a versioning strategy, or do you just put the "latest" tag on whatever image was built most recently?

Hi!

These are the build, push, and deploy stages from my CI/CD pipeline.
The stages for tests and for the staging deployment are cut out.

stages:
  - build
  - push
  - deploy
variables:
  DOCKER_SERVER: appl.ekord.loc
  DOCKER_REGISTRY: dockerhub.ekord.loc:6000
  DOCKER_HOST: "tcp://${DOCKER_SERVER}:2376"
  SSH_PRIVATE_KEY: $SSH_PRIVATE_KEY

# Extends

.push:
  stage: push
  script:
    - docker push $IMAGE

.deploy:
  stage: deploy
  before_script:
    - export KUBECONFIG=$KUBECONFIG
    - apk add --no-cache curl
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    - chmod +x ./kubectl

# PRODUCTION

build prod docker image:
  tags:
    - did
  stage: build
  variables:
    ENV: $ENV_PROD
    IMAGE: $DOCKER_REGISTRY/$CI_PROJECT_NAME:latest
  before_script:
    - cp ${ENV_PROD} accesses/.env
    - cp ${SSH_PRIVATE_KEY} ./SSH_PRIVATE_KEY
  script:
    - docker build . -t $IMAGE
  only:
    - master


push production image to repository:
  extends: .push
  tags:
    - did
  variables:
    IMAGE: $DOCKER_REGISTRY/$CI_PROJECT_NAME:latest
  stage: push
  only:
    - master


deploy on production:
  extends: .deploy
  variables:
    IMAGE: $DOCKER_REGISTRY/$CI_PROJECT_NAME:latest
  tags:
    - did
  script:
    - sed -i "s+IMAGE+${IMAGE}+g" ./kubernetes/deployment.yml
    - ./kubectl apply -f ./kubernetes/deployment.yml
  stage: deploy
  only:
    - master
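
For reference, after the sed step the container image line in deployment.yml ends up as something like this (assuming the GitLab project is called accesses, matching the Deployment name):

      - image: dockerhub.ekord.loc:6000/accesses:latest

Since this value is identical on every production run, kubectl apply sees a byte-identical manifest each time.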

I don't have a versioning strategy yet, only an app.kubernetes.io/version: "0.2.xyz" label for my own reference.
For the prod image I use the :latest tag to get the Kubernetes pull policy.
In the manifest I have the IMAGE placeholder so the CI/CD pipeline can replace it with sed. I need it for the staging images, which are tagged with the $CI_PIPELINE_ID variable; the staging deployment will move to its own k8s cluster in the future (right now it runs in Swarm).

Everything works fine when no pods have been created yet: the pipeline runs, I get my pods, and the app works properly.
How can I tell Kubernetes to recreate the pods when the image behind the :latest tag is updated?

Thanks!

Thank you for sharing.

This way the update will never work. You need image versioning to trigger a Deployment rollout. The Kubernetes API never checks whether the latest tag in the container registry has been updated. From its perspective there is no need to create a new ReplicaSet, because there are no changes in the deployment manifest. Consider using the build number for your image versioning, e.g. GitLab's predefined pipeline ID:

IMAGE: $DOCKER_REGISTRY/$CI_PROJECT_NAME:$CI_PIPELINE_ID
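
A minimal sketch of how your production deploy job could pick this up, reusing the sed replacement you already have (the build and push jobs would need to switch to the same IMAGE value so the pushed tag matches what the manifest references):

deploy on production:
  extends: .deploy
  variables:
    IMAGE: $DOCKER_REGISTRY/$CI_PROJECT_NAME:$CI_PIPELINE_ID
  tags:
    - did
  script:
    - sed -i "s+IMAGE+${IMAGE}+g" ./kubernetes/deployment.yml
    - ./kubectl apply -f ./kubernetes/deployment.yml
  stage: deploy
  only:
    - master

With a unique tag per pipeline, every run writes a different image reference into deployment.yml, kubectl apply sees a changed pod template, and the Deployment controller rolls out a new ReplicaSet.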

HTH

Hm.
So this was my misunderstanding of the process. The docs say:

If the image tag is :latest, the imagePullPolicy will be automatically set to Always.

I thought that a newly pulled image was the trigger for pod recreation, like in Docker Swarm, where stack deploy always recreates the replicas when the image has been updated.

I think I need a versioning strategy :)

Thank you!

imagePullPolicy: Always instructs the kubelet to always pull the image when a pod's containers are created. However, in your case pod creation never happens, because the deployment manifest has no changes.
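
As a side note: if you want to keep the :latest tag, you can force pod recreation from the pipeline with kubectl rollout restart (available since kubectl 1.15). It stamps a restart annotation onto the pod template, which creates new pods that pull the image again thanks to imagePullPolicy: Always:

./kubectl rollout restart deployment/accesses
./kubectl rollout status deployment/accesses

Versioned tags are still the more reproducible option, since they let you tell exactly which build every pod is running.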