Does the ReplicaSet ID have a meaning?

Hey, I'm using Kubernetes 1.20.

I'm applying a Deployment that fails because it references a Secret that does not exist (the placeholders are filled in by a script that generates random values, which do not exist in the cluster).
My question: does anyone have a clue why, every time I apply this Deployment, the ReplicaSet always gets the same ID when it fails? Even after I remove the Secret reference and the Deployment works, it always uses the same ReplicaSet ID.

Does the ReplicaSet ID have a meaning?

Cluster information:

Kubernetes version: 1.20
Bare metal.
Installation method: kubeadm

The YAML:

```yaml
apiVersion: v1
kind: List
metadata: {}
items:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      app: missing-secret
    name: <name>
    namespace: <namespace>
  spec:
    selector:
      matchLabels:
        app: missing-secret
    replicas: 2
    revisionHistoryLimit: 3
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: missing-secret
      spec:
        containers:
        - image: k8s.gcr.io/echoserver:1.4
          imagePullPolicy: Always
          name: <name>
          env:
          - name: <envvarname>
            valueFrom:
              secretKeyRef:
                name: <secret-name>
                key: <secret-key>
          resources:
            limits:
              cpu: 100m
              memory: 40Mi
            requests:
              cpu: 70m
              memory: 10Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        serviceAccount: sa-name
        securityContext: {}
        terminationGracePeriodSeconds: 30
```
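For context, the pods are stuck only because the referenced Secret is missing. If you wanted them to start, a Secret matching the placeholder names above could be created with a manifest like this (a minimal sketch — `<secret-name>`, `<namespace>`, and `<secret-key>` are the same placeholders as in the Deployment, and `some-value` is an arbitrary example value):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <secret-name>
  namespace: <namespace>
type: Opaque
stringData:
  <secret-key>: some-value
```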

When posting pasted blocks of text, it is recommended to put your text inside code blocks using ```.

When you say “replica id”, are you asking about pod template hashes?

Yes, exactly. Thank you! I was going crazy trying to figure out why it was happening; I had a feeling that might be the reason.

The reason I asked this question is to be sure there is no bug in our deployments or in the pipelines that deploy them (because the ReplicaSet ID was always the same while I tested, even after changing it). I wanted to know if my hunch was correct.

Yep, definitely not a bug. It’s a documented feature.

If you change the pod template and look at `kubectl get replicasets`, you will see a new ReplicaSet appear: the Deployment controller uses the pod-template-hash to manage multiple ReplicaSets, and the same pod template always produces the same hash.
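The deterministic behavior above can be sketched in a few lines. Note this is only an illustration of the property, not the real algorithm: the actual Deployment controller hashes the serialized PodTemplateSpec with a 32-bit FNV-1a hash (plus a collision counter) and encodes it with a restricted alphabet, whereas the sketch below uses SHA-256 over a canonical JSON form just to show that an identical template always yields an identical suffix.

```python
import hashlib
import json

def template_hash(pod_template: dict) -> str:
    """Deterministic short hash of a pod template (illustrative sketch only).

    Kubernetes itself uses FNV-1a, not SHA-256; the point is that the
    hash depends only on the template's content, so re-applying the
    same template reproduces the same ReplicaSet name suffix.
    """
    canonical = json.dumps(pod_template, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:10]

template = {
    "labels": {"app": "missing-secret"},
    "containers": [{"name": "app", "image": "k8s.gcr.io/echoserver:1.4"}],
}

h1 = template_hash(template)
h2 = template_hash(template)   # re-applying the unchanged template
assert h1 == h2                # same template -> same ReplicaSet suffix

template["containers"][0]["image"] = "k8s.gcr.io/echoserver:1.5"
h3 = template_hash(template)
assert h1 != h3                # changed template -> new ReplicaSet
```

This is why deleting and re-applying the same failing Deployment kept producing a ReplicaSet with the same ID: the pod template never changed, so neither did its hash.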