When a Kubernetes Deployment is deleted, its ReplicaSets are not being deleted

I am using Kubernetes version 1.25.8.
I have a YAML manifest that creates a Deployment with 2 nginx pods. When I apply it (kubectl apply), it creates the resources below:
1 Deployment
1 ReplicaSet
2 Pods.
When I delete it (kubectl delete), the Deployment is deleted, but the ReplicaSet and Pods are still there.

Note: I also tried spec.revisionHistoryLimit, but it only takes effect when I update the Deployment with some change (for example the nginx version); on Deployment deletion, the ReplicaSets and Pods are still there.
This behavior is causing a problem: if I delete and re-create the Deployment four times with different nginx versions, there are 4 ReplicaSets, and each one creates 2 Pods (8 in total) running different versions.
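As a manual workaround (just a sketch; the namespace, Deployment name and label come from the manifest further down in this post), the leftovers can be cleaned up by hand, but that should not be necessary:

# force an explicit foreground cascading delete of the Deployment
kubectl -n test delete deployment my-nginx --cascade=foreground

# remove any ReplicaSets and Pods that were still left behind, selected by label
kubectl -n test delete replicaset,pod -l run=my-nginx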

Please post complete, working YAML and console logs, so people can see exactly what is happening and try to repro.

Here is a full description of the problem.

# create test namespace
kubectl create namespace test

# change yaml file (image: nginx:1.16.0) and apply
kubectl -f nginx-deployment.yaml -n test apply

# 1 service, 1 deployment, 1 replicaset and 2 pods created.
kubectl get all -n test
NAME                            READY   STATUS    RESTARTS   AGE
pod/my-nginx-5899859b88-8l8t8   1/1     Running   0          14m
pod/my-nginx-5899859b88-r88rr   1/1     Running   0          14m

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   2/2     2            2           14m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/my-nginx-5899859b88   2         2         2       14m

# delete resource with yaml
kubectl -f nginx-deployment.yaml -n test delete

# deployment and service are removed, replicaSet and pods are still there.
kubectl get all -n test
NAME                            READY   STATUS    RESTARTS   AGE
pod/my-nginx-5899859b88-8l8t8   1/1     Running   0          15m
pod/my-nginx-5899859b88-r88rr   1/1     Running   0          15m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/my-nginx-5899859b88   2         2         2       15m

# change yaml file (image: nginx:1.17.0) and apply
kubectl -f nginx-deployment.yaml -n test apply

# 1 service, 1 deployment, 1 replicaset and 2 pods created. (old replicaSet and Pods are also there)

kubectl get all -n test
NAME                            READY   STATUS    RESTARTS   AGE
pod/my-nginx-5899859b88-cftq5   1/1     Running   0          5m36s
pod/my-nginx-5899859b88-jv5c7   1/1     Running   0          5m36s
pod/my-nginx-6c8fb56bf7-6m8nv   1/1     Running   0          2m31s
pod/my-nginx-6c8fb56bf7-mbrpg   1/1     Running   0          2m31s

NAME               TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)        AGE
service/my-nginx   LoadBalancer   10.98.42.5   10.10.20.182   80:32541/TCP   2m31s

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   2/2     2            2           2m32s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/my-nginx-5899859b88   2         2         2       5m37s
replicaset.apps/my-nginx-6c8fb56bf7   2         2         2       2m32s

Problem: When we delete the Deployment, the ReplicaSet and Pods are not deleted. Hence there is one service and four pods: two pods on version 1.16.0 and two on version 1.17.0. The ReplicaSet and Pods must be deleted along with the Deployment.
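One thing I still need to verify (just a sketch of the check, using the leftover ReplicaSet name from the output above): whether the orphaned ReplicaSet still carries an ownerReference pointing back at the Deployment. If it does, the garbage collector should have removed it once the Deployment was deleted.

# inspect the ownerReferences on the leftover ReplicaSet
kubectl -n test get replicaset my-nginx-5899859b88 -o jsonpath='{.metadata.ownerReferences}{"\n"}'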

nginx-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.16.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80  
    protocol: TCP
  selector:
    run: my-nginx

@thockin I just posted the details in reply. Thank you.

Thanks. Does not reproduce for me:

$ kubectl create namespace test
namespace/test created

$ kubectl -n test apply -f - << EOF
> apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.16.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80  
    protocol: TCP
  selector:
    run: my-nginx
> EOF
deployment.apps/my-nginx created
service/my-nginx created

$ kubectl get all -n test
NAME                           READY   STATUS    RESTARTS   AGE
pod/my-nginx-6db65dfc4-dxtwl   1/1     Running   0          10s
pod/my-nginx-6db65dfc4-zvzlr   1/1     Running   0          10s

NAME               TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/my-nginx   LoadBalancer   10.0.192.30   <pending>     80:30608/TCP   9s

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   2/2     2            2           10s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/my-nginx-6db65dfc4   2         2         2       10s

$ kubectl -n test delete -f - << EOF
> apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.16.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80  
    protocol: TCP
  selector:
    run: my-nginx
> EOF
deployment.apps "my-nginx" deleted
service "my-nginx" deleted

$ kubectl get all -n test
No resources found in test namespace.

There’s something special happening in your cluster - a bug of this magnitude would have all of the test dashboards lit up.
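If it is specific to your cluster, a reasonable first check (sketch only; the component=kube-controller-manager label assumes a kubeadm-style control plane) is whether the garbage collector in kube-controller-manager is healthy, since that is the controller that deletes dependents such as ReplicaSets and Pods:

# check that kube-controller-manager is running (kubeadm-style control plane assumed)
kubectl -n kube-system get pods -l component=kube-controller-manager

# scan its recent logs for garbage-collector errors
kubectl -n kube-system logs -l component=kube-controller-manager --tail=200 | grep -i garbage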

@Karan_Kumar Were you able to figure out the problem? I’m facing the same issue with my cluster.

@usman I don’t remember exactly, because it’s been more than 7 months.
But I believe the issue was caused by the networking in my Kubernetes cluster. I was using Calico for networking, so I updated the Calico version and the problem was resolved.
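In case it helps anyone who lands here later, a rough way to check whether you are in the same situation (sketch only; the k8s-app=calico-node label assumes a standard Calico install) is to confirm the CNI pods are healthy and then re-run the delete:

# verify the Calico pods are healthy (label assumes a standard Calico manifest)
kubectl -n kube-system get pods -l k8s-app=calico-node

# re-run the delete and confirm nothing is left behind
kubectl -n test delete -f nginx-deployment.yaml
kubectl -n test get all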