Kubernetes Deployment restart and delete/redeploy have different behavior with Kafka

Cluster information:

Kubernetes version: v1.20.9
Cloud being used: OpenStack
Installation method: Rancher Kubernetes
Host OS: Ubuntu 20.04
CNI and version: v3.17.2

Previously we deployed one Python microservice together with a single Kafka and ZooKeeper pod on Kubernetes, with 1 partition on the Kafka topic. We then decided to deploy a second Python microservice subscribing to the same Kafka topic, and increased the topic's partition count to 2.
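
For reference, a partition change like ours can be done with the standard Kafka CLI; the broker address and topic name below are placeholders:

```bash
# Sketch: adding a second partition to an existing topic with the
# standard Kafka CLI; "kafka:9092" and "my-topic" are placeholders.
kafka-topics.sh --bootstrap-server kafka:9092 \
  --alter --topic my-topic --partitions 2
```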

We used kubectl rollout restart deployment to restart both Deployments. However, we then saw Kafka consumer heartbeat issues in both of them. We worked around this by deleting both Deployments and deploying them again.
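
Concretely, these are the two procedures being compared (Deployment names and manifest paths are placeholders):

```bash
# Method 1: rolling restart; new pods are created while the old ones
# are still terminating, per the Deployment's rolling-update strategy.
kubectl rollout restart deployment/consumer-a
kubectl rollout restart deployment/consumer-b

# Method 2: delete and redeploy; all old pods are gone before the
# new ones are created from the same manifests and images.
kubectl delete deployment consumer-a consumer-b
kubectl apply -f consumer-a.yaml -f consumer-b.yaml
```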

We wonder whether kubectl rollout restart deployment behaves differently from deleting the Deployment and deploying it again. The image used by both methods is the same.
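
As far as we understand, a rollout restart only stamps an annotation into the pod template, which triggers an ordinary rolling update under the Deployment's strategy; the Deployment name below is a placeholder:

```bash
# A rollout restart is roughly equivalent to this patch, which bumps
# the pod template and triggers a rolling update:
kubectl patch deployment consumer-a -p \
  '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"'"$(date -Iseconds)"'"}}}}}'

# The strategy (maxSurge/maxUnavailable) controls how old and new pods overlap:
kubectl get deployment consumer-a -o jsonpath='{.spec.strategy}'
```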

We tried to simulate the scenario by creating another set of Deployments with a different Kafka topic, and it shows the same behavior: whenever we change the topic configuration on ZooKeeper, we need to delete the Deployments and redeploy them rather than simply restarting the pods.
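
For anyone reproducing this, the consumer group state can be checked after each method with the standard Kafka tooling; the broker address and group name below are placeholders:

```bash
# Lists group members and their partition assignments; during a rolling
# restart, old and new pods can briefly coexist in the group and trigger
# repeated rebalances, whereas delete/redeploy empties the group before
# the new members join.
kafka-consumer-groups.sh --bootstrap-server kafka:9092 \
  --describe --group my-consumer-group
```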

Both methods use the same image and create new pods, so why is there a difference?