Restart all deployments in a namespace

Hi All, is there a way I can restart all the deployments in a particular namespace?
For example, I am using Istio in my EKS cluster. Once an upgrade happens, I have to restart all the deployments in my application namespace so they start using the new sidecars. The docs gave a command like the one below, but it is not working.

kubectl rollout restart deployment --namespace apps

But when I use this command, I get the error below. Is there a way I can do a rolling restart of my deployments?

You need to specify which deployment to restart:

kubectl rollout restart deployments/isam-config-deploy -n apps
kubectl rollout restart deployments/isam-config-deploy-test -n apps

Note that the pods in your deployment will be terminated and then recreated.

Best,
-jay


Thanks Jay for the response. Is there a way to restart all the deployments in a particular namespace with a single command, in a rolling fashion?

Not that I’m aware of. You’d want to do something like this:

deploys=`kubectl get deployments -n apps | tail -n +2 | cut -d ' ' -f 1`
for deploy in $deploys; do
  kubectl rollout restart deployments/$deploy -n apps
done
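For a bit more robustness, the same loop can be sketched with `-o name` (which prints `deployment.apps/<name>` lines, so there is no header row to strip) and `rollout status` to wait for each restart to finish. This is only a sketch assuming kubectl 1.15+; the `KUBECTL` variable and the `restart_all` name are illustrative, not standard:

```shell
# Sketch of a reusable restart-all helper, assuming kubectl >= 1.15.
# KUBECTL is overridable so the function can be exercised without a cluster.
KUBECTL="${KUBECTL:-kubectl}"

restart_all() {
  local ns="$1"
  # -o name yields "deployment.apps/<name>" lines: no header row to strip.
  for deploy in $($KUBECTL get deployments -n "$ns" -o name); do
    $KUBECTL rollout restart "$deploy" -n "$ns"
    $KUBECTL rollout status "$deploy" -n "$ns"   # block until the new pods are ready
  done
}

# Usage against a real cluster:
#   restart_all apps
```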

Best,
-jay


Hi Jay, I tried even with the deployment name, but I still get the same error.

What version of k8s are you using? I’m using 1.15.3 and the commands I
listed work properly. You can see the pods in the nginx deployment get
terminated and recreated when I run kubectl rollout restart here:

$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-57f8b7cdfb-54s4b   1/1     Running   0          22h
nginx-deployment-57f8b7cdfb-5b5dd   1/1     Running   0          22h
$ deploys=`kubectl get deployments | tail -n +2 | cut -d ' ' -f 1`
$ for deploy in $deploys; do kubectl rollout restart deployments/$deploy; done
deployment.extensions/nginx-deployment restarted
$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-75cfbd6cf5-nvqhc   1/1     Running   0          20s
nginx-deployment-75cfbd6cf5-tdfw2   1/1     Running   0          21s

Note that I don’t have an “apps” namespace, so I left off “-n apps” to
use the default namespace. Other than that, these are the exact commands
I’d listed.

Best,
-jay

Thanks Jay, that was exactly the issue I was facing. My kubectl version was 1.14; it turns out the rollout restart feature was added in 1.15. It is working fine now. Even the command below works, so I can restart all the deployments in the namespace with a single command:

kubectl rollout restart deployment -n apps

But I want to check one thing with you: we are using AWS EKS, which runs Kubernetes version 1.14. Can we use a kubectl client of a higher version to talk to the EKS API server at 1.14?
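On the version-skew question: Kubernetes supports kubectl within one minor version (older or newer) of the API server, so a 1.15 client against a 1.14 EKS API server is within the supported skew. A small sketch for checking it; `skew_ok` is a hypothetical helper, and the jq invocation in the usage comment assumes jq is installed:

```shell
# Check kubectl client/server skew; kubectl is supported within one minor
# version (+/-1) of the API server.
skew_ok() {
  # $1 = client minor, $2 = server minor (EKS reports minors like "14+")
  local c="${1//[!0-9]/}" s="${2//[!0-9]/}"   # keep digits only
  local diff=$(( c - s ))
  [ "$diff" -ge -1 ] && [ "$diff" -le 1 ]
}

# Usage against a real cluster:
#   skew_ok "$(kubectl version -o json | jq -r .clientVersion.minor)" \
#           "$(kubectl version -o json | jq -r .serverVersion.minor)" \
#     && echo "client/server skew is supported"
```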

Thanks again for fixing my issue 🙂

kubectl rollout restart deploy
That’s all you need. See https://qvault.io/2020/10/26/how-to-restart-all-pods-in-a-kubernetes-namespace/ also for a working bash script if needed for backwards compatibility.
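For clients older than 1.15, where rollout restart doesn’t exist, the usual fallback is the trick rollout restart itself uses under the hood: bump the kubectl.kubernetes.io/restartedAt annotation on the pod template with kubectl patch, which changes the template and triggers a normal rolling update. A sketch, where the `restarted_at_patch` helper name is illustrative:

```shell
# Build the strategic-merge patch that rollout restart applies: bumping a
# pod-template annotation changes the template hash and rolls the pods.
restarted_at_patch() {
  printf '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"%s"}}}}}' "$1"
}

# Usage against a real cluster, one namespace at a time:
#   for d in $(kubectl get deployments -n apps -o name); do
#     kubectl patch "$d" -n apps -p "$(restarted_at_patch "$(date -u +%FT%TZ)")"
#   done
```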


rollout restart will not terminate pods first and then recreate them; that scenario only happens if you delete a pod.
Instead, rollout restart creates new pods first and then terminates the old ones, ensuring the new pods are ready to serve.
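That “new pods first” behavior comes from the Deployment’s RollingUpdate strategy: with the defaults of maxSurge=25% (rounded up) and maxUnavailable=25% (rounded down), the rollout may briefly run extra pods and never drops below a floor of ready ones. A sketch of that arithmetic; `surge_ceiling` and `availability_floor` are illustrative names, not kubectl commands:

```shell
# RollingUpdate arithmetic: maxSurge is rounded up, maxUnavailable is
# rounded down, per the Deployment strategy rules.
surge_ceiling() {        # args: replicas surge_percent -> max pods during rollout
  local r="$1" pct="$2"
  echo $(( r + (r * pct + 99) / 100 ))   # percentage rounded up
}
availability_floor() {   # args: replicas unavailable_percent -> min ready pods
  local r="$1" pct="$2"
  echo $(( r - (r * pct) / 100 ))        # percentage rounded down
}

# With 2 replicas and the 25% defaults, the floor is 2: no old pod is
# removed before a new one is ready, matching the behavior described above.
```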

Command to restart all deployments:
kubectl rollout restart deployment -n <namespace>

I am using version 1.18.
