If, in a Deployment of pods, I'd like to update only the application, how do I perform it?
If in the pod YAML I change the image tag from vers1 to vers2, is this enough?
Hi lelunicu:
Assuming that:
- you have updated your application code, compiled it if necessary and so on
- you have created a container image and tagged it with vers2
- you have uploaded the container image to a registry
- your cluster can access the container registry and retrieve the image tagged with vers2
“in pod yaml i change image name from vers1 to vers2”
And then execute kubectl apply -f ${updated-deployment-with-ver2-tag-for-image}.yaml (assuming kubectl is properly configured to access the cluster and you have the permissions to do so)…
Yes, that’s enough.
The official documentation explains how to update a Deployment and how Kubernetes reacts (and to which changes in the pod specification a rollout is triggered).
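For example, a minimal sketch of such a Deployment (the name, registry and tags here are purely illustrative), where the only change between the two versions is the image tag:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                     # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:vers2   # previously :vers1

As an alternative to editing the YAML and running kubectl apply, the same rollout can be triggered imperatively with kubectl set image deployment/my-app my-app=registry.example.com/my-app:vers2 (again, names are illustrative).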
Best regards,
Xavi
So in this way the pods keep running on the same workers (they are not destroyed) and only the application version is updated?
Hi lelunicu:
Pods based on the vers1 tag of your image will be destroyed and new pods (based on the image tagged with vers2) will be created.
By default, this replacement is performed using the rolling update strategy. That means that if you have 2 pods running vers1, a new pod with vers2 will be created, running alongside your 2 vers1 pods (3 pods in total). Once Kubernetes makes sure the vers2 pod is running ok, it destroys one vers1 pod (chosen randomly), leaving 2 pods in total (1 vers1 + 1 vers2). The process repeats until all the vers1 pods have been destroyed and the replica count specified in your Deployment is satisfied with pods based on your new desired state, that is, based on your vers2 image…
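The pace of that replacement can be tuned in the Deployment itself; a sketch of the relevant fields (the values shown are just an assumption, not the only valid ones):

spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 pod above the desired replica count during the rollout
      maxUnavailable: 0  # never drop below the desired replica count

You can watch the rollout progress with kubectl rollout status deployment/my-app and, if something goes wrong, revert it with kubectl rollout undo deployment/my-app.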
Best regards,
Xavi
hi,
Does this mean that the new pods can be scheduled on the same workers, or on different workers than the vers1 pods?
New pods will be scheduled to the most suitable node (see Kubernetes Scheduler | Kubernetes); that means that you cannot know on which node a pod will be scheduled (by default).
There are several mechanisms to influence the Scheduler, like taints, tolerations and affinity. If you really need to schedule your pods on a certain node, you can label the node and then, in your Deployment, specify node affinity (or a nodeSelector) matching that label.
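A minimal sketch, assuming a worker node named worker-1 and a made-up label key/value, using a nodeSelector (the simplest of those mechanisms; affinity rules allow more expressive matching):

kubectl label node worker-1 dedicated=my-app

And then, in the Deployment's pod template:

spec:
  template:
    spec:
      nodeSelector:
        dedicated: my-app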
Best regards,
Xavi
Is there any possibility, when upgrading the application (only the application), to upgrade it without the pods being recreated? I mean, the application keeps running in the initial pods.
Hi lelunicu:
Maybe, but I don’t think so.
A container is just a process (literally, you can see running containers using ps):
ps aux | grep docker
root 1839 0.1 0.4 1528480 78452 ? SNsl 07:34 0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root 2954 0.0 0.0 1222052 2944 ? SNl 07:34 0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 3000 -container-ip 172.17.0.2 -container-port 3000
root 2975 0.0 0.0 1222308 3204 ? SNl 07:34 0:00 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 3000 -container-ip 172.17.0.2 -container-port 3000
root 2979 0.0 0.0 1222308 3116 ? SNl 07:34 0:00 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 51413 -container-ip 172.20.0.2 -container-port 51413
root 3028 0.0 0.0 1296040 3204 ? SNl 07:34 0:00 /usr/bin/docker-proxy -proto udp -host-ip 0.0.0.0 -host-port 51413 -container-ip 172.20.0.2 -container-port 51413
Think of a container as a universal flag that you can add to any application; the application running in a container is just your regular application, but with this imaginary --isolated flag added. That --isolated flag makes the application run without being aware of any other process running on the same host (that’s the isolation provided by the container).
When you create the container image, you define the entrypoint as the process to run “inside” the container; if this process stops, the container also stops.
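In Kubernetes terms, the image's entrypoint can be overridden per container with command (and args) in the pod specification; a hypothetical example, just to illustrate that the container is exactly that one process:

spec:
  containers:
  - name: my-app
    image: registry.example.com/my-app:vers2
    command: ["/usr/local/bin/my-app"]   # overrides the image ENTRYPOINT
    args: ["--port=3000"]                # overrides the image CMD

When that process exits, the container is considered terminated and the kubelet restarts it according to the pod's restartPolicy.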
If your application can be upgraded without being restarted, then yes, it can be updated in-place, but most applications need to be restarted for changes to take effect…
Best regards,
Xavi