Take a deployment as an example.
The reflector watches for deployment events and syncs everything to the local store. So when a delete event comes in, the deployment is removed from the local store, and the event is picked up by the controller, which pushes the object's key onto its work queue. In the end, the controller cannot retrieve the deployment object, since it has already been deleted from the local store.
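Roughly, here is what that wiring looks like with client-go's shared informers. This is a minimal sketch, assuming a kubeconfig in the default location; the queue handling is trimmed down, and the names here are illustrative rather than the deployment controller's actual source:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	// Assumes a kubeconfig at the default path; purely illustrative.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())

	factory := informers.NewSharedInformerFactory(clientset, 0)
	informer := factory.Apps().V1().Deployments().Informer()

	// All three handlers do essentially the same thing: derive the
	// "namespace/name" key and enqueue it. No object data travels
	// through the queue, only the key.
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if key, err := cache.MetaNamespaceKeyFunc(obj); err == nil {
				queue.Add(key)
			}
		},
		UpdateFunc: func(old, new interface{}) {
			if key, err := cache.MetaNamespaceKeyFunc(new); err == nil {
				queue.Add(key)
			}
		},
		DeleteFunc: func(obj interface{}) {
			// The deletion-handling variant also copes with tombstone
			// objects left behind by missed delete events.
			if key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj); err == nil {
				queue.Add(key)
			}
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	cache.WaitForCacheSync(stop, informer.HasSynced)
	fmt.Println("informer synced; keys will be enqueued as events arrive")
	<-stop
}
```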
But in this process, WHO (which component) really deletes the deployment? Is it kubectl? How does it know when and where to delete the deployment?
The deployment controller lives inside kube-controller-manager.
Thanks for the reply.
The way I see it, the deployment controller does not actually remove the deployment from the node. Here is the reason:
The event handler funcs for all event types just add the object's key to the work queue. And what the controller does after it gets the key from the queue is try to retrieve the deployment from the local store. Hence, for a deleted deployment it never gets the object, but it also returns no error.
So I think the controller is not the one that really deletes the resources.
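To make that concrete, here is a sketch of what that sync path looks like. The function name and signature are simplified stand-ins for the real controller's sync handler, but the not-found handling follows the same pattern the text describes:

```go
package sketch

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/errors"
	appslisters "k8s.io/client-go/listers/apps/v1"
	"k8s.io/client-go/tools/cache"
)

// syncDeployment mirrors the shape of the controller's sync handler;
// the name and signature are simplified for illustration.
func syncDeployment(key string, lister appslisters.DeploymentLister) error {
	namespace, name, err := cache.SplitMetaNamespaceKey(key)
	if err != nil {
		return err
	}
	deployment, err := lister.Deployments(namespace).Get(name)
	if errors.IsNotFound(err) {
		// The object is gone from the local store: the delete has
		// already happened, so the controller just notes it and
		// returns nil, i.e. no error.
		fmt.Printf("deployment %s/%s has been deleted\n", namespace, name)
		return nil
	}
	if err != nil {
		return err
	}
	_ = deployment // ...normal reconciliation would continue here...
	return nil
}
```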
If you mean to ask who stops the running pods on nodes, the answer to that is the kubelet. The kubelet is the primary agent on each node, which actuates the starting, stopping, etc., of pods and their constituent containers.
But the kubelet doesn’t know anything about deployments, per se. It only knows about pods. The deployment controller is responsible for deleting its ReplicaSets. The ReplicaSet controller is responsible for deleting the pods in those now-deleted ReplicaSets. The kubelet is responsible for stopping those pods on the nodes.
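That deletion chain is wired up through metadata.ownerReferences: when a ReplicaSet is created for a Deployment, it gets stamped with a controller reference back to the Deployment, and cascading deletion follows those back-pointers. A minimal sketch, assuming the standard k8s.io/api and apimachinery packages (the helper name and the "-hash" suffix are illustrative, not the controller's real create path):

```go
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newReplicaSetFor is an illustrative helper showing the owner
// reference a Deployment's ReplicaSet carries, which is what
// cascading deletion follows.
func newReplicaSetFor(d *appsv1.Deployment) *appsv1.ReplicaSet {
	return &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{
			// The real controller appends a pod-template hash here.
			Name:      d.Name + "-hash",
			Namespace: d.Namespace,
			// Back-pointer to the owning Deployment. Deleting the
			// Deployment lets this ReplicaSet be cleaned up, and the
			// ReplicaSet's pods in turn.
			OwnerReferences: []metav1.OwnerReference{
				*metav1.NewControllerRef(d, appsv1.SchemeGroupVersion.WithKind("Deployment")),
			},
		},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: d.Spec.Replicas,
			Selector: d.Spec.Selector,
			Template: d.Spec.Template,
		},
	}
}
```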