How to handle deletion in a declarative system

Hey guys,

I have an architectural question about how Kubernetes, or declarative systems in general, handle deletion of resources. I am also about to develop such a declarative system. As far as I know, deletion of resources in Kubernetes is only possible via kubectl delete, right? But from a declarative-system point of view I am wondering why Kubernetes chose this imperative way of deleting resources.

I am comparing it to something like Terraform, where you can also delete resources by removing them from your Terraform file and running terraform apply. If the resource is present in the tfstate file but no longer in the tf file, it will be deleted. But with kubectl apply -f my-file, this is not the case, is it? Sure, Terraform can only do this by adding quite a lot of complexity with the tfstate file. Still, from an IaC perspective it makes sense to me that deletion should also be handled declaratively.
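To make the comparison concrete, here is a minimal sketch (in Python, with made-up names, not Terraform's actual code) of the state-file approach: the tool records everything it created, and on each apply it diffs the desired config against that recorded state, deleting whatever is in state but no longer in the config.

```python
# Hypothetical sketch of Terraform-style deletion via a state file.
# Resources present in state but absent from the desired config are
# planned for deletion.

def plan(desired: dict, state: dict):
    """Return (to_create, to_update, to_delete) resource IDs."""
    to_create = [rid for rid in desired if rid not in state]
    to_update = [rid for rid in desired
                 if rid in state and desired[rid] != state[rid]]
    to_delete = [rid for rid in state if rid not in desired]
    return to_create, to_update, to_delete

# "db" was removed from the config file, so it ends up in to_delete.
state = {"web": {"image": "nginx:1.25"}, "db": {"image": "postgres:15"}}
desired = {"web": {"image": "nginx:1.27"}}

create, update, delete = plan(desired, state)
print(create, update, delete)  # -> [] ['web'] ['db']
```

The whole trick is that the state file makes "what did I create previously?" answerable, which is exactly what a plain kubectl apply is missing.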

In general I’d like to hear your thoughts on how to handle deletion in declarative systems and why Kubernetes decided to handle it with an explicit command only.

Looking forward to your thoughts and ideas,

Ok, I actually found out that kubectl apply --prune is an option to achieve the declarative behavior regarding deletion. But it has been in alpha status for many years and the documentation contains a lot of warnings: Declarative Management of Kubernetes Objects Using Configuration Files | Kubernetes. So I am still interested in thoughts and ideas around this topic.

I can’t speak for why K8s is the way it is and I haven’t used Terraform, but I’m interested in your question :thinking:

I’m guessing from looking at Terraform that when you do an apply you are applying the entire desired state each time, so Terraform can compare it with the stored tfstate and remove things that no longer exist.

In K8s you are always applying a subset of the desired state. For example, user1 might apply a deployment and a service; these can either be in a single file separated by --- or in multiple different files. User2 might deploy some other resources. So there is no way to determine that something that was previously defined is no longer defined. To do that you would need to send the entire desired state of the cluster each time, which becomes awkward.
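You can see the problem in a toy sketch (hypothetical names, not how the API server actually works): if the cluster naively treated each apply as the *entire* desired state, user1's apply would delete user2's resources, because user2's objects look "previously defined but no longer defined" from the perspective of user1's files.

```python
# Hypothetical sketch: naive "delete everything not in this apply".
cluster = {"deploy/app1", "svc/app1", "deploy/app2"}  # applied by two users
user1_apply = {"deploy/app1", "svc/app1"}             # user1's files only

# A naive full-state diff would mark user2's deployment for deletion,
# even though user1 never owned it.
naive_delete = cluster - user1_apply
print(naive_delete)  # -> {'deploy/app2'}
```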

The --prune feature seems to work using labels: it removes any object that matches the given label selector but is not present in the newly applied set of objects. To me that seems somewhat awkward, as you would need to define all objects with the specific labels together and always apply them together.
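A rough sketch of that prune idea (simplified, definitely not kubectl's actual implementation): only objects carrying the selector's labels are prune candidates, so objects applied by other users are left alone, but it also means one label set has to cover everything you apply together.

```python
# Simplified sketch of label-scoped pruning. An object is pruned if it
# matches the label selector AND is missing from the newly applied set.

def prune(cluster: dict, applied: set, selector: dict) -> set:
    """Return names of objects that match `selector` but were not
    part of this apply; those are the prune candidates."""
    return {
        name
        for name, labels in cluster.items()
        if selector.items() <= labels.items() and name not in applied
    }

cluster = {
    "deploy/app1":  {"app": "demo"},
    "svc/app1":     {"app": "demo"},
    "deploy/app2":  {"app": "other"},  # different labels: never touched
}
applied = {"deploy/app1"}  # svc/app1 was removed from the manifests

print(prune(cluster, applied, {"app": "demo"}))  # -> {'svc/app1'}
```

Note how deploy/app2 survives because its labels don't match the selector; that is the scoping mechanism, and also the reason everything sharing those labels must be applied in one go.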

Kind regards,