Delete ClusterRoleBinding and Namespace from a process running in the same Namespace

I create a Namespace on a kube cluster and install my app within that Namespace. The app sends regular heartbeat requests to the server, and if it gets a “remove yourself” response, it deletes itself by calling delete on the entire kube Namespace. I also give the app cluster-wide access by creating a ClusterRoleBinding, making a ServiceAccount a subject of the ClusterRoleBinding, and running the pod with this ServiceAccount.
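For context, the self-removal step itself boils down to a Namespace delete via client-go. A minimal sketch (the namespace name matches the manifests below; call signatures are for client-go before v0.18, which is what I'm on):

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// deleteOwnNamespace is a hypothetical sketch of the self-removal step: it
// builds an in-cluster client from the pod's ServiceAccount token and deletes
// the Namespace the app runs in.
func deleteOwnNamespace() error {
	config, err := rest.InClusterConfig()
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		return err
	}
	// "xyz-myapp" is the Namespace from the manifests below.
	return client.CoreV1().Namespaces().Delete("xyz-myapp", &metav1.DeleteOptions{})
}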

The problem is that I want to remove the ClusterRoleBinding as part of the app’s self-removal process (if that’s possible). If I remove the ClusterRoleBinding first, the app is no longer authorized to delete the Namespace, so it looks like a chicken-and-egg problem. Is there a way to do this?

This is what I have already tried, to no avail:

  • Added a PreStop handler to the app container (see the sketch after this list). Now, when the app calls delete on the entire Namespace, kube runs this handler before killing the container. In this PreStop handler, if I sleep for more than 5 seconds before calling delete on the ClusterRoleBinding, I get an “Unauthorized” response back from Kubernetes.

  • This led me to think that the ServiceAccount linked to the ClusterRoleBinding might get deleted before the app has had a chance to delete the ClusterRoleBinding in the PreStop handler. To test this, before issuing delete on the Namespace I add a finalizer to the ServiceAccount. Then, in the PreStop handler, I wait 5 seconds, issue delete on the ClusterRoleBinding (again “Unauthorized”), try to get the ServiceAccount object by name (“Unauthorized” again), and try to remove the finalizer from the ServiceAccount, which fails with error=“finalizer ‘xyz.myapp.com/my-finalizer’ doesn’t exist for object ‘’” because the finalizer can’t be removed from an empty object.
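For reference, the PreStop hook is wired into the container spec roughly like this (a sketch; the cleanup command path and flag are placeholders, not my actual binary):

lifecycle:
  preStop:
    exec:
      # hypothetical cleanup command that deletes the ClusterRoleBinding
      command: ["/app/cleanup", "--delete-clusterrolebinding"]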

When I check with kubectl, I find that the ServiceAccount still exists but is in “Terminating” state as expected, with the finalizer still set.
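A quick way to confirm this is to dump the ServiceAccount and inspect metadata.deletionTimestamp and metadata.finalizers:

kubectl get serviceaccount xyz -n xyz-myapp -o yaml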

Does kube revoke access while the ServiceAccount is in “Terminating” state, even though it’s not yet hard deleted?

Is there a way to remove the ClusterRoleBinding and the Namespace from the same process that is running in the Namespace that needs to be deleted?

Any help would be much appreciated!

The YAML definitions for the ClusterRoleBinding and ServiceAccount are below:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: xyz-myapp-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: xyz
  namespace: xyz-myapp
---
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: null
  name: xyz
  namespace: xyz-myapp

Relevant application logs:
time="2020-02-18T16:08:33Z" level=info msg="App instructed to remove itself"
time="2020-02-18T16:08:33Z" level=info msg="Created finalizer 'xyz.myapp.com/my-finalizer' on ServiceAccount"
time="2020-02-18T16:08:33Z" level=info msg="Called delete on Namespace"
time="2020-02-18T16:08:38Z" level=info msg="PreStop handler called"
time="2020-02-18T16:08:38Z" level=info msg="----- sleeping for 5 sec -----"
time="2020-02-18T16:08:43Z" level=info msg="Deleting ClusterRoleBinding"
time="2020-02-18T16:08:43Z" level=warning msg="Failed to delete ClusterRoleBinding" error="Unexpected error removing dmt ClusterRolebinding: Unauthorized"
time="2020-02-18T16:08:43Z" level=warning msg="Failed to get ServiceAccount" error=Unauthorized
time="2020-02-18T16:08:43Z" level=warning msg="Failed to remove finalizer from ServiceAccount"
error="finalizer 'xyz.myapp.com/my-finalizer' doesn't exist for object ''"

After digging through the Kubernetes documentation, the most reliable way I found to do this is:

  1. When the app gets a “remove yourself” response from the server, it makes the ClusterRoleBinding the owner of the Namespace the app runs in.
  2. This is done by adding the ClusterRoleBinding under the Namespace’s metadata.ownerReferences using a patch.
  3. Once the ClusterRoleBinding is successfully added as an owner of the Namespace, the app calls delete on the ClusterRoleBinding with DeletePropagationBackground; the garbage collector then deletes the Namespace once the ClusterRoleBinding is gone.

Below is an example (in Go) of applying a patch that adds a ClusterRoleBinding to the ownerReferences of a Namespace.

import (
	"encoding/json"

	v1 "k8s.io/api/core/v1"
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// ownerReferencePatch is a single JSON Patch operation that sets the
// Namespace's ownerReferences.
type ownerReferencePatch struct {
	Op    string                  `json:"op"`
	Path  string                  `json:"path"`
	Value []metav1.OwnerReference `json:"value"`
}

// AddClusterRoleBindingOwnerReferenceToNamespace patches the Namespace so the
// ClusterRoleBinding becomes its owner; deleting the ClusterRoleBinding then
// garbage-collects the Namespace. (Call signatures match client-go before
// v0.18; newer versions also take a context.Context and metav1.PatchOptions.)
func AddClusterRoleBindingOwnerReferenceToNamespace(client kubernetes.Interface, crb *rbacv1.ClusterRoleBinding, ns *v1.Namespace) (*v1.Namespace, error) {
	blockOwnerDeletion := true
	patch, err := json.Marshal([]ownerReferencePatch{
		{
			Op:   "add",
			Path: "/metadata/ownerReferences",
			Value: []metav1.OwnerReference{
				{
					// APIVersion must include the version ("rbac.authorization.k8s.io/v1");
					// crb.RoleRef.APIGroup only carries the group, which the garbage
					// collector cannot resolve.
					APIVersion:         rbacv1.SchemeGroupVersion.String(),
					BlockOwnerDeletion: &blockOwnerDeletion,
					Kind:               "ClusterRoleBinding",
					Name:               crb.GetName(),
					UID:                crb.GetUID(),
				},
			},
		},
	})
	if err != nil {
		return nil, err
	}

	return client.CoreV1().Namespaces().Patch(ns.GetName(), types.JSONPatchType, patch)
}
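For completeness, step 3 is then a delete with background propagation, so the garbage collector removes the owned Namespace afterwards (a sketch under the same client-go < v0.18 signatures):

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// DeleteClusterRoleBinding deletes the ClusterRoleBinding with background
// propagation; the garbage collector then deletes the Namespace it owns.
func DeleteClusterRoleBinding(client kubernetes.Interface, name string) error {
	policy := metav1.DeletePropagationBackground
	return client.RbacV1().ClusterRoleBindings().Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}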