I have a simple Helm chart with a Deployment and a Job object:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-parent
spec:
  selector:
    matchLabels:
      app: {{ .Release.Name }}-parent
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-parent
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          # Add other container configuration as needed
and the Job object:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-job
  ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: Deployment
      name: {{ .Release.Name }}-parent
      uid: $(kubectl get deployment my-release-parent -o jsonpath='{.metadata.uid}')
spec:
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-job
    spec:
      containers:
        - name: busybox
          image: busybox:latest
          # Add other container configuration as needed
      restartPolicy: Never
  # Add other Job configuration as needed
I'm trying to dynamically insert the UID of the Deployment object into the ownerReferences field of the Job manifest. Right now, if I deploy this chart, everything works fine except for the Job, which won't get deployed. When I remove the ownerReferences block from the Job manifest, the Job deploys perfectly, so I know the issue has to do with the way I'm resolving the UID.
I want it so that whenever I delete my Deployment, all of its child resources get deleted with it. See more: ownerReference
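From what I understand, for the garbage collection to kick in, the uid field has to be the literal UID of the live Deployment rather than a command substitution, so the rendered Job metadata would need to end up looking something like this (the UID below is a made-up placeholder):
metadata:
  name: my-release-job
  ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: my-release-parent
      uid: 3f8c2a71-5e7b-4c4e-9c9d-000000000000   # placeholder; the real value comes from the running Deployment
      controller: true
      blockOwnerDeletion: true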
I tried different ways to dynamically fetch the UID, and these commands work if I run them directly after deploying:
kubectl get deployment deploymentexample -o yaml | grep uid | cut -d ' ' -f 4
kubectl get deployment my-release-parent -o jsonpath='{.metadata.uid}'
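One direction I've been experimenting with (not sure if it's the recommended approach) is Helm's lookup function, which can read the live Deployment while the templates are being rendered. A rough sketch of the Job template, assuming the Deployment already exists in the release namespace (lookup returns an empty dict on a first install or with --dry-run, which is why the ownerReferences block is guarded):
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-job
  {{- /* Read the live Deployment at render time; empty dict if it does not exist yet */}}
  {{- $parent := lookup "apps/v1" "Deployment" .Release.Namespace (printf "%s-parent" .Release.Name) }}
  {{- if $parent }}
  # Only attach the owner when the Deployment is actually present in the cluster
  ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: {{ $parent.metadata.name }}
      uid: {{ $parent.metadata.uid }}
      controller: true
      blockOwnerDeletion: true
  {{- end }}
spec:
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-job
    spec:
      containers:
        - name: busybox
          image: busybox:latest
      restartPolicy: Never
Would something like this work, or is there a better way to wire the UID into the Job?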