Hi, I’m currently playing around with minikube to check how to run our application in k8s.
When we update our app we also have to update the database (schema and data).
For this task I’ve decided to use a Job, and because multiple processes are started I want to use a sidecar container to tail the logs.
But because tail -F runs forever, the sidecar never terminates and therefore the pod gets stuck in the NotReady state.
I’ve already played around with command vs. args, prepended tail with exec, and tried shareProcessNamespace, but all with no luck.
Here is a simplified Job:
apiVersion: batch/v1
kind: Job
metadata:
  name: test
spec:
  backoffLimit: 0
  template:
    metadata:
      labels:
        app: test
    spec:
      restartPolicy: Never
      #shareProcessNamespace: true
      containers:
      - name: main
        image: busybox
        args: ["sleep", "10"]
        imagePullPolicy: "IfNotPresent"
        volumeMounts:
        - name: foobar
          mountPath: /foobar
      - name: sidecar
        image: busybox
        args:
        - /bin/sh
        - -c
        - |
          exec tail -n+1 -F /dev/null
        lifecycle:
          preStop:
            exec:
              command: [ sh, -c, 'pkill -INT tail' ]
        imagePullPolicy: "IfNotPresent"
        volumeMounts:
        - name: foobar
          mountPath: /foobar
      volumes:
      - name: foobar
        hostPath:
          path: /data/foobar
          type: DirectoryOrCreate
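This is roughly how I apply it and watch the result (job.yaml contains the manifest above; the pod name suffix is whatever kubectl generates):

kubectl apply -f job.yaml
kubectl get pods -l app=test -w   # the pod never completes because the sidecar keeps running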
The main container is the one that does the actual work.
The sidecar is only used to tail the logs.
My goal is to terminate the sidecar after the main container has terminated.
I don’t want to override the command/args of the main container - I want to treat them as read-only, because I don’t want to check every time whether the maintainer has made changes to the entrypoint.
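To illustrate, this is the kind of hand-wrapping I want to avoid (the touch /foobar/done marker is just a made-up example of signalling the sidecar via the shared volume):

      - name: main
        image: busybox
        command: ["/bin/sh", "-c"]
        # I would have to keep this wrapper in sync with whatever entrypoint the maintainer ships
        args:
        - 'sleep 10; touch /foobar/done'

The sidecar would then have to poll for that marker file, which couples it to the main container's command line.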
Does anyone have an idea how to terminate sidecars together with the main container?
Thanks
Al