I see a failed Job that created no pods. There is also no information in the events, and since there are no pods I cannot check any logs.
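For reference, the checks for leftover pods and events were along these lines (job name and namespace taken from the output below), and both came back empty:

# look for any pods created by the failed Job (the job-name label is set by the Job controller)
kubectl get pods -n add-ons -l job-name=time-limited-rbac-1604010900

# look for events recorded against the Job object itself
kubectl get events -n add-ons --field-selector involvedObject.name=time-limited-rbac-1604010900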
Here is the description of the Job that failed:
kubectl describe job time-limited-rbac-1604010900 -n add-ons
Name:                     time-limited-rbac-1604010900
Namespace:                add-ons
Selector:                 controller-uid=0816b9b3-814c-4802-83cf-5d5f3456701d
Labels:                   controller-uid=0816b9b3-814c-4802-83cf-5d5f3456701d
                          job-name=time-limited-rbac-1604010900
Annotations:              <none>
Controlled By:            CronJob/time-limited-rbac
Parallelism:              1
Completions:              <unset>
Start Time:               Thu, 29 Oct 2020 15:35:08 -0700
Active Deadline Seconds:  280s
Pods Statuses:            0 Running / 0 Succeeded / 1 Failed
Pod Template:
  Labels:           controller-uid=0816b9b3-814c-4802-83cf-5d5f3456701d
                    job-name=time-limited-rbac-1604010900
  Service Account:  time-limited-rbac
  Containers:
   time-limited-rbac:
    Image:      bitnami/kubectl:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/bash
    Args:
      /var/tmp/time-limited-rbac.sh
    Environment:  <none>
    Mounts:
      /var/tmp/ from script (rw)
  Volumes:
   script:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      time-limited-rbac-script
    Optional:  false
Events:        <none>
Here is the definition of the CronJob:
apiVersion: v1
items:
- apiVersion: batch/v1beta1
  kind: CronJob
  metadata:
    annotations:
      meta.helm.sh/release-name: time-limited-rbac
      meta.helm.sh/release-namespace: add-ons
    labels:
      app.kubernetes.io/name: time-limited-rbac
    name: time-limited-rbac
  spec:
    concurrencyPolicy: Replace
    failedJobsHistoryLimit: 1
    jobTemplate:
      metadata:
        creationTimestamp: null
      spec:
        activeDeadlineSeconds: 280
        backoffLimit: 3
        parallelism: 1
        template:
          metadata:
            creationTimestamp: null
          spec:
            containers:
            - args:
              - /var/tmp/time-limited-rbac.sh
              command:
              - /bin/bash
              image: bitnami/kubectl:latest
              imagePullPolicy: Always
              name: time-limited-rbac
              resources: {}
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              volumeMounts:
              - mountPath: /var/tmp/
                name: script
            dnsPolicy: ClusterFirst
            restartPolicy: Never
            schedulerName: default-scheduler
            securityContext: {}
            serviceAccount: time-limited-rbac
            serviceAccountName: time-limited-rbac
            terminationGracePeriodSeconds: 0
            volumes:
            - configMap:
                defaultMode: 356
                name: time-limited-rbac-script
              name: script
    schedule: '*/5 * * * *'
    successfulJobsHistoryLimit: 3
    suspend: false
Is there any way to tune this CronJob to avoid such scenarios? We are hitting this issue at least once or twice every day.
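For context, the only tuning I have considered so far is giving each Job more time before it is terminated, i.e. raising activeDeadlineSeconds from 280 to something larger (the new value below is just a guess, not something I have tested):

# untested idea: raise the Job deadline from 280s to a larger value
kubectl patch cronjob time-limited-rbac -n add-ons --type merge \
  -p '{"spec":{"jobTemplate":{"spec":{"activeDeadlineSeconds":600}}}}'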