CronJob does not create job, manually created job does not create pod

Cluster information:

Kubernetes version: v1.13.1 on bare metal (this cluster is used internally only; the pods have to run 24/7, so I cannot upgrade the cluster)
Installation method: kubeadm init
Host OS: Ubuntu 16.04 LTS
CNI and version: quay.io/coreos/flannel:v0.10.0-amd64
CRI and version: docker 18.06.3-ce

Problem

I have created a CronJob as posted below. The CronJob does not create jobs automatically at its scheduled time.

When I create a job from that CronJob manually, the job is created; however, no pod is started.
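For reference, I created the manual job from the CronJob roughly like this (the exact syntax of kubectl create job --from=cronjob/... may differ slightly between kubectl versions; this is what sets the cronjob.kubernetes.io/instantiate: manual annotation visible further down):

# k -n kube-system create job etcd-backup-test-nach-deployment-neu --from=cronjob/etcd-backup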

I have this problem on two single-node clusters running this Kubernetes version. On another single-node cluster with Kubernetes v1.13.7, there is no problem.

# k -n kube-system get cronjobs.batch etcd-backup 
NAME          SCHEDULE     SUSPEND   ACTIVE   LAST SCHEDULE   AGE
etcd-backup   47 * * * *   False     0        <none>          24m

# k -n kube-system get cronjobs.batch -o yaml

apiVersion: v1
items:
- apiVersion: batch/v1beta1
  kind: CronJob
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        [...]
    creationTimestamp: "2020-01-16T10:36:12Z"
    name: etcd-backup
    namespace: kube-system
    resourceVersion: "38133902"
    selfLink: /apis/batch/v1beta1/namespaces/kube-system/cronjobs/etcd-backup
    uid: 044955e6-384c-11ea-b0e2-5254008009b9
  spec:
    concurrencyPolicy: Allow
    failedJobsHistoryLimit: 1
    jobTemplate:
      metadata:
        creationTimestamp: null
      spec:
        template:
          metadata:
            creationTimestamp: null
          spec:
            containers:
            - args:
              - -c
              - etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
                --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
                snapshot save /backups/etcd-snapshot-$(date +%Y-%m-%d_%H:%M:%S_%Z).db
                && echo 'delete old backups' && find /backups -type f -mtime +7 -print
                -exec rm {} \;
              command:
              - /bin/sh
              env:
              - name: ETCDCTL_API
                value: "3"
              image: k8s.gcr.io/etcd-amd64:3.1.12
              imagePullPolicy: IfNotPresent
              name: backup
              resources: {}
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              volumeMounts:
              - mountPath: /etc/kubernetes/pki/etcd
                name: etcd-certs
                readOnly: true
              - mountPath: /backups
                name: backups
            dnsPolicy: ClusterFirst
            hostNetwork: true
            nodeSelector:
              node-role.kubernetes.io/master: ""
            restartPolicy: OnFailure
            schedulerName: default-scheduler
            securityContext: {}
            terminationGracePeriodSeconds: 30
            tolerations:
            - effect: NoSchedule
              operator: Exists
            volumes:
            - hostPath:
                path: /etc/kubernetes/pki/etcd
                type: Directory
              name: etcd-certs
            - hostPath:
                path: /mnt/nfs/backups/k8-cluster/10.82.0.99/etcd
                type: DirectoryOrCreate
              name: backups
    schedule: 47 * * * *
    successfulJobsHistoryLimit: 3
    suspend: false
  status: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

get jobs:

# k -n kube-system get jobs.batch 
NAME                                   COMPLETIONS   DURATION   AGE
etcd-backup-test-nach-deployment-neu   0/1                      23m

get pods:

# k -n kube-system get po
NAME                              READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-j2dfx          1/1     Running   5          383d
coredns-86c58d9df4-js8dp          1/1     Running   5          383d
etcd-storage                      1/1     Running   5575       383d
kube-apiserver-storage            1/1     Running   4521       383d
kube-controller-manager-storage   1/1     Running   29         383d
kube-flannel-ds-amd64-fs74j       1/1     Running   6          383d
kube-proxy-7f5fg                  1/1     Running   6          383d
kube-scheduler-storage            1/1     Running   31         383d

describe the manually created job:

# k -n kube-system describe jobs.batch etcd-backup-test-nach-deployment-neu 
Name:           etcd-backup-test-nach-deployment-neu
Namespace:      kube-system
Selector:       controller-uid=3f963dfb-384c-11ea-b0e2-5254008009b9
Labels:         controller-uid=3f963dfb-384c-11ea-b0e2-5254008009b9
                job-name=etcd-backup-test-nach-deployment-neu
Annotations:    cronjob.kubernetes.io/instantiate: manual
Parallelism:    1
Completions:    1
Pods Statuses:  0 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=3f963dfb-384c-11ea-b0e2-5254008009b9
           job-name=etcd-backup-test-nach-deployment-neu
  Containers:
   backup:
    Image:      k8s.gcr.io/etcd-amd64:3.1.12
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/sh
    Args:
      -c
      etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key snapshot save /backups/etcd-snapshot-$(date +%Y-%m-%d_%H:%M:%S_%Z).db && echo 'delete old backups' && find /backups -type f -mtime +7 -print -exec rm {} \;
    Environment:
      ETCDCTL_API:  3
    Mounts:
      /backups from backups (rw)
      /etc/kubernetes/pki/etcd from etcd-certs (ro)
  Volumes:
   etcd-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki/etcd
    HostPathType:  Directory
   backups:
    Type:          HostPath (bare host directory volume)
    Path:          /mnt/nfs/backups/k8-cluster/10.82.0.99/etcd
    HostPathType:  DirectoryOrCreate
Events:            <none>
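The job above shows Events: <none>, so it looks like the job controller never even tried to create a pod. Since both the jobs for the CronJob and the pods for the job are created by kube-controller-manager, my next step would be to check its logs and the component health, roughly like this (pod name taken from the get pods output above; the grep pattern is just a guess at what might be relevant):

# k -n kube-system logs kube-controller-manager-storage --tail=500 | grep -i job
# k get componentstatuses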

Hi,

I’m currently facing the same issue: my CronJob had been running fine until two days ago. It stopped scheduling jobs, and manually created jobs do not create pods. Have you resolved this?

Thanks