Node selector for cronJob

Cluster information:

Kubernetes version: 1.17.0
Cloud being used: bare-metal
Installation method: apt-get?
Host OS: Debian 10
CNI and version: flannel:v0.11.0
CRI and version:

I want a cron job to run on a specific node. I tried placing a nodeSelector in my Kubernetes manifest file, but I get an error saying the nodeSelector field is unknown.

error: error validating "highres-backup.yaml": error validating data: ValidationError(CronJob): unknown field "nodeSelector" in io.k8s.api.batch.v1beta1.CronJob; if you choose to ignore these errors, turn validation off with --validate=false

I have tried putting nodeSelector at various indentation levels but got the same errors.
I am posting my mildly censored YAML file. Essentially, I'm trying to schedule files written on a specific node to be transferred to an NFS volume at regular intervals.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: highres-rsync-cronjob

spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        nodeSelector:
          nfs_backup: yes
        spec:
          containers:
          - name: rsync-cronjob
            image: eeacms/rsync
            args:
            - /bin/sh
            - -c
            -  "find /data/high-res/ -cmin +20 -printf %P\\0 | rsync -zarvhO --remove-source-files /data/high-res/ /mnt/high-res/ --exclude '*_1.egg'"
            volumeMounts:
              - name: local-storage
                mountPath: /data/
              - name: nfs-volume
                mountPath: /mnt/
          restartPolicy: OnFailure
          volumes:
          - name: local-storage
            hostPath:
              path: /data
              type: directory
          - name: nfs-volume
            nfs:
              server: censored.ip.address
              path: /data

Check out the docs here: Assigning Pods to Nodes - Kubernetes

With the example you provided, just move it one level down, inside the pod spec at the same indent as containers.

Kind regards,
Stephen

Thank you @stephendotcarter

That was actually my initial guess, but I still get the same API error:

raphaelc@tpnotc:~/site_atlas$ kubectl apply -f highres-backup.yaml 
Error from server (BadRequest): error when creating "highres-backup.yaml": CronJob in version "v1beta1" cannot be handled as a CronJob: v1beta1.CronJob.Spec: v1beta1.CronJobSpec.JobTemplate: v1beta1.JobTemplateSpec.Spec: v1.JobSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.NodeSelector: ReadString: expects " or n, but found t, error found in #10 byte of ...|_backup":true},"rest|..., bigger context ...|e":"nfs-volume"}]}],"nodeSelector":{"nfs_backup":true},"restartPolicy":"OnFailure","volumes":[{"host|...

Do you know what could be wrong? I’m posting my manifest file now.

raphaelc@tpnotc:~/site_atlas$ cat highres-backup.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: highres-rsync-cronjob

spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: rsync-cronjob
            image: eeacms/rsync
            args:
            - /bin/sh
            - -c
            -  "find /data/high-res/ -cmin +20 -printf %P\\0 | rsync -zarvhO --remove-source-files /data/high-res/ /mnt/high-res/ --exclude '*_1.egg'"
            volumeMounts:
              - name: local-storage
                mountPath: /data/
              - name: nfs-volume
                mountPath: /mnt/
          nodeSelector:
            nfs_backup: yes
          restartPolicy: OnFailure
          volumes: 
          - name: local-storage
            hostPath:
              path: /data
              type: directory
          - name: nfs-volume
            nfs:
              server: censored.ip.address
              path: /data

I think this is your issue:

Try changing your selector to:

nfs_backup: "yes"

That will force it to be a string instead of a bool.
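The root cause is that YAML 1.1 (which Kubernetes' YAML handling follows for these scalars) resolves a handful of unquoted keywords to booleans rather than strings, while nodeSelector values must be strings. A minimal sketch of that resolution rule — the keyword set comes from the YAML 1.1 bool type spec, and the helper name is just for illustration:

```python
# Unquoted scalars that a YAML 1.1 parser loads as booleans, not strings.
# Quoting the scalar ("yes") bypasses this resolution entirely.
YAML11_BOOL_KEYWORDS = {
    "y", "Y", "yes", "Yes", "YES", "n", "N", "no", "No", "NO",
    "true", "True", "TRUE", "false", "False", "FALSE",
    "on", "On", "ON", "off", "Off", "OFF",
}

def resolves_to_bool(scalar: str) -> bool:
    """Return True if YAML 1.1 would load this unquoted scalar as a bool."""
    return scalar in YAML11_BOOL_KEYWORDS

print(resolves_to_bool("yes"))     # the nodeSelector value becomes a bool
print(resolves_to_bool("backup"))  # an ordinary word stays a string
```

So `nfs_backup: yes` arrives at the API server as `{"nfs_backup": true}`, which is why the decoder complains it `expects " or n, but found t` — it wanted a string (or null) and found `true`.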

Kind regards,
Stephen


Using unquoted `yes` as the value was the problem. Thank you!

Just ensure you specify nodeSelector at the pod spec level, at the same indentation as containers.
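Putting both fixes together, the relevant part of the working manifest would look like this (a trimmed sketch of the manifest above):

```yaml
spec:
  jobTemplate:
    spec:
      template:
        spec:                   # pod spec
          nodeSelector:         # same indent as containers
            nfs_backup: "yes"   # quoted, so it stays a string
          containers:
          - name: rsync-cronjob
            image: eeacms/rsync
```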