Start jobs from API inside cluster

Kubernetes version: 1.19
Cloud being used: DigitalOcean
Installation method: provider
Host OS: Linux
CNI and version: default
CRI and version: default

Hi there, inside my cluster I have a Node.js container that configures and starts Jobs using “@kubernetes/client-node”. These jobs run for 30–180 minutes, are crash-prone, and need restarting and observation.

    const job = await batchV1Api.createNamespacedJob('default', kubeSpec.job)
    const ext = await coreV1Api.createNamespacedService('default', kubeSpec.externalService)
    const int = await coreV1Api.createNamespacedService('default', kubeSpec.internalService)
    // ...and later, cleanup by name:
    await batchV1Api.deleteNamespacedJob(kubeSpec.job.metadata.name, 'default')
    await coreV1Api.deleteNamespacedService(kubeSpec.externalService.metadata.name, 'default')
    await coreV1Api.deleteNamespacedService(kubeSpec.internalService.metadata.name, 'default')

I think the k8s client is using my default ~/.kube/config to authenticate with the cluster. In development that's my local machine, but I need credentials for production on DigitalOcean.

DigitalOcean’s website provisioned a kubeconfig for kubectl, but it has maximum privilege, and I'm wary of putting it inside the cluster.

How can I create a new kubeconfig with minimum privilege?

Can you recommend an example to build from?

Thank you! Michael

You want to set up Service Accounts for your pods. They go hand in hand with ClusterRoles, Roles, ClusterRoleBindings, and RoleBindings.

The things prefixed with Cluster are cluster-wide, hence the prefix; plain Roles and RoleBindings are scoped to a single namespace.

Roles define permissions, and you use Bindings to attach those permissions to a service account.
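For the Jobs and Services your code creates, a minimal namespaced setup could look like the sketch below (the name `job-runner` and the exact verb list are illustrative assumptions — trim them to what your controller actually does):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: job-runner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-runner
  namespace: default
rules:
  # Jobs live in the "batch" API group
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get", "list", "watch", "delete"]
  # Services live in the core ("") API group
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: job-runner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: job-runner
    namespace: default
roleRef:
  kind: Role
  name: job-runner
  apiGroup: rbac.authorization.k8s.io
```

Because this is a Role rather than a ClusterRole, the token only grants these verbs inside the default namespace, which matches the code in the question.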

Then on your pod spec (not per container), set serviceAccountName: the-account-name and automountServiceAccountToken: true, and the client library will pick up the mounted token when running in-cluster.
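For instance, in a Deployment's pod template (assuming a service account named `job-runner`; the container name and image are placeholders):

```yaml
spec:
  template:
    spec:
      # Run the pod as the restricted service account
      serviceAccountName: job-runner
      # Mount the token at /var/run/secrets/kubernetes.io/serviceaccount
      automountServiceAccountToken: true
      containers:
        - name: controller
          image: my-registry/job-controller:latest
```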

If you just need a simple example, here’s an RBAC YAML and deployment I made while building an operator:
