GPU orchestration on Kubernetes with dstack

Hi everyone,

We’ve just announced the beta release of dstack’s Kubernetes integration. This allows ML teams to orchestrate GPU workloads for development, training, and inference directly on Kubernetes — without relying on Slurm.
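For anyone who wants a sense of what this looks like in practice, here's a minimal sketch of a dstack task configuration (the name, commands, and GPU size are illustrative, not from the announcement):

```yaml
# train.dstack.yml — a hypothetical task definition
type: task
name: train

# Commands run inside the container scheduled on the cluster
commands:
  - pip install -r requirements.txt
  - python train.py

# Request GPU resources; dstack matches this against available nodes
resources:
  gpu: 24GB
```

You'd then submit it with `dstack apply -f train.dstack.yml`; with the Kubernetes backend configured, the workload is scheduled onto your cluster's GPU nodes.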

You can find the announcement and setup guide here: Kubernetes - dstack

We’d love to hear your feedback if you give it a try.

PS: dstack is an open-source container orchestration project built specifically for GPU-native workflows.