Best storage design for kube cluster

Let’s say you were looking to build your first Kubernetes cluster in your own personal data center.

You were going to be using 12-20 blade servers, and spreading them across multiple racks.
There would be multiple separate groups of applications, and even customers, running jobs across them.
You might even have more than one kube cluster.

You wanted to have some kind of standardised storage that would let you write data apparently “locally” on each instance (so it would appear as a file system most of the time, if not always).

But most importantly…

you wanted the ability to have the data store be transparently mirrored across racks, synchronously.
Local RAID1 data stores are not sufficient.

How would you choose to implement the common data storage, and why?

Points for:

  • price
  • performance
  • ease of maintenance

First, at a conceptual level: Assume networked storage, not local. It’ll always be mounted transparently to the pod, but it’ll actually be remote.
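To make that concrete, here’s a minimal sketch of what “transparently local” looks like from the pod’s side: the app just writes to a mount path, and a PersistentVolumeClaim hides the networked backend. All names here (`app-data`, `replicated-block`) are placeholders, and the StorageClass would be whatever your backend provides.

```yaml
# Hypothetical PVC backed by a networked StorageClass; names are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: replicated-block   # supplied by your storage backend
  resources:
    requests:
      storage: 50Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /var/data   # looks like a local filesystem to the app
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```

The app never knows the data is remote; the CSI driver handles attaching and mounting on whichever node the pod lands on.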

I’d take a look at Ceph + Rook to manage all your non-boot drives, though I admit I haven’t used it yet!
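For the cross-rack requirement specifically: Ceph replication is synchronous (a write is only acknowledged once all replicas have it), and you can tell CRUSH to spread replicas across racks rather than just hosts. A hedged sketch of a Rook pool doing that, assuming your nodes are labeled with their rack (Rook can pick this up from node labels such as `topology.rook.io/rack`) and assuming placeholder names:

```yaml
# Hypothetical Rook block pool: each write is synchronously replicated,
# with CRUSH placing the replicas in different racks.
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: rack-replicated   # placeholder name
  namespace: rook-ceph
spec:
  failureDomain: rack     # replicas must land in distinct racks
  replicated:
    size: 3               # three synchronous copies
```

A StorageClass pointing at this pool then gives you exactly the “local-looking but rack-mirrored” volumes described above.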

Thanks for the reply.
Was hoping for some replies with concrete production, at-scale experience though.

I used Rook in my homelab; it works pretty well with Ceph. It also works with other storage providers, but most of them are in alpha.

The other problem I can see is with shared filesystems: you can only create one, which can be a problem if you want to separate your projects into different file systems.

I forget, can you mount a subdirectory into a pod, or do you need to give the application long paths?
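If I remember right, `subPath` on the volume mount does exactly this, so each project only sees its own subdirectory of the shared filesystem. A sketch (the claim name and paths are placeholders):

```yaml
# Hypothetical pod mounting only one project's subdirectory
# of a shared filesystem via subPath.
apiVersion: v1
kind: Pod
metadata:
  name: project-a
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /data        # the app just sees /data
          subPath: projects/a     # only this subdirectory of the volume
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-fs      # placeholder claim on the shared filesystem
```

So the application never needs the long path; the kubelet mounts the subdirectory for you.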