Let’s say you were looking to build your first Kubernetes cluster in your own personal data center.
You were going to be using 12-20 blade servers, and spreading them across multiple racks.
There would be multiple separate groups of applications, and even customers, running jobs across them.
You might even have more than one kube cluster.
You wanted some kind of standardised storage that would let each instance write data as if it were “local” (so it would appear as a file system most, if not all, of the time).
But most importantly…
you wanted the data store to be transparently and synchronously mirrored across racks.
Local RAID 1 data stores are not sufficient.
How would you choose to implement the common data storage, and why?
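Whatever backend is chosen, the “apparently local” requirement above maps onto a Kubernetes PersistentVolumeClaim with ReadWriteMany access, which pods then mount as an ordinary file system path. A minimal sketch (the storage class name and size here are placeholder assumptions, not a recommendation):

```yaml
# Hypothetical PVC illustrating the requirement: any pod, on any rack,
# mounts this claim as a normal file system path.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany        # multiple nodes may mount it read/write at once
  storageClassName: replicated-fs   # placeholder; whichever backend you pick
  resources:
    requests:
      storage: 100Gi
```

A pod references the claim by name in its `volumes` section, so applications see a mounted directory rather than a storage API.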
The other problem I can see is with shared filesystems: you can only create one, which can be a problem if you want to separate your projects into different file systems. https://rook.io/docs/rook/v1.1/ceph-filesystem.html
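For reference, the shared filesystem in question is declared with a Rook `CephFilesystem` resource like the one in the linked docs; a rough sketch (pool sizes and the `failureDomain` value are illustrative assumptions):

```yaml
# Sketch of a Rook v1.1 CephFilesystem; replica counts and failure
# domain are example values, not the thread author's configuration.
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - failureDomain: rack   # place replicas in different racks
      replicated:
        size: 3             # synchronous 3-way replication
  metadataServer:
    activeCount: 1
    activeStandby: true
```

Setting the failure domain to `rack` is what maps Ceph’s replication onto the cross-rack mirroring requirement from the original question.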
I installed an NFS server on a separate machine and used the NFS client provisioner in Kubernetes to point to that storage (as the default storage class). Now that my volumes live on a separate machine, I make a daily backup to an external disk.
I used Debian 10 for the NFS server since it’s stable and needs very few updates. We have the same thing in production and it’s been working fine for more than a year now.
As for performance, I had no issues with latency; NFS is quite fast if you use SSDs.
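The setup described above could look roughly like this; the export path, network range, and provisioner name are all placeholder assumptions:

```yaml
# Server side (Debian NFS box), a hypothetical /etc/exports entry:
#   /srv/nfs/kubernetes  10.0.0.0/24(rw,sync,no_subtree_check)
#
# Cluster side: the StorageClass the NFS client provisioner serves,
# marked as the default class as in the reply above.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.com/nfs   # placeholder; must match the provisioner deployment
parameters:
  archiveOnDelete: "false"     # delete PV data instead of archiving it
```

Note that this gives you off-cluster storage and easy backups, but not the synchronous cross-rack mirroring the original question asked for; the NFS server itself remains a single point of failure.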