Persistent Volume Solutions for Bare Metal Deployment


Cluster information:

Kubernetes version: v1.15
Cloud being used: Bare-metal
Host OS: Linux
CNI and version:
CRI and version:


I’ve been working with bare metal setups recently and would like to get some recommendations on persistent volumes. So far, I’ve been using NFS storage and manually creating the volumes. Are there any alternatives, or even provisioners that provide these services for a bare metal cluster?

Thanks for any help given.

Are you asking for storage providers?

Look into glusterfs + heketi. It creates a clustered, highly available storage pool that you can dynamically create PVs in. I have been playing with it a little myself. You can also set up an NFS provisioner inside Kubernetes that uses one of the servers (maybe the master) as an NFS server, and then you can dynamically provision PVs that way.
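For reference, the heketi route uses the in-tree GlusterFS provisioner via a StorageClass. This is just a minimal sketch — the heketi URL, user, and secret names below are placeholders you'd swap for your own:

```yaml
# Sketch of a StorageClass for dynamic glusterfs provisioning via heketi.
# The resturl, restuser, and secret names are assumed values for illustration.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8080"   # heketi REST endpoint (assumed)
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"                 # secret holding the heketi admin key
```

Once that's in place, any PVC that references the StorageClass gets a gluster volume carved out automatically instead of you pre-creating PVs by hand.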

Partially, yes. This is more of a general ask for storage solutions for a bare metal cluster. I guess I should also include that I would like the option to be on-premise or part of the cluster rather than a cloud-based one.

Thanks for the quick response. I’ll definitely take a look into the glusterfs + heketi setup to see if it offers what I’m looking for.
Quick question about the NFS side. When working with NFS, I’ve been using a Docker image hosted inside one of the pods to provide the server. Is that the way I should go about this?

The only concern I would have with that is that it means one node MUST be up. Typically I have seen that for bare metal stuff, if you want to do NFS, you would put it on the master server, as that has lower load and is needed to keep the cluster healthy anyway. If the master goes down with the NFS server on it, then you lose the control plane and NFS together.
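Wherever the NFS server actually runs, pointing Kubernetes at it looks the same — a static PV sketch, with a placeholder server IP and export path:

```yaml
# Static NFS PersistentVolume sketch; the server address and export
# path are placeholders for whatever host runs the NFS server.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany            # NFS supports multiple concurrent writers
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.5           # assumed NFS server address
    path: /exports/data        # assumed export path
```

A PVC then binds to this PV by matching capacity and access mode, so the pods themselves never need to know where the NFS server lives.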

On my bare metal cluster at home I use NFS off my FreeNAS server, but any NAS would work (Synology, DiskStation, etc.).

I am very soon moving my PV stuff over to gluster. Just to show off a little gluster stuff :slight_smile:
You can see my vol called rep_vol here.

I have played with enabling NFS on it, and that may be something worth looking into as well, since gluster does have NFS built in (I haven’t gotten that far).

You can see the Type: Replicate and the Number of Bricks: 1 x 3 = 3.

Then the list of the bricks below that. That means ALL the data is replicated across all 3 servers. Then on my client:

That bottom one is the gfsvol. It shows that it’s connecting to the IP ending in .81, but that’s a little misleading. When you mount it from the client, glusterfs talks to that one server and gets a config file with ALL the servers in it, so it can connect to any of the 3 in my replicate vol. I have tested shutting down .81 and it still works, no problem.
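The same applies when a pod mounts the volume through Kubernetes — the glusterfs plugin only needs one reachable peer to discover the rest. A static PV sketch, assuming an Endpoints object named glusterfs-cluster that lists the three peer IPs (the endpoints name is a placeholder; rep_vol is the volume above):

```yaml
# Sketch of a static PV backed by the replicate gluster volume.
# 'glusterfs-cluster' is an assumed Endpoints object listing the
# three gluster peers; 'rep_vol' is the gluster volume name.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster   # Endpoints object with all peer IPs
    path: rep_vol                  # gluster volume name
    readOnly: false
```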

It even supports adding quotas at the server level.

My plan is to rebuild my cluster with, with any luck, 3 master control plane servers also acting as replicate glusterfs servers, and then my worker nodes.