The only concern I would have with that is that it means one node MUST be up. Typically for bare-metal setups, if you want to do NFS, you put it on the master server, since it has lower load and is needed to keep the cluster healthy anyway. But if the master goes down with the NFS server on it, you lose both the control plane and NFS.
On my bare metal at home I use NFS off my FreeNAS server, but any NAS would work (Synology, DiskStation, etc.).
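For reference, this is roughly what an NFS-backed PersistentVolume looks like in Kubernetes. The server IP and export path here are placeholders, not my actual setup:

```yaml
# Hypothetical NFS-backed PersistentVolume pointing at a NAS export.
# Replace server/path with your own NAS IP and exported directory.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany          # NFS allows many pods to mount read-write
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.50     # placeholder NAS / FreeNAS IP
    path: /mnt/tank/k8s      # placeholder export path
```

The downside is exactly the single-node concern above: every pod using this PV depends on that one NFS server being up.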
I am very soon moving my PV stuff over to gluster. Just to show off a little gluster stuff:
You can see my vol called rep_vol here.
I have played with enabling NFS on it; that may be worth looking into as well, since gluster has NFS built in (I haven't gotten that far).
You can see the type = Replicate and Number of Bricks: 1 x 3 = 3, then the list of the bricks below that. That means ALL the data is replicated across all 3 servers. Then on my client:
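For anyone who hasn't used gluster, this is roughly what that volume info output looks like for a 3-way replicate volume. The hostnames and brick paths here are placeholders, not my actual servers:

```
# Roughly what `gluster volume info rep_vol` prints for a 3-way
# replicate volume (hostnames and brick paths are placeholders)
$ gluster volume info rep_vol

Volume Name: rep_vol
Type: Replicate
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gfs1:/data/brick1/rep_vol
Brick2: gfs2:/data/brick1/rep_vol
Brick3: gfs3:/data/brick1/rep_vol
```

"1 x 3 = 3" means one replica set of three bricks, so every file lives on all three servers.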
That bottom one is the gfsvol. It shows it's connecting to the .81 IP, but that's a little misleading. When you mount it from the client, glusterfs talks to that one server and fetches a config file (the volfile) listing ALL the servers, so it can connect to any of the 3 in my replicate vol. I have tested shutting down .81 and it still works no problem.
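The mount itself looks something like this. The IPs are placeholders; the one caveat is that the server you name in the mount must be reachable at mount time to hand out the volfile, which is what the `backup-volfile-servers` option is for:

```
# Mount the replicate volume from a client (IPs are placeholders).
# backup-volfile-servers covers the case where the first server is
# down when the mount happens; after mounting, the client can talk
# to any brick server listed in the volfile.
mount -t glusterfs \
  -o backup-volfile-servers=192.168.1.82:192.168.1.83 \
  192.168.1.81:/rep_vol /mnt/gfsvol
```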
It even supports adding quotas at the server level.
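A sketch of what the server-side quota commands look like, using my `rep_vol` volume (the path and size are just examples):

```
# Enable quotas on the volume, then cap usage on a directory.
# "/" here is relative to the volume root, not the host filesystem.
gluster volume quota rep_vol enable
gluster volume quota rep_vol limit-usage / 50GB
gluster volume quota rep_vol list    # show configured limits and usage
```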
My plan is to rebuild my cluster with, with any luck, 3 master control-plane servers also acting as replicated glusterfs servers, plus my worker nodes.