What is a good practice to separate different pods onto different hard disks of the same node?

Hello there,

Suppose the scenario below (there's one node at this stage):

  1. / is mounted on /dev/sda, a SATA hard disk (/home/ is here too)
  2. /home2/ is mounted on /dev/sdb, an SSD
  3. /home3/ is mounted on /dev/sdc, an NVMe drive.

Suppose I’m selling some services: being on SATA costs $1/hour, SSD $2/hour, and NVMe $3/hour.
My questions:

  1. Is it possible to specify which disk a pod runs on? I mean, if you’re a developer who paid me $1, I run your pod on /home/. In this case the I/O speeds of these three disks differ, and the one who pays more gets more I/O speed.

For the first question, my opinion is that since K8s is installed on /dev/sda, a pod’s performance will be limited by the SATA disk. I mean, even if I mount the pod’s path on /home3/, I don’t think the paying user will see any high-speed I/O. Am I right?
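To make this concrete, here is roughly what I mean by "mounting the pod’s path on /home3/" — a minimal sketch using a hostPath volume (the pod name, image, and mount path are just placeholders):

```yaml
# Sketch: pin a pod's data directory to the NVMe mount via hostPath.
apiVersion: v1
kind: Pod
metadata:
  name: nvme-test          # placeholder name
spec:
  containers:
    - name: app
      image: nginx         # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      hostPath:
        path: /home3       # the NVMe mount on the node
        type: Directory
```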

  2. Now let’s change the scenario: what if / and /home/ are on an NVMe disk and /home3/ is on a SATA one?

  3. Regarding the two questions above, suppose I have just one node. Is it possible to do this, and is it good practice?

  4. If the third question’s answer is no, then what is a good practice for doing it? I know I can have three different servers and nodes, but if there’s another solution, I’m eager to hear it :)

  5. I googled but did not find anything about limiting I/O in K8s. Is that possible?

  6. If the fifth question’s answer is no, then I’ve been thinking about limiting network bandwidth instead. I found these two: https://github.com/kubernetes/kubernetes/issues/2856 and quota - Kubernetes: How to implement a Network ResourceQuota - Stack Overflow, but I’m not sure whether the value of 10M means megabytes or megabits. I mean, by limiting network bandwidth I could somehow manage the read/write speed of incoming users. I tried to find any K8s docs on this but did not find anything.
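For reference, the annotation form from those links looks like the snippet below; whether that 10M is bits or bytes is exactly the part I’m unsure about:

```yaml
# Sketch of the traffic-shaping annotations from the linked thread.
# Pod name and image are placeholders; the unit of "10M" is my question.
apiVersion: v1
kind: Pod
metadata:
  name: bandwidth-test
  annotations:
    kubernetes.io/ingress-bandwidth: 10M
    kubernetes.io/egress-bandwidth: 10M
spec:
  containers:
    - name: app
      image: nginx
```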

I hope my questions are clear. If not, please let me know and I’ll elaborate or explain in another way (not being a native English speaker may cause this :smiley: )

As a business model, this doesn’t make sense. Developers are meant to consume Kubernetes to develop their apps. Controlling the mounts in the containers is something a developer or DevOps engineer will do. Kubernetes administrators should be providing StorageClasses that developers can use, and developers should be able to just abstractly specify what they want from that storage class.
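As a rough sketch of that separation of concerns (all names below are made up), an admin could expose each disk tier as its own StorageClass backed by local volumes, and a developer only ever requests a class by name:

```yaml
# Admin side: one StorageClass per disk tier. No dynamic provisioner,
# so the admin creates a PV by hand for each local disk.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nvme                          # hypothetical tier name
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
# Admin side: a local PV on the NVMe mount, pinned to the node that has it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nvme-pv-1
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteOnce"]
  storageClassName: nvme
  local:
    path: /home3
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["node-1"]      # placeholder node name
---
# Developer side: just ask for storage from the tier they paid for.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: nvme
  resources:
    requests:
      storage: 10Gi
```

The developer never sees /home3 or /dev/sdc; they only know the class name, which is the abstraction the StorageClass mechanism is meant to provide.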

You may want to read up a bit more about the Container Storage Interface specification.

There are a ton of CSIs that already exist. Also, every cloud provider out there seems to have one available. And there is this.

If you want to develop a CSI, check this out.

Also, there’s a project called longhorn.io, which is a pretty beefy example of how to roll your own CSI if you don’t want to use one from a cloud provider. I don’t recommend using it in production yet. Do some failover tests and you will see why you don’t want to manage your own storage.