Hello,
I run a bare-metal Kubernetes cluster on a 2-node, 2-GPU composable system, a type of system that enables PCIe device hotplugging, and in this particular case, GPU hotplugging.
Soon we will have a 24-node, 64-GPU system.
What is the current status of GPU hotplugging in Kubernetes? Do you guys support it?
I read a GitHub thread from 2017 that discussed GPU hotplugging, but the issue was ultimately closed and the idea wasn't pursued.
GPUs are no longer handled directly by Kubernetes but are handed off to their associated device plugin. For NVIDIA GPUs, it's a combination of their k8s-device-plugin and nvidia-docker. You won't be able to hot-plug GPUs into containers (like live-patching a pod spec), but AFAIK it should be able to detect if a new one is added to the host and appropriately scanned/added.
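For reference, deploying the plugin usually looks something like this (the version tag is an assumption on my part, so check the NVIDIA/k8s-device-plugin repo for the current release):

```bash
# Deploy the NVIDIA device plugin as a DaemonSet
# (version tag is an assumption; check the repo for the current release)
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.11/nvidia-device-plugin.yml
```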
All we’re looking to do is have it detect if a new one is added to the host.
What do you mean by "appropriately scanned/added"?
That's the heart of my question: how to get Kubernetes (or the device plugin) to recognize a newly added GPU.
So far the only thing that has worked is deleting and rejoining the node (rough sketch below).
I'm uncertain whether a system reboot would do it, as we are having hardware issues.
But what you would expect to work, a restart of kubelet and/or docker to get the node to redetect its hardware, does not pick up the GPU.
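For context, the delete-and-rejoin workaround looks roughly like this (assuming a kubeadm cluster, which is an assumption on my part; the node name is an example):

```bash
# On the control plane: remove the node from the cluster (node name is an example)
kubectl delete node node-1

# On the control plane: print a fresh join command
kubeadm token create --print-join-command

# On the node: reset and rejoin using the printed command
# (placeholders shown; use the actual output from the previous step)
sudo kubeadm reset
sudo kubeadm join <control-plane-endpoint> --token <token> --discovery-token-ca-cert-hash <hash>
```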
Sorry, I meant something like rescanning the PCIe bus: `echo 1 > /sys/bus/pci/rescan`.
Something to trigger the host to recognize the hot-plugged GPU.
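For example, something along these lines (the device address is a hypothetical placeholder):

```bash
# Remove a specific GPU from the PCI tree before detaching it
# (the address is a hypothetical placeholder; find yours with lspci)
echo 1 > /sys/bus/pci/devices/0000:3b:00.0/remove

# After reattaching, rescan the bus so the kernel picks the device up again
echo 1 > /sys/bus/pci/rescan
```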
Ah, okay
That is something we have figured out: the GPUs can be hotplugged, nvidia-smi runs successfully, and everything works appropriately.
It's only that Kubernetes does not pick up the new GPU as a resource. Kubernetes has no clue the GPU exists at all, while the host system sees the GPU and uses it without a problem.
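To be concrete, this is how we compare the two views (the node name is an example):

```bash
# The host sees the GPU just fine
nvidia-smi

# But the node's capacity/allocatable shows no nvidia.com/gpu resource
# (node name is an example)
kubectl describe node node-1 | grep -i nvidia.com/gpu
```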
Is there a way to prompt Kubernetes, or whichever component of it, to rescan for GPUs in the same way?
Or does it poll for them constantly?
Would you have any ideas of where else I can look or ask to help figure this out?
I did a bit more reading, and it won't auto-detect on hotplug, but it should detect the GPU when the device plugin re-registers with kubelet (e.g. kubelet restarts, the DaemonSet is redeployed, etc.). I'm not sure why it's not updating without deleting and rejoining the node. I unfortunately don't have any system I can really test it with =/
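Roughly, either of these should force a re-registration (the DaemonSet label is an assumption on my part, so verify it first):

```bash
# Option 1: restart kubelet on the node so plugins re-register
sudo systemctl restart kubelet

# Option 2: delete the device-plugin pod so the DaemonSet recreates it
# (label selector is an assumption; verify with: kubectl get ds -n kube-system)
kubectl delete pod -n kube-system -l name=nvidia-device-plugin-ds
```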
If no one else chimes in, it might be worth asking in the #sig-node Slack channel or creating an issue on the NVIDIA device plugin repo.
You’ve been a big help. I’ll focus in on the device-plugin.
And yeah, I'm just as confused as you as to why restarting kubelet doesn't do it. It may be a hardware issue for us, as we are running Quadro P6000s that aren't as well supported by the hardware. It'll be a while until new hardware arrives to eliminate that variable.
Thank you for the guidance on where else I can go! Appreciate it.
Happy to help! If you do happen to find the answer, can you post it back here? It would help the next person who stumbles across the same or a similar issue.
Solution update:
Turns out the trouble was more hardware-based.
Detaching a hotplugged GPU while the GPU is in use causes a failure: either the system crashes, or when the GPU is reattached it appears on the machine but is unusable.
To resolve the issue, we run in runlevel 3 (multi-user.target) just to be extra safe from any graphical environment, and, the more important and easily overlooked part, we stop docker and kubelet before detaching the GPU.
Once docker and kubelet are stopped, you can detach the GPU with no problem, and when docker and kubelet are started again, Kubernetes correctly shows 0 GPUs attached.
When the GPU is attached again, interestingly, the NVIDIA device plugin automatically detects the newly attached GPU and updates Kubernetes appropriately.
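For anyone who hits this, the full sequence looks roughly like this (the PCI address is a hypothetical placeholder, and service names may differ on your distro):

```bash
# 1. Stop kubelet and the container runtime so nothing is holding the GPU
sudo systemctl stop kubelet
sudo systemctl stop docker

# 2. Remove the GPU from the PCI tree before detaching it
# (address is a hypothetical placeholder; find yours with lspci)
echo 1 | sudo tee /sys/bus/pci/devices/0000:3b:00.0/remove

# ... detach/reattach the GPU on the composable fabric ...

# 3. Rescan the bus so the kernel sees the reattached GPU
echo 1 | sudo tee /sys/bus/pci/rescan

# 4. Bring everything back up; the device plugin re-registers and
#    reports the GPU to Kubernetes on its own
sudo systemctl start docker
sudo systemctl start kubelet
```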
Awesome! Good to hear. I THOUGHT that was the behavior, but when digging it seemed like it would only do it when the plugin re-registered.