PCI Passthrough/Discrete Device Assignment for WSL2. #5492
Comments
Nested virtualization would be hard: WSL2 distros don't use one VM per distro; they share a single common VM whose distros are namespaced by a custom init (see #994). That also makes PCI passthrough hard, since a passed-through device would be shared by the entire VM, not just one distro. If you're interested in how GPU support works, Canonical gave an overview.
Nested virtualization is available on Dev Channel builds of Windows 10. You have to set nestedVirtualization=true in .wslconfig, terminate any running WSL instances with wsl.exe --terminate, restart LxssManager, and then re-open Ubuntu. You can then check with lscpu or kvm-ok. I have a guide to setting up a WSL environment for maximum KVM performance here. Docker isn't virtualization, it's containers, and it already works on WSL 2.
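For reference, the steps above can be sketched as a .wslconfig fragment plus the restart and verification commands (a sketch, not an official recipe; the distro name "Ubuntu" is illustrative, and the option requires a Windows build that supports nested virtualization):

```shell
# %UserProfile%\.wslconfig -- an INI file, shown here as a comment block:
#   [wsl2]
#   nestedVirtualization=true

# From a Windows prompt, stop WSL so the setting takes effect:
wsl.exe --terminate Ubuntu   # terminate one distro (name is illustrative), or:
wsl.exe --shutdown           # shut down the whole WSL2 utility VM
# Restart-Service LxssManager   # (elevated PowerShell) restart the WSL service

# Inside the re-opened distro, verify that nested virt is exposed:
grep -cE 'vmx|svm' /proc/cpuinfo   # non-zero on Intel (vmx) / AMD (svm)
kvm-ok                             # from the cpu-checker package, if installed
```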
Thanks, but my main use case is still DDA.
I don't get why you need DDA though, @hameerabbasi; we already have a virtualized GPU via
I want to run KVM as a hypervisor on WSL and pass through the discrete GPU completely. DirectML doesn't help, as code compiled for ROCm wouldn't work with it, nor would it help those wishing to work on the ROCm stack itself. ROCm support for Windows (if that's what we're talking about) was never officially announced. AFAICT it was just one engineer who said it was coming, in an unofficial capacity. If ROCm support for WSL is what we're talking about, would you mind pointing me to a reference?
If you attended Build 2020, this was announced there; if you didn't, the highlights about it are available.
I'm looking to support a custom FPGA accelerator--exposed to the host as a PCIe device--within WSL2. It seems like DDA is the way to do that, but as far as I can tell it's 1) not supported in Windows 10, and 2) not supported for lightweight VMs like those used by WSL2, only for full Hyper-V VMs. Is this correct? If so, it would be nice to address these issues.
This is not the same: passing through, for example, an entire GPU isn't the same as the CUDA translation layer that hits the host's driver compute stack. This would be most useful when a second GPU is plugged into your Windows 11 host: you disable it in Device Manager, then pass the entire device through to WSL, which could then hand it to QEMU/KVM as a whole PCI device.
I have exactly the same issue. I'm looking at other virtualization options and even dual-booting, but would much rather run in WSL2 if possible.
I would also like to pass through a custom FPGA. Did you figure out a solution?
I would like to piggyback on this, as it's exactly what I'd like to accomplish too: specifically, passing through a Thunderbolt-attached Xilinx Alveo PCIe card.
I also would be interested in this being enabled - my use case is a Linux-only ADC card. I was able to get the module compiled and loaded in WSL2 running Debian 10, but without the device being accessible, it's useless.
C'mon Microsoft team, make this possible! We need full PCI GPU passthrough in WSL! |
I also would like to access the SmartNIC on my machine via WSL. This won't be possible without PCIe passthrough either.
Would love access to this feature to test graphics-heavy applications with a secondary GPU from within a WSL VM.
Just chiming in here to say that I also would like to access a custom FPGA, exposed to the host via PCIe, from WSL, since a specific version of CentOS is needed to flash it. Dual boot seems to be the only alternative otherwise...
|
I would really like to have this feature for using a Coral device on WSL.
I would like to play around with a BlueField DPU; being able to pass through arbitrary PCIe devices would be really helpful here.
I also would very much like this.
Is your feature request related to a problem? Please describe.
Related to #1788 but different. I'd like to pass through my AMD GPU in its entirety to the guest VM for ROCm workloads. The solution described there works only with CUDA, not with anything else.
Describe the solution you'd like
I'd like DDA/PCI passthrough for WSL2; this would allow any device to be passed through. Even if the device no longer stays connected to the host, that is exactly the behaviour I want. Since WSL2 is based on Hyper-V, and Hyper-V has passthrough support (according to some reports it's even present in Windows 10, not just Windows Server), this should be doable as a feature.
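For context, on full Hyper-V VMs today DDA is driven from PowerShell roughly as below. This is a hedged sketch: the device filter and VM name are illustrative placeholders, and these cmdlets are documented for full Hyper-V VMs (primarily on Windows Server), not the WSL2 utility VM, which is exactly the gap this request is about.

```shell
# PowerShell sketch of Hyper-V DDA for a full VM (not currently possible for WSL2).
# "*Radeon*" and "MyLinuxVM" are illustrative placeholders.

# 1. Find the device's PCIe location path.
$dev = Get-PnpDevice -FriendlyName "*Radeon*"
$loc = ($dev | Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths).Data[0]

# 2. Disable the device on the host and dismount it from the host partition.
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -Force -LocationPath $loc

# 3. Assign it to the guest VM.
Add-VMAssignableDevice -LocationPath $loc -VMName "MyLinuxVM"

# To give the device back to the host later:
# Remove-VMAssignableDevice -LocationPath $loc -VMName "MyLinuxVM"
# Mount-VMHostAssignableDevice -LocationPath $loc
```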
Describe alternatives you've considered
Passing through just ROCm won't work, as there's no ROCm for Windows 10. In addition, I'd like nested virtualization (Docker/KVM) in the guest.