GPU Passthrough #22
To pass through your Intel iGPU, add the following lines to your compose file:

```yaml
environment:
  GPU: "Y"
devices:
  - /dev/dri
```

However, this feature is mainly there so that you can transcode video files in Linux using hardware acceleration. I do not know if it also works as a display adapter (for accelerating the desktop), and I never tried it in Windows, so it might not work at all. But if you want to test it, go ahead!

I don't know what you mean by Virtual Machine Manager? If you mean the package that Synology provides on its NAS, then that seems normal, as it only connects to its own VMs, not to any random QEMU VM. See also my other project for running Synology DSM in Docker: https://github.com/vdsm/virtual-dsm. If you mean something else, then please provide some more details about what you were trying to do.
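Put together, a minimal complete compose file for this iGPU case might look like the sketch below. It is assembled from the fragments in this thread plus the `/dev/kvm` device and web-viewer port that appear in the compose files later in this issue; the `win11` version is just an example:

```yaml
services:
  windows:
    image: dockurr/windows
    environment:
      VERSION: "win11"   # example; pick the Windows version you need
      GPU: "Y"           # enable iGPU support, as described above
    devices:
      - /dev/kvm         # KVM acceleration
      - /dev/dri         # Intel iGPU render node
    ports:
      - 8006:8006        # web viewer
    stop_grace_period: 2m
```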
I meant this project: https://virt-manager.org/ though it requires SSH on the client (in this case the windows-in-docker container) to connect to it. Also, I don't have an Intel GPU, just an AMD iGPU and an NVIDIA dGPU. I thought maybe it would be possible to pass the NVIDIA one through so it could be used as an output device.
You can SSH to the container if you do something like:

```yaml
environment:
  HOST_PORTS: "22"
ports:
  - 22:22
```

But I have no experience with virt-manager, so I cannot say if that works. I assume not, because it seems related to

As for the NVIDIA GPU, I am sure it is possible to do it. But it's kinda complicated, because it needs to pass through both Docker and QEMU. Unfortunately I don't have any NVIDIA or AMD GPU myself, so someone else has to submit the code, because I have no way to test it.
Hm, I already tried forwarding port 22, but I couldn't connect to the container with SSH. It seems to me like openssh-server is missing, but even then it somehow doesn't work. If you could give me some information on how to get started with passing through an NVIDIA GPU, I could try it myself and provide the code afterwards.
The container forwards all the traffic to the VM, and Windows does not respond on port 22. So this

As for getting the passthrough to work: you can add additional QEMU parameters via the
Any advice on passing through an NVIDIA GPU?
I can test tomorrow, but for Unraid it should be:

Extra params: `--runtime=nvidia`

New variable. Name: `Nvidia GPU UUID`, Key: `NVIDIA_VISIBLE_DEVICES`, Value: `all` (or, if you have more than one NVIDIA GPU, the GPU UUID)
Not working with Unraid. The extra param `--device='/dev/dri'` does not work, and neither does a new device `/dev/dri/`.
I think this only passes the NVIDIA GPU capabilities to the container, not to the VM, but I might be wrong.
That's correct. Tried this now and my VM does not see the GPU.
Almost there (or not). Using Unraid with an Intel iGPU (13th-gen Intel UHD 770), modifying the template to include 2 new entries (device and variable). Apparently everything is detected and the drivers are installed. Inside the VM, nothing is detected or showing any indications. Can you help with the next step (if possible)?
It totally depends on what you are trying to achieve. In the screenshot I see the GPU adapter in the Windows device manager, so that means it works and everything is okay. It's a virtual graphics card that can be used for hardware acceleration, for example when encoding video formats or running certain calculations. All these tasks will be performed by your Intel GPU through this virtual device. But if your goal is to use the HDMI output to connect a monitor, I do not think this graphics card is fit for that purpose. So it all depends on what you are trying to do.
I guess he is right. I'm in Steam Link right now, downloading a small game to test it out (Intel iGPU, 13700K). I'll test it with Pacify; it should run fine. I stream from my Unraid server to my iPhone 15 Pro Max over WiFi 5. So no: the game needs a DirectX device, which is not installed :D
@domrockt It's possible that this "GPU DOD" device has no DirectX. QEMU supports many different video devices, and this device is for transcoding videos, so obviously we need to tell QEMU to create a different device that is more suitable for gaming. I will see if I can fix it, but it's a bit low on my priority list, so if somebody else has the time to figure out how to do it in QEMU it would be appreciated.
I guess this one: https://github.com/virtio-win/kvm-guest-drivers-windows but this is my wit's end :D for now.
It's not possible to use any type of acceleration inside the VM (e.g. YouTube, decoding/encoding files, ...), and zero activity is detected on the host side. Apparently the VM works the same way, and nothing is different with or without iGPU passthrough.
I know for certain that it works in Linux guests, as I use the same code in my other project ( https://github.com/vdsm/virtual-dsm ) where the GPU is used for accelerating facial recognition in photos, etc. I never tried it in a Windows guest, so it's possible that it does not work there (or needs special drivers to be installed). I created this container only one day ago, and even much less advanced features (like an audio device for sound) are not implemented yet. So it's better to focus first on getting the basics finished; very complicated/advanced stuff like GPU acceleration will be one of the last things on the list, sorry.
I also use passthrough in several other containers (Plex, Jellyfin, Frigate, ...). Being able to achieve it in this container would be a big/great thing (for applications designed to work only with Windows). Sharing the iGPU with containers instead of dedicating it to a single VM can be a very economical, versatile and power-efficient approach. Looking forward to hearing from you in the future on this matter. Despite this "issue", thanks for your hard work. 👍
That is definitely reasonable. Appreciate your hard work.
I did some investigation and it seems it's possible to have DirectX in a Windows guest by using the

The other option is PCI passthrough, but it is less nice in the sense that it requires exclusive access to the device, so you cannot use the same device for multiple containers. And it's very complicated to support, because depending on the generation of the iGPU you will need to use different methods: for example SR-IOV for very recent Intel Xe graphics, Intel GVT-g for others, etc. It will be impossible to add a short and universal set of instructions to the FAQ that will work for most graphics cards.
I have to say that I was intrigued by the idea of running this container as a Windows game-streaming server and passing my NVIDIA GPU through to the VM... but looking through this and the qemus/qemu-docker project, I understand that it would be a huge project :) I'll probably find some other use case for this though :D
So using this project as a game-streaming server is not possible? Is there any alternative game-streaming server that is hosted in Docker?
Not that I know of. My plan was to have a Windows VM on my server that could run all the games my Linux PC can't, but NVIDIA and Docker is not a fun project :-/ I don't know how to pass an NVIDIA card through to a container without an extra nvidia-toolkit container as a layer in between; at least that's my guess :-) I could look into it a bit more. If the container can access the GPU, QEMU should be able to use it with some modification, I guess.
@kieeps - Maybe I can be of help... :)
@tarunx - Maybe try out using KasmVNC - they have a SteamContainer so it might be possible...
Passing a GPU through to QEMU is quite the process, and doing it in a container just adds an extra layer of issues. Typically you have to enable vfio_pci and the IOMMU for your CPU type in the kernel modules, then use options to pass the device through to QEMU. You can remotely connect to a running QEMU instance (virt-manager is typically what people use); then add in Docker/Podman and it's a whole other thing. I bet someone has done it, but it doesn't sound easy necessarily. What I did was install Nix on a remote machine and follow this guide: https://alexbakker.me/post/nixos-pci-passthrough-qemu-vfio.html and there are a lot of articles about the options that QEMU needs. I'm curious to see if someone tries this on top of Docker/Podman.
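The host-side preparation described in this comment (IOMMU on, vfio-pci claiming the card) usually boils down to something like the following. This is a dry-run sketch: the vendor:device IDs are examples, and the target config paths appear only in comments rather than being written to:

```shell
#!/bin/sh
# Dry-run sketch of the typical host-side VFIO preparation.
# The IDs below are EXAMPLES; use the values from `lspci -nn` on your box.
GPU_ID="10de:2503"   # video function of the GPU (example)
AUD_ID="10de:228e"   # audio function of the GPU (example)

# 1) Enable the IOMMU on the kernel command line (in GRUB):
#      intel_iommu=on iommu=pt      (Intel)
#      amd_iommu=on                 (AMD)
# 2) Tell vfio-pci to claim both functions at boot
#    (this line belongs in /etc/modprobe.d/vfio.conf):
echo "options vfio-pci ids=${GPU_ID},${AUD_ID}"
# 3) Load the module early (this line belongs in /etc/modules-load.d/vfio.conf):
echo "vfio-pci"
```

After rebooting, `lspci -nnk` should show `Kernel driver in use: vfio-pci` for both functions.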
Hi all, I managed yesterday to get a successful GPU passthrough (yay!). On my system (Debian, kernel 6.6.13) with an Intel Arc A380:

Edit: I wanted to be able to switch the GPU from host to VM, and therefore decided to use a script instead of putting the options in the modprobe loads. You can find how to pass a PCI to
As root:

Then for both
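The switch script itself was not included in the comment; purely as an illustration, a host-to-VM rebind script conventionally looks something like this. The PCI address and host driver below are placeholders for an Arc A380, and with `DRY_RUN=1` (the default) it only prints the sysfs writes instead of performing them:

```shell
#!/bin/sh
# Hypothetical sketch: rebind one PCI device from its host driver to vfio-pci.
DEV="${DEV:-0000:03:00.0}"      # placeholder PCI address of the GPU
FROM_DRV="${FROM_DRV:-i915}"    # driver currently owning the device
TO_DRV="${TO_DRV:-vfio-pci}"

# With DRY_RUN=1 (default) just print each sysfs write; otherwise perform it.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$*"; else eval "$*"; fi; }

run "echo $DEV > /sys/bus/pci/drivers/$FROM_DRV/unbind"
run "echo $TO_DRV > /sys/bus/pci/devices/$DEV/driver_override"
run "echo $DEV > /sys/bus/pci/drivers_probe"
```

Running it again with the drivers swapped hands the device back to the host.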
From now on,

My docker-compose file is as follows:
I was then able to install the Intel GPU drivers within Windows with no issue. I'm not an expert, so don't hesitate to comment/correct as needed.
@Xav-v If you are not an expert, then it's even more impressive that you got this working! I'm sure this will be very useful for other people, as they can now follow your steps. Thanks!
I'm glad you got it working. I really wish there was a tiny script that could generate the correct VFIO passthrough for noobs, but sadly it wouldn't work for everyone... (like me with my stupid AMD GPU and its inability to reset itself).
Hello, I've been following this issue for a while but couldn't really participate. @Xav-v's tutorial did the trick for me as well, so I successfully managed to pass through

My configuration is almost identical to his. I kinda tried to configure looking-glass w/ IddSampleDriver as well,

One thing I know for sure is that the file is correctly initialized, with the following
Also it is mandatory to have

```yaml
services:
  windows:
    image: dockurr/windows:latest
    container_name: W11-Core
    privileged: true
    environment:
      VERSION: "win11"
      RAM_SIZE: "12G"
      CPU_CORES: "4"
      DEVICE2: "/dev/sda"
      ARGUMENTS: >
        -device vfio-pci,host=23:00.0,multifunction=on
        -device vfio-pci,host=23:00.1,multifunction=on
        -device ivshmem-plain,memdev=ivshmem,bus=pcie.0
        -object memory-backend-file,id=ivshmem,share=on,mem-path=/dev/shm/looking-glass,size=32M
    devices:
      - /dev/kvm
      - /dev/sda
      - /dev/vfio/22
      - /dev/vfio/vfio
      - /dev/shm/looking-glass
    # volumes:
    #   - /dev/shm/looking-glass:/dev/shm/looking-glass
    cap_add:
      - NET_ADMIN
    ports:
      - 8006:8006
      - 3389:3389/tcp
      - 3389:3389/udp
    stop_grace_period: 2m
    restart: on-failure
```

As I said, I suppose we are missing a spice server configuration; this is how virt-manager would do it
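One detail worth noting about the compose file above: the ivshmem backing file (`/dev/shm/looking-glass`) has to exist on the host before QEMU starts. A minimal sketch of creating it follows; the 0660 permissions are an assumption, so check the looking-glass documentation for your setup:

```shell
#!/bin/sh
# Create the ivshmem backing file referenced by the compose file above.
# Its size must match the size=32M passed to memory-backend-file.
LG_SHM="${LG_SHM:-/dev/shm/looking-glass}"

touch "$LG_SHM"
truncate -s 32M "$LG_SHM"   # 32 MiB, same as the QEMU argument
chmod 660 "$LG_SHM"         # permissions are an assumption; adjust as needed
```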
Just in case someone else would need it.

Last but not least, I would like to thank both the project owner and all the participants!
Good news everyone, I did in fact manage to make looking-glass work as intended! Of course, there is still something missing (such as audio, and the clipboard is not sync'd). My intuition was, in fact, correct: the qemu-system-modules-spice package was missing.

```dockerfile
FROM dockurr/windows:latest

# Add testing repository
RUN echo "deb http://deb.debian.org/debian/ testing main" >> /etc/apt/sources.list.d/sid.list
RUN echo -e "Package: *\nPin: testing n=trixie\nPin-Priority: 350" | tee -a /etc/apt/preferences.d/preferences > /dev/null

RUN apt-get update && \
    apt-get --no-install-recommends -y install \
    qemu-system-modules-spice

ENTRYPOINT ["/usr/bin/tini", "-s", "/run/entry.sh"]
```

Thus I built the new docker image via
I then found some looking-glass documentation. By default, the looking-glass host on Windows uses port 5900; I'm not going to change that. As a matter of fact, you should NOT disable the display, as that disables all displays. One major difference from yesterday is that I decided to set up
For Arch Linux there's an AUR package available
My full docker compose:

```yaml
services:
  windows:
    image: windows-spice
    container_name: W11-Core
    privileged: true
    environment:
      VERSION: "win11"
      RAM_SIZE: "12G"
      CPU_CORES: "4"
      DEVICE2: "/dev/sda"
      ARGUMENTS: >
        -device vfio-pci,host=23:00.0,multifunction=on
        -device vfio-pci,host=23:00.1,multifunction=on
        -device ivshmem-plain,id=shmem0,memdev=looking-glass
        -object memory-backend-file,id=looking-glass,mem-path=/dev/kvmfr0,size=32M,share=yes
        -device virtio-mouse-pci
        -device virtio-keyboard-pci
        -device virtio-serial-pci
        -spice addr=0.0.0.0,port=5900,disable-ticketing
        -device virtio-serial-pci
        -chardev spicevmc,id=vdagent,name=vdagent
        -device virtserialport,chardev=vdagent,name=com.redhat.spice.0
    devices:
      - /dev/kvm
      - /dev/sda
      - /dev/vfio/22
      - /dev/vfio/vfio
      - /dev/kvmfr0
    cap_add:
      - NET_ADMIN
    ports:
      - 60400:5900
      - 8006:8006
      - 3389:3389/tcp
      - 3389:3389/udp
    stop_grace_period: 2m
    restart: on-failure
```

Of course, IddSampleDriver and looking-glass require just the right configuration as well. I'd suggest installing IddSampleDriver at
You'd probably want to configure looking-glass on the Windows host as well; by default it

```ini
[app]
capture=nvfbc
```

It might be fine with the default configuration. You then HAVE to configure looking-glass for the Linux client itself; it

```ini
[app]
shmFile=/dev/kvmfr0

[win]
title=WizariMachine
size=1920x1080
keepAspect=yes
borderless=yes
fullScreen=no
showFPS=yes

[input]
ignoreWindowsKeys=no
escapeKey=97
mouseSmoothing=no
mouseSens=1

[wayland]
warpSupport=yes
fractionScale=yes

[spice]
port=60400
```

Run the docker container, run looking-glass-client from your Linux host, and at this point you should see your Windows machine. Finally, connect via VNC like you normally would and change which one is

I am also going to attach some screenshots where you can clearly see I am on Linux (Wayland, Hyprland, a plain and simple ags bar on top). I tested both furmark (for the video capabilities) and gzdoom/YouTube (mouse, keyboard and display latency; I'd say there is no noticeable latency at all).

EDIT 1: nvfbc is only supported on "professional grade GPUs", so I suppose it is automatically falling back to dxgi?

EDIT 2: Lately I've been busy with studies, but I figured out a way to also enable audio via pulseaudio/pipewire; as always, I am not an expert. Not sure if

Of course, I'm not taking full credit: I had a look into the QEMU documentation, this very forum, which explained how to set up a pulseaudio socket (which I totally skipped, giving it the native socket instead xD), and this stackoverflow thread.

TL;DR: Add these lines as ARGUMENTS (configuration above)
Also mount the pipewire/pulseaudio socket as a Docker volume
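The exact volume lines were not captured in this thread; purely as an illustration, a pulseaudio (or pipewire-pulse) socket mount conventionally looks something like the fragment below. The uid 1000 and the in-container path are assumptions and depend on your host and on the QEMU audio arguments you used:

```yaml
volumes:
  # host pulse socket -> path the QEMU audio device inside the container points at
  - /run/user/1000/pulse/native:/tmp/pulse-native
```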
I'd say only clipboard sharing is missing.
I'm using that; of course, adapt it as per your needs (array):

You have to call this script with either
Hello, I have been following this issue since it was created as a silent reader and want to thank everyone who has provided so much information on this topic. I'd like to throw another layer into the pit regarding passing GPUs to Windows running inside Docker via this project. I would be highly interested in any information about not doing a full GPU passthrough, but instead splitting a GPU into vGPUs using the https://github.com/mbilker/vgpu_unlock-rs project (a detailed tutorial on how to do this with a Proxmox server can be found at https://gitlab.com/polloloco/vgpu-proxmox) and then passing a vGPU to a specific Windows docker container. Maybe someone has already tried this. It works like a charm on Proxmox with Windows VMs using, for example, enterprise GPUs like the Tesla M40 or Tesla P4. Thanks in advance.
Hi, I'm new to this thread and having a go at the config to get an NVIDIA card passed through to a Docker image (dockur/windows) and have it show up in the nested VM. I have the card showing up in nvidia-smi in the Docker container and am about to do the passthrough from there to the Windows 11 VM. I did this by installing the NVIDIA container tools on the host, then passing through the GPU using Portainer and/or command-line switches in the docker run command (I don't use compose), then installing the NVIDIA drivers and the nvidia-container-toolkit in the Docker container. I just wanted to ask, as my server is headless: do I really need to add vfio-pci and/or looking-glass to the Docker image? From the perspective of the Docker image, it is the only thing using the card... so can't I just forward the PCI device? There are other Docker images using it for other purposes, but the Windows image will be the only one using it for 'display'.
Hi @kroese,
Would it be possible to create a video teaching how to do GPU passthrough? Do I need to have two video cards? Thanks!
Hello all, I'm not sure if anyone is still curious how to pass a GPU through to the VM directly on an Unraid system, but if you are, I have a quick-hit guide listed below.

NOTES: This is an Unraid setup w/ NVIDIA | I have 2 GPUs on bare metal (1080 & 3060) & am DEDICATING one (3060) to the Windows inside Docker | Mileage may vary.

On the Unraid terminal, as root:
```shell
lspci -nnk | grep -i -A 3 'VGA'
```

Output:

```
03:00.1 Audio device [0403]: NVIDIA Corporation GA106 High Definition Audio Controller [10de:228e] (rev a1)
81:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1080] [10de:1b80] (rev a1)
```

Make note of the device you want to add to the VM; in my case it's:
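To go from that output to the `vendor:device` pairs that vfio-pci wants, the bracketed IDs can be extracted mechanically. Here is a small sketch using the sample output above as a heredoc; on a real box you would pipe `lspci -nn | grep -i nvidia` in instead:

```shell
#!/bin/sh
# Extract the trailing [vendor:device] ID from each lspci line.
lspci_sample() {
cat <<'EOF'
03:00.1 Audio device [0403]: NVIDIA Corporation GA106 High Definition Audio Controller [10de:228e] (rev a1)
81:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1080] [10de:1b80] (rev a1)
EOF
}

# The class code (e.g. [0403]) has no colon, so only vendor:device IDs match.
lspci_sample | sed -n 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\].*/\1/p'
```

Run against the sample above, this prints `10de:228e` and `10de:1b80`, one per line.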
Unraid Docker Setup: How to Add the 3 Device Types & 1 Variable

The variable, as you might expect, is well... variable. Change the code below based on your system output above; in my case it's built like this:

If I wanted to use the 1080, it'd be built like this:

Save the docker; it will set up successfully but will NOT start successfully. This is expected! You should see an error in the logs stating that it can't access the VFIO device etc.

On the Unraid terminal, as root:
NOTE: Kernel driver in use is nvidia
Time to unbind NVIDIA & bind to VFIO-PCI:

This will be based on the above output; my GPU video device ID is 03:00.0 & vendor ID is 10de:2503.

This will be based on the above output; my GPU audio device ID is 03:00.1 & vendor ID is 10de:228e.

NOTE: Kernel driver in use is vfio-pci
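The actual unbind/bind commands were not captured above; as a hedged illustration for the IDs quoted in this guide, the step conventionally looks like the dry-run sketch below (`DRY_RUN=1`, the default, only prints the sysfs writes instead of performing them):

```shell
#!/bin/sh
# Sketch: release both GPU functions from their current driver, then let
# vfio-pci claim them via new_id. IDs taken from the guide above (RTX 3060).
VID_DEV="0000:03:00.0"; VID_ID="10de 2503"   # video function
AUD_DEV="0000:03:00.1"; AUD_ID="10de 228e"   # audio function

# With DRY_RUN=1 (default) just print each sysfs write; otherwise perform it.
w() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "echo $1 > $2"; else echo "$1" > "$2"; fi; }

w "$VID_DEV" "/sys/bus/pci/devices/$VID_DEV/driver/unbind"
w "$AUD_DEV" "/sys/bus/pci/devices/$AUD_DEV/driver/unbind"
w "$VID_ID"  "/sys/bus/pci/drivers/vfio-pci/new_id"
w "$AUD_ID"  "/sys/bus/pci/drivers/vfio-pci/new_id"
```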
Before:

After:

Start the docker container & see if it boots. Let it run through the install; once you hit the desktop, type

ENDING NOTES: The changes made in the unbind-NVIDIA & bind-to-VFIO-PCI section stay in effect until a reboot of the host (Unraid); after a reboot you will need to redo that section. You can however run a script on startup or on demand to help automate the process. I can add that onto this if enough people ask for it. Hope this helps & I didn't miss anything :)

ALSO, HUGE THANK YOU FOR THIS PROJECT, IT'S EXACTLY WHAT I NEEDED!!!!!
I assume that you were running the Docker container from a Linux host machine, not a Windows host machine, right? :)
To those interested: I've written a script that automatically binds and unbinds: #845. It's still a work in progress, so testers will be helpful. The current version needs to be run in User Scripts (with modifications), as I need to find a way to run the script pre-start and post-stop of the container. You will still need to set the variables, except the arguments. Once I have a GPU for my server I can test further.
environment:
Hey, I would like to know if this container is capable of passing a GPU through to the VM inside the container. I have looked into the upstream container qemus/qemu-docker, which seems to have some logic for GPU passthrough; some documentation for this here would be great, if it is possible.
I also tried to connect to the container using Virtual Machine Manager, but unfortunately I wasn't able to connect to it. Any idea why?