
GPU Passthrough #22

Open
Joly0 opened this issue Jan 15, 2024 · 89 comments

@Joly0

Joly0 commented Jan 15, 2024

Hey, I would like to know if this container is capable of passing through a GPU to the VM inside the container. I have looked into the upstream Docker container qemus/qemu-docker, which seems to have some logic for GPU passthrough, but some documentation for it here would be great, if it is possible.

I also tried to connect to the container using Virtual Machine Manager, but unfortunately I wasn't able to connect to it. Any idea why?

@kroese
Contributor

kroese commented Jan 15, 2024

To pass through your Intel iGPU, add the following lines to your compose file:

environment:
  GPU: "Y"
devices:
  - /dev/dri

However, this feature is mainly so that you can transcode video files in Linux using hardware acceleration. I do not know if it also works as a display adapter (for accelerating the desktop), and I never tried it in Windows, so it might not work at all. But if you want to test it, go ahead!

I don't know what you mean by Virtual Machine Manager. If you mean the package that Synology provides on its NAS, then that seems normal, as it only connects to its own VMs, not to any random QEMU VM. See also my other project, https://github.com/vdsm/virtual-dsm, for running Synology DSM in Docker. If you mean something else, then please provide some more details about what you were trying to do.

@Joly0
Author

Joly0 commented Jan 15, 2024

I meant this project: https://virt-manager.org/ - though it requires SSH on the client (in this case the windows-in-docker container) to connect to it.

Also, I don't have an Intel GPU, just an AMD iGPU and an Nvidia dGPU. I thought maybe it would be possible to pass through the Nvidia one so it could be used as an output device.

@kroese
Contributor

kroese commented Jan 15, 2024

You can SSH into the container if you do something like:

    environment:
      HOST_PORTS: "22"
    ports:
      - 22:22

But I have no experience with virt-manager, so I cannot say if that works. I assume not, because it seems related to libvirt and virsh, and my container uses QEMU directly, without any help from virsh.

As for the Nvidia GPU, I am sure it is possible to do it. But it's kinda complicated, because it needs to pass through both Docker and QEMU. Unfortunately I don't have any Nvidia or AMD GPU myself, so someone else has to submit the code, because I have no way to test it.

@Joly0
Author

Joly0 commented Jan 15, 2024

Hm, I already tried forwarding port 22, but I couldn't connect to the container over SSH. It seems to me like openssh-server is missing, but even then it somehow doesn't work.

If you could give me some information on how to get started with passing through an Nvidia GPU, I could try it myself and provide the code afterwards.

@kroese
Contributor

kroese commented Jan 16, 2024

The container forwards all traffic to the VM, and Windows does not respond on port 22.

So HOST_PORTS: "22" is really important, to prevent the container from forwarding that port to Windows. Just ports: - 22:22 is not enough in this case. You can get a bash shell in the container via Docker and run something like apt-get install openssh-server if needed (see the sketch below). But I am not sure if it's worth the effort, as most likely virt-manager will not be able to find virsh even if port 22 is open.
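For example (a sketch: the container name windows here is an assumption, taken from the compose examples later in this thread):

docker exec -it windows bash
apt-get update && apt-get install -y openssh-server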

As for getting the passthrough to work: you can add additional QEMU parameters via the ARGUMENTS= variable. So I would Google for terms like QEMU+NVidia+passthrough and see if you can find the correct parameters. Then put them in ARGUMENTS, as sketched below, and see what effect they have until you discover the correct ones.
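As an illustration, extra QEMU flags go into the compose file like this (the vfio-pci device address is a made-up placeholder; later comments in this thread work out real values):

environment:
  ARGUMENTS: "-device vfio-pci,host=01:00.0,multifunction=on"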

@kingkunta88

Any advice on passing through an nvidia gpu?

@domrockt

Any advice on passing through an nvidia gpu?

I can test tomorrow, but it would be for Unraid.

In Extra Params: --runtime=nvidia

New variable - Name: Nvidia GPU UUID, Key: NVIDIA_VISIBLE_DEVICES, Value: all <--- or, if you have more than one Nvidia GPU, the GPU UUID.
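In docker-compose terms, that Unraid advice corresponds to something like the sketch below. It assumes the NVIDIA Container Toolkit is installed on the host and, as the replies below note, it only exposes the GPU to the container, not to the VM inside it:

services:
  windows:
    image: dockurr/windows
    runtime: nvidia
    environment:
      NVIDIA_VISIBLE_DEVICES: "all"   # or a specific GPU UUID from `nvidia-smi -L`
    devices:
      - /dev/kvm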

@domrockt

(quoting @kroese's Intel iGPU instructions above)

Not working with Unraid.

Extra Param: --device='/dev/dri' <--- does not work

AND

New device: /dev/dri/

@Joly0
Author

Joly0 commented Jan 16, 2024

(quoting @domrockt's Unraid NVIDIA runtime suggestion above)

I think this only passes the Nvidia GPU capabilities to the container, not to the VM, but I might be wrong.

@Allram

Allram commented Jan 16, 2024

(quoting @domrockt's suggestion and @Joly0's reply above)

That's correct. I tried this now and my VM does not see the GPU.
It works fine for containers like Plex etc., so it must be something in the "link" between the Docker container and the VM.

@ladrive

ladrive commented Jan 16, 2024

(quoting @kroese's Intel iGPU instructions and @domrockt's Unraid report above)

Almost there (or not).

Using Unraid with an Intel iGPU (13th-gen Intel UHD 770), I modified the template to include two new entries (device and variable).

[screenshot]

Apparently everything is detected and the drivers are installed.

[screenshot]

Inside the VM, nothing is detected and there is no sign of activity.

[screenshot]

Can you help with the next step (if possible)?

@kroese
Contributor

kroese commented Jan 16, 2024

It totally depends on what you are trying to achieve. In the screenshot I see the GPU adapter in the Windows Device Manager, so that means it works and everything is okay.

It's a virtual graphics card that can be used for hardware acceleration, for example when encoding video or running certain calculations. All these tasks will be performed by your Intel GPU through this virtual device.

But if your goal is to use the HDMI output to connect a monitor, I do not think this graphics card is fit for that purpose. So it all depends on what you are trying to do.

@domrockt

domrockt commented Jan 16, 2024

(quoting @kroese's explanation above)

I guess he is right. I'm in Steam Link right now, downloading a small game to test it out (Intel iGPU, 13700K). I'll test it with Pacify; it should run fine.

I stream from my Unraid server to my iPhone 15 Pro Max over Wi-Fi 5.

So no, the game needs a DirectX device, which is not installed :D

@kroese
Contributor

kroese commented Jan 16, 2024

@domrockt It's possible that this "GPU DOD" device has no DirectX. QEMU supports many different video devices, and this device is for transcoding videos, so obviously we need to tell QEMU to create a different device that is more suitable for gaming.

I will see if I can fix it, but it's a bit low on my priority list, so if somebody else has the time to figure out how to do it in QEMU, it would be appreciated.

@domrockt

(quoting @kroese's reply above)

I guess it is this one: https://github.com/virtio-win/kvm-guest-drivers-windows - but this is my wit's end :D for now.

@ladrive

ladrive commented Jan 16, 2024

(quoting @kroese's explanation above)

It's not possible to use any type of acceleration (e.g. YouTube, decoding/encoding files, ...) inside the VM, and zero activity is detected on the host side.

[screenshot]

Apparently the VM works the same way with or without the iGPU passthrough; nothing is different.

@kroese
Contributor

kroese commented Jan 16, 2024

I know for certain that it works in Linux guests, as I use the same code in my other project (https://github.com/vdsm/virtual-dsm), where the GPU is used for accelerating facial recognition in photos, etc.

I never tried it in a Windows guest, so it's possible that it does not work there (or needs special drivers to be installed). I created this container only one day ago, and even much less advanced features (like an audio device for sound) are not implemented yet. So it's better to focus first on getting the basics finished; the very complicated/advanced stuff like GPU acceleration will be one of the last things on the list, sorry.

@ladrive

ladrive commented Jan 16, 2024

(quoting @kroese's reply above)

I also use passthrough in several other containers (Plex, Jellyfin, Frigate, ...). Being able to achieve it in this container could be a great thing (for applications designed to work only on Windows). Sharing the iGPU with containers rather than dedicating it to a single VM can be a very economical, versatile and power-efficient approach.

Looking forward to hearing from you in the future on this matter.

Despite this "issue", thanks for your hard work. 👍

@Joly0
Author

Joly0 commented Jan 16, 2024

(quoting @kroese's reply above)

That is definitely reasonable. I appreciate your hard work.

@kroese
Contributor

kroese commented Jan 18, 2024

I did some investigation, and it seems it's possible to have DirectX in a Windows guest by using the virtio-gpu-gl display device and the experimental drivers from this topic: virtio-win/kvm-guest-drivers-windows#943.

The other option is PCI passthrough, but it is less nice in the sense that it requires exclusive access to the device, so you cannot use the same device for multiple containers. And it's very complicated to support, because depending on the generation of the iGPU you will need different methods, for example SR-IOV for very recent Intel Xe graphics, GVT-g for others, etc. It will be impossible to add a short and universal set of instructions that works for most graphics cards to the FAQ.
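For anyone who wants to experiment before official support lands, an untested sketch using this container's ARGUMENTS variable and standard QEMU options (the guest would still need the experimental drivers linked above, and the container needs access to a render node):

environment:
  ARGUMENTS: "-device virtio-vga-gl -display egl-headless"
devices:
  - /dev/dri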

@CallyHam

CallyHam commented Feb 5, 2024

If you use Windows 7 as the guest and RDP into the container from a Windows 7 machine that has Aero enabled, you get Aero Glass effects in the VM. Not sure if it accelerates anything other than the desktop experience, though.
[screenshot]

@kieeps

kieeps commented Feb 28, 2024

I have to say that I was intrigued by the idea of running this container as a Windows game-streaming server and passing my Nvidia GPU through to the VM... but looking through this and the qemus/qemu-docker project, I understand that it would be a huge project :)

I'll probably find some other use case for this tho :D

@tarunx

tarunx commented Mar 1, 2024

So using this project as a game-streaming server is not possible? Is there any alternative game-streaming solution that is hosted in Docker?

@kieeps

kieeps commented Mar 1, 2024

Not that I know of. My plan was to have a Windows VM on my server that could run all the games my Linux PC can't, but Nvidia and Docker is not a fun project :-/ I don't know how to pass an Nvidia card through to a container without an extra nvidia-toolkit container as a layer in between.

At least that's my guess :-) I could look into it a bit more; if the container can access the GPU, QEMU should be able to use it with some modification, I guess.

@Husky110

Husky110 commented Mar 1, 2024

@kieeps - Maybe I can be of help... :)
I got the NVIDIA Toolkit running on my server, so I can use some AI stuff. Maybe check out the docs for that (see https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) so you don't need an extra container in between.
If you could check that out, that would be great, because I am considering doing the same, but I am busy tonight with another project... :)

@Husky110

Husky110 commented Mar 1, 2024

@tarunx - Maybe try out KasmVNC - they have a Steam container, so it might be possible...

@sweetbbak

sweetbbak commented Mar 3, 2024

Passing a GPU through to QEMU is quite the process, and doing it in a container just adds an extra layer of issues. Typically you have to enable vfio_pci and IOMMU for your CPU type in the kernel modules, then use options to pass the device through to QEMU. You can remotely connect to a running QEMU instance (virt-manager is typically what people use).

Then add in Docker/Podman and it's a whole other thing. I bet someone has done it, but it doesn't necessarily sound easy. What I did was install Nix on a remote machine and follow this guide: https://alexbakker.me/post/nixos-pci-passthrough-qemu-vfio.html - and there are a lot of articles about the options that QEMU needs. I'm curious to see if someone tries this on top of Docker/Podman.
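As a rough sketch of that host-side setup (the vfio-pci IDs are placeholders; take your own from lspci -nn, and on AMD hosts the IOMMU is usually enabled by default):

# /etc/default/grub - enable the IOMMU, then run update-grub and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# bind the GPU and its audio function to vfio-pci at boot
echo "options vfio-pci ids=10de:1b80,10de:10f0" > /etc/modprobe.d/vfio.conf
echo "vfio-pci" > /etc/modules-load.d/vfio-pci.conf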

@m4tt72

m4tt72 commented Mar 10, 2024

I already have GPU passthrough enabled on the host; is there an argument I can pass to allow the Windows guest to use the passed-through GPU?

[screenshot]

@Xav-v

Xav-v commented Apr 4, 2024

Hi all,

I managed yesterday to get a successful GPU passthrough working (yay!).
As mentioned before, switching the kernel module used by the GPU from i915 to vfio-pci was the key.

On my system (Debian, kernel 6.6.13) with an Intel Arc A380:
In the BIOS, enable IOMMU and the VT-d and VT-x virtualization options.

Edit /etc/default/grub and add intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT, run sudo update-grub to update GRUB, and restart.

I wanted to be able to switch the GPU between the host and the VM, and therefore decided to use a script instead of putting options in the modprobe loads. You can find how to bind a PCI device to vfio-pci in other links.

sudo lspci gives you the list of PCI devices.
My GPU is listed as 03:00.0 and its audio device as 04:00.0. It's important to pass both, or you'll end up with some issues later on.
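If you want to check which devices share an IOMMU group before binding anything (a standard check, independent of this container):

for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done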

As root:
To detach from i915 and switch to vfio-pci, load the VFIO modules (one modprobe per module):
modprobe vfio
modprobe vfio_pci

Then, for both 0000:03:00.0 and 0000:04:00.0 in my case (%s stands for the device address):

echo %s > /sys/bus/pci/devices/%s/driver/unbind
echo vfio-pci > /sys/bus/pci/devices/%s/driver_override
echo %s > /sys/bus/pci/drivers_probe

From now on, lspci -v | grep -A 15 " VGA " should show vfio-pci as the driver in use.
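Spelled out for the first device above, with the %s placeholders filled in (repeat with 0000:04:00.0 for the audio function):

echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
echo vfio-pci > /sys/bus/pci/devices/0000:03:00.0/driver_override
echo 0000:03:00.0 > /sys/bus/pci/drivers_probe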

My docker-compose file is as follows:

version: "3"
services:
  windows:
    image: dockurr/windows
    build: .
    container_name: windows
    privileged: true
    environment:
      VERSION: "win11"
      DEBUG: "Y"
      RAM_SIZE: "16G"
      CPU_CORES: "14"
      ARGUMENTS: "-device vfio-pci,host=03:00.0,multifunction=on -device vfio-pci,host=04:00.0,multifunction=on"
    devices:
      - /dev/kvm
      - /dev/vfio/1
    group_add:
      - "105"
    volumes:
      - ./storage:/storage
    cap_add:
      - NET_ADMIN
    ports:
      - 8006:8006
      - 3389:3389/tcp
      - 3389:3389/udp
    stop_grace_period: 2m
    restart: on-failure

I was then able to install the Intel GPU drivers within Windows with no issue.

I'm not an expert, so don't hesitate to comment or correct as needed.

@kroese
Contributor

kroese commented Apr 4, 2024

@Xav-v If you are not an expert, then it's even more impressive that you got this working! I'm sure this will be very useful for other people, as they can follow your steps now. Thanks!

@jasonmbrown

(quoting @Xav-v's GPU passthrough guide above)

I'm glad you got it working. I really wish there was a tiny script that could generate the correct vfio passthrough for noobs, but sadly it wouldn't work for everyone... (like me with my stupid AMD GPU and its inability to reset itself)

@DrKGD

DrKGD commented Apr 5, 2024

Hello, I've been following this issue for a while, but couldn't really participate in the discussion, as my knowledge on the matter is very limited.

@Xav-v's tutorial did the trick for me as well, so I successfully managed to pass through on my dual-GPU setup, using my bench Nvidia 1050 Ti (ancient, I know) for the container itself.

My configuration is almost identical to his. I also tried to configure Looking Glass with IddSampleDriver, but had no success: it is supposedly failing at configuring the SPICE server for the IVSHMEM device, and I am not entirely sure whether it has to be configured as a plain file volume or as a device.

One thing I know for sure is that the file is correctly initialized, with the following procedure:

  1. Create an empty file: touch /dev/shm/looking-glass
  2. chown the file as $(user):qemu
  3. chmod 660 the file
  4. Start the container; as I am using a 1920x1080 display, I'd expect a 32MB IVSHMEM-sized file.

Also, it is mandatory to have privileged: true, otherwise the container would fail on me with RLIMIT_MEMLOCK messages.

services:
  windows:
    image: dockurr/windows:latest
    container_name: W11-Core
    privileged: true
    environment:
      VERSION: "win11"
      RAM_SIZE: "12G"
      CPU_CORES: "4"
      DEVICE2: "/dev/sda"
      ARGUMENTS: >
        -device vfio-pci,host=23:00.0,multifunction=on 
        -device vfio-pci,host=23:00.1,multifunction=on 
        -device ivshmem-plain,memdev=ivshmem,bus=pcie.0 
        -object memory-backend-file,id=ivshmem,share=on,mem-path=/dev/shm/looking-glass,size=32M
    devices:
      - /dev/kvm
      - /dev/sda
      - /dev/vfio/22
      - /dev/vfio/vfio
      - /dev/shm/looking-glass
    # volumes:
      # - /dev/shm/looking-glass:/dev/shm/looking-glass
    cap_add:
      - NET_ADMIN
    ports:
      - 8006:8006
      - 3389:3389/tcp
      - 3389:3389/udp
    stop_grace_period: 2m
    restart: on-failure

As I said, I suppose we are missing a SPICE server configuration; this is how virt-manager would do it, and we should probably disable the default VNC display as suggested in this issue. So far I had no luck, as (supposedly) the Docker image is missing the required QEMU module.

ARGUMENTS: >
	-device vfio-pci,host=23:00.0,multifunction=on 
	-device vfio-pci,host=23:00.1,multifunction=on 
	-device ivshmem-plain,memdev=ivshmem,bus=pcie.0 
	-object memory-backend-file,id=ivshmem,share=on,mem-path=/dev/shm/looking-glass,size=32M
	-spice port=5900
[+] Running 1/0
 ✔ Container W11-Core  Created
Attaching to W11-Core
W11-Core  | ❯ Starting Windows for Docker v2.08...
W11-Core  | ❯ For support visit https://github.com/dockur/windows
W11-Core  |
W11-Core  | ❯ Booting Windows using QEMU emulator version 8.2.1 ...
W11-Core  | ❯ ERROR: qemu-system-x86_64: -spice 5900: There is no option group 'spice'
W11-Core  | qemu-system-x86_64: -spice 5900: Perhaps you want to install qemu-system-modules-spice package?
W11-Core exited with code 0

Just in case someone else needs it:
An older gist on GPU passthrough
A guide to IddSampleDriver + Looking Glass

Last but not least, I would like to thank both the project owner and all participants!

@DrKGD

DrKGD commented Apr 6, 2024

Good news everyone, I did in fact manage to make Looking Glass work as intended!

Of course, there is still something missing (such as audio, and the clipboard is not synced), but it is only a matter of configuration at this point.

My intuition was, in fact, correct: the qemu-system-modules-spice package was missing, so I had to slightly modify the image by adding the Debian testing repository (and thus the package).

FROM dockurr/windows:latest

# Add testing repository
RUN echo "deb http://deb.debian.org/debian/ testing main" >> /etc/apt/sources.list.d/sid.list

RUN echo -e "Package: *\nPin: testing n=trixie\nPin-Priority: 350" | tee -a /etc/apt/preferences.d/preferences > /dev/null

RUN apt-get update && \
		apt-get --no-install-recommends -y install \
		qemu-system-modules-spice

ENTRYPOINT ["/usr/bin/tini", "-s", "/run/entry.sh"]

I then built the new image via

docker buildx build -t windows-spice --file spice-support.dockerfile .

I then found some looking-glass documentation, which gave me all I had to know to configure the passthrough as I needed.

By default the Looking Glass host on Windows uses port 5900; I'm not going to change that, but you are required to expose that port (and I did, on port 60400: 60400:5900).

As a matter of fact, you should NOT disable the display, as that disables all displays; you could theoretically pass -vga none as an additional argument, though.

One major difference from yesterday is that I decided to set up the IVSHMEM with the KVMFR module, as suggested by the documentation itself:

Please be aware that as a result you will not be able to take advantage of
your GPUs ability to access memory via it’s hardware DMA engine if you use
this method.

For Arch Linux there's an AUR package available: looking-glass-module-dkms

# Configure KVMFR (IVSHMEM) with 32MB (ideal for 1920x1080)
modprobe kvmfr static_size_mb=32
modprobe kvmfr

My full docker-compose .yaml configuration is ahead!

services:
  windows:
    image: windows-spice
    container_name: W11-Core
    privileged: true
    environment:
      VERSION: "win11"
      RAM_SIZE: "12G"
      CPU_CORES: "4"
      DEVICE2: "/dev/sda"
      ARGUMENTS: >
        -device vfio-pci,host=23:00.0,multifunction=on 
        -device vfio-pci,host=23:00.1,multifunction=on 
        -device ivshmem-plain,id=shmem0,memdev=looking-glass
        -object memory-backend-file,id=looking-glass,mem-path=/dev/kvmfr0,size=32M,share=yes
        -device virtio-mouse-pci
        -device virtio-keyboard-pci
        -device virtio-serial-pci
        -spice addr=0.0.0.0,port=5900,disable-ticketing
        -device virtio-serial-pci 
        -chardev spicevmc,id=vdagent,name=vdagent 
        -device virtserialport,chardev=vdagent,name=com.redhat.spice.0
    devices:
      - /dev/kvm
      - /dev/sda
      - /dev/vfio/22
      - /dev/vfio/vfio
      - /dev/kvmfr0
    cap_add:
      - NET_ADMIN
    ports:
      - 60400:5900
      - 8006:8006
      - 3389:3389/tcp
      - 3389:3389/udp
    stop_grace_period: 2m
    restart: on-failure

Of course, IddSampleDriver and Looking Glass also require just the right configuration.

I'd suggest installing IddSampleDriver at C:\IddSampleDriver\, then configuring C:\IddSampleDriver\option.txt with only the right resolution (for some reason it defaults to 640x480, which is unusable with a virtio-mouse); my primary monitor is 1920x1080@144Hz, thus:

1
1920, 1080, 144

You'd probably want to configure Looking Glass on the Windows host as well; by default it should be installed at C:\Program Files\Looking Glass (host). Add a looking-glass-client.ini there; have a look here for the available configuration options. As I am using an Nvidia card (1050 Ti) for the passthrough, I have enabled the nvfbc interface.

[app]
capture=nvfbc

It might be fine with the default configuration.

You then HAVE to configure Looking Glass for the Linux client itself; it has to match the docker-compose .yaml configuration. Again, have a look at the official documentation, as my configuration may not work for you (e.g. I use right Ctrl to toggle capture mode, which locks the mouse/keyboard).

[app]
shmFile=/dev/kvmfr0

[win]
title=WizariMachine
size=1920x1080
keepAspect=yes
borderless=yes
fullScreen=no
showFPS=yes

[input]
ignoreWindowsKeys=no
escapeKey=97
mouseSmoothing=no
mouseSens=1

[wayland]
warpSupport=yes
fractionScale=yes

[spice]
port=60400

Run the container, run looking-glass-client from your Linux host, and at this point you should see your Windows machine.

Finally, connect via VNC like you normally would and change which one is the primary display (or disable the default altogether).

[screenshot]

I am also going to attach some screenshots where you can clearly see I am on Linux (Wayland, Hyprland, a plain and simple ags bar on top). I tested both FurMark (for the video capabilities) and GZDoom/YouTube (for mouse, keyboard and display latency); I'd say there is no noticeable latency at all.

[screenshots]

EDIT 1: nvfbc is only supported on "professional grade GPUs", so I suppose it is automatically falling back to dxgi?

EDIT 2: I've lately been busy with studies, but I figured out a way to also enable audio via PulseAudio/PipeWire; as always, I am not an expert. I'm not sure if -audio spice would somehow work by itself, but I found that if you pass the native PulseAudio Unix socket as a volume (on Arch it is /run/user/1000/pulse/native; mount it anywhere you please, e.g. /tmp/pa) and then configure it MANUALLY (audiodev + device QEMU arguments instead of audio), it just works.

Of course, I'm not taking full credit: I had a look at the QEMU documentation, this very forum post which explained how to set up a PulseAudio socket (which I totally skipped, using the native socket instead xD), and this Stack Overflow thread.

TL;DR: add these lines to ARGUMENTS (configuration above):

-device ich9-intel-hda,addr=1f.1
-audiodev pa,id=snd0,server=unix:/tmp/pa
-device hda-output,audiodev=snd0

Also mount the PipeWire/PulseAudio socket as a Docker volume:

volumes:
    - /run/user/1000/pulse/native:/tmp/pa

I'd say only clipboard sharing is missing.

@Xav-v

Xav-v commented Apr 11, 2024

(quoting @Xav-v's guide and @jasonmbrown's reply above)

I'm using the script below; adapt it to your needs (the array):

#!/bin/bash

#vfio-pci or i915
array=( '0000:03:00.0' '0000:04:00.0' )

while getopts t: flag
do
    case "${flag}" in
        t) type=${OPTARG};;
    esac
done

modprobe vfio
modprobe vfio_pci
modprobe i915

for pcid in "${array[@]}"
do
        echo "Switching pcids $pcid to $type"
        echo $pcid > "/sys/bus/pci/devices/$pcid/driver/unbind"
        echo $type > "/sys/bus/pci/devices/$pcid/driver_override"
        echo $pcid > "/sys/bus/pci/drivers_probe"
done

You have to call this script with either -t vfio-pci or -t i915.
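Usage then looks like this (gpu-switch.sh is just a placeholder name for wherever you saved the script):

sudo ./gpu-switch.sh -t vfio-pci   # detach the GPU from the host, ready for the container
sudo ./gpu-switch.sh -t i915       # hand it back to the host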

@dmestudent

Hello,

I have been following this issue since it was created as a silent reader and want to thank everyone who has provided so much information on this topic.

I'd like to throw another layer into the pit regarding passing GPUs to Windows running inside Docker via this project.

I would be highly interested in any information about not doing a full GPU passthrough but splitting a GPU into vGPUs using the https://github.com/mbilker/vgpu_unlock-rs project (a detailed tutorial on how to do this with a Proxmox server can be found at https://gitlab.com/polloloco/vgpu-proxmox) and then passing a vGPU to a specific Windows Docker container.

Maybe someone has already tried this. It works like a charm on Proxmox with Windows VMs using, for example, enterprise GPUs like the Tesla M40 or Tesla P4.

Thanks in advance

@gregewing

Hi, I'm new to this thread and having a go at the config to get an NVIDIA card passed through to a Docker image (dockur/windows) and have it show up in the nested VM. I have the card showing up in nvidia-smi in the Docker container and am about to do the passthrough from there to the Windows 11 VM. I did this by installing the NVIDIA container tools on the host, then passing through the GPU using Portainer and/or command-line switches in the docker run command (I don't use Compose), then installing the NVIDIA drivers and the NVIDIA container toolkit in the Docker container.

I just wanted to ask: as my server is headless, do I really need to add vfio-pci and/or Looking Glass to the Docker image? From the perspective of the Docker image, it is the only thing using the card... so can't I just forward the PCI device?

There are other Docker containers using it for other purposes, but the Windows image will be the only one using it for "display".

@kamalfarahani

kamalfarahani commented May 8, 2024

Hi @kroese,
The previous discussions have been quite technical. While some users have reportedly been successful in passing through their GPUs to Dockerized Windows containers, the process seems complex for those who are not Docker experts. Is there a plan to simplify GPU passthrough in the future? Ideally, users like myself could easily enable it by adding just a few lines of configuration to the docker-compose.yml file.

@softplaceio

Would it be possible to create a video teaching how to do "GPU Passthrough"?

@chuan1127

[screenshots 1-2]
These are my configuration screenshots. I have working passthrough of a 1660 SUPER and of a network card, and I have also hidden the hypervisor from the guest CPU.
[screenshots 3-5]
These are my screenshots. The ARGUMENTS variable is as follows: -device vfio-pci,host=01:00.0,multifunction=on -device vfio-pci,host=01:00.1,multifunction=on -device vfio-pci,host=01:00.2,multifunction=on -device vfio-pci,host=88:00.0,multifunction=on -device usb-host,vendorid=0x0557,productid=0x2419 -cpu host,-hypervisor,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,kvm=off,hv_vendor_id=intel

Adjust the specific IDs to match your own hardware.

#!/bin/bash

# Define the Docker container names
DOCKER_CONTAINERS=("qbittorrent" "nas-tools" "transmission" "xiaoyaliu" "MoviePilot")

# Get the current hour (24-hour format)
CURRENT_HOUR=$(date +"%H")

# Get the current minute
CURRENT_MINUTE=$(date +"%M")

# Log file path
LOG_FILE="/mnt/user/domains/docker_control.log"

# Function: write a log entry
log() {
  echo "[$(date +"%Y-%m-%d %H:%M:%S")] $1" >> "$LOG_FILE"
  echo "[$(date +"%Y-%m-%d %H:%M:%S")] $1"
}

# Function: rotate the log
cleanup_logs() {
  log_size=$(du -m "$LOG_FILE" | cut -f1)
  max_log_size=50
  if [ "$log_size" -gt "$max_log_size" ]; then
    mv "$LOG_FILE" "/mnt/user/domains/docker_control_$(date +"%Y%m%d%H%M%S").log"
    touch "$LOG_FILE" # start a fresh log file
    log "Log file exceeded 50M and was rotated."
  fi
}

log "Script execution started."

# If the current time is between 00:00 and 07:59:59, start the containers
if [ "$CURRENT_HOUR" -ge 0 ] && [ "$CURRENT_HOUR" -lt 8 ] && [ "$CURRENT_MINUTE" -lt 60 ]; then
  log "Current time is between 00:00 and 07:59:59; starting Docker containers..."
  for CONTAINER in "${DOCKER_CONTAINERS[@]}"; do
    log "Starting container $CONTAINER..."
    if docker start "$CONTAINER" >> "$LOG_FILE" 2>&1; then
      log "Container $CONTAINER started successfully."
      sleep 5 # wait 5 seconds
    else
      log "Container $CONTAINER failed to start. See the log for details."
    fi
  done
else
  log "Current time is not between 00:00 and 07:59:59; stopping Docker containers..."
  for CONTAINER in "${DOCKER_CONTAINERS[@]}"; do
    log "Stopping container $CONTAINER..."
    if docker stop "$CONTAINER" >> "$LOG_FILE" 2>&1; then
      log "Container $CONTAINER stopped successfully."
      sleep 5 # wait 5 seconds
    else
      log "Container $CONTAINER failed to stop. See the log for details."
    fi
  done
fi

log "Script execution finished."

# Rotate the log
cleanup_logs

This is my scheduled Docker start/stop script. I am currently writing code that will do this job using something like the virsh shutdown command.

@softplaceio

(quoting @Xav-v's GPU passthrough guide above, machine-translated into Portuguese)

Do I need to have two video cards?
Or can I pass the host's only video card through to Windows?

Thanks!

@Skslience

(quoting @chuan1127's configuration and script above)

Could you share an email address so I can ask how you got passthrough working under Unraid? I tried it as well, but got an error.

@pmanaseri

pmanaseri commented Aug 19, 2024

Hello All,

I'm not sure if anyone is still curious how to pass a GPU through to the VM directly on an Unraid system but, if you are, I have a quick-hit guide listed below.

NOTES: This is an Unraid setup w/ NVIDIA | I have 2 GPUs on bare metal (1080 & 3060) & am DEDICATING one (3060) to the Windows inside Docker | Mileage may vary.

On UnRaid Terminal as root:

lspci -nnk | grep -i -A 3 'VGA'

Output:

lspci -nnk | grep -i -A 3 'VGA'
03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA106 [GeForce RTX 3060] [10de:2503] (rev a1)
Subsystem: eVga.com. Corp. GA106 [GeForce RTX 3060] [3842:3657]
Kernel driver in use: nvidia
Kernel modules: nvidia_drm, nvidia

03:00.1 Audio device [0403]: NVIDIA Corporation GA106 High Definition Audio Controller [10de:228e] (rev a1)
Subsystem: eVga.com. Corp. Device [3842:3657]

81:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1080] [10de:1b80] (rev a1)
Subsystem: Gigabyte Technology Co., Ltd GP104 [GeForce GTX 1080] [1458:3702]
Kernel driver in use: nvidia
Kernel modules: nvidia_drm, nvidia

Make note of the device you want to add to the VM; in my case it's:

03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA106 [GeForce RTX 3060] [10de:2503] (rev a1)
03:00.1 Audio device [0403]: NVIDIA Corporation GA106 High Definition Audio Controller [10de:228e] (rev a1)

UnRaid Docker Setup:

[screenshot]

[screenshot]

How to Add the 3 Device Types & 1 Variable

[screenshots]

The variable, as you might expect, is well... variable. Change the code below based on your system output above; in my case it's built like this:
-device vfio-pci,host=03:00.0,multifunction=on -device vfio-pci,host=03:00.1,multifunction=on

If I wanted to use the 1080, it'd be built like this:
-device vfio-pci,host=81:00.0,multifunction=on etc.

Save the Docker template; it will save successfully but NOT start successfully - this is expected!

You should see an error in the logs stating that it can't access the vfio device, etc.

On UnRaid Terminal as root:

lspci -nnk | grep -i -A 3 'VGA'

NOTE: the kernel driver in use is nvidia

Kernel driver in use: nvidia

Time to unbind NVIDIA and bind to VFIO-PCI:

This will be based on the above output; my GPU video device ID is 03:00.0 and its vendor:device ID is 10de:2503:
echo "0000:03:00.0" > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
echo "10de 2503" > /sys/bus/pci/drivers/vfio-pci/new_id
OR
echo "0000:03:00.0" > /sys/bus/pci/drivers/vfio-pci/bind - updated command with Unraid 6.12.13

This will be based on the above output; my GPU audio device ID is 03:00.1 and its vendor:device ID is 10de:228e:
echo "0000:03:00.1" > /sys/bus/pci/devices/0000:03:00.1/driver/unbind
echo "10de 228e" > /sys/bus/pci/drivers/vfio-pci/new_id
OR
echo "0000:03:00.1" > /sys/bus/pci/drivers/vfio-pci/bind - updated command with Unraid 6.12.13

NOTE: the kernel driver in use is now vfio-pci

Kernel driver in use: vfio-pci

Before:

[screenshot]

After:

[screenshot]

Start the Docker container and see if it boots.

Let it run through the install; once you hit the desktop, type "device manager" in the Start menu and you should see your GPU in there. Add the device drivers and reboot; the device should now show up in Task Manager as a dedicated GPU.

Device Manager:
[screenshot]

Task Manager:
[screenshot]

ENDING NOTES:

The changes made in the unbind-NVIDIA-and-bind-to-VFIO-PCI section stay in effect until a reboot of the host (Unraid); after a reboot you will need to redo that section. You can, however, run a script on startup or on demand to help automate the process (a sketch follows below). I can add that to this guide if enough people ask for it. Hope this helps and I didn't miss anything :)
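A minimal sketch of such a startup script, using the driver_override approach from earlier in this thread and the 03:00.x addresses above (adjust for your own hardware; untested on Unraid):

#!/bin/bash
# Re-bind the 3060 (video + audio function) to vfio-pci after a host reboot.
modprobe vfio-pci
for dev in 0000:03:00.0 0000:03:00.1; do
  echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"
  echo vfio-pci > "/sys/bus/pci/devices/$dev/driver_override"
  echo "$dev" > /sys/bus/pci/drivers_probe
done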

ALSO HUGE THANK YOU FOR THIS PROJECT ITS EXACTLY WHAT I NEEDED!!!!!

@Juphex

Juphex commented Oct 5, 2024

(quoting @DrKGD's Looking Glass write-up above)

I assume that you were running the Docker container on a Linux host machine, not a Windows host machine, right? :)

@ColorfulDick

(quoting @chuan1127's configuration and script above, machine-translated)

Could you share your full docker-compose file? I'm on an AMD 5825U.

@Steel-skull

Steel-skull commented Oct 30, 2024

To those interested in this: I've written a script that automatically does this.

(quoting @pmanaseri's Unraid guide above)

To those interested in this, I've written a script that automatically binds and unbinds: #845. It's still a work in progress, so testers would be helpful; the current version needs to be run in User Scripts (with modifications), as I need to find a way to run the script pre-start and post-stop of the container.

You will still need to set up the variables, except the arguments.

Once I have a GPU for my server, I can test further.

@Karinza38

environment:
  GPU: "Y"
devices:
  - /dev/dri
