An every-day problem for Linux gamers:
Every Linux user who plays PC games and/or uses 3D-intensive Windows applications eventually finds themselves in a situation where Wine fails them, or where a Windows Virtual Machine lacks the GPU manpower required to lift whatever it is they are trying to run.
Most people will then fall back to dual-boot installations of Linux & Windows, booting from one OS to the other as required.
But there is a better way to deal with this, provided you have IOMMU-virtualization-compatible hardware (also called VT-d or AMD-Vi). Given the considerable time that has passed since the technology's first appearance in 2007, support is not that unlikely, even if you didn’t build your rig with virtualization in mind.
This post is going to introduce you to GPU passthrough, a technique that allows your Windows Virtual Machine to tap into your dedicated GPU and all its abilities, giving you near-native gaming and 3D performance without ever having to leave your Linux OS. It includes step-by-step instructions on how to set it up, possible caveats you might run into, and how to operate it properly once set up.
What you get out of this, once done and set up:
An IOMMU-virtualized Windows Virtual Machine with full access to your dedicated GPU, allowing you to game on your Linux machine as if it were a regular Windows computer, aka. full game compatibility.
Of course this isn’t limited to just games, but most people will end up using it for just that, as there is a plethora of Linux applications to deal with pretty much everything else you can imagine.
At the very least this is the case for me. I’ve never missed any Windows applications on my Linux setup, other than games.
The ups and downs of operating a GPU passthrough:
It’s quite clear what the advantages are at this point: near-native gaming performance and perfect application compatibility. But before diving into all of this, we should mention the maybe not-so-obvious downsides of operating a GPU passthrough setup:
- You will be running two operating systems side by side!
This is the whole idea behind this setup; after all, we don’t want to leave Linux just to get our gaming fix. Nonetheless it has to be mentioned, as CPU-taxing tasks on the host (Linux) OS will, without a doubt, have a negative influence on your Windows Virtual Machine’s CPU performance, which in turn could bottleneck your GPU.
- You will require dual-monitor wiring!
This doesn’t mean that you are required to own multiple monitors (but I highly suggest getting a second monitor if you don’t have one already, it helps productivity and makes a GPU passthrough setup so much more comfortable to use)!
What it does mean though is that you will have to run two monitor cables to each of your monitors (one for your Windows Virtual Machine, one for your Host (Linux) OS).
- Your Windows Virtual Machine won’t be able to run inside a Linux-decorated window anymore!
You probably know how Linux users interact with Windows Virtual Machines by now: you boot them up, they pop up in a separate window, and you interact with that window.
Well, this won’t be an option any longer. GPU passthrough directly wires your Windows Virtual Machine to your dedicated GPU; it essentially makes your Virtual Machine’s video output come straight out of your dedicated GPU’s video output plugs (explaining why you need to double-wire your monitors).
- If you only own one monitor you will be pressing that input-switch button on it all the time!
Remember when I said that having multiple monitors isn’t required but highly suggested? Well now you know why.
Due to your Host (Linux) OS living on your integrated GPU, and your Windows Virtual Machine living on your dedicated GPU, you will now be forced to switch monitor input signals all the time whenever you want to do something on one OS or the other.
Again, having multiple monitors solves this issue, as you can just put one OS on one monitor, while the other runs on your second one.
Caveats you might face during this tutorial:
- The initial Windows Installation disk might not boot up!
If we want GPU passthrough to work we will need to adhere to certain standards.
One of these is that we run our OS on top of the EDK2 UEFI BIOS for Qemu.
This virtual BIOS doesn’t support booting non-UEFI compatible boot partitions though, which renders a whole lot of unofficial / self-made Windows Installation disks useless.
To work around this issue, stick with official Windows Installation disks (7, 8, 8.1 and 10 all work fine; older ones though will most likely give you issues).
- The GPU drivers need to be installed via the Spice QXL Qemu display adapter prior to using it!
If you forget to install the dedicated GPU’s drivers while still operating the Spice QXL Qemu display adapter, your Virtual Machine will get stuck in the Windows boot animation!
This is no big deal though, as one can easily go back, re-add the Spice QXL Qemu display adapter to the Virtual Machine, and give it another go.
Technically, this issue arises because Windows falls back to a generic default VGA driver in the absence of proper OEM drivers, resulting in the GPU being initialized twice during the early boot process, which freezes it.
Once the driver has been installed though, future driver updates will work just fine without any workarounds, as the system will always have the old drivers to fall back on in case of driver load failures.
- Sound will most likely not work out of the box!
Most Linux distributions have adopted PulseAudio as their audio mixing server, which is great! PulseAudio is a very versatile tool that can pull off a huge variety of cool things!
Want to stream your whole system to your network-attached HiFi system? No problem! Want to stream your three speaker-less computers to your one computer that does have speakers? Also quite easy!
The downside to it though is that PulseAudio follows a per-user-session principle, meaning every user runs their own instance of the PulseAudio service.
This is meant to increase security, but it introduces a whole set of other issues when combined with elevated processes running outside of your own user context.
Long story short: if your system runs PulseAudio and you run an elevated (administrative) process without passing along a suitable PulseAudio sink address, your new process will be unable to produce any sound at all! And guess what: this is exactly the case for libvirtd-operated Qemu Virtual Machines.
The workaround we are going to utilize in this tutorial is a little manual libvirtd commandline override, which will allow us to sneak a proper PulseAudio sink address into the process environment variables, fixing this nasty issue without having to sacrifice security by running PulseAudio in global root mode.
- Your host system might freeze after using the VM for a while!
This is a known interrupt (IRQ) issue that can occur while passing through PCI components.
The easiest way to work around the issue is to switch your graphics card into MSI interrupt mode.
The people at guru3d did a good job explaining the steps required to do so.
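For reference, switching a device into MSI mode on Windows boils down to a single registry flag under the device's instance key. A minimal sketch as a .reg file follows; note that the device instance path shown here is a placeholder, not from the original setup (look yours up in the Device Manager under Details, "Device instance path"):

```
Windows Registry Editor Version 5.00

; Placeholder instance path - substitute your GPU's actual path from Device Manager
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\VEN_10DE&DEV_1187&SUBSYS_00000000&REV_A1\0000000000000000\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties]
"MSISupported"=dword:00000001
```

After importing the file and rebooting the Virtual Machine, the device should report MSI interrupts instead of line-based ones.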
- Depending on your card, you might need to take additional steps to ensure proper operation!
For example, to operate an NVidia GTX 1060, you need to add the following elements to your Virtual Machine XML (virsh edit domainname):
<spinlocks state='on' retries='8191'/>
<vendor_id state='on' value='none'/>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
<timer name='hypervclock' present='no'/>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <rom bar='on' file='/path/to/your/gpu/vrom.rom'/>
</hostdev>
Furthermore, you need to switch both the graphics card and the HDMI audio interface into MSI interrupt mode IMMEDIATELY after installing the NVidia drivers (ignoring the Windows-typical reboot request).
Failing to do so will leave you with a bluescreen and a non-functioning GTX 1060 card.
The host OS setup:
The Arch Linux base setup:
For this tutorial I’ve set up a fresh Arch Linux system using the Antergos installer.
It’s the easiest way to get Arch Linux up and running, and allows me to skip a lot of tech-voodoo that I would otherwise have to cover in this tutorial.
As for the actual system installation, there really isn’t much you have to mind, except for three things:
- Run the installation from your integrated GPU; this means you need to disable your dedicated GPU in your BIOS and wire your monitor to your integrated GPU
- Enable AUR support in the extra-packages dialog
- Disable proprietary GPU driver support in the extra-packages dialog
Installing the vfio-compatible kernel:
After you have rebooted back into your fresh Arch Linux install, we need to take care of installing the vfio-compatible kernel.
This is easily done by typing the following command in a root-enabled terminal:
yaourt -S linux-vfio
This will take a good (long) while, we are talking ~45 minutes on a good system, and 2 hours on a mediocre one.
You got a favorite TV show? If so, now might be a good time to watch an episode or two of it.
Now that the kernel has been installed we need to determine the vendor and product ids of our GPU, this can be done by executing the following command in a terminal:
lspci -nnk | grep NVIDIA
Which should give you an output similar to mine:
01:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 760] [10de:1187] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GK104 HDMI Audio Controller [10de:0e0a] (rev a1)
Note down the vendor and product ids as well as the PCI bus port numbers, we will need them in a second.
In the output above, mine are the bracketed vendor:product pairs 10de:1187 and 10de:0e0a on PCI bus ports 01:00.0 and 01:00.1; yours will most likely differ.
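If you would rather not pick the ids out by hand, a small shell sketch can extract them from saved lspci output. It uses my sample output embedded as a string; the pattern simply matches the bracketed vendor:product pairs:

```shell
# Sample lspci output (substitute your own: lspci -nnk | grep NVIDIA)
sample='01:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 760] [10de:1187] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GK104 HDMI Audio Controller [10de:0e0a] (rev a1)'

# Pull out the [vendor:product] pairs and join them comma-separated,
# ready to paste into pci-stub.ids= / vfio-pci ids=
ids=$(printf '%s\n' "$sample" \
  | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' \
  | tr -d '[]' | paste -sd, -)
echo "$ids"   # 10de:1187,10de:0e0a
```

The comma-joined result is exactly the format the kernel parameters below expect.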
Next you will want to open /etc/default/grub as root in a text editor of your choice and modify the GRUB_CMDLINE_LINUX_DEFAULT variable, appending the options shown below.
Another thing you will want to adjust, while you are in there, is the GRUB_DEFAULT variable, so that your system will boot into the new kernel by default the next time you start it.
As you can see below, you will have to fill in your previously recorded GPU vendor and product ids as well.
GRUB_CMDLINE_LINUX_DEFAULT="quiet resume=UUID=26c9d4a6-8642-48ae-8057-ac8fb7cbccd9 pcie_acs_override=downstream intel_iommu=on i915.enable_hd_vgaarb=1 video=efifb:off modprobe.blacklist=nouveau hugepages=2048 pci-stub.ids=10de:1187,10de:0e0a"
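One thing worth knowing about the hugepages=2048 entry: huge pages are reserved at boot and taken away from the host. Assuming the x86_64 default huge page size of 2 MiB, the back-of-the-envelope math looks like this:

```shell
# hugepages=2048 reserves 2048 huge pages at boot.
# Assuming the x86_64 default huge page size of 2 MiB each:
pages=2048
page_mib=2
echo "$(( pages * page_mib )) MiB reserved for the VM"   # 4096 MiB
```

Size this to roughly the amount of RAM you plan to give the Virtual Machine; memory reserved here is unavailable to the host.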
Finally, you need to execute the following command in a root terminal to make the changes stick permanently:
grub-mkconfig -o /boot/grub/grub.cfg
After this, we need to ensure that the vfio driver gets to actually claim our PCI devices before, let’s say, the nouveau or nvidia GPU driver does.
To ensure this is always the case, we now open the /etc/mkinitcpio.conf file as root in a text editor of your choice and modify the MODULES variable as shown below:
MODULES="vfio vfio_iommu_type1 vfio_pci vfio_virqfd"
Next create a new file named /etc/modprobe.d/vfio.conf and add the following content, again, substituting the ids for your own:
options vfio-pci ids=10de:1187,10de:0e0a
And again, as before, we need to execute another command inside a root terminal to make it stick permanently:
mkinitcpio -p linux-vfio
Now, finally, it’s time to reboot into our shiny new vfio-compatible kernel.
Installing qemu and all that comes with it:
By now you should be back inside your system, this time around in your shiny new vfio-compatible kernel.
Open yet another root terminal and execute the following command to install the essentials required for your Windows Virtual Machine:
pacman -S qemu libvirt virt-manager rpmextract
Next download the latest RPM package of the EDK2 UEFI qemu BIOS from here and unpack it; the rpmextract package installed earlier provides the rpmextract.sh script for exactly this.
Then, from inside the unpacked directory, execute the following command inside a root terminal to install the EDK2 UEFI qemu BIOS into your system:
cp -R usr/share/* /usr/share
Next, open /etc/libvirt/qemu.conf as root in a text editor of your choice and append the following lines to the file:
nvram = [
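The nvram entry is truncated above. Assuming the RPM unpacked the OVMF images into /usr/share/edk2.git (the layout the upstream EDK2 nightly RPMs used; verify the paths on your system after the cp step), the complete entry typically looks like this:

```
nvram = [
   "/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd:/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd"
]
```

The colon separates the read-only firmware image from the per-VM variable store template.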
And finally, restart the virtualization daemon and make it auto-start on system-boot by executing the following commands in a root terminal:
systemctl restart libvirtd.service
systemctl enable libvirtd.service
Creating our Windows Virtual Machine:
Open Virtual Machine Management, aka. virt-manager and create a new Windows Virtual Machine.
- Pick local installation medium as your source and click next
- Pick your Windows installation disk ISO (or physical drive) and click next
- Decide on how much RAM and how many CPU cores you wish to give your Virtual Machine and click next
- Create a new disk image for your Virtual Machine (make sure it’s big enough for whatever you need it to do) and click next
- Check “Customize configuration before install” and click finish
- Download the VirtIO driver ISO from here, create a new (second) IDE CDROM device in your Virtual Machine, and mount it (you will need it for the Windows installation)
- Add two PCI Host Devices to your Virtual Machine and pass both of your GPU devices through (you can identify them by the PCI bus port numbers we wrote down earlier); make sure you disable the ROM BAR option on them though (we will re-enable this option later)
- Remove the IDE hard drive from your Virtual Machine, and re-add your previously created disk image as a SCSI disk (this will net you big performance improvements down the road)
- Change your Virtual Machine’s SCSI Controller Model to VirtIO SCSI
- Change your Virtual Machine’s NIC Device Model to virtio (make sure you leave it as NAT)
- Ensure your Soundcard model is set to ich6
- Next, boot up your Virtual Machine and enter the Windows setup.
Here are a few things you should mind while handling the Windows setup:
- Once you reach the harddrive screen of the setup, click the driver load button and load the SCSI VirtIO driver from the previously mounted VirtIO CDROM, this will make your harddrive accessible to the Windows setup and allow you to continue your installation.
- Progress as you normally would through the installer, but ensure you disable the automatic activation on first login, as this will fail in the most dramatic fashion you can imagine.
- Once you reach the user account creation screen, enable the automatic login for this user, as it will make the setup of Synergy (your mouse and keyboard driver) a lot more hassle-free.
By now you should be done with the Windows Installation and sitting in front of the Windows Desktop:
- Open the Device Manager, and install the NIC drivers from the VirtIO CDROM to get your network card working.
- Download the NVIDIA driver installer for your GPU, as well as 7zip, and use 7zip to extract the driver installer .exe file.
- Back in the Device Manager, find your GPU (it should have an exclamation mark next to it and be labeled VGA Display Adapter), and manually install the GPU driver from the folder you extracted from the installer .exe via 7zip.
- Download Synergy for Windows (Version 1.5.0 is freeware) and install it in your Virtual Machine.
Open Synergy, check the Elevated option and set the Server IP to 192.168.122.1 (you will see why in a bit).
- Open the Windows Control Panel and disable UAC (as it interferes with Synergy)
- Shutdown your Virtual Machine and open up its Information-Tab again, as we need to change a few more settings.
By now you should be in the Information-Tab of your Virtual Machine, inside the Virtual Machine Manager again.
- In your two PCI passthrough devices, re-enable the ROM BAR option.
- You can remove the CDROM devices now if you wish.
- Remove all Mouse, Tablet and Keyboard devices.
- Remove all Video and Graphics adapters.
- Exit the Virtual Machine Manager, we need to change a few more things on your host system before we can continue.
Now we are ready to give your host system its final set of changes to make your Windows Virtual Machine work to its fullest.
We will start by fixing the sound of your Windows Virtual Machine.
This is a known libvirt bug that affects GPU passthrough and VNC-based Virtual Machines.
First, we need to change your local user’s PulseAudio server settings to include a TCP-based listener (which your Virtual Machine will use to output sound), so follow closely:
- Copy /etc/pulse/default.pa to ~/.config/pulse/default.pa
- Open ~/.config/pulse/default.pa with a text editor of your choice and add the following line to the bottom of the file:
load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1
Next up, we need to make our Virtual Machine use this new listener. To do so, follow the instructions below to the letter:
- Open a root terminal and execute the following command, substituting the Virtual Machine name with the one you have given to your Virtual Machine:
virsh edit machinename
- Executing the command above should have opened the Virtual Machine configuration file in nano, a command line text editor.
You can use the arrow keys to navigate this configuration file, your keyboard for typing in it, and CTRL + O for saving your changes.
CTRL + X exits the editor.
- In the very first line of the configuration file, extend the domain tag with the xmlns:qemu attribute, as shown:
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
- Further down in the file, add the following highlighted tags after the currentMemory tag:
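Given the hugepages=2048 kernel option configured earlier, the tags to add after currentMemory are presumably libvirt's huge-page memory backing block; a sketch, to be verified against your own setup:

```
<memoryBacking>
  <hugepages/>
</memoryBacking>
```

This makes qemu allocate the guest's RAM from the huge-page pool reserved at boot.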
- Even further down in the file, inside the features tag, add the following highlighted tags:
- Finally, right at the bottom of the file, but before the closing domain tag, add the following tags:
<qemu:commandline>
  <qemu:env name='PULSE_SERVER' value='127.0.0.1:4713'/>
  <qemu:env name='QEMU_PA_SAMPLES' value='4096'/>
  <qemu:env name='QEMU_AUDIO_DRV' value='pa'/>
</qemu:commandline>
- Now that you have done all the changes required, press CTRL+O, then the Enter key, and finally CTRL+X to exit the editor.
Next, we will prepare ourselves a proper keyboard and mouse driver for our Virtual Machine.
Open a root terminal and execute the following command to install the Synergy server on your system:
pacman -S synergy
Next, create a new text file named .synergy.conf in your home directory, and copy the following text into it:
section: screens
	Light:
		halfDuplexCapsLock = false
		halfDuplexNumLock = false
		halfDuplexScrollLock = false
		xtestIsXineramaUnaware = false
		switchCorners = none
		switchCornerSize = 0
	Darkness:
		halfDuplexCapsLock = false
		halfDuplexNumLock = false
		halfDuplexScrollLock = false
		xtestIsXineramaUnaware = false
		switchCorners = none
		switchCornerSize = 0
end

section: links
	Light:
		right = Darkness
	Darkness:
		left = Light
end

section: options
	relativeMouseMoves = false
	screenSaverSync = true
	win32KeepForeground = false
	switchCorners = none
	switchCornerSize = 0
end
Next, create a new text file named .xprofile in your home directory and copy the following text into it:
synergys --daemon --config ~/.synergy.conf
Next, open /etc/libvirt/qemu/networks/default.xml as root in a text editor of your choice and make the following changes inside the dhcp tags, substituting the placeholder with your actual Virtual Machine NIC MAC address (you can look up the MAC in the Virtual Machine Manager by going into the VM’s Information Tab):
<range start='192.168.122.2' end='192.168.122.254'/>
<host mac='yourvirtualmachinenicmac' name='Windows' ip='192.168.122.2'/>
And finally, to ensure all the services realize things have changed, reboot your computer once.
Using our Virtual Machine:
At this point, we are now done setting up our Virtual Machine, and hopefully, things should start falling into place now.
Now, when you start your Virtual Machine, no window will pop up on your Linux desktop; instead, your Virtual Machine outputs directly to whatever monitor you physically wired to your dedicated GPU.
Go ahead, try it: Switch your Monitor’s input to whatever is currently wired to the dedicated GPU and you should now be seeing Windows happily booting up.
To control Windows, swipe your mouse to the left, outside of your Linux monitor’s boundaries, and Synergy should immediately move your mouse and keyboard input into Windows, allowing you to work (or play) on it as you would do on a regular Windows installation.
At this point, your Windows Virtual Machine should be fully operational, and you can install whatever you want into it, utilizing the full potential of your dedicated GPU.
Another thing you might want to set up at this point is one, or multiple, samba / cifs shares on your host (Linux) OS, so that your Windows guest can access your host’s files.
This is of course optional, but I’ve found it to be quite useful in day-to-day tasks.
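A minimal sketch of such a share in /etc/samba/smb.conf; the share name, path, and user below are examples, not from the original setup:

```
[hostfiles]
   ; Example share exposing a host directory to the Windows guest
   path = /home/youruser/shared
   valid users = youruser
   read only = no
   browseable = yes
```

From the Windows guest, the share should then be reachable at \\192.168.122.1\hostfiles (the host's address on the default libvirt NAT network), after enabling the user with smbpasswd -a youruser and starting the smb service.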