This was supposed to be a full write-up on getting a Windows 10 gaming VM running on Ubuntu, with a dedicated USB card for SteamVR and network storage. I never finished writing up the steps, but some of the links and settings may be useful.
Tutorials & Resources
These fantastic tutorials:
- https://davidyat.es/2016/09/08/gpu-passthrough/
- https://medium.com/@calerogers/gpu-virtualization-with-kvm-qemu-63ca98a6a172
- the always-helpful Arch wiki: https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF
Windows Setup
Installing Windows was relatively easy, although it does require additional drivers for network and storage. While it is possible to set up networking without extra drivers by using the e1000 device model, the VirtIO storage driver from the virtio-win ISO is required to install onto a paravirtualized disk.
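The ISO can be attached to the VM as a second CD-ROM so the installer can browse to it. A rough sketch, assuming the Fedora project's stable virtio-win download URL and a hypothetical domain name of win10:
# fetch the signed virtio-win driver ISO (stable build published by the Fedora project)
wget https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso
# attach it as a read-only SATA CD-ROM; 'win10' and the 'sdb' target are placeholders
virsh attach-disk win10 "$PWD/virtio-win.iso" sdb --targetbus sata --type cdrom --mode readonly --config
During setup, click "Load driver" at the disk-selection screen and point it at the viostor folder on that disc; the NetKVM folder has the network driver if you skip the e1000 fallback.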
Once the drivers have been installed, Windows setup should continue normally, reboot a few times, and eventually present the login screen.
KVM Domain
For the most part this is a normal Windows 10 VM using UEFI firmware, but there are a few settings and optimizations that have proven helpful.
CPU Mode & Topology
Setting the correct CPU mode and topology is critical for Windows to work correctly, and it may not install with the wrong CPU type set. If you have hyperthreading enabled and prefer it over security, that should be reflected in the <topology> element, with both vCPUs of each core pinned to the appropriate host threads.
<cpu mode='host-passthrough' check='none'>
<topology sockets='1' cores='4' threads='2'/>
</cpu>
Pinning prevents the VM's vCPUs from migrating between host cores and ensures both the physical core and its hyperthread are usable by the VM, which did improve performance slightly:
<vcpu placement='static'>8</vcpu>
<cputune>
<quota>-1</quota>
<vcpupin vcpu='0' cpuset='2'/>
<vcpupin vcpu='1' cpuset='3'/>
<vcpupin vcpu='2' cpuset='4'/>
<vcpupin vcpu='3' cpuset='5'/>
<vcpupin vcpu='4' cpuset='8'/>
<vcpupin vcpu='5' cpuset='9'/>
<vcpupin vcpu='6' cpuset='10'/>
<vcpupin vcpu='7' cpuset='11'/>
<emulatorpin cpuset='0-1'/>
</cputune>
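The cpuset numbers above assume that (2,3), (4,5), (8,9), and (10,11) are each a physical core plus its hyperthread; the numbering varies between CPUs, so it is worth checking the host topology before copying the pinning:
# the CORE column shows which logical CPUs are siblings on the same physical core
lscpu --extended
# or query a single CPU directly, e.g. both siblings of logical CPU 2
cat /sys/devices/system/cpu/cpu2/topology/thread_siblings_list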
It is possible to isolate the cores completely and prevent the host from using them, but I haven't needed to configure that yet and rarely see any slowness within the VM. Closing CPU- and graphics-intensive programs on the host, like Cura, has been good enough.
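For reference, the usual way to take that further is the isolcpus family of kernel parameters, which keeps the host scheduler off the pinned cores entirely. A sketch for the 2-5 and 8-11 layout above on a GRUB-based Ubuntu install, untested on my setup:
# in /etc/default/grub, extend the kernel command line, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="... isolcpus=2-5,8-11 nohz_full=2-5,8-11 rcu_nocbs=2-5,8-11"
# then regenerate the GRUB config and reboot
sudo update-grub && sudo reboot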
Finally, huge pages and matching current memory to maximum memory force the host to allocate the guest's memory up front and prevent the guest from ballooning. Turning these on did not improve the high end of my benchmarks, but it did narrow the spread and keep performance more consistent.
<memory unit='KiB'>16777216</memory>
<currentMemory unit='KiB'>16777216</currentMemory>
<memoryBacking>
<hugepages/>
</memoryBacking>
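The hugepages element only helps if the host has enough huge pages reserved to back the guest's 16 GiB. A sketch, assuming the default 2 MiB huge page size (16 GiB / 2 MiB = 8192 pages):
# reserve 8192 x 2 MiB huge pages now and persist the setting across reboots
sudo sysctl -w vm.nr_hugepages=8192
echo 'vm.nr_hugepages = 8192' | sudo tee /etc/sysctl.d/90-hugepages.conf
# confirm HugePages_Free is large enough before starting the VM
grep Huge /proc/meminfo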
Network Storage
Rather than install multiple hard drives in each machine and worry about maintaining numerous small RAID arrays, I decided to add an Optane AIC (PCIe card) as a cache to my existing Ceph cluster and use network storage for both the root (C:) and data (D:) drives in my Windows VM. Since then, I've experimented with putting the C: drive on a local NVMe drive and LVM volume, and boot times are a few seconds faster - neither setup is painful.
KVM has good support for Ceph through libvirt and impressive performance; QEMU uses the userspace librbd client, so it works regardless of kernel version. Network drives are very similar to regular device='disk' drives and use the same VirtIO driver:
<!-- local C drive, LVM on NVMe -->
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none' io='threads'/>
<source dev='/dev/virtual-vg/win10_data'/>
<backingStore/>
<target dev='vdd' bus='virtio'/>
<boot order='2'/>
<alias name='virtio-disk3'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0d' function='0x0'/>
</disk>
<!-- network D drive, RBD over TCP -->
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='none' io='threads'/>
<auth username='desktop-kvm'>
<secret type='ceph' uuid='...'/>
</auth>
<source protocol='rbd' name='home-rust/win10-steam'>
<host name='scylla.home.holdmyran.ch' port='6789'/>
<host name='arachne.home.holdmyran.ch' port='6789'/>
<host name='harpy.home.holdmyran.ch' port='6789'/>
</source>
<target dev='vde' bus='virtio'/>
<alias name='virtio-disk4'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0e' function='0x0'/>
</disk>
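The uuid in the <auth> block refers to a libvirt secret holding the cephx key for the desktop-kvm user. A sketch of wiring that up, assuming the client and pool names from the XML above and a Ceph release new enough for the rbd profiles:
# create (or fetch) a cephx user limited to the VM's pool
ceph auth get-or-create client.desktop-kvm mon 'profile rbd' osd 'profile rbd pool=home-rust'
# define an empty ceph-type secret in libvirt; note the UUID it prints
cat > ceph-secret.xml <<'EOF'
<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.desktop-kvm key</name>
  </usage>
</secret>
EOF
virsh secret-define ceph-secret.xml
# store the key in that secret; $SECRET_UUID is the UUID printed by secret-define,
# and the same value goes in <secret type='ceph' uuid='...'/>
virsh secret-set-value --secret "$SECRET_UUID" --base64 "$(ceph auth get-key client.desktop-kvm)"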
Latency was good on my existing wired network, but 1GbE of bandwidth was very limiting. SATA runs at up to 6Gbit before encoding overhead, so load times were definitely longer, and using the VM at all would cause Hulu to start skipping. I ended up upgrading to this 10GbE Aquantia NIC, which performs much better.
Some of my largest games - Forza Motorsport 7 and Horizon 4 - handle this perfectly and load well. There is no visible pop-in with open-world games like Ghost Recon Wildlands; the only game with any noticeable change is Rainbow Six Siege, which takes forever to load levels the first time. Once they are cached on the Optane card, even Siege is playable.
USB Cards
While it is possible to use a VM through the Spice console, the input lag and host-to-guest resolution differences make gaming very difficult. Using Remmina to remote desktop into the VM helped, but the display lag was too much.
Eventually, I purchased this IOGEAR GUS432 KVM (4 input ports, 2 output) and attached my mouse, keyboard, and headphones (leaving one spare port). It works reliably with no noticeable lag; the only downside is that it can be hard to predict which output port will be active, and if you don’t check the light before booting up the VM, the mouse and keyboard may suddenly switch. The USB card is active on the host until the VM takes over, which can cause lights to blink a few times.
It took a while to find a USB card that would attach to the VM correctly. The first few caused the VM to freeze or panic, which turned out to be related to how they report PCIe errors and required the pci=noaer boot parameter.
- QICENT PCI-E to USB 3.1 2-Port Hub Controller Adapter
- Silverstone Tek PCI Express Card with 2X USB 3.0 External Ports and Internal 19 Pin Dual Port Connector
- Mailiya PCI-E to USB 3.0 4 Port PCI Express Expansion Card
- StarTech.com 2 Port PCI Express (PCIe) SuperSpeed USB 3.0 Card
The Mailiya and Silverstone cards both use the NEC uPD720201 chipset, while the Qicent and Startech both use the ASM1142 chipset. I wasn’t able to see much of a difference between chipsets, and ended up using the Mailiya thanks to the 4 external ports. If you want to plug front-panel USB ports into the VM, you may want a card with internal ports.
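Before handing a card to the VM, it is worth confirming which chipset it actually reports and that it sits in its own IOMMU group, since a shared group drags other devices into the passthrough. A quick check, using the standard IOMMU-group loop from the Arch wiki linked above:
# find the controller and its vendor:device IDs
lspci -nn | grep -i usb
# list every IOMMU group; the card should not share a group with devices the host keeps
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done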
While the Mailiya and QICENT worked out of the box, the Silverstone required pci=noaer. That seems odd to me, considering it's essentially the same hardware as the Mailiya, but it did work with that boot parameter.
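pci=noaer disables PCIe Advanced Error Reporting on the host, so the kernel stops reacting to whatever the card is reporting. A sketch of checking for the problem and applying the parameter, assuming the same GRUB-based setup as the isolation example above:
# look for AER messages from the card or its root port
sudo dmesg | grep -iE 'aer|pcieport'
# if the log is full of them, append pci=noaer to GRUB_CMDLINE_LINUX_DEFAULT
# in /etc/default/grub, then regenerate the config and reboot
sudo update-grub && sudo reboot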
I tested each card with a mouse and keyboard (low bandwidth and power draw), then a Thrustmaster steering wheel and shifter (unknown bandwidth and power, but very sensitive to latency), and finally my Samsung Odyssey HMD. The Odyssey is a Windows Mixed Reality headset, so I tested both Windows Store and SteamVR games. All four cards were able to run the headset, and I didn’t feel any lag in the steering wheel or headset.
Issues
Coffee Lake Graphics & VFIO
Older kernels, before 4.19, needed the Intel alpha driver support (i915.alpha_support=1) enabled for Coffee Lake graphics. While the AMD Vega card worked on first boot, the onboard Intel graphics did not, and caused a black screen when I tried to make integrated graphics the primary choice in the motherboard UEFI settings. Later kernels enable Coffee Lake graphics by default.
Vega 64 Reset Bug
The Vega 64 cards have a bug somewhere in their reset routine, which prevents them from correctly restarting when the VM does. Restarting the host will reset the card, but a late-night Windows update can shut down the VM and leave the card fan running at 100% until you have a chance to restart the host.
There is a partial workaround for the issue, documented in this level1techs thread, that allows the card to reset, but it requires a kernel patch, and there are rumors of a proper firmware fix from AMD. Until then, I occasionally have to restart the host for the VM's sake.
KVM Hangs with Ceph Volumes
Something about the librbd driver causes QEMU to hang for a moment, then reset the domain, if it cannot mount an RBD volume. There was no indication that the problem was related to storage until I turned on detailed logging in /etc/libvirt/libvirtd.conf:
log_level = 1
log_filters="1:qemu 3:remote 4:event 3:util.json 3:rpc"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
These settings are for versions of libvirt prior to 4.4.0, which includes the rather old 4.0.0 that Ubuntu 18.04 ships. The issue seems to be on the Ceph side, where an OSD was unable to listen on its port; the VM hung trying to send data to that particular OSD, which made it confusing to trace. As long as the storage cluster is healthy, the VM should boot up happily.
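Since the hang only shows up when the cluster is unhealthy, a quick pre-flight check before starting the domain saves some head-scratching. A sketch, assuming the ceph CLI and a usable keyring on the KVM host, with win10 as a placeholder domain name:
# make sure the cluster is serving I/O before booting the VM
ceph health detail             # anything other than HEALTH_OK deserves a look
ceph osd tree | grep -i down   # an OSD that is down or unreachable is the failure mode above
virsh start win10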