Setting up Intel GVT-g for QEMU/Libvirt VMs

Introduction

Intel GVT-g is a mediated passthrough technology that lets the host and guest VMs share the Intel iGPU without full device passthrough. It is similar to NVIDIA’s vGPU, which is officially available only for some high-end professional cards, even though there is a software unlock for consumer-grade GPUs. AMD offers similar technology for its GPUs as well.

If full device passthrough is what you want, check out Intel GVT-d instead.

Prerequisites

You will have to create a virtual GPU on the host machine, and the guest machine will see it as a “regular” GPU. If you have multiple guest VMs that want to use this feature concurrently, you’ll have to create multiple virtual GPUs on the host machine.

You will need the following:

  1. An Intel iGPU, of course. We will also need its PCI address, which can be found via the lspci tool.

    lspci | grep VGA
    

    On my machine the output looks like

    00:02.0 VGA compatible controller: Intel Corporation UHD Graphics 620 (Whiskey Lake) (rev 02)
    

    The 00:02.0 is the PCI address on my host machine for the Intel UHD 620.

    NOTE: If you’re using a hybrid laptop, i.e. one that also has a dedicated AMD/NVIDIA GPU, you have to disable that GPU and make sure the Intel iGPU is the primary one; otherwise you might run into various problems. For an NVIDIA GPU, you can use prime-select intel to make sure the Intel iGPU is chosen.

  2. libvirt at least 4.6.0; you can verify your version with libvirtd -V. qemu at least 4.0.0; you can verify it with qemu-system-x86_64 --version.

  3. Enable IOMMU. Please refer to previous posts about setting up KVM.

  4. Add the kernel parameter i915.enable_gvt=1 to the GRUB_CMDLINE_LINUX line of your /etc/default/grub file. Additional parameters, such as i915.enable_guc=0 and i915.enable_fbc=0, can also be added to avoid known issues. Then update the bootloader via sudo update-grub.

  5. Enable the related kernel modules: create a file called /etc/modules-load.d/kvm-gvt-g.conf and place the following content in it

    kvmgt
    vfio-iommu-type1
    vfio-mdev
    

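For reference, after step 4 the GRUB_CMDLINE_LINUX line in /etc/default/grub might look like the following sketch (the quiet splash and intel_iommu=on entries are illustrative placeholders; keep whatever parameters your line already has):

```shell
GRUB_CMDLINE_LINUX="quiet splash intel_iommu=on i915.enable_gvt=1 i915.enable_guc=0 i915.enable_fbc=0"
```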
After these settings are done, reboot your machine for them to take effect. Now we can check whether GVT-g is working.

Check GVT-g

Go to the device directory in /sys

cd /sys/bus/pci/devices/0000:00:02.0

Recall that we found earlier that my Intel UHD 620’s PCI address is 00:02.0. In this directory there should be an mdev_supported_types folder, which contains the types of vGPU that are supported by your physical iGPU.

ls -l mdev_supported_types 

The output will contain entries like i915-GVTg_Vx_y, where x represents the generation of your physical iGPU and y distinguishes the different vGPU profiles.

vGPU type         Video memory       Max resolution
i915-GVTg_V5_1    <512MB, 2048MB>    1920x1200
i915-GVTg_V5_2    <256MB, 1024MB>    1920x1200
i915-GVTg_V5_4    <128MB, 512MB>     1920x1200
i915-GVTg_V5_8    <64MB, 384MB>      1024x768
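To see which profiles your host actually offers, and how many instances of each can still be created, you can read the sysfs attributes directly. A minimal sketch, assuming the 00:02.0 PCI address found above (adjust it to your iGPU); it simply reports if GVT-g is not active:

```shell
#!/bin/sh
# List each supported vGPU profile with its remaining instance count.
GVT_PCI=0000:00:02.0
TYPES_DIR="/sys/bus/pci/devices/$GVT_PCI/mdev_supported_types"
if [ -d "$TYPES_DIR" ]; then
    for t in "$TYPES_DIR"/*; do
        # 'available_instances' counts how many more vGPUs of this type fit
        printf '%s: %s instance(s) left\n' \
            "$(basename "$t")" "$(cat "$t/available_instances")"
    done
else
    echo "GVT-g not active: $TYPES_DIR does not exist"
fi
```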

Create a virtual GPU

First we need to generate a UUID for the virtual GPU. We can use the tool uuidgen. If you want to generate multiple UUIDs at once, you can use the tool uuid (uuid -n 3 for generating 3 UUIDs).

uuidgen

My output is

cb013964-c528-4197-adb4-c5f9777f56b2

We denote it as GVT_GUID, and we can create a virtual GPU with this UUID by

echo ${GVT_GUID} | sudo tee mdev_supported_types/i915-GVTg_V5_4/create

Here you can see that I chose the V5_4 profile for the vGPU. The created vGPU lives in the devices folder of the iGPU’s device folder (i.e., our current directory /sys/bus/pci/devices/0000:00:02.0). There is also a soft link in the devices folder of the chosen vGPU profile. If you want to remove a vGPU, you can do

echo 1 > /sys/bus/pci/devices/$GVT_PCI/$GVT_GUID/remove

NOTE: The virtual GPU will be removed after a system shutdown. Therefore you should consider some way to persist it if you will use it regularly.
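The create step above can be wrapped in a small script that also verifies the result. A sketch, assuming the V5_4 profile and the 00:02.0 address from earlier; it needs root and an active GVT-g host, and simply reports otherwise:

```shell
#!/bin/sh
# Generate a UUID, create a vGPU from it, then confirm its sysfs node exists.
GVT_PCI=0000:00:02.0
GVT_GUID=$(uuidgen)
CREATE="/sys/bus/pci/devices/$GVT_PCI/mdev_supported_types/i915-GVTg_V5_4/create"
if [ -w "$CREATE" ]; then
    echo "$GVT_GUID" > "$CREATE"
    # each vGPU gets its own directory under the iGPU's device folder
    ls -d "/sys/bus/pci/devices/$GVT_PCI/$GVT_GUID"
else
    echo "skipped: $CREATE is not writable (need root and an active GVT-g host)"
fi
```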

Add vGPU to virtual machine

Back up your current configuration

After setting up a working VM (see previous notes), you can add the vGPU to it. Before that, you can dump and back up the VM’s current XML configuration with

virsh dumpxml VM_Name > VM_Name.xml

After the modification, you can import the XML file with

virsh define VM_Name.xml

Assign vGPU

Add the following hostdev into the <devices> part of your VM’s configuration:

...
    <hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='off'>
      <source>
        <address uuid='GVT_GUID'/>
      </source>
    </hostdev>
...

where GVT_GUID is just the UUID of your vGPU. If you want your VM to see the vGPU at a specific location, add an <address> entry after the <source></source> part, something like

<address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>

Note that in this default setting, the display attribute is set to off.

Get display content from the vGPU of a Windows VM

Previously, we could use either the VNC server or the SPICE server in virtual machine manager. But now it is preferable to use the VNC server, since the default SPICE server setting might freeze the image output when a vGPU is present. The default VNC server setting with a vGPU present has some quirks too, but at least it works.

Basic setting

First of all, install the driver. Go to https://downloadcenter.intel.com, check the graphics section and download the DCH driver.

Run the installer and reboot.

Then, in the device manager of your Windows VM, you will find two video adapters: one is the QEMU emulated adapter, while the other is the Intel vGPU. You can see the guest desktop both via QEMU and via a remote protocol (e.g. RDP): QEMU displays the emulated adapter’s output, while the remote protocol displays the Intel GFX driver’s output.

NOTE: If you were originally using the SPICE server for display, you will have to modify the XML. Originally, it would look like

<graphics type="spice" autoport="yes">
  <listen type="address"/>
</graphics>

Enable GL by changing it to

<graphics type='spice'>
  <listen type='none'/>
  <gl enable='yes'/>
</graphics>

There is an optional attribute rendernode in the gl tag to specify the renderer, e.g.:

<gl enable='yes' rendernode='/dev/dri/by-path/pci-0000:00:02.0-render'/>

where 00:02.0 is the PCI address of the iGPU on your host machine.

Also, change the display attribute in vGPU’s <hostdev> to on.

Finally, remove all <graphics> and <video> devices, except for the SPICE one we just modified. The final configuration would look like

...
    <graphics type='spice'>
      <listen type='none'/>
      <gl enable='yes' rendernode='/dev/dri/by-path/pci-0000:00:02.0-render'/>
    </graphics>
    <video>
      <model type='none'/>
    </video>
...

But due to an issue with spice-gtk, the configuration might need to differ. Another possible configuration is to use egl-headless mode:

...
    <graphics type='spice' autoport='yes'>
      <listen type='address'/>
    </graphics>
    <graphics type='egl-headless'>
      <gl rendernode='/dev/dri/by-path/pci-0000:00:02.0-render'/>
    </graphics>
    <video>
      <model type='none'/>
    </video>
...

Note that there’s also a gl tag to force a specific renderer, even an NVIDIA one. Also, one can change the type to vnc to use VNC instead.

NOTE: There’s no native display support in GVT-g; users have to use remote protocols (e.g. Looking Glass, Windows RDP) to show the guest desktop. The Intel documentation suggests using the QEMU emulated VGA card as the primary display and the KVMGT vGPU as the 2nd VGA card. This ensures that you always get video output in the VNC/SPICE view, but you might run into some problems. In my VM, the SPICE server view freezes but the VNC server works. Some suggest disabling the emulated GFX card in “Device Manager” once you have made sure the Intel GFX card is working. This causes you to lose some early boot messages, because the Intel driver has not been loaded yet at that point.

Use DMA-BUF (works with Legacy BIOS by default)

With the introduction of dma-buf, a new feature called “local display” is supported in GVT-g.

First, modify the XML schema of the virtual machine definition so that we can use QEMU-specific elements later. Change

<domain type='kvm'>

to

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

Then add this configuration to the end of the <domain> element, i.e. insert this text right above the closing </domain> tag:

  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.x-igd-opregion=on'/>
  </qemu:commandline>

Use DMA-BUF (with UEFI/OVMF)

By default, dma-buf only works with legacy BIOS. To make it work with UEFI/OVMF, one has to extract the OpROM from the kernel patch and feed it to QEMU as an override.

NOTE: In the Windows start menu, type “system information” to check your boot type (Legacy or UEFI). In your VM’s XML file, check the <os></os> part, specifically the type attribute of the loader tag: it’s rom for BIOS and pflash for UEFI.

Download vbios_gvt_uefi.rom and place it somewhere world-accessible (we will use /usr/share/vgabios as an example). Then edit the virtual machine definition, appending this configuration to the <qemu:commandline> element we added earlier:

...
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.romfile=/usr/share/vgabios/vbios_gvt_uefi.rom'/>
...

NOTE: some directories other than /usr/share/vgabios/ (such as /) might result in an error complaining that the vBIOS file cannot be found. This is due to the AppArmor settings of libvirt, which prevent it from loading firmware from arbitrary directories.

NOTE: another modified vBIOS ROM, provided by HouQiming in this GitHub issue and also available on that repo’s release pages, can be used here as well. It offers boot display even without enabling RAMFB.

NOTE: The modified vBIOS hard-codes some register addresses specific to Skylake CPUs, which can be dangerous in GVT-d with a misspecified model, but it should be safe in GVT-g cases.

Enable RAMFB display

This should be combined with the above DMA-BUF configuration in order to also display everything that happens before the guest Intel driver is loaded (i.e. POST, the firmware interface, and the guest initialization).

Add this configuration to the end of the <domain> element, i.e. insert this text right above the closing </domain> tag:

 <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.ramfb=on'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.driver=vfio-pci-nohotplug'/>
 </qemu:commandline>

Optional tweaks / Troubleshooting / Workarounds

No display

If your virtual machine is not displaying anything when using the RAMFB display, try adding the following options to the existing <qemu:commandline> tag:

  <qemu:commandline>
    <qemu:arg value="-set"/>
    <qemu:arg value="device.hostdev0.display=on"/>
  </qemu:commandline>

Garbled graphics

If your virtual machine is displaying artifacts when the mouse enters the virtual machine screen, the following workaround might work:

AFTER modifying the XML schema, insert this right above the closing </domain> tag, taking care to add to the existing <qemu:commandline> tag if one exists:

  <qemu:commandline>
    <qemu:env name="MESA_LOADER_DRIVER_OVERRIDE" value="i965"/>
  </qemu:commandline>

One might also try

  <qemu:commandline>
    <qemu:env name="INTEL_DEBUG" value="norbc"/>
  </qemu:commandline>

if one wants to stay with the iris driver.

Changing the display resolution of virtual GPU

The display resolution of the vGPU is, by default, the maximum resolution the vGPU profile is capable of. The display content will be scaled to this resolution by the vGPU regardless of what resolution is set by the guest OS, which can produce bad-quality pictures in the viewer.

To change the display resolution, add this configuration into the <qemu:commandline> tag:

...
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.xres=1440'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.yres=900'/>
...

Automatically create vGPU when VM starts via libvirt qemu hook

With libvirt, a QEMU hook can be used to automatically create the virtual GPU when the machine is started, and to remove it when the machine is stopped. Replace the variables with the values you found above and DOMAIN with the name of the machine. The file is /etc/libvirt/hooks/qemu:

#!/bin/bash
GVT_PCI=<GVT_PCI>
GVT_GUID=<GVT_GUID>
MDEV_TYPE=<GVT_TYPE>
DOMAIN=<DOMAIN name>
if [ $# -ge 3 ]; then
    if [ "$1" = "$DOMAIN" ] && [ "$2" = "prepare" ] && [ "$3" = "begin" ]; then
        echo "$GVT_GUID" > "/sys/bus/pci/devices/$GVT_PCI/mdev_supported_types/$MDEV_TYPE/create"
    elif [ "$1" = "$DOMAIN" ] && [ "$2" = "release" ] && [ "$3" = "end" ]; then
        echo 1 > "/sys/bus/pci/devices/$GVT_PCI/$GVT_GUID/remove"
    fi
fi

Do not forget to make the file executable and to quote each variable value e.g. GVT_PCI="0000:00:02.0".

NOTE: If you use libvirt user session, you need to tweak the script to use privilege elevation commands, such as pkexec(1) or a no-password sudo.

NOTE: The XML of the domain is fed to the hook script through stdin. You can use xmllint and an XPath expression to extract GVT_GUID from stdin, e.g.:

GVT_GUID="$(xmllint --xpath 'string(/domain/devices/hostdev[@type="mdev"][@display="on"]/source/address/@uuid)' -)"
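To see what that XPath actually extracts, you can run it against a minimal, hand-written domain fragment (the uuid below is just a sample value, and this demo assumes xmllint is installed):

```shell
#!/bin/sh
# Demo: extract the mdev uuid the same way the hook would, from sample XML.
xml='<domain><devices>
  <hostdev mode="subsystem" type="mdev" display="on">
    <source><address uuid="cb013964-c528-4197-adb4-c5f9777f56b2"/></source>
  </hostdev>
</devices></domain>'
GVT_GUID=$(printf '%s' "$xml" | xmllint --xpath \
    'string(/domain/devices/hostdev[@type="mdev"][@display="on"]/source/address/@uuid)' -)
echo "$GVT_GUID"
```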

Persist the vGPU creation at host boot up

Create a new file called /etc/systemd/system/setup-gvt.service and place the following content in it, replacing the UUID, PCI address, and vGPU profile with the values found for your system:

[Unit]
Description=Setup GVT

[Service]
Type=oneshot
ExecStart=/usr/bin/bash -c 'echo cb33ec6d-ad44-4702-b80f-c176f56afea1 > /sys/devices/pci0000:00/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_8/create'

[Install]
WantedBy=multi-user.target

Make sure it starts automatically at boot with sudo systemctl enable setup-gvt.

Work with Synology/Xpenology in DS918+

You can check the device at /dev/dri/. Some reports indicate that, for Plex/Emby hardware transcoding to work properly, one has to disable the emulated GPU as before. Also, for facial recognition and video transcoding in Synology’s VideoStation, a valid SN is always necessary.

Reference

Arch wiki about Intel GVT-g

Intel’s Github wiki about GVTg setup

A useful blog post

A helper github repo

A post about using it in xpenology vm
