Setup VFIO
Step 1: Load VFIO Kernel Modules
VFIO (Virtual Function I/O) provides the framework for safe device passthrough to virtual machines. We will load the required modules for the current session and then ensure they load automatically on boot.
Important
This persistence configuration is critical. If VFIO bindings don't survive reboots, it's usually because this step was skipped.
Load Modules for current Session
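A minimal sketch of loading the core VFIO modules by hand (on recent kernels some of these are built into `vfio` itself, in which case `modprobe` is a harmless no-op):

```shell
sudo modprobe vfio
sudo modprobe vfio_pci
sudo modprobe vfio_iommu_type1
```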
Make modules load automatically on boot
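One way to persist this, assuming a systemd-based distribution that reads module lists from `/etc/modules-load.d/`:

```shell
# Written to /etc/modules-load.d/vfio.conf; systemd-modules-load reads it at boot
printf 'vfio\nvfio_pci\nvfio_iommu_type1\n' | sudo tee /etc/modules-load.d/vfio.conf
```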
Step 2: Identify GPU PCI Functions and Device IDs
Locate all PCI functions and their vendor:device IDs for your GPU.
List NVIDIA Devices with device IDs
List Devices:
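The listing below can be produced with `lspci`; the `-nn` flag prints the numeric `[vendor:device]` IDs alongside the names:

```shell
lspci -nn | grep -i nvidia
```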
Expected output (TU104/RTX 2080):
08:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU104 [GeForce RTX 2080] [10de:1e82] (rev a1)
08:00.1 Audio device [0403]: NVIDIA Corporation TU104 HD Audio Controller [10de:10f8] (rev a1)
08:00.2 USB controller [0c03]: NVIDIA Corporation TU104 USB 3.1 Host Controller [10de:1ad8] (rev a1)
08:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU104 USB Type-C UCSI Controller [10de:1ad9] (rev a1)
The device IDs are shown in brackets: [10de:1e82], [10de:10f8], etc.
Important
Remember to replace `08:00.x` with your actual addresses in all subsequent commands.
Verify current kernel driver
Verify current kernel driver:
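Verbose `lspci` output for a single function shows the driver currently bound to it; using the example address from above:

```shell
lspci -v -s 08:00.0
```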
Example output:
08:00.0 VGA compatible controller: NVIDIA Corporation TU104 [GeForce RTX 2080] (rev a1)
Subsystem: eVga.com. Corp. Device 2081
Flags: bus master, fast devsel, latency 0, IRQ 74, IOMMU group 14
Memory at fb000000 (32-bit, non-prefetchable) [size=16M]
Memory at d0000000 (64-bit, prefetchable) [size=256M]
Memory at e0000000 (64-bit, prefetchable) [size=32M]
I/O ports at d000 [size=128]
Expansion ROM at 000c0000 [disabled] [size=128K]
Capabilities: <access denied>
Kernel driver in use: nouveau
Kernel modules: nouveau
Driver status check:
- If "Kernel driver in use: nouveau" or another driver → Continue to Step 4
- If "Kernel driver in use: vfio-pci" → Skip to Step 6 (already configured)
Step 3: Identify the IOMMU Group
All devices in an IOMMU group must be passed through together or bound to VFIO.
Find the IOMMU Group Number
Find group number:
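One way to read the group number directly from sysfs (using the example address `08:00.0`; the final path component of the `iommu_group` symlink is the group number):

```shell
basename "$(readlink /sys/bus/pci/devices/0000:08:00.0/iommu_group)"
```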
In this example output, the GPU is in IOMMU group 14.
List all devices in the group
List all devices in the group:
for d in /sys/kernel/iommu_groups/<insert-group-number>/devices/*; do
    lspci -nnk -s "$(basename "$d")"
done
What to check:
- All four GPU functions should be in the same group
- Ideally, no unrelated devices share this group
- If unrelated devices are present, you may need ACS override patches (advanced topic)
If "Kernel driver in use: vfio-pci" for all devices → Skip to Step 6
Step 4: Bind All GPU Functions to vfio-pci
Manually bind each PCI function to the VFIO driver. Run these commands as root or with sudo.
Important
Critical: Replace `08:00.x` with your actual PCI addresses from Step 2.
VGA Core (08:00.0)
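A sketch of the bind sequence for the VGA function, using the example address (`sudo tee` is used because a plain `sudo echo >` redirect does not run as root):

```shell
# Unbind from the current driver (e.g. nouveau), if any
echo 0000:08:00.0 | sudo tee /sys/bus/pci/devices/0000:08:00.0/driver/unbind
# Tell the kernel to prefer vfio-pci for this device, then bind it
echo vfio-pci | sudo tee /sys/bus/pci/devices/0000:08:00.0/driver_override
echo 0000:08:00.0 | sudo tee /sys/bus/pci/drivers/vfio-pci/bind
```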
HD Audio (08:00.1)
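The same pattern for the audio function (typically bound to `snd_hda_intel` on the host):

```shell
echo 0000:08:00.1 | sudo tee /sys/bus/pci/devices/0000:08:00.1/driver/unbind
echo vfio-pci | sudo tee /sys/bus/pci/devices/0000:08:00.1/driver_override
echo 0000:08:00.1 | sudo tee /sys/bus/pci/drivers/vfio-pci/bind
```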
USB Controller (08:00.2)
Note: Unbind from xhci_hcd driver first if present.
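For the USB function, the unbind step releases it from `xhci_hcd` before the vfio-pci bind:

```shell
# Unbind from xhci_hcd first, if present
echo 0000:08:00.2 | sudo tee /sys/bus/pci/devices/0000:08:00.2/driver/unbind
echo vfio-pci | sudo tee /sys/bus/pci/devices/0000:08:00.2/driver_override
echo 0000:08:00.2 | sudo tee /sys/bus/pci/drivers/vfio-pci/bind
```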
USB-C / UCM-UCSI (08:00.3)
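And the same sequence for the Type-C/UCSI function:

```shell
echo 0000:08:00.3 | sudo tee /sys/bus/pci/devices/0000:08:00.3/driver/unbind
echo vfio-pci | sudo tee /sys/bus/pci/devices/0000:08:00.3/driver_override
echo 0000:08:00.3 | sudo tee /sys/bus/pci/drivers/vfio-pci/bind
```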
Step 5: Verify VFIO Binding
Confirm all GPU functions are now using the VFIO driver.
Re-check the IOMMU group
Check IOMMU group:
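Re-running the loop from Step 3 shows the bound driver for every device in the group (group 14 in this example):

```shell
for d in /sys/kernel/iommu_groups/14/devices/*; do
    lspci -nnk -s "$(basename "$d")"
done
```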
Each device must show `Kernel driver in use: vfio-pci`. If any device shows a different driver (e.g., xhci_hcd, nouveau):
- Manually unbind it: `echo 0000:XX:XX.X > /sys/bus/pci/drivers/<driver_name>/unbind`
- Then bind to vfio-pci: `echo 0000:XX:XX.X > /sys/bus/pci/drivers/vfio-pci/bind`
Step 6: Make VFIO Binding Persistent
The manual bindings from Step 4 won't survive a reboot. We need to configure the kernel to automatically bind these devices to VFIO.
Method 1: GRUB Kernel Parameters (Recommended)
1. Edit GRUB configuration:
2. Add all device IDs to the kernel command line:
Find the line starting with GRUB_CMDLINE_LINUX_DEFAULT and add the vfio-pci.ids parameter:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on vfio-pci.ids=10de:1e82,10de:10f8,10de:1ad8,10de:1ad9"
Device ID mapping (from Step 2):
- `10de:1e82` → VGA Core
- `10de:10f8` → HD Audio
- `10de:1ad8` → USB Controller
- `10de:1ad9` → USB-C Controller
3. Update GRUB and rebuild initramfs:
4. Reboot to apply changes:
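The commands for the numbered steps above might look like this on a Debian/Ubuntu-style system (other distributions use `grub2-mkconfig -o /boot/grub2/grub.cfg` and `dracut -f` instead; on AMD platforms use `amd_iommu=on` rather than `intel_iommu=on`):

```shell
# 1. Edit the GRUB configuration (add vfio-pci.ids=... as shown above)
sudo nano /etc/default/grub

# 3. Regenerate the GRUB config and rebuild the initramfs
sudo update-grub
sudo update-initramfs -u

# 4. Reboot to apply changes
sudo reboot
```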
If kernel parameters don't work, use a systemd service to bind devices at boot.
Method 2: Systemd Service (Optional Backup)
1. Create the service file:
2. Paste this content (adjust PCI addresses):
[Unit]
Description=Bind RTX 2080 PCI devices to vfio-pci
After=local-fs.target
Requires=local-fs.target
[Service]
Type=oneshot
ExecStart=/bin/bash -c 'for dev in 0000:08:00.0 0000:08:00.1 0000:08:00.2 0000:08:00.3; do \
echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override; \
if [ -e /sys/bus/pci/devices/$dev/driver ]; then echo $dev > /sys/bus/pci/devices/$dev/driver/unbind; fi; \
echo $dev > /sys/bus/pci/drivers/vfio-pci/bind 2>/dev/null || true; \
done'
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
3. Enable and start the service:
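Assuming the unit file name used by the Step 7 status check (`vfio-pci-bind.service`), the surrounding commands might look like:

```shell
# 1. Create the service file and paste the unit content above
sudo nano /etc/systemd/system/vfio-pci-bind.service

# 3. Reload systemd, then enable and start the service
sudo systemctl daemon-reload
sudo systemctl enable --now vfio-pci-bind.service
```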
Step 7: Confirm Configuration After Reboot
After the system restarts, verify VFIO bindings persisted correctly.
Check all GPU functions
Check all GPU functions:
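A quick check across all four functions (the `-s 08:00` filter matches every function on that slot; adjust to your addresses):

```shell
lspci -nnk -s 08:00
```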
All four functions should report `Kernel driver in use: vfio-pci`.
If bindings didn't persist:
- Check kernel command line: `cat /proc/cmdline` (should show `vfio-pci.ids=...`)
- Verify initramfs was rebuilt: `ls -lh /boot/initrd.img-$(uname -r)`
- Check systemd service status: `systemctl status vfio-pci-bind.service`
✅ Host Setup Complete
At this point:
- The RTX 2080 is bound to VFIO drivers
- The host OS no longer uses the GPU for graphics
- The entire IOMMU group is viable for passthrough
- KubeVirt can now attach this GPU to virtual machines
Next steps:
Configure Juno's GPU operator and create KubeVirt VMs with GPU passthrough.