GPU Passthrough in Proxmox VE: Complete Guide
Step-by-step guide to setting up GPU passthrough (PCI passthrough) in Proxmox VE for gaming VMs, AI workloads, and more.
What Is GPU Passthrough?
GPU passthrough (also called PCI passthrough) allows you to assign a physical graphics card directly to a virtual machine in Proxmox VE. The VM gets near-native GPU performance because it communicates directly with the hardware, bypassing most of the hypervisor's emulation layer. This is essential for workloads like gaming, AI/ML training, video transcoding, and CAD applications that demand real GPU acceleration.
Proxmox VE supports PCI passthrough through VFIO (Virtual Function I/O), a Linux kernel framework that safely exposes hardware devices to virtual machines. While the setup requires several steps, the result is a VM with full access to your GPU — including hardware video encoding, CUDA/ROCm compute, and 3D acceleration.
Prerequisites and Hardware Requirements
Before you begin, verify that your hardware supports passthrough:
- CPU with IOMMU support — Intel VT-d or AMD-Vi. Most modern CPUs from the last 5-6 years support this.
- Motherboard with IOMMU support — The BIOS/UEFI must expose the IOMMU option. Consumer boards sometimes hide this under different names.
- Two GPUs (recommended) — One for the Proxmox host (even a basic integrated GPU) and one to pass through to the VM. Single-GPU passthrough is possible but significantly more complex.
- Clean IOMMU groups — The GPU must be in its own IOMMU group, or you will need to pass through everything in that group.
Step 1: Enable IOMMU in BIOS/UEFI
Reboot your server and enter the BIOS/UEFI settings. Look for these options and enable them:
- Intel systems: Enable VT-d (sometimes under "Advanced > System Agent" or "Chipset")
- AMD systems: Enable AMD-Vi or IOMMU (sometimes under "Advanced > NBIO" or "AMD CBS")
- Also enable SR-IOV if available (optional; it is only needed if you plan to split a supported device into virtual functions, not for basic passthrough)
Step 2: Enable IOMMU in the Kernel
Edit the GRUB configuration to add the necessary kernel parameters (if your host boots with systemd-boot instead of GRUB, which is typical for ZFS-on-root installs, add the parameters to /etc/kernel/cmdline and run proxmox-boot-tool refresh instead):
nano /etc/default/grub
Find the line starting with GRUB_CMDLINE_LINUX_DEFAULT and add the appropriate parameter:
# For Intel CPUs:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
# For AMD CPUs:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
The iommu=pt flag enables passthrough mode, which improves performance for devices that are not being passed through. Update GRUB and reboot:
update-grub
reboot
After reboot, verify IOMMU is active:
dmesg | grep -e DMAR -e IOMMU
You should see messages confirming that IOMMU is enabled and DMAR (for Intel) or IVRS (for AMD) tables were found.
Step 3: Check IOMMU Groups
Run this script to list all IOMMU groups and the devices within them:
#!/bin/bash
for d in /sys/kernel/iommu_groups/*/devices/*; do
n=${d#*/iommu_groups/*}; n=${n%%/*}
printf 'IOMMU Group %s ' "$n"
lspci -nns "${d##*/}"
done
Look for your GPU. Ideally, the GPU and its audio device should be in their own IOMMU group with nothing else. If other devices share the group, you may need to use an ACS override patch or choose a different PCIe slot.
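A quick way to spot crowded groups is to count the devices per group in the script's output. A minimal sketch on hypothetical captured output (on a real host, pipe the script's actual output in instead of the sample):

```shell
# Count devices per IOMMU group from captured output of the listing script.
# The sample lines below are hypothetical; substitute your real output.
sample='IOMMU Group 14 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA106 [GeForce RTX 3060] [10de:2503]
IOMMU Group 14 01:00.1 Audio device [0403]: NVIDIA Corporation GA106 High Definition Audio Controller [10de:228e]
IOMMU Group 15 02:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539]'
summary=$(printf '%s\n' "$sample" | awk '{count[$3]++} END {for (g in count) printf "group %s: %d device(s)\n", g, count[g]}')
echo "$summary"
```

A group that lists only the GPU and its audio function is the ideal case for passthrough.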
Step 4: Load VFIO Modules and Blacklist GPU Drivers
Add the VFIO modules to load at boot:
echo "vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd" >> /etc/modules
On Proxmox VE 8 (kernel 6.2 and newer), vfio_virqfd has been folded into the core vfio module and can be omitted from this list.
Next, identify your GPU's vendor and device IDs:
lspci -nn | grep -i nvidia
# Example output: 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA106 [GeForce RTX 3060] [10de:2503] (rev a1)
# The IDs are 10de:2503 (and the audio device, e.g., 10de:228e)
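If you want to extract the ID pair programmatically (for example to script the vfio.conf step below), the bracketed vendor:device pair can be pulled out with grep. A sketch using the example line above:

```shell
# Extract vendor:device ID pairs from a captured `lspci -nn` line.
line='01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA106 [GeForce RTX 3060] [10de:2503] (rev a1)'
# Match only bracketed 4-hex:4-hex pairs, so class codes like [0300] are skipped
ids=$(printf '%s\n' "$line" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tr -d '[]')
echo "$ids"
```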
Configure VFIO to claim the GPU before any other driver:
echo "options vfio-pci ids=10de:2503,10de:228e disable_vga=1" > /etc/modprobe.d/vfio.conf
Blacklist the default GPU drivers so they do not grab the card first:
echo "blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist nvidia_drm
blacklist radeon
blacklist amdgpu" > /etc/modprobe.d/blacklist-gpu.conf
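If a host driver still grabs the card despite the blacklist (for instance when it is embedded in the initramfs), modprobe softdep rules can force vfio-pci to bind first. A sketch, appended to the vfio.conf created above; keep only the lines matching your GPU's drivers:

```
# Ensure vfio-pci binds before the regular GPU drivers
softdep nouveau pre: vfio-pci
softdep nvidia pre: vfio-pci
softdep radeon pre: vfio-pci
softdep amdgpu pre: vfio-pci
```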
Update the initramfs and reboot:
update-initramfs -u -k all
reboot
After reboot, verify VFIO has claimed the GPU:
lspci -nnk -s 01:00
# Look for: Kernel driver in use: vfio-pci
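For a scripted health check, the driver line can be parsed out of captured lspci output. A small sketch on hypothetical captured text (on the host, substitute the real output of lspci -nnk -s 01:00):

```shell
# Report which kernel driver has claimed the device, from captured
# `lspci -nnk` output. The capture below is hypothetical sample data.
capture='01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA106 [10de:2503]
	Kernel driver in use: vfio-pci
	Kernel modules: nouveau'
driver=$(printf '%s\n' "$capture" | sed -n 's/.*Kernel driver in use: //p')
echo "$driver"
```

Anything other than vfio-pci here means the blacklist or softdep step did not take effect.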
Step 5: Configure the VM for GPU Passthrough
Create a VM in Proxmox with these recommended settings:
- BIOS: OVMF (UEFI) — required for GPU passthrough
- Machine type: q35
- CPU type: host
- Display: none (once GPU is working)
Add the GPU as a PCI device through the web UI (Hardware > Add > PCI Device) or edit the VM configuration directly:
# /etc/pve/qemu-server/100.conf
machine: q35
bios: ovmf
cpu: host
hostpci0: 01:00,pcie=1,x-vga=1
vga: none
The x-vga=1 flag tells Proxmox this is the primary display adapter for the VM. The pcie=1 flag enables full PCIe passthrough rather than legacy PCI.
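The same settings can also be applied from the host shell with Proxmox's qm tool. A sketch assuming VM ID 100 and the GPU at 01:00 (these commands must run on the Proxmox host itself):

```
qm set 100 --bios ovmf --machine q35 --cpu host
qm set 100 --hostpci0 01:00,pcie=1,x-vga=1
qm set 100 --vga none
```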
NVIDIA-Specific Considerations
NVIDIA consumer GPUs (GeForce series) historically detect when they are running inside a VM and refuse to load drivers — the infamous "Error 43." To work around this, add these lines to your VM configuration:
cpu: host,hidden=1,flags=+pcid
args: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=proxmox,hv_spinlocks=0x1fff'
NVIDIA lifted this restriction for GeForce cards starting with the R465 drivers, but adding these flags is still recommended for maximum compatibility with older guests and driver versions.
AMD GPU Considerations
AMD GPUs generally work more smoothly with passthrough since AMD does not block VM detection. However, AMD cards can suffer from a "reset bug" where the GPU fails to reinitialize after a VM shutdown or reboot. If you encounter this, you may need to use a vendor-reset kernel module:
apt install pve-headers-$(uname -r) dkms git
git clone https://github.com/gnif/vendor-reset.git
cd vendor-reset
dkms install .
echo "vendor-reset" >> /etc/modules
Common Use Cases
Gaming VM
Pass through a GPU to a Windows VM for near-native gaming performance. Pair it with a USB controller passthrough for keyboard, mouse, and game controllers. Tools like Looking Glass let you view the VM's display in a window on your Linux host without needing a separate monitor.
AI and Machine Learning
Pass through NVIDIA GPUs to Linux VMs running CUDA workloads, PyTorch, or TensorFlow. This lets you share a multi-GPU server across multiple researchers, each with their own dedicated GPU in an isolated VM.
Media Transcoding
Pass through a GPU (or use Intel Quick Sync via integrated GPU passthrough) to a Plex or Jellyfin VM for hardware-accelerated video transcoding.
Troubleshooting
- VM fails to start: Check journalctl -b | grep vfio for errors. Ensure the GPU is not in use by the host.
- Black screen: Make sure you are connecting the monitor to the passed-through GPU, not the host GPU. Try adding romfile=vbios.bin to the hostpci line if the GPU needs a vBIOS dump.
- Error 43 (NVIDIA): Apply the CPU hiding flags described above and ensure you are using UEFI boot with the q35 machine type.
- Poor IOMMU grouping: Try moving the GPU to a different PCIe slot, or consider an ACS override patch as a last resort.
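If you need the romfile workaround, the vBIOS can be dumped on the host itself. A hedged sketch assuming the GPU sits at 0000:01:00.0 (reading the ROM may require that no driver, including vfio-pci, is actively using the card at the time):

```
cd /sys/bus/pci/devices/0000:01:00.0
echo 1 > rom                          # unlock the ROM for reading
cat rom > /usr/share/kvm/vbios.bin    # dump it where Proxmox looks for romfiles
echo 0 > rom                          # lock it again
```

Then reference it as hostpci0: 01:00,pcie=1,x-vga=1,romfile=vbios.bin; Proxmox resolves romfile paths relative to /usr/share/kvm/.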
When troubleshooting passthrough issues remotely, having mobile access to your Proxmox server logs and VM status can save significant time. ProxmoxR lets you check VM states, view task logs, and restart VMs directly from your phone — which is particularly useful when you are physically at the machine adjusting monitor cables while managing the VM from your pocket.
Summary
GPU passthrough in Proxmox VE is a powerful feature that unlocks near-native GPU performance for your virtual machines. While the initial setup involves several steps — enabling IOMMU, configuring VFIO, blacklisting drivers, and adjusting VM settings — the result is a VM that can handle gaming, AI workloads, and video transcoding as if the GPU were installed in a bare-metal system. Take the time to verify your IOMMU groups and follow the steps methodically, and you will have GPU passthrough running reliably on your Proxmox server.
Take Proxmox management mobile
All the features discussed in this guide — accessible from your phone with ProxmoxR. Real-time monitoring, power control, firewall management, and more.