Enable Nested Virtualization in Proxmox VE

How to enable nested virtualization in Proxmox VE for running hypervisors, Docker Desktop, and Hyper-V inside virtual machines.

What Is Nested Virtualization?

Nested virtualization lets you run a hypervisor inside a virtual machine. In practical terms, this means you can run VMs inside VMs on your Proxmox host. This is useful for testing hypervisors, running Docker Desktop (which uses a lightweight VM under the hood), running Hyper-V workloads inside a Linux-based Proxmox VM, or building lab environments for certification study. KVM has supported nested virtualization for years, but it requires explicit enablement on the host and the correct CPU type on the VM.

Check Current Nested Virtualization Status

First, SSH into your Proxmox host and check whether nested virtualization is already enabled:

# For Intel CPUs:
cat /sys/module/kvm_intel/parameters/nested

# For AMD CPUs:
cat /sys/module/kvm_amd/parameters/nested

If the output is Y or 1, nested virtualization is already enabled. If it shows N or 0, you need to enable it.
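If you don't remember which vendor module your host loaded, a small loop can check both paths in one go. This is a convenience sketch, not a Proxmox-specific command:

```shell
# Check whichever KVM vendor module is loaded (kvm_intel or kvm_amd):
for m in kvm_intel kvm_amd; do
  f=/sys/module/$m/parameters/nested
  [ -f "$f" ] && echo "$m nested: $(cat "$f")"
done
```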

Enable Nested Virtualization on the Host

To enable nested virtualization persistently across reboots, create a modprobe configuration file:

# For Intel CPUs:
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-intel.conf

# For AMD CPUs:
echo "options kvm_amd nested=1" > /etc/modprobe.d/kvm-amd.conf

For the change to take effect, you need to reload the KVM module. If no VMs are currently running, you can do this without a reboot:

# For Intel (stop all VMs first):
modprobe -r kvm_intel
modprobe kvm_intel

# For AMD (stop all VMs first):
modprobe -r kvm_amd
modprobe kvm_amd
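To confirm the node is actually idle before unloading the module, you can list running guests first. The awk filter below keys on the STATUS column of qm list output:

```shell
# List any VMs still in the 'running' state on this node:
qm list | awk 'NR > 1 && $3 == "running" {print $1, $2}'
```

If this prints nothing, it is safe to reload the module without a reboot.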

If VMs are running, the safest approach is to schedule a host reboot. Verify after reload:

cat /sys/module/kvm_intel/parameters/nested
# Should output: Y

Configure the VM CPU Type

Nested virtualization requires the guest to see the host CPU's virtualization extensions (VT-x for Intel, AMD-V for AMD). The default CPU type in Proxmox (kvm64) does not expose them, so you must set the CPU type to host:

# Using the command line for VM ID 100:
qm set 100 --cpu cputype=host

# Or edit the config directly:
nano /etc/pve/qemu-server/100.conf
# Change or add the line:
cpu: host

In the Proxmox web UI, navigate to the VM, click Hardware, double-click on Processors, and change the Type dropdown to host.
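Whichever method you used, you can confirm the setting took effect from the host shell (VM ID 100 as in the examples above):

```shell
# Show the CPU line from the VM config:
qm config 100 | grep '^cpu:'
# Should show: cpu: host
```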

Verify Inside the Guest VM

After starting the VM with CPU type host, SSH into the guest and confirm that virtualization extensions are available:

# Check for vmx (Intel) or svm (AMD) flags:
grep -E '(vmx|svm)' /proc/cpuinfo

# Or count the matching flags (grep -E replaces the deprecated egrep):
grep -Ec '(vmx|svm)' /proc/cpuinfo
# Output should be greater than 0

If you see the vmx or svm flags, the guest VM can now run its own hypervisor and VMs.

Use Case: Docker Desktop in a Windows VM

Docker Desktop on Windows uses WSL 2 or Hyper-V, both of which require hardware virtualization. With nested virtualization enabled and CPU type set to host, you can run Docker Desktop inside a Windows VM on Proxmox. Make sure to also enable these VM options:

# Enable hardware virtualization flags for Windows:
qm set 100 --cpu cputype=host
qm set 100 --machine q35

Some guides also suggest qm set 100 --args "-hypervisor", but -hypervisor is not a valid standalone QEMU option and will prevent the VM from booting. If you need to hide the hypervisor CPU flag from the guest (some software refuses to run when it detects a VM), the correct form is --args "-cpu host,-hypervisor"; it is not required for Docker Desktop.

Inside the Windows VM, verify that Hyper-V is available by opening PowerShell as Administrator:

# Check Hyper-V compatibility:
systeminfo | findstr "Hyper-V"

Use Case: Hyper-V Inside a Proxmox VM

For testing Hyper-V environments or running Azure Stack HCI labs, you can install the Hyper-V role inside a Windows Server VM running on Proxmox. The same prerequisites apply — CPU type host and nested virtualization enabled on the Proxmox host. Allocate sufficient RAM (at least 8 GB for the outer VM) since the inner VMs will consume memory from the outer VM allocation.
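As a sketch, an outer Windows Server VM for a small Hyper-V lab might be sized like this. VM ID 200 and the sizes are illustrative, not prescriptive:

```shell
# Hypothetical sizing for an outer Hyper-V lab VM (ID 200):
qm set 200 --cpu cputype=host   # expose VT-x/AMD-V to the guest
qm set 200 --cores 4            # leave some cores for the Proxmox host
qm set 200 --memory 16384       # 16 GB: outer OS plus a few ~4 GB inner VMs
```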

Use Case: KVM/QEMU Lab Environments

If you are studying for certifications or testing multi-node cluster configurations, you can run KVM inside a Linux VM on Proxmox:

# Inside the guest Linux VM (Debian/Ubuntu):
apt install -y qemu-kvm libvirt-daemon-system virtinst cpu-checker
systemctl enable --now libvirtd

# Verify KVM acceleration works (kvm-ok comes from the cpu-checker package):
kvm-ok
# Should output: KVM acceleration can be used

Performance Considerations

Nested virtualization adds overhead. Each layer of virtualization introduces latency for memory access and CPU instructions. Expect a 10-30% performance reduction for nested VMs compared to first-level VMs. For production workloads, always prefer running VMs directly on the Proxmox host rather than nesting. Nested virtualization is best suited for development, testing, and lab environments.

Avoid nesting more than two levels deep — while technically possible, the performance degradation becomes severe and the configurations become fragile.

When working with nested virtualization setups, being able to quickly check VM status across multiple layers is valuable. ProxmoxR gives you mobile access to monitor your Proxmox host VMs, which is especially handy when you are iterating on nested configurations and need to restart outer VMs frequently.

Summary

Enabling nested virtualization in Proxmox VE is straightforward: enable the kernel module parameter on the host, set the VM CPU type to host, and verify the virtualization flags inside the guest. This unlocks the ability to run Docker Desktop, Hyper-V, and full KVM environments inside your Proxmox VMs — making it an essential feature for labs, development, and testing scenarios.
