Running TrueNAS on Proxmox VE
Guide to deploying TrueNAS as a virtual machine on Proxmox VE, covering disk passthrough, HBA passthrough, network configuration, and setting up SMB and NFS shares for your homelab.
TrueNAS and Proxmox: A Powerful Combination
TrueNAS is the most popular open-source NAS operating system, built on ZFS and offering enterprise-grade data protection. Running TrueNAS as a VM inside Proxmox VE consolidates your storage and compute into a single server, eliminating the need for a separate NAS appliance. The key to making this work reliably is passing physical disks directly to the TrueNAS VM so ZFS has full control over the hardware.
Why Not Use Proxmox's Built-in ZFS?
Proxmox includes native ZFS support, so why add TrueNAS? The answer is management. TrueNAS provides a polished web interface for managing datasets, snapshots, replication, SMB/NFS shares, user permissions, and alerts. If your primary goal is file sharing and data management, TrueNAS's dedicated tooling is significantly more convenient than managing everything through the Proxmox CLI.
Creating the TrueNAS VM
TrueNAS SCALE (Linux-based) is recommended over TrueNAS CORE (FreeBSD-based) for Proxmox VMs due to better virtio driver support. Create the VM with generous resources:
# Create the VM
qm create 150 \
--name truenas \
--memory 16384 \
--cores 4 \
--cpu cputype=host \
--scsihw virtio-scsi-single \
--scsi0 local-lvm:32,iothread=1 \
--net0 virtio,bridge=vmbr0 \
--ostype l26 \
--bios ovmf \
--efidisk0 local-lvm:1 \
--machine q35
# Attach the TrueNAS ISO (the --cdrom shortcut attaches it as ide2)
qm set 150 --cdrom local:iso/TrueNAS-SCALE-24.10.iso
qm set 150 --boot order='ide2;scsi0'
Allocate at least 8 GB of RAM — ZFS uses memory aggressively for caching (ARC), and 16 GB is recommended if you plan to store more than a few terabytes. Each terabyte of storage benefits from roughly 1 GB of ARC.
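As a quick sanity check on that rule of thumb, the VM's memory can be sketched as a baseline for the OS plus roughly 1 GB per terabyte of pool capacity. The 12 TB figure below is just an example; substitute your planned raw capacity:

```shell
# Rough sizing sketch: baseline RAM plus ~1 GB of ARC per TB of storage
pool_tb=12          # example: planned raw pool capacity in TB
base_gb=8           # minimum RAM for TrueNAS itself
vm_memory_gb=$(( base_gb + pool_tb ))
echo "Suggested VM memory: ${vm_memory_gb} GB"
```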
Disk Passthrough: Entire Disks
The most important step is passing your data disks directly to the TrueNAS VM. Never use virtual disks for ZFS pools: ZFS expects direct access to the physical device for correct error detection and recovery, and a virtual disk layer hides the real hardware from it. Note that even whole-disk passthrough still presents a virtualized controller to the guest, so full SMART monitoring from inside TrueNAS requires HBA passthrough (covered below).
First, identify your data disks on the Proxmox host by their persistent IDs:
# List disks by ID (stable across reboots)
ls -la /dev/disk/by-id/ | grep -v part
# Example output:
# ata-WDC_WD40EFRX-68N32N0_WD-WCC7K0ABC123
# ata-WDC_WD40EFRX-68N32N0_WD-WCC7K0DEF456
# ata-Samsung_SSD_870_EVO_1TB_S1234567890
Pass each disk to the VM using its stable /dev/disk/by-id/ path:
# Pass through data disks (do NOT pass through the Proxmox boot disk)
qm set 150 --scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K0ABC123
qm set 150 --scsi2 /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K0DEF456
qm set 150 --scsi3 /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S1234567890
Always use /dev/disk/by-id/ paths rather than /dev/sdX names, which can change between reboots.
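To avoid typing long by-id paths by hand, a small helper can print the qm commands for review before you run them. This is only a sketch: print_passthrough_cmds is a hypothetical helper, VMID 150 matches the VM created above, and the directory argument exists so the loop can be dry-run against a test directory instead of the real /dev/disk/by-id:

```shell
# Sketch: print (not run) a "qm set" line for every whole-disk ata-* entry,
# so the commands can be reviewed before attaching disks to the VM.
print_passthrough_cmds() {
  byid_dir=$1
  vmid=150
  slot=1
  for disk in "$byid_dir"/ata-*; do
    [ -e "$disk" ] || continue                 # empty dir: glob stays literal
    case "$disk" in *-part*) continue ;; esac  # skip partition symlinks
    echo "qm set $vmid --scsi$slot $disk"
    slot=$((slot + 1))
  done
}

print_passthrough_cmds /dev/disk/by-id
```

Pipe the output to a file, remove any line that refers to the Proxmox boot disk, then execute what remains.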
HBA Passthrough (Advanced)
If you have an HBA (Host Bus Adapter) card like an LSI SAS controller, passing the entire card to the VM is the cleanest approach. This gives TrueNAS direct access to all disks connected to the HBA without configuring individual disk passthrough.
# First, ensure IOMMU is enabled in the BIOS/UEFI (VT-d for Intel, AMD-Vi for AMD)
# Then enable it on the kernel command line. For Intel, add to /etc/default/grub
# (AMD systems enable IOMMU by default; hosts booted with systemd-boot use
# /etc/kernel/cmdline instead):
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
# Update GRUB and reboot
update-grub && reboot
# After the reboot, verify IOMMU is active (exact messages vary by platform)
dmesg | grep -i -e dmar -e iommu | head
# Find the HBA's PCI address
lspci -nn | grep -i "SAS\|SATA\|LSI"
# Example: 03:00.0 Serial Attached SCSI controller [0107]: LSI Logic
# Check the IOMMU group
find /sys/kernel/iommu_groups/ -type l | grep "03:00.0"
# Add PCI passthrough to the VM
qm set 150 --hostpci0 03:00.0,pcie=1
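Before committing to the passthrough, it helps to confirm the HBA sits in its own IOMMU group, since every device in a group is handed to the VM together. Below is a small sketch: list_iommu_groups is a hypothetical helper, and its optional argument exists only so the listing logic can be exercised against a test directory instead of the real sysfs path:

```shell
# List each IOMMU group with its devices; the HBA should ideally be alone
# in its group (or only alongside its own sub-functions).
list_iommu_groups() {
  root=${1:-/sys/kernel/iommu_groups}
  for dev in "$root"/*/devices/*; do
    [ -e "$dev" ] || continue      # directory empty or absent
    group=${dev%/devices/*}
    group=${group##*/}
    printf 'group %s: %s\n' "$group" "${dev##*/}"
  done | sort -t ' ' -k 2 -n       # order numerically by group number
}

list_iommu_groups
```

If other devices share the HBA's group, try a different PCIe slot before resorting to ACS override workarounds.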
Bind the HBA to the vfio-pci stub driver so the Proxmox host doesn't claim it at boot:
# /etc/modprobe.d/vfio.conf
options vfio-pci ids=1000:0097
# Replace 1000:0097 with your HBA's vendor:device ID from lspci -nn
# Blacklist the host driver
echo "blacklist mpt3sas" >> /etc/modprobe.d/blacklist.conf
update-initramfs -u -k all
# Reboot, then confirm the HBA is bound to vfio-pci:
lspci -nnk -s 03:00.0
Network Configuration
TrueNAS needs reliable network access for file sharing. For best performance, consider dedicating a network interface to the TrueNAS VM or using a VLAN:
# Option 1: Use virtio (default, good for most setups)
# Already configured during VM creation
# Option 2: Pass through a dedicated NIC
qm set 150 --hostpci1 04:00.0,pcie=1
# Option 3: Use a bridge with jumbo frames for NFS performance
# Edit /etc/network/interfaces on the Proxmox host. Note that the physical
# port needs mtu 9000 as well, and every switch port on the path must
# support jumbo frames.
auto enp5s0
iface enp5s0 inet manual
    mtu 9000

auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp5s0
    bridge-stp off
    bridge-fd 0
    mtu 9000
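Once the bridge is up, jumbo frames can be verified end to end from another host: with a 9000-byte MTU, the largest unfragmented ICMP payload is 9000 minus the 20-byte IPv4 header and 8-byte ICMP header. The snippet below just computes that size and prints the ping command to run manually (TRUENAS_IP is a placeholder for your TrueNAS VM's address):

```shell
# Largest ICMP payload that fits a 9000-byte MTU without fragmentation
MTU=9000
PAYLOAD=$((MTU - 28))   # 20-byte IPv4 header + 8-byte ICMP header
echo "ping -M do -s $PAYLOAD TRUENAS_IP   # -M do forbids fragmentation"
```

If the ping fails while smaller payloads succeed, some hop on the path is dropping jumbo frames.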
Setting Up SMB and NFS Shares
After installing TrueNAS and creating your ZFS pool, set up shares through the TrueNAS web interface (accessible at the VM's IP address). The following outlines the key settings for the two most common share types:
# SMB Share (Windows/Mac file sharing)
# In TrueNAS web UI: Shares > SMB > Add
# Path: /mnt/pool/shared
# Purpose: General file sharing
# Enable: Apple-style (fruit) extensions for macOS Time Machine
# NFS Share (Linux/Proxmox storage)
# In TrueNAS web UI: Shares > NFS > Add
# Path: /mnt/pool/vms
# Maproot User: root
# Authorized Networks: 192.168.1.0/24
# This share can be mounted as Proxmox storage for VM disks
To use TrueNAS NFS shares as Proxmox storage (for VM images or backups), add them in the Proxmox web UI under Datacenter > Storage > Add > NFS.
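The same registration can be scripted from the Proxmox CLI with pvesm. This is a configuration sketch: 192.168.1.50 stands in for your TrueNAS VM's IP and truenas-vms is an arbitrary storage ID:

```shell
# CLI equivalent of Datacenter > Storage > Add > NFS
pvesm add nfs truenas-vms \
    --server 192.168.1.50 \
    --export /mnt/pool/vms \
    --content images,backup
# Confirm the new storage shows up as active
pvesm status
```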
VM Startup Order
Since other VMs and containers may depend on TrueNAS for storage, configure it to start first:
# Set TrueNAS to start first with a delay
qm set 150 --onboot 1 --startup order=1,up=60
The up=60 parameter tells Proxmox to wait 60 seconds after starting TrueNAS before starting other VMs, giving ZFS pools time to import.
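Guests that depend on TrueNAS storage should then get higher order numbers so they boot after it. A sketch, where VMIDs 200 and 300 are placeholders for your own guests:

```shell
# Higher startup order numbers boot later than TrueNAS (order=1)
qm set 200 --onboot 1 --startup order=2
pct set 300 --onboot 1 --startup order=3   # containers use pct instead of qm
```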
Monitoring and Backup
TrueNAS handles its own ZFS snapshots and replication, but you should still monitor the VM itself. SMART alerts and pool status notifications can be configured within TrueNAS. For a quick check that the TrueNAS VM is running — especially after a power outage or host maintenance — ProxmoxR lets you verify VM status from your phone before other dependent services try to access their storage.
Conclusion
Running TrueNAS on Proxmox consolidates your homelab into fewer physical machines without sacrificing storage reliability. Disk passthrough ensures ZFS has full hardware access, while Proxmox handles the compute side. The combination gives you the best of both worlds — TrueNAS's excellent storage management and Proxmox's flexible virtualization — all on a single server.
Take Proxmox management mobile
All the features discussed in this guide are accessible from your phone with ProxmoxR: real-time monitoring, power control, firewall management, and more.