
Proxmox ZFS Storage: Complete Setup Guide

Everything you need to know about using ZFS with Proxmox VE. Covers pools, datasets, snapshots, replication, and performance tuning.


Why Use ZFS with Proxmox?

ZFS is arguably the most powerful filesystem available for Proxmox VE, and for good reason. It combines a filesystem and volume manager into one, delivering features that would normally require multiple separate tools: checksumming for data integrity, built-in RAID, snapshots, compression, and replication. Proxmox has first-class ZFS support, making it easy to set up during installation or add later.

Unlike traditional RAID controllers, ZFS handles everything in software. This means you get end-to-end data integrity verification, self-healing capabilities when using redundant configurations, and the ability to expand your storage without expensive hardware RAID cards.

Creating ZFS Pools

A ZFS pool (zpool) is the foundation of your ZFS storage. Proxmox lets you create pools during installation, but you can also create them from the command line at any time. The examples below use short /dev/sdX names for readability; in production, prefer stable /dev/disk/by-id paths, since sdX names can change between boots. Here are the most common configurations:

Mirror (RAID 1 Equivalent)

A mirror provides redundancy by writing data to two or more disks simultaneously. You lose 50% of total capacity but gain excellent read performance and full redundancy.

# Create a mirror pool with two disks
# (avoid the name "rpool", which Proxmox uses for its root pool)
zpool create tank mirror /dev/sda /dev/sdb

# Create a 3-way mirror for critical data
zpool create tank mirror /dev/sda /dev/sdb /dev/sdc

RAIDZ Configurations

RAIDZ offers different levels of parity protection. Choose based on your redundancy needs and disk count:

# RAIDZ1 - single parity (like RAID 5), minimum 3 disks
zpool create tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd

# RAIDZ2 - double parity (like RAID 6), minimum 4 disks
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde

# RAIDZ3 - triple parity, minimum 5 disks
zpool create tank raidz3 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

For most homelab setups, a mirror or RAIDZ1 with 3-4 disks strikes the best balance between capacity, performance, and redundancy.
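The capacity trade-offs above reduce to simple arithmetic. A quick sketch with illustrative disk counts and sizes (real usable space runs a little lower because of padding, metadata, and the pool's slop-space reservation):

```shell
#!/bin/sh
# Approximate usable capacity of a RAIDZ vdev: (disks - parity) * disk size.
# Actual usable space is somewhat lower due to padding, metadata, and the
# default slop-space reservation.
disks=4
parity=1        # raidz1=1, raidz2=2, raidz3=3
disk_tb=4       # capacity of each disk in TB

raw_tb=$(( disks * disk_tb ))
usable_tb=$(( (disks - parity) * disk_tb ))
echo "Approx. ${usable_tb} TB usable of ${raw_tb} TB raw"
```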

Working with Datasets

Datasets are ZFS filesystems within a pool. They allow you to apply different settings (compression, quotas, record sizes) to different workloads without creating separate pools.

# Create datasets for different purposes
zfs create tank/vms
zfs create tank/backups
zfs create tank/isos

# Set a quota on the backups dataset
zfs set quota=500G tank/backups

# Set a smaller record size for file-based VM images
# (VM disks stored as zvols use volblocksize instead)
zfs set recordsize=64K tank/vms

Enabling Compression

ZFS compression is almost always worth enabling. Modern algorithms like LZ4 are so fast that compression can actually improve performance by reducing the amount of data written to disk.

# Enable LZ4 compression (recommended default)
zfs set compression=lz4 tank

# Check compression ratio
zfs get compressratio tank

# For backups, ZSTD offers better ratios at slightly higher CPU cost
zfs set compression=zstd tank/backups
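The compressratio property is simply the logical (uncompressed) size divided by the physical (on-disk) size, so you can sanity-check it yourself. A sketch with made-up sizes (on a real system, compare `zfs get logicalused` with `zfs get used`):

```shell
#!/bin/sh
# compressratio = logical (uncompressed) size / physical (on-disk) size.
# The sizes below are illustrative, not from a real pool.
logical_gib=150
physical_gib=100

# Integer shell arithmetic: scale by 100 to keep two decimal places
ratio_x100=$(( logical_gib * 100 / physical_gib ))
printf 'compressratio ~ %d.%02dx\n' $(( ratio_x100 / 100 )) $(( ratio_x100 % 100 ))
```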

Understanding ARC and L2ARC

ZFS uses system RAM as a read cache called the ARC (Adaptive Replacement Cache). This is why ZFS is often called memory-hungry, but it is also why ZFS delivers excellent read performance.

ARC (RAM Cache)

By default, ZFS uses up to 50% of system RAM for the ARC; recent Proxmox VE installers set a lower default (10% of RAM, capped at 16 GiB) for new installations. You can adjust this limit:

# Check current ARC usage
arc_summary

# Limit ARC to 8 GB (value in bytes; '>' replaces any existing zfs.conf)
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf

# If your root filesystem is on ZFS, refresh the initramfs so the
# limit is applied at boot
update-initramfs -u

# Apply immediately without a reboot
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
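The number 8589934592 is just 8 GiB expressed in bytes; deriving it explicitly avoids copy-paste mistakes when you pick a different limit:

```shell
#!/bin/sh
# zfs_arc_max takes a value in bytes; compute it from a GiB target.
arc_gib=8
arc_bytes=$(( arc_gib * 1024 * 1024 * 1024 ))
echo "options zfs zfs_arc_max=${arc_bytes}"
```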

L2ARC (SSD Read Cache)

L2ARC extends the ARC to a fast SSD, useful when your working set exceeds available RAM:

# Add an SSD as L2ARC
zpool add tank cache /dev/nvme0n1p1

Only add L2ARC if you have at least 32 GB of RAM. Each L2ARC entry consumes roughly 70-100 bytes of RAM for metadata, so a small ARC with a large L2ARC is counterproductive.
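That overhead is easy to estimate: every cached record needs an in-RAM header, and the number of records depends on the average record size. A sketch assuming ~80 bytes per header and 64 KiB records (both assumptions, not measured values):

```shell
#!/bin/sh
# Estimate the RAM consumed by L2ARC headers.
# Assumptions: ~80 bytes of RAM per cached record, 64 KiB average record size.
l2arc_gib=500
record_kib=64
header_bytes=80

records=$(( l2arc_gib * 1024 * 1024 / record_kib ))   # records the cache can hold
ram_mib=$(( records * header_bytes / 1024 / 1024 ))
echo "A ${l2arc_gib} GiB L2ARC needs roughly ${ram_mib} MiB of RAM for headers"
```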

ZIL and SLOG

The ZFS Intent Log (ZIL) records synchronous writes. By default it lives on the pool disks, but you can add a dedicated SLOG (Separate Log) device to speed up sync writes dramatically.

# Add a mirrored SLOG using NVMe partitions
zpool add tank log mirror /dev/nvme0n1p2 /dev/nvme1n1p2

A SLOG is most beneficial for NFS storage, databases, and any workload with heavy synchronous write requirements. Use enterprise-grade SSDs or Optane drives for SLOG, as consumer SSDs may lack power-loss protection.
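A SLOG also does not need to be large: it only has to absorb a few seconds of sync writes before each transaction group is flushed to the pool (zfs_txg_timeout defaults to 5 seconds). A rough sizing sketch, with the write rate as an assumed figure:

```shell
#!/bin/sh
# Rough SLOG sizing: sustained sync-write rate times a couple of
# transaction-group intervals (zfs_txg_timeout defaults to 5 s).
write_mib_s=1000     # assumed sustained sync-write rate, MiB/s
window_s=10          # ~2 transaction groups of headroom

slog_gib=$(( write_mib_s * window_s / 1024 ))
echo "A ~${slog_gib} GiB SLOG partition covers this workload"
```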

Snapshots and Replication

ZFS snapshots are instantaneous, space-efficient point-in-time copies. They are one of the strongest reasons to choose ZFS.

# Create a snapshot
zfs snapshot tank/vms@before-upgrade

# List snapshots
zfs list -t snapshot

# Rollback to a snapshot
zfs rollback tank/vms@before-upgrade

# Delete a snapshot
zfs destroy tank/vms@before-upgrade

ZFS Send/Receive for Offsite Replication

You can replicate datasets to another machine for disaster recovery:

# Send a full snapshot to a remote server
zfs send tank/vms@snap1 | ssh backup-server zfs recv backup/vms

# Send only incremental changes (much faster after initial sync)
zfs send -i tank/vms@snap1 tank/vms@snap2 | ssh backup-server zfs recv backup/vms
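In practice these commands are usually wrapped in a small script that generates dated snapshot names. A minimal sketch, where the dataset, snapshot, and host names are all illustrative and the run helper only prints each command so the sketch is safe to dry-run:

```shell
#!/bin/sh
# Incremental replication sketch. All names are examples; replace them
# with your own. 'run' echoes instead of executing, for a safe dry run.
run() { echo "+ $*"; }

DATASET="tank/vms"
REMOTE="backup-server"
PREV="${DATASET}@auto-prev"               # previous snapshot (already on the remote)
NEW="${DATASET}@auto-$(date +%Y-%m-%d)"   # today's snapshot name

run zfs snapshot "$NEW"
run "zfs send -i $PREV $NEW | ssh $REMOTE zfs recv backup/vms"
```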

Proxmox also has built-in replication that uses ZFS send/receive under the hood, configurable directly from the web UI.

Scrub and Monitoring

Regular scrubs verify data integrity by reading all data and checking checksums. Schedule scrubs at least monthly:

# Start a scrub
zpool scrub tank

# Check scrub status
zpool status tank

# View pool I/O statistics
zpool iostat tank 5

Proxmox schedules a monthly scrub automatically, on most installs via a cron job shipped with zfsutils-linux (newer ZFS packages use a systemd timer instead). You can verify this:

# Check the scrub schedule (cron-based installs)
cat /etc/cron.d/zfsutils-linux

# Or check for a systemd timer
systemctl list-timers | grep -i zfs

Set up email alerts so you are notified immediately if a scrub finds errors or a disk fails. Also watch for checksum errors in zpool status output; they indicate potential disk problems even before a full failure.
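The health check is easy to script: zpool status -x prints a one-line summary ("all pools are healthy") when nothing is wrong. The sketch below exercises the logic against a captured string so it runs anywhere; on a real host, substitute the live command as noted in the comment:

```shell
#!/bin/sh
# Health-check sketch: 'zpool status -x' prints "all pools are healthy"
# (or "pool 'tank' is healthy") when there is nothing to report.
# A captured string stands in for the live command here.
status="all pools are healthy"       # real host: status=$(zpool status -x)

case "$status" in
  *healthy*) state=OK ;;
  *)         state=ALERT ;;          # hook your mail/notification tool here
esac
echo "ZFS health: $state"
```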

Memory Requirements

Plan your RAM allocation carefully when using ZFS with Proxmox:

  • Minimum: 8 GB total, but this limits both ARC size and VM capacity
  • Recommended: 16-32 GB for a homelab with a moderate number of VMs
  • Rule of thumb: 1 GB of RAM per TB of ZFS storage for setups without deduplication
  • Avoid deduplication unless you have massive amounts of RAM (roughly 5 GB per TB of deduplicated data)

If you are running ZFS on a memory-constrained system, disable deduplication and limit the ARC. Compression gives you most of the space savings of dedup at a fraction of the memory cost.
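Those rules of thumb translate directly into a RAM budget; a quick sketch for a hypothetical pool:

```shell
#!/bin/sh
# RAM budget sketch using the rules of thumb above (illustrative numbers).
pool_tb=16          # total ZFS storage
dedup_tb=0          # deduplicated data (keep this at 0 if you can)

arc_gb=$(( pool_tb * 1 ))            # ~1 GB of RAM per TB of storage
ddt_gb=$(( dedup_tb * 5 ))           # ~5 GB per TB of deduplicated data
echo "Budget roughly $(( arc_gb + ddt_gb )) GB of RAM for ZFS itself"
```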

Putting It All Together

A solid ZFS configuration for a typical Proxmox server might look like this:

# Create a RAIDZ1 pool with 4 disks
zpool create -o ashift=12 tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Add NVMe SLOG and L2ARC
zpool add tank log mirror /dev/nvme0n1p1 /dev/nvme1n1p1
zpool add tank cache /dev/nvme0n1p2

# Create datasets with appropriate settings
zfs create -o recordsize=64K -o compression=lz4 tank/vms
zfs create -o compression=zstd -o quota=1T tank/backups
zfs create tank/isos

# Limit ARC to leave RAM for VMs
echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf

ZFS paired with Proxmox gives you enterprise-grade storage without the enterprise price tag. Take the time to plan your pool layout and tune the settings for your workload, and you will have a storage backend that protects your data while delivering strong performance.
