How to Benchmark Storage Performance on Proxmox VE
Step-by-step guide to benchmarking Proxmox storage using fio, dd, and IOzone. Learn to test IOPS, throughput, and latency across local, NFS, and Ceph storage backends.
Why Benchmark Your Proxmox Storage?
Storage is the most common bottleneck in virtualized environments. A VM that feels sluggish is usually waiting on disk I/O, not CPU or RAM. Before you deploy workloads, benchmark your storage to establish a baseline. After changes — adding a Ceph OSD, tuning NFS, or switching cache modes — run the same tests again to measure the impact.
Setting Up fio for IOPS Testing
fio (Flexible I/O Tester) is the gold standard for storage benchmarking. Install it on your Proxmox host or inside a test VM:
apt update && apt install -y fio
Random Read/Write IOPS Test
This test simulates a database workload with small random I/O operations. It is the most important benchmark for VM-dense environments:
# Random read IOPS (4K block size, 16 jobs, queue depth 32)
fio --name=rand-read \
--ioengine=libaio \
--direct=1 \
--rw=randread \
--bs=4k \
--numjobs=16 \
--iodepth=32 \
--size=1G \
--runtime=60 \
--time_based \
--group_reporting \
--filename=/mnt/test-storage/fio-test
# Random write IOPS
fio --name=rand-write \
--ioengine=libaio \
--direct=1 \
--rw=randwrite \
--bs=4k \
--numjobs=16 \
--iodepth=32 \
--size=1G \
--runtime=60 \
--time_based \
--group_reporting \
--filename=/mnt/test-storage/fio-test
# Mixed 70/30 read/write (realistic workload)
fio --name=mixed-rw \
--ioengine=libaio \
--direct=1 \
--rw=randrw \
--rwmixread=70 \
--bs=4k \
--numjobs=8 \
--iodepth=16 \
--size=1G \
--runtime=60 \
--time_based \
--group_reporting \
--filename=/mnt/test-storage/fio-test
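Once you run these regularly, the same three tests can live in a single fio job file so every run uses identical parameters. A sketch (the file name storage-bench.fio and the target path are placeholders, matching the commands above):

```ini
; storage-bench.fio -- shared defaults for all three tests
[global]
ioengine=libaio
direct=1
bs=4k
size=1G
runtime=60
time_based
group_reporting
filename=/mnt/test-storage/fio-test

[rand-read]
rw=randread
numjobs=16
iodepth=32

[rand-write]
stonewall
rw=randwrite
numjobs=16
iodepth=32

[mixed-rw]
stonewall
rw=randrw
rwmixread=70
numjobs=8
iodepth=16
```

Run it with fio storage-bench.fio. The stonewall option serializes the sections so each test waits for the previous one to finish instead of competing for the same disk.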
When you are done, delete the test file (rm /mnt/test-storage/fio-test) so it does not linger on the datastore.
Sequential Throughput with dd
For a quick sequential write/read throughput test, dd is simple and available everywhere:
# Sequential write speed (1GB file, 1MB blocks)
dd if=/dev/zero of=/mnt/test-storage/dd-test bs=1M count=1024 \
conv=fdatasync status=progress
# Output: 1073741824 bytes copied, 2.14 s, 502 MB/s
# Flush the page cache before the read test (requires root)
sync; echo 3 > /proc/sys/vm/drop_caches
# Sequential read speed
dd if=/mnt/test-storage/dd-test of=/dev/null bs=1M status=progress
# Output: 1073741824 bytes copied, 1.05 s, 1.0 GB/s
# Clean up
rm /mnt/test-storage/dd-test
Note: dd only measures sequential throughput. It tells you nothing about random IOPS or latency, so always pair it with fio for a complete picture. Also keep in mind that /dev/zero produces all-zero data that compresses trivially, so on ZFS or any backend with compression enabled the write numbers will be inflated.
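The dd steps above can be wrapped into one repeatable script. A sketch, with TARGET_DIR as a placeholder for the mount under test (the 64 MB file keeps the example quick; use 1 GB or more for real runs):

```shell
# Sequential write pass; fdatasync forces data to disk before dd returns
target="${TARGET_DIR:-/tmp}/dd-bench"
dd if=/dev/zero of="$target" bs=1M count=64 conv=fdatasync status=none

# Drop the page cache so the read pass hits the disk, not RAM
# (needs root; skipped or ignored otherwise)
if [ -w /proc/sys/vm/drop_caches ]; then
  sync && echo 3 > /proc/sys/vm/drop_caches || true
fi

# Sequential read pass, then clean up
dd if="$target" of=/dev/null bs=1M status=none
size=$(stat -c %s "$target")
rm -f "$target"
echo "test file was ${size} bytes"
```

Add `status=progress` back to the dd calls if you want live throughput numbers while it runs.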
IOzone for Detailed Analysis
IOzone tests a wider range of I/O patterns and record sizes, producing detailed reports useful for comparing storage configurations:
# Install IOzone
apt install -y iozone3
# Full automatic test with varied record and file sizes
iozone -a -s 1g -r 4k -r 64k -r 1m -i 0 -i 1 -i 2 \
-f /mnt/test-storage/iozone-test \
-b /tmp/iozone-results.xls
# Targeted test: random read/write, 4 parallel threads
# (throughput mode -t expects one file per thread, named via -F)
iozone -i 0 -i 2 -s 512m -r 4k -t 4 \
    -F /mnt/test-storage/ioz1 /mnt/test-storage/ioz2 \
       /mnt/test-storage/ioz3 /mnt/test-storage/ioz4
IOzone outputs results in KB/s across different file and record sizes. Import the Excel file into a spreadsheet for easy comparison across storage backends.
Comparing Storage Backends
Run the same fio tests on each storage type and record the results. Here is what you should typically expect:
# Typical baseline results (your numbers will vary):
#
# Storage Type | Rand Read IOPS | Rand Write IOPS | Seq Write MB/s
# -----------------+----------------+-----------------+---------------
# Local NVMe | 200,000+ | 150,000+ | 2,000+
# Local SSD (SATA) | 40,000-90,000 | 20,000-50,000 | 400-550
# LVM-Thin (SSD) | 35,000-80,000 | 18,000-45,000 | 380-520
# Ceph (3x NVMe) | 50,000-120,000 | 30,000-80,000 | 800-1,500
# Ceph (3x SSD) | 15,000-40,000 | 8,000-20,000 | 300-500
# NFS (1Gbps) | 2,000-5,000 | 1,000-3,000 | 100-115
# NFS (10Gbps) | 5,000-15,000 | 3,000-8,000 | 500-1,000
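A sanity check on the NFS rows: sequential throughput over NFS is bounded by the network link before the disks ever matter. Back-of-envelope, assuming roughly 10% TCP/NFS protocol overhead (the overhead figure is an estimate, not a measured constant):

```shell
# 1 Gbps link: divide by 8 for bytes/s, subtract ~10% protocol overhead
link_mbit=1000
raw=$((link_mbit / 8))         # 125 MB/s theoretical line rate
usable=$((raw * 90 / 100))     # ~112 MB/s in practice
echo "1 Gbps link: raw=${raw} MB/s, usable about ${usable} MB/s"
```

That is why the 1 Gbps NFS row tops out near 115 MB/s no matter how fast the NAS disks are, and why moving to 10 Gbps shifts the bottleneck back to the storage itself.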
Interpreting the Results
Focus on the metrics that matter for your workload:
- Random IOPS – Critical for databases, mail servers, and any workload with many small reads/writes. If your random IOPS are low, consider NVMe drives or a faster Ceph pool.
- Sequential throughput – Important for backups, video streaming, and large file operations. NFS over 10Gbps or local drives usually shine here.
- Latency (avg and p99) – fio reports this in the output. Average latency under 1ms is good for SSDs; p99 latency spikes indicate inconsistency.
- CPU usage during I/O – Ceph and ZFS can consume significant CPU. Monitor with top or htop during benchmarks.
# Extract latency stats from fio output - look for:
# lat (usec): min=42, max=12500, avg=185.20, stdev=95.30
# clat percentiles (usec):
# | 1.00th=[ 78], 5.00th=[ 97], 50.00th=[ 159]
# | 95.00th=[ 330], 99.00th=[ 545], 99.99th=[ 3818]
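Those latency lines can be mined automatically. A sketch that pulls the average and 99th-percentile completion latency out of fio's human-readable output with awk; the here-doc stands in for a saved log file, and it grabs the first latency line it finds:

```shell
# Sample lines from the fio output above; in practice use:
# fio_log=$(cat /path/to/fio-output.log)
fio_log=$(cat <<'EOF'
  lat (usec): min=42, max=12500, avg=185.20, stdev=95.30
  clat percentiles (usec):
   |  1.00th=[   78],  5.00th=[   97], 50.00th=[  159]
   | 95.00th=[  330], 99.00th=[  545], 99.99th=[ 3818]
EOF
)

# Average latency: take the value after "avg=" on the first lat line
avg=$(printf '%s\n' "$fio_log" \
  | awk -F'avg=' '/lat \(usec\)/ {split($2, a, ","); print a[1]; exit}')

# p99 latency: strip everything around the bracketed 99.00th value
p99=$(printf '%s\n' "$fio_log" \
  | awk '/99\.00th/ {sub(/.*99\.00th=\[ */, ""); sub(/\].*/, ""); print; exit}')

echo "avg=${avg}us p99=${p99}us"
```

Handy when you are comparing a dozen runs across backends and only care about those two numbers.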
Practical Tips
- Always test on the actual storage path your VMs will use, not just the host filesystem.
- Run benchmarks during off-peak hours since active VMs will skew results.
- Test with and without cache (--direct=1 in fio bypasses the OS page cache for honest results).
- Keep a record of your results. When you notice performance degradation later, you can compare against your baseline to pinpoint the issue. Tools like ProxmoxR help you spot unusual I/O wait patterns from your phone, giving you an early warning before users start complaining.
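For the record-keeping tip, even a flat CSV is enough. A sketch with placeholder IOPS values (substitute the numbers fio actually reports; the file path is an arbitrary choice):

```shell
# Append each run's headline numbers to a dated baseline CSV
baseline="${BASELINE_FILE:-/tmp/storage-baseline.csv}"
if [ ! -f "$baseline" ]; then
  echo "date,storage,rand_read_iops,rand_write_iops" > "$baseline"
fi
# 45000/22000 are illustrative SATA-SSD-range values, not measurements
echo "$(date +%Y-%m-%d),local-ssd,45000,22000" >> "$baseline"
tail -n 1 "$baseline"
```

Six months later, a quick grep over this file tells you whether today's sluggish VM is a real regression or just how that pool has always performed.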
Storage performance is the foundation of a responsive Proxmox environment. Benchmark early, benchmark after every change, and let the numbers guide your architecture decisions.
Take Proxmox management mobile
All the features discussed in this guide — accessible from your phone with ProxmoxR. Real-time monitoring, power control, firewall management, and more.