Proxmox Storage Full: How to Free Up Disk Space
Fix Proxmox storage full issues by cleaning old backups, removing orphaned snapshots, vacuuming journal logs, clearing /var/log, and extending thin pools.
Running out of disk space on a Proxmox node can cause VM failures, backup job errors, and even prevent the web interface from loading. The root filesystem, backup storage, and VM disk storage can all fill up independently, each with different symptoms. This guide covers systematic approaches to diagnosing and resolving storage issues on your Proxmox host.
Diagnosing the Problem
Start by identifying which filesystems are full and what is consuming the most space:
# Check all mounted filesystems
df -h
# Show disk usage for key directories
du -sh /var/log /var/lib/vz /var/cache /tmp
# Check for large files across the system
find / -xdev -type f -size +100M -exec ls -lh {} \; 2>/dev/null | sort -k5 -h
# For ZFS, check pool usage
zpool list
zfs list -o name,used,avail,refer
The most common culprits are old backups in /var/lib/vz/dump/, accumulated system logs in /var/log/, orphaned VM snapshots, and journal logs that have grown unchecked.
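The du and find commands above can be wrapped into a small helper that prints the largest immediate subdirectories of a given path, which makes narrowing down the culprit faster. This is a sketch; top_dirs is a name chosen here, not a Proxmox tool, and paths containing spaces are not handled:

```shell
#!/bin/sh
# Sketch: print the N largest immediate subdirectories of a path.
top_dirs() {
    path="${1:-/}"
    count="${2:-10}"
    # -x stays on one filesystem; -d1 limits output to direct children;
    # -k reports sizes in KiB so sort -rn orders them numerically
    du -x -d1 -k "$path" 2>/dev/null | sort -rn | head -n "$((count + 1))" |
        awk '{printf "%10.1f MiB  %s\n", $1/1024, $2}'
}

# Example: the five biggest directories directly under /var
top_dirs /var 5
```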
Cleaning Up Old Backups
Backup files are often the single largest consumer of disk space. Proxmox stores vzdump backups in the configured storage location, with /var/lib/vz/dump/ being the default.
# List all backup files sorted by size
ls -lhS /var/lib/vz/dump/
# See how much space backups consume in total
du -sh /var/lib/vz/dump/
# Remove backups older than 30 days
find /var/lib/vz/dump/ -name "vzdump-*" -mtime +30 -delete
# Or remove backups for a specific VM (keep only the latest 2)
ls -t /var/lib/vz/dump/vzdump-qemu-100-* | tail -n +3 | xargs rm -f
# Clean up associated log and notes files
find /var/lib/vz/dump/ -name "*.log" -mtime +30 -delete
find /var/lib/vz/dump/ -name "*.notes" -mtime +30 -delete
To prevent this problem from recurring, configure retention policies in your backup job. Under Datacenter > Backup, edit each job and set the "Keep Last" value to limit how many backups are retained per VM.
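The per-VM one-liner above can be generalized into a script that keeps only the newest N archives for every VM found in the dump directory. This is a sketch that assumes the default vzdump naming convention (vzdump-qemu-VMID-TIMESTAMP.vma.* and vzdump-lxc-VMID-TIMESTAMP.tar.*); prune_backups is an illustrative name, and .log/.notes files are left for the age-based cleanup shown earlier:

```shell
#!/bin/sh
# Sketch: keep only the newest $2 backup archives per VM under $1.
prune_backups() {
    dump_dir="$1"
    keep="$2"
    # Derive the set of VMIDs from the vzdump file naming convention
    for vmid in $(ls "$dump_dir"/vzdump-* 2>/dev/null |
                  sed 's/.*vzdump-[a-z]*-\([0-9]*\)-.*/\1/' | sort -u); do
        # Newest first; delete everything past the first $keep archives
        ls -t "$dump_dir"/vzdump-qemu-"$vmid"-*.vma* \
              "$dump_dir"/vzdump-lxc-"$vmid"-*.tar* 2>/dev/null |
            tail -n +"$((keep + 1))" | xargs -r rm -f
    done
}

# Example: keep the latest 2 archives per VM in the default dump directory
prune_backups /var/lib/vz/dump 2
```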
Removing Orphaned Snapshots
VM snapshots consume increasing amounts of space over time, especially if the guest is write-heavy. Orphaned snapshots from failed operations can also linger.
# List snapshots for a specific VM
qm listsnapshot 100
# List snapshots for all VMs
for vmid in $(qm list | awk 'NR>1 {print $1}'); do
  echo "=== VM $vmid ==="
  qm listsnapshot $vmid
done
# Remove a specific snapshot
qm delsnapshot 100 snap_2026-01-15
# For LXC containers
pct listsnapshot 200
pct delsnapshot 200 old_snapshot
If a snapshot deletion hangs, the VM may be locked. Run qm unlock 100 (substituting the correct VMID) to clear the lock, then retry the deletion.
Cleaning System Logs
System logs can accumulate significantly, especially on busy nodes with many VMs:
# Check journal disk usage
journalctl --disk-usage
# Vacuum journal logs — keep only the last 3 days
journalctl --vacuum-time=3d
# Or limit journal size to 500MB
journalctl --vacuum-size=500M
# Set a permanent journal size limit
# Edit /etc/systemd/journald.conf:
# SystemMaxUse=500M
# Then apply it: systemctl restart systemd-journald
# Clean compressed old log files
find /var/log -name "*.gz" -delete
find /var/log -name "*.1" -delete
# Truncate large active log files (safer than deleting)
truncate -s 0 /var/log/syslog
truncate -s 0 /var/log/daemon.log
# Restart rsyslog after truncating
systemctl restart rsyslog
Cleaning Package Cache
Downloaded package files accumulate in the apt cache:
# Check cache size
du -sh /var/cache/apt/archives/
# Clean the package cache
apt clean
# Remove old package versions no longer needed
apt autoremove -y
Extending LVM Thin Pools
If your VM storage uses LVM-thin (the Proxmox default for local-lvm), you may need to extend the thin pool rather than just cleaning files:
# Check thin pool usage
lvs -a -o +devices,seg_pe_ranges
# Check if the volume group has free space
vgdisplay pve | grep Free
# Extend the thin pool data LV by 20GB
lvextend -L +20G pve/data
# If the Meta% column in lvs is nearly full, grow metadata too
lvextend --poolmetadatasize +1G pve/data
# If the VG has no free space, shrink the root LV first
# WARNING: this requires careful planning — back up first
# Reduce root filesystem (if ext4, must be unmounted or use live USB)
# Then shrink the LV:
# lvreduce -L 30G pve/root
# Then extend thin pool:
# lvextend -l +100%FREE pve/data
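If the volume group usually has free space, LVM can also grow the thin pool automatically before it fills. A minimal sketch of the relevant settings, assuming the stock /etc/lvm/lvm.conf layout; the 80/20 values are examples to tune for your setup:

```
# /etc/lvm/lvm.conf — activation section
activation {
    # Autoextend the thin pool once it reaches 80% full...
    thin_pool_autoextend_threshold = 80
    # ...growing it by 20% of its current size each time
    thin_pool_autoextend_percent = 20
}
```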
For ZFS-based storage, space management is different:
# Check ZFS pool space
zpool list
zfs list
# Find old snapshots consuming space (-H skips the header so sort works)
zfs list -t snapshot -H -o name,used,refer | sort -k2 -h
# Preview how much space a destroy would free (dry run)
zfs destroy -nv rpool/data/vm-100-disk-0@autosnap_2026-01-01
# Remove a specific ZFS snapshot
zfs destroy rpool/data/vm-100-disk-0@autosnap_2026-01-01
Preventing Future Issues
- Set backup retention — Configure "Keep Last" on every backup job
- Monitor disk usage — Set up alerts before storage hits 90%. Tools like ProxmoxR can push storage alerts to your phone so you can act before a full disk causes downtime.
- Schedule log rotation — Configure journald size limits in /etc/systemd/journald.conf
- Audit snapshots weekly — Snapshots are meant to be temporary; delete them after verifying your changes work
- Use separate storage — Keep backups on a different storage target than your VM disks
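The monitoring bullet above can be sketched as a cron-able script that flags any filesystem over a usage threshold. The 90% limit and the check_disk_usage name are placeholders; wire the output to mail, a webhook, or your monitoring tool of choice:

```shell
#!/bin/sh
# Sketch: warn when any mounted filesystem exceeds THRESHOLD percent full.
THRESHOLD=90

check_disk_usage() {
    # df -P guarantees one line per filesystem; $5 is Use%, $6 the mount point
    df -P | awk -v limit="$1" 'NR > 1 {
        gsub(/%/, "", $5)
        if ($5 + 0 >= limit) printf "WARNING: %s at %s%% (%s)\n", $6, $5, $1
    }'
}

check_disk_usage "$THRESHOLD"
```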
A full disk is one of the most preventable problems in Proxmox. With retention policies, log rotation, and basic monitoring, you can avoid this issue entirely.
Take Proxmox management mobile
All the features discussed in this guide are accessible from your phone with ProxmoxR: real-time monitoring, power control, firewall management, and more.