Proxmox LVM-Thin Storage: Thin Provisioning, Over-Provisioning, and Monitoring
Learn how to create and manage LVM thin pools in Proxmox VE for efficient disk space usage, including over-provisioning strategies, auto-extend configuration, and usage monitoring.
What Is LVM Thin Provisioning?
LVM thin provisioning lets you allocate more virtual disk space to VMs than you physically have available. Instead of reserving the full disk size up front (thick provisioning), a thin pool consumes physical storage only as data is actually written. A VM configured with a 100 GB disk that has written only 15 GB occupies just 15 GB in the thin pool. This dramatically improves storage efficiency, especially when running many VMs that never use their full allocation.
Proxmox VE uses LVM-thin as its default local storage type during installation. The installer creates a thin pool called data inside the volume group pve, and this is where VM disks and container volumes are stored.
Understanding the Default Layout
After a standard Proxmox installation, your storage layout typically looks like this:
pvs
PV VG Fmt Attr PSize PFree
/dev/sda3 pve lvm2 a-- 930.00g 16.00g
lvs
LV VG Attr LSize Pool Origin Data%
data pve twi-a-t--- 800.00g 5.42
root pve -wi-ao---- 96.00g
swap pve -wi-ao---- 8.00g
The data volume is your thin pool. The Data% column shows what percentage of the pool's physical space is actually used.
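To turn that percentage into an absolute figure, multiply it by the pool size. A quick sketch using the 800 GiB pool and 5.42% shown above:

```shell
# 5.42% of the 800 GiB thin pool shown in the lvs output above
awk 'BEGIN { printf "%.1f GiB used\n", 800 * 5.42 / 100 }'
# prints: 43.4 GiB used
```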
Creating a New Thin Pool
If you have additional disks, you can create new thin pools. For example, adding an NVMe drive for high-performance VM storage:
# Create physical volume
pvcreate /dev/nvme0n1
# Create volume group
vgcreate vg-nvme /dev/nvme0n1
# Create thin pool using 95% of VG space (leave some for metadata)
lvcreate -l 95%VG --type thin-pool -n thinpool vg-nvme
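The 95%VG figure deliberately leaves part of the volume group unallocated. As a rough illustration of what that reserve looks like (the 1024 GiB VG size here is an assumption, not from the example above):

```shell
# Illustrative only: on a hypothetical 1024 GiB volume group, 95%VG
# leaves about 51 GiB free for pool metadata and auto-extend headroom
awk 'BEGIN { vg = 1024; printf "pool: %.0f GiB, reserve: %.0f GiB\n", vg * 0.95, vg * 0.05 }'
# prints: pool: 973 GiB, reserve: 51 GiB
```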
Register the thin pool in Proxmox by going to Datacenter > Storage > Add > LVM-Thin:
- ID: nvme-thin
- Volume Group: vg-nvme
- Thin Pool: thinpool
- Content: Disk image, Container
Or via the command line:
pvesm add lvmthin nvme-thin --vgname vg-nvme --thinpool thinpool --content images,rootdir
Over-Provisioning Strategies
Thin provisioning enables over-provisioning, where the sum of all allocated VM disks exceeds the physical pool size. This is common and safe when done thoughtfully. For example, you might have a 500 GB thin pool with 20 VMs allocated 50 GB each (1 TB total), knowing that actual usage typically stays well below 50%.
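A useful number to track here is the over-commit ratio: total virtual size divided by physical pool size. A minimal sketch of computing it, where the `overcommit_ratio` helper is hypothetical and the `lvs` invocation at the end assumes the default `pve/data` pool:

```shell
# Hypothetical helper: over-commit ratio = sum of virtual thin-LV
# sizes divided by the physical pool size (all values in MiB)
overcommit_ratio() {
  # $1 = pool size in MiB; stdin = one thin-LV virtual size (MiB) per line
  awk -v pool="$1" '{ sum += $1 } END { printf "%.2f\n", sum / pool }'
}

# On a live node it could be fed from lvs (not run here):
# lvs --noheadings --units m --nosuffix -o lv_size -S 'pool_lv=data' pve \
#   | overcommit_ratio "$(lvs --noheadings --units m --nosuffix -o lv_size pve/data)"
```

For the scenario above (20 VMs at 50 GB each on a 500 GB pool), the ratio comes out to 2.00, i.e. 2:1 over-commitment.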
Guidelines for safe over-provisioning:
- Monitor continuously. Set up alerts when pool usage exceeds 75-80%.
- Know your growth rate. Track how quickly VMs consume new storage.
- Keep emergency reserve. Always have a plan to add physical capacity or migrate VMs before the pool fills completely.
- Avoid 100% fill. If a thin pool fills completely, all VMs using it will freeze with I/O errors.
Auto-Extend Configuration
LVM can automatically extend the thin pool when it gets full, provided free space exists in the volume group. Configure this in /etc/lvm/lvm.conf:
thin_pool_autoextend_threshold = 80
thin_pool_autoextend_percent = 20
This tells LVM to automatically grow the thin pool by 20% when it reaches 80% capacity. The dmeventd daemon handles this, so make sure it is running:
# dmeventd is typically socket-activated, so it cannot be enabled directly on all distributions
systemctl status dm-event.socket
systemctl status dm-event.service
Note that auto-extend only works if there is free space in the volume group. If the VG is fully allocated to the thin pool, there is nothing to extend into. That is why the earlier step used 95%VG instead of 100%VG.
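The arithmetic behind those two settings can be sketched for an assumed 800 GiB pool (the size is illustrative, matching the default layout earlier):

```shell
# Auto-extend sketch: with threshold=80 and percent=20, an 800 GiB
# pool is extended once usage crosses 80%, growing by 20% of its size
size=800; threshold=80; percent=20
trigger=$(( size * threshold / 100 ))       # usage (GiB) that fires the extend
new_size=$(( size + size * percent / 100 )) # pool size (GiB) after one extend
echo "extend at ${trigger} GiB used, new size ${new_size} GiB"
# prints: extend at 640 GiB used, new size 960 GiB
```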
Monitoring Thin Pool Usage
Regular monitoring is essential when running thin-provisioned storage. Check pool usage with:
# Overview of all LVs with thin pool data usage
lvs -o +lv_size,data_percent,metadata_percent
# Detailed thin pool status
lvs -a -o name,size,data_percent,metadata_percent,pool_lv vg-nvme
Sample output:
LV LSize Data% Meta% Pool
thinpool 465.00g 42.31 8.15
vm-100-disk-0 50.00g thinpool
vm-101-disk-0 50.00g thinpool
vm-102-disk-0 100.00g thinpool
Pay attention to both Data% and Meta%. The metadata pool can fill up independently if you have many thin volumes or snapshots. If metadata reaches 100%, the pool becomes read-only.
Set up a simple cron-based alert by adding this to /etc/cron.d/thin-pool-alert:
*/15 * * * * root /usr/sbin/lvs --noheadings -o data_percent pve/data | awk '{if ($1+0 > 85) system("echo \"Thin pool at "$1"\%\" | mail -s \"Storage Alert\" admin@example.com")}'
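The same check reads more easily as a standalone script. This is a sketch, not the article's one-liner verbatim; the threshold, pool name, and `check_usage` helper are assumptions you can adjust:

```shell
# Standalone version of the cron alert; threshold and pool are assumptions
THRESHOLD=85
POOL="pve/data"

check_usage() {
  # $1 = data_percent value as reported by lvs; prints an alert when over threshold
  echo "$1" | awk -v t="$THRESHOLD" -v p="$POOL" \
    '{ u = $1 + 0; if (u > t) printf "Thin pool %s at %s%%\n", p, u }'
}

# On a live node (not run here):
# check_usage "$(lvs --noheadings -o data_percent "$POOL")"
```

Piping the output to `mail` (as the cron line does) turns it into an alert; writing it this way avoids cron's special handling of `%` characters entirely.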
Reclaiming Space
Deleting files inside a VM does not automatically free space in the thin pool because the thin pool does not know which blocks the guest filesystem considers free. To reclaim space, issue a discard/TRIM from inside the VM:
# Linux guest
fstrim -av
# Or enable continuous discard in /etc/fstab
/dev/vda1 / ext4 defaults,discard 0 1
Ensure the VM disk is configured with the discard option enabled in Proxmox (set under Hardware > Disk > Advanced > Discard).
LVM-thin is the backbone of local storage in most Proxmox deployments. For administrators managing thin pools across multiple nodes, ProxmoxR provides at-a-glance storage utilization data so you can catch capacity issues before they impact your workloads.
Take Proxmox management mobile
All the features discussed in this guide — accessible from your phone with ProxmoxR. Real-time monitoring, power control, firewall management, and more.