Understanding Proxmox Storage Types: Which One to Use
Guide to all Proxmox VE storage types including local, NFS, CIFS, iSCSI, Ceph, ZFS, and LVM. Learn when to use each type.
Why Storage Choice Matters in Proxmox
Proxmox VE supports a wide range of storage backends, and the one you choose directly impacts performance, features, and how you manage your virtual machines and containers. Picking the wrong storage type can mean losing snapshot support, suffering slow backups, or taking on unnecessary complexity. This guide breaks down every storage type so you can make the right decision for your environment.
Storage Types at a Glance
The following table summarizes all major Proxmox storage types, what content they support, and their key characteristics:
| Storage Type | Shared? | Snapshots | VM Disks | Containers | ISOs/Backups |
|---|---|---|---|---|---|
| Local Directory | No | qcow2 only | Yes | Yes | Yes |
| LVM | No* | No | Yes | Yes | No |
| LVM-Thin | No | Yes | Yes | Yes | No |
| ZFS | No | Yes | Yes | Yes | No** |
| NFS | Yes | qcow2 only | Yes | Yes | Yes |
| CIFS/SMB | Yes | qcow2 only | Yes | Yes | Yes |
| iSCSI | Yes | No | Yes | No | No |
| Ceph RBD | Yes | Yes | Yes | Yes | No |
| CephFS | Yes | No | No | No | Yes |
| GlusterFS | Yes | qcow2 only | Yes | No | Yes |
* LVM can be shared via iSCSI but is not shared by default. ** ZFS stores ISOs/backups on a separate directory-type storage backed by a ZFS dataset.
Local Directory Storage
The simplest storage type. Proxmox creates a /var/lib/vz directory storage by default that holds VM disk images (qcow2, raw, or vmdk files), container templates, ISOs, and backups. It works on top of any Linux filesystem.
When to use it: Single-node setups, storing ISOs and backup files, or when you need maximum compatibility. The qcow2 format supports snapshots and thin provisioning, though with some performance overhead compared to raw block storage.
# Add a directory storage via CLI
pvesm add dir my-storage --path /mnt/data --content images,iso,backup,rootdir
LVM Storage
LVM provides raw block devices to VMs, offering near-native disk performance. However, standard LVM in Proxmox supports neither snapshots nor thin provisioning, so the full disk space is allocated upfront.
When to use it: Rarely as a standalone choice today. LVM-Thin is almost always a better option unless you specifically need thick provisioning for predictable I/O performance.
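If you do need plain LVM, for example on top of a shared iSCSI LUN, registering an existing volume group is a one-liner. A minimal sketch (the volume group name is a placeholder):

```shell
# Assumes a volume group named "vg-data" already exists (placeholder name)
pvesm add lvm lvm-storage --vgname vg-data --content images,rootdir
```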
LVM-Thin Storage
LVM-Thin adds thin provisioning and snapshot support to LVM. Disk space is allocated on demand, so a 100 GB virtual disk only consumes actual space as data is written. This is the default storage type for the Proxmox installer when you choose ext4 or xfs.
# Create a thin pool manually
lvcreate -L 400G -T pve/data
# Add it as Proxmox storage
pvesm add lvmthin local-lvm --vgname pve --thinpool data --content images,rootdir
When to use it: Single-node setups where you want snapshot support and efficient space usage without the memory overhead of ZFS. It is a solid, low-maintenance default choice.
ZFS Storage
ZFS is a combined filesystem and volume manager offering checksumming, compression, snapshots, and replication. Proxmox integrates deeply with ZFS, and it is the recommended choice for users who want maximum data integrity.
When to use it: When data integrity is critical, you have sufficient RAM (at least 1 GB per TB of storage), and you want features like compression and built-in replication. See our complete ZFS guide for detailed setup instructions.
Trade-offs: Higher memory usage than LVM-Thin and slightly more complex administration, though the data protection features are well worth it for most users.
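As a rough sketch, creating a mirrored pool and registering it with Proxmox looks like this (the pool name and device paths are examples, adjust them for your hardware):

```shell
# Create a mirrored ZFS pool from two disks
zpool create tank mirror /dev/sdb /dev/sdc

# Register it with Proxmox; --sparse 1 enables thin provisioning
pvesm add zfspool local-zfs --pool tank --content images,rootdir --sparse 1
```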
NFS Storage
NFS (Network File System) is the most common shared storage for Proxmox clusters. It provides a file-level storage interface accessible by all nodes in a cluster, enabling live migration of VMs.
# Add NFS storage
pvesm add nfs nas-storage --server 192.168.1.50 --export /volume1/proxmox \
--content images,iso,backup,rootdir
When to use it: Multi-node clusters with a NAS (Synology, TrueNAS, etc.), storing ISOs and backups on shared storage, or when you need a simple shared storage solution. Performance depends heavily on your network speed and NAS hardware.
CIFS/SMB Storage
CIFS (Common Internet File System) is the legacy name for SMB, the Windows-native file sharing protocol. In Proxmox it works much like NFS but uses SMB instead.
When to use it: When your NAS or file server only supports SMB, or in mixed Windows/Linux environments. NFS generally offers better performance and lower overhead for Linux-to-Linux communication.
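Adding a CIFS share follows the same pattern as NFS; the server address, share name, and credentials below are placeholders:

```shell
# Server, share, and credentials are placeholders -- substitute your own
pvesm add cifs smb-storage --server 192.168.1.60 --share proxmox \
 --username backupuser --password 'changeme' --content images,iso,backup
```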
iSCSI Storage
iSCSI presents remote block devices over the network. Because it operates at the block level, it typically offers better performance for VM disk images than file-level protocols like NFS.
When to use it: Enterprise environments with dedicated SANs, or when you need block-level shared storage with maximum performance. Configuration is more complex than NFS and typically requires LVM on top for multi-VM use.
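A minimal example of attaching an iSCSI target (the portal address and IQN are placeholders for your SAN):

```shell
# Portal IP and target IQN are placeholders
pvesm add iscsi san-storage --portal 192.168.1.70 \
 --target iqn.2024-01.com.example:storage.target1 --content none

# Typical next step: create an LVM volume group on the exported LUN
# and register it as LVM storage on top (see the LVM section above)
```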
Ceph RBD
Ceph is a distributed storage system that Proxmox integrates natively. Ceph RBD (RADOS Block Device) provides highly available, replicated block storage across multiple nodes with no single point of failure.
# After Ceph is installed and configured, add RBD storage
pvesm add rbd ceph-storage --monhost "10.0.0.1 10.0.0.2 10.0.0.3" \
--pool vm-pool --content images,rootdir --username admin
When to use it: Clusters with 3 or more nodes where you need fully redundant shared storage without a dedicated NAS. Ceph eliminates the single point of failure that NFS introduces, but it requires careful planning and dedicated network bandwidth.
CephFS
CephFS is the file-level interface to a Ceph cluster. It provides a POSIX-compatible filesystem suited to ISOs, backups, snippets, and container templates alongside your Ceph RBD block storage; Proxmox does not place VM disks or container root volumes on CephFS.
When to use it: When you already run Ceph for RBD and need shared file storage for ISOs, snippets, and backups without adding a separate NFS server.
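On a hyperconverged cluster where Proxmox manages Ceph itself, enabling CephFS storage can be as simple as the following (the storage name is an example; monitor addresses are detected automatically in this setup):

```shell
# Proxmox-managed Ceph: no --monhost needed
pvesm add cephfs cephfs-storage --content iso,backup,vztmpl
```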
GlusterFS
GlusterFS is another distributed filesystem option, though it has seen less adoption in the Proxmox community compared to Ceph. It aggregates storage from multiple servers into a single namespace.
When to use it: Existing GlusterFS deployments that you want to integrate with Proxmox. For new setups, Ceph is generally the preferred distributed storage solution.
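For an existing Gluster volume, registration might look like this (server addresses and volume name are placeholders):

```shell
# --server2 is an optional backup server for failover
pvesm add glusterfs gluster-storage --server 192.168.1.80 \
 --server2 192.168.1.81 --volume gv0 --content images,iso,backup
```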
Making the Right Choice
For most users, the decision tree is straightforward:
- Single node, simple setup: LVM-Thin for VM disks, local directory for ISOs and backups
- Single node, data integrity focus: ZFS for everything
- Small cluster (2-3 nodes) with NAS: NFS for shared storage, local LVM-Thin or ZFS for performance-sensitive VMs
- Large cluster (3+ nodes), no external NAS: Ceph RBD for VM disks, CephFS for file storage
- Enterprise with SAN: iSCSI for VM disks, NFS for file storage
You can add and manage storage backends directly from the Proxmox web interface, the command line, or remotely using mobile tools like ProxmoxR to keep tabs on storage usage and health across all your nodes. Whatever you choose, make sure to test backup and restore procedures before relying on any storage configuration in production.
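From the command line, a quick way to check usage and availability across your configured backends:

```shell
# Overview of all configured storages: status, capacity, usage
pvesm status

# List the volumes on a single storage (name is an example)
pvesm list local-lvm
```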
Take Proxmox management mobile
All the features discussed in this guide — accessible from your phone with ProxmoxR. Real-time monitoring, power control, firewall management, and more.