Storage

Proxmox iSCSI Storage: Target Setup, LUN Configuration, and Multipath

Configure iSCSI storage for Proxmox VE, including targetcli LUN creation, iscsiadm discovery, multipath I/O, and LVM over iSCSI for flexible VM disk management.


iSCSI Storage for Proxmox

iSCSI (Internet Small Computer Systems Interface) presents block-level storage over standard TCP/IP networks, giving Proxmox VE access to remote disks as if they were locally attached. Unlike NFS, which is file-level, iSCSI provides raw block devices, making it well-suited for high-performance VM workloads. It is commonly used with dedicated SAN appliances, but you can also build a software-defined iSCSI target using a standard Linux server.

Setting Up the iSCSI Target Server

On a Debian or Ubuntu server that will serve as your iSCSI target, install targetcli:

apt update
apt install targetcli-fb -y

Create a logical volume or file-backed storage object to export as a LUN. Using LVM is recommended for production:

# Create a logical volume for the LUN
lvcreate -L 100G -n iscsi-lun0 vg-storage

Now configure the target using targetcli:

targetcli

Inside the targetcli shell, run:

/> backstores/block create lun0 /dev/vg-storage/iscsi-lun0
/> iscsi/ create iqn.2026-03.com.lab:storage.target01
/> iscsi/iqn.2026-03.com.lab:storage.target01/tpg1/luns/ create /backstores/block/lun0
/> iscsi/iqn.2026-03.com.lab:storage.target01/tpg1/acls/ create iqn.2026-03.com.lab:pve1
/> iscsi/iqn.2026-03.com.lab:storage.target01/tpg1/acls/ create iqn.2026-03.com.lab:pve2
/> iscsi/iqn.2026-03.com.lab:storage.target01/tpg1/acls/ create iqn.2026-03.com.lab:pve3
/> saveconfig
/> exit

The ACLs restrict access to only your Proxmox nodes. Each node must use the matching IQN (iSCSI Qualified Name) when connecting.
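Before leaving the targetcli shell (or later, by re-entering it), it is worth verifying the layout you just built. A quick check might look like this:

```shell
# Print the whole target configuration tree. The block backstore should
# appear under /backstores/block, the LUN under tpg1/luns, and one ACL
# per Proxmox node IQN under tpg1/acls.
targetcli ls
```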

Enable and start the target service:

systemctl enable --now rtslib-fb-targetctl

Configuring the iSCSI Initiator on Proxmox

On each Proxmox node, install the initiator tools (usually pre-installed):

apt install open-iscsi -y

Set the initiator name to match the ACL you created on the target. Edit /etc/iscsi/initiatorname.iscsi:

InitiatorName=iqn.2026-03.com.lab:pve1

Restart the iSCSI service:

systemctl restart iscsid open-iscsi

Discover available targets on the storage server:

iscsiadm -m discovery -t sendtargets -p 192.168.1.50

Log in to the target:

iscsiadm -m node --targetname iqn.2026-03.com.lab:storage.target01 --portal 192.168.1.50 --login

Make the login persistent across reboots:

iscsiadm -m node --targetname iqn.2026-03.com.lab:storage.target01 --portal 192.168.1.50 -o update -n node.startup -v automatic
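To confirm the login worked, list the active sessions; the target IQN and portal address should both appear:

```shell
# Summary of active iSCSI sessions
iscsiadm -m session

# Detailed view (connection state, negotiated parameters, and which
# /dev/sdX device the session maps to)
iscsiadm -m session -P 3
```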

Verify the new block device appeared:

lsblk
fdisk -l /dev/sdb

Adding iSCSI Storage in Proxmox

In the web UI, go to Datacenter > Storage > Add > iSCSI:

  • ID: iscsi-san
  • Portal: 192.168.1.50
  • Target: select the discovered target IQN
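The same storage entry can be created from the command line with pvesm. This is a sketch using the values from the steps above; substitute the target IQN that discovery actually returned:

```shell
# Add the iSCSI target as a Proxmox storage entry.
# --content none prevents using raw LUNs directly, which is the usual
# choice when you plan to layer LVM on top.
pvesm add iscsi iscsi-san \
    --portal 192.168.1.50 \
    --target iqn.2026-03.com.lab:storage.target01 \
    --content none
```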

Raw iSCSI storage in Proxmox exposes each LUN as a single disk, so one LUN maps to one VM disk. For more flexibility (multiple VM disks per LUN), add LVM on top.

LVM Over iSCSI

Create a PV and VG on the iSCSI device from any one node:

pvcreate /dev/sdb
vgcreate vg-iscsi /dev/sdb

Then add an LVM storage entry in Proxmox referencing this VG. Go to Datacenter > Storage > Add > LVM:

  • ID: iscsi-lvm
  • Base Storage: iscsi-san
  • Volume Group: vg-iscsi
  • Content: Disk image, Container
  • Shared: checked (so all nodes can use it)
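The equivalent CLI command is a one-liner with pvesm (a sketch matching the UI values above):

```shell
# Register the volume group as shared LVM storage for VM disks
# and container root filesystems on all cluster nodes.
pvesm add lvm iscsi-lvm \
    --vgname vg-iscsi \
    --shared 1 \
    --content images,rootdir
```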

Proxmox will now carve a regular (thick) logical volume out of this VG for each VM disk. Note that LVM-thin cannot be used on shared iSCSI storage; plain LVM is required for safe concurrent access from multiple nodes. You still gain much more flexibility than whole-LUN allocation, since one LUN can hold many VM disks.

Multipath I/O

For redundancy and performance, configure multipath so your Proxmox nodes connect to the iSCSI target over two or more network paths. Install multipath tools:

apt install multipath-tools -y

Create or edit /etc/multipath.conf:

defaults {
    user_friendly_names yes
    find_multipaths     yes
    path_grouping_policy failover
    failback            immediate
    no_path_retry       5
}

blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    # Exclude the local boot disk; adjust if your system disk is not sda
    devnode "^sd[a]$"
}

Log in to the target from a second network path:

iscsiadm -m discovery -t sendtargets -p 10.10.10.50
iscsiadm -m node --targetname iqn.2026-03.com.lab:storage.target01 --portal 10.10.10.50 --login

Restart multipathd and verify:

systemctl restart multipathd
multipath -ll

You should see a multipath device (e.g., /dev/mapper/mpathX) with two active paths. Use this device instead of /dev/sdb for your LVM PV.
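If you configure multipath before building the LVM layer, point the physical volume at the mapper device from the start. The name mpatha below is illustrative; use whatever name multipath -ll reports on your system:

```shell
# Create the PV and VG on the multipath device rather than a single
# path like /dev/sdb, so I/O survives the loss of either network path.
pvcreate /dev/mapper/mpatha
vgcreate vg-iscsi /dev/mapper/mpatha
```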

Performance Considerations

  • Use a dedicated storage network (separate VLAN or physical NIC) for iSCSI traffic.
  • Enable jumbo frames (MTU 9000) on every hop of the iSCSI path (initiator NICs, switches, and target); a mismatched MTU performs worse than standard frames.
  • Tune node.session.queue_depth in /etc/iscsi/iscsid.conf for high-IOPS workloads.
  • Use CHAP authentication where network security is a concern; note that CHAP authenticates initiators but does not encrypt traffic.
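As a starting point, the relevant settings in /etc/iscsi/iscsid.conf look like this. The values and credentials are illustrative, not recommendations; benchmark with your own workload before committing to them:

```
# /etc/iscsi/iscsid.conf (excerpt)

# Outstanding commands allowed per LUN
node.session.queue_depth = 64
# Outstanding commands allowed per session
node.session.cmds_max = 256

# CHAP, if the target requires it (credentials must match the
# auth settings configured on the target's TPG)
node.session.auth.authmethod = CHAP
node.session.auth.username = pve-initiator
node.session.auth.password = your-secret-here
```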

iSCSI provides raw block-level access that file-based protocols such as NFS cannot, which is why it remains a common choice for shared VM storage across multiple Proxmox nodes.
