As data centers evolve toward scalable and highly available infrastructures, centralized storage becomes the backbone of every virtualization environment. Proxmox VE 9, built on a Debian 13 base, integrates tightly with iSCSI SAN storage, allowing multiple nodes to share block-level access to the same storage.
When combined with Multipath I/O (MPIO) and Proxmox High Availability (HA), you can achieve an enterprise-grade virtualization platform with both performance and redundancy — using standard Ethernet hardware.
In this comprehensive guide, we’ll walk you through:
Setting up iSCSI SAN storage on Proxmox VE 9
Configuring Multipath (MPIO) for redundancy
Enabling Proxmox High Availability using shared SAN storage
Reviewing pros and cons versus Ceph, NFS, and ZFS
1. Understanding iSCSI SAN in Proxmox VE
An iSCSI SAN (Storage Area Network) provides block-level access to centralized storage devices over IP networks.
Instead of using expensive Fibre Channel SANs, iSCSI runs over standard Ethernet, making it cost-effective and widely compatible.
A typical Proxmox iSCSI SAN setup includes:
2–5 Proxmox VE 9 nodes
1 or more SAN storage servers (TrueNAS, Synology, Dell EMC, etc.)
Dedicated storage network (10GbE or higher recommended)
Optional redundant switches or VLANs for multipathing
This design allows each node to access the same shared LUNs (Logical Unit Numbers) — essential for live migration and HA.
2. Prerequisites
Before starting, ensure you have:
Proxmox VE 9 fully updated (apt update && apt full-upgrade)
A configured iSCSI target on your SAN (with its IQN and LUNs)
Proper network connectivity (ideally a dedicated 10GbE iSCSI VLAN; a sample interface configuration follows this list)
Root access to both Proxmox and SAN systems
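For the dedicated storage network, a minimal sketch of a tagged iSCSI interface in /etc/network/interfaces might look like the following; the NIC name (ens19), VLAN ID (50), and addressing are placeholders for your own environment, and jumbo frames should only be enabled if every switch in the path supports them:
auto ens19.50
iface ens19.50 inet static
        address 10.10.50.11/24
        mtu 9000
        # dedicated iSCSI VLAN, no gateway: storage traffic stays on this segment
Apply the change with ifreload -a and confirm the SAN portal answers to ping from each node before continuing.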
3. Configure iSCSI SAN Storage on Proxmox VE 9
Step 1: Discover the iSCSI Target
Run the following command from your Proxmox node:
iscsiadm -m discovery -t sendtargets -p <SAN_Target_IP>
You’ll see an output like:
<target_ip>:3260,1 iqn.2025-10.com.san:target01
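The iscsiadm utility comes from the open-iscsi package, which is normally already present on a Proxmox VE installation. If the command is missing on a node, install it first:
apt install open-iscsi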
Step 2: Log In to the Target
iscsiadm -m node -T iqn.2025-10.com.san:target01 -p <SAN_Target_IP> --login
Enable automatic login at boot:
iscsiadm -m node -T iqn.2025-10.com.san:target01 -p <SAN_Target_IP> --op update -n node.startup -v automatic
Step 3: Verify the Connected Disk
After successful login:
lsblk
You should see a new device such as /dev/sdb representing your SAN LUN.
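To double-check which block device belongs to which target, you can also print the session details; print level 3 lists the attached disk for each session:
iscsiadm -m session -P 3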
Step 4: Add iSCSI Storage in Proxmox GUI
Navigate to Datacenter → Storage → Add → iSCSI
Enter:
ID: iscsi-san
Portal: SAN IP address
Target: Select from the discovered list
Click Add
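The same storage entry can also be created from the shell with pvesm; the ID iscsi-san and the IQN below simply reuse the example values from this guide:
pvesm add iscsi iscsi-san --portal <SAN_Target_IP> --target iqn.2025-10.com.san:target01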
Step 5: Create LVM on iSCSI Device
Once connected, create an LVM group to manage VM disks:
Go to Datacenter → Storage → Add → LVM
Choose:
Base storage: iSCSI device
Volume group name: vg_san_iscsi
Click Add
Now all nodes in the cluster can use this shared LVM-based iSCSI SAN storage.
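If you prefer the command line, a rough equivalent is shown below (run the pvcreate/vgcreate commands on one node only; /dev/sdb is the device discovered earlier and may differ on your system):
pvcreate /dev/sdb
vgcreate vg_san_iscsi /dev/sdb
pvesm add lvm san-lvm --vgname vg_san_iscsi --shared 1 --content images,rootdir
If you plan to enable multipathing in the next section, you may prefer to configure MPIO first and create the volume group on the /dev/mapper device instead.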
4. Configuring Multipath I/O (MPIO)
Multipathing provides redundancy and load balancing by allowing multiple network paths to the same storage target. If one path fails, I/O automatically switches to another — ensuring uninterrupted service.
Step 1: Install Multipath Tools
On all Proxmox nodes:
apt install multipath-tools
systemctl enable multipathd
systemctl start multipathd
Step 2: Configure Multipath
Create or edit /etc/multipath.conf:
defaults {
    user_friendly_names yes
    path_grouping_policy multibus
    failback immediate
    no_path_retry 5
}

blacklist {
    devnode "^sd[a]"
}
Then restart the service:
systemctl restart multipathd
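If the LUN does not appear as a multipath device automatically, you can look up its WWID and declare it explicitly. The alias mpath-san and the WWID below are placeholders; use the value returned by scsi_id on your own system:
/lib/udev/scsi_id -g -u -d /dev/sdb
Then add an optional multipaths section to /etc/multipath.conf and restart multipathd again:
multipaths {
    multipath {
        wwid  360000000000000000e00000000010001
        alias mpath-san
    }
}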
Step 3: Verify Multipath Configuration
Check that multiple paths are active:
multipath -ll
You should see your LUN listed as something like /dev/mapper/mpathX with multiple active paths.
Step 4: Update Proxmox Storage Configuration
In /etc/pve/storage.cfg, make sure the shared LVM storage references the multipath device rather than a single-path /dev/sdX device. In practice this means the volume group should live on /dev/mapper/mpathX (for example, created with pvcreate /dev/mapper/mpathX and vgcreate vg_san_iscsi /dev/mapper/mpathX), so the storage entry only needs the volume group name and the shared flag:
lvm: san-lvm
        vgname vg_san_iscsi
        shared 1
Changes to /etc/pve/storage.cfg are picked up automatically; refresh the GUI to confirm the storage is online on every node.
5. Setting Up Proxmox High Availability (HA)
With iSCSI SAN and MPIO configured, you can now enable Proxmox High Availability.
Step 1: Create a Proxmox Cluster
On the first node:
pvecm create proxmox-cluster
On the remaining nodes:
pvecm add <Cluster_Master_IP>
Confirm the cluster:
pvecm status
Step 2: Enable the HA Manager
The HA services (pve-ha-lrm and pve-ha-crm) are enabled by default on Proxmox VE; the commands below simply make sure they are active:
systemctl enable pve-ha-lrm pve-ha-crm
systemctl start pve-ha-lrm pve-ha-crm
Step 3: Add Shared Storage for HA
The iSCSI LVM you configured earlier should be marked as shared in /etc/pve/storage.cfg.
Example:
lvm: san-lvm
        vgname vg_san_iscsi
        shared 1
Step 4: Create an HA Group
In the Proxmox Web UI:
Go to Datacenter → HA → Groups → Create
Define a group name (e.g., ha-group1)
Select the nodes that will participate in HA (a CLI equivalent is shown below).
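The group can also be created from the shell with ha-manager; the node names are placeholders for your own cluster members:
ha-manager groupadd ha-group1 --nodes "pve1,pve2,pve3"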
Step 5: Assign VMs to the HA Group
Select the VM → More → Manage HA
Set the requested state (e.g., started)
Assign it to the created HA group (a CLI equivalent is shown below).
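From the shell, the same assignment could be expressed with ha-manager; the VM ID 100 is only an example:
ha-manager add vm:100 --group ha-group1 --state started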
Now, if one node fails, Proxmox automatically restarts the affected VMs on another available node using the shared iSCSI SAN storage.
6. Best Practices for iSCSI SAN + HA
Use dedicated 10GbE NICs or bonded interfaces for iSCSI traffic.
Always enable MPIO for redundancy.
Regularly test HA failover scenarios.
Monitor network latency and storage performance regularly (pveperf, iostat, ping).
Back up /etc/iscsi and /etc/pve/storage.cfg frequently.
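A quick periodic health check could combine the tools already used above (iostat is part of the sysstat package, which may need to be installed separately):
multipath -ll
ha-manager status
iostat -x 2 3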
7. Pros and Cons of iSCSI SAN with HA
| Feature | Advantages | Disadvantages |
|---|---|---|
| Performance | High, especially on 10/25GbE | Depends on network design |
| Redundancy | MPIO ensures fault tolerance | Complex to configure initially |
| Scalability | Easy to expand with LUNs | Requires SAN expertise |
| Cost | Cheaper than Fibre Channel SAN | Slightly higher CPU overhead |
| Compatibility | Works with all major SAN vendors | Limited snapshot support on raw LUNs |
8. iSCSI SAN vs. Ceph and NFS
| Feature | iSCSI SAN | Ceph | NFS |
|---|---|---|---|
| Storage Type | Block | Distributed object | File |
| HA Support | Yes (via SAN + MPIO) | Native | Yes |
| Performance | High | Very high | Medium |
| Complexity | Medium | High | Low |
| Best Use Case | Enterprise shared LUNs | Hyper-converged clusters | Backups, ISO storage |
Conclusion
By combining iSCSI SAN, Multipath I/O, and Proxmox High Availability, you can build an enterprise-grade virtualization cluster that delivers both performance and fault tolerance without requiring proprietary SAN hardware.
This setup is ideal for medium to large IT environments looking for centralized, resilient, and scalable storage to power mission-critical workloads — all using open-source technology.