As data centers evolve toward scalable and highly available infrastructures, centralized storage becomes the backbone of every virtualization environment. Proxmox VE 9, built on a modern Debian 13 base, provides seamless integration with iSCSI SAN storage, allowing multiple nodes to share block-level access to the same storage.

When combined with Multipath I/O (MPIO) and Proxmox High Availability (HA), you can achieve an enterprise-grade virtualization platform with both performance and redundancy — using standard Ethernet hardware.

In this comprehensive guide, we’ll walk you through:

  • Setting up iSCSI SAN storage on Proxmox VE 9

  • Configuring Multipath (MPIO) for redundancy

  • Enabling Proxmox High Availability using shared SAN storage

  • Reviewing pros and cons versus Ceph and NFS

 


1. Understanding iSCSI SAN in Proxmox VE

An iSCSI SAN (Storage Area Network) provides block-level access to centralized storage devices over IP networks.
Instead of using expensive Fibre Channel SANs, iSCSI runs over standard Ethernet, making it cost-effective and widely compatible.

A typical Proxmox iSCSI SAN setup includes:

  • 2–5 Proxmox VE 9 nodes (at least 3, or 2 plus a QDevice, if you plan to use HA)

  • 1 or more SAN storage servers (TrueNAS, Synology, Dell EMC, etc.)

  • Dedicated storage network (10GbE or higher recommended)

  • Optional redundant switches or VLANs for multipathing

This design allows each node to access the same shared LUNs (Logical Unit Numbers) — essential for live migration and HA.


2. Prerequisites

Before starting, ensure you have:

  • Proxmox VE 9 fully updated (apt update && apt full-upgrade)

  • A configured iSCSI target on your SAN (with its IQN and LUNs)

  • Proper network connectivity (ideally a dedicated 10GbE iSCSI VLAN)

  • Root access to both Proxmox and SAN systems
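
The iscsiadm tool used below comes from the open-iscsi package, which is normally present on a default Proxmox VE installation. You can confirm it is installed and note each node's initiator IQN, which many SANs require in order to authorize LUN access:

apt install open-iscsi
cat /etc/iscsi/initiatorname.iscsi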

 


3. Configure iSCSI SAN Storage on Proxmox VE 9

Step 1: Discover the iSCSI Target

Run the following command from your Proxmox node:

iscsiadm -m discovery -t sendtargets -p <SAN_Target_IP>

You’ll see an output like:

<target_ip>:3260,1 iqn.2025-10.com.san:target01

Step 2: Log In to the Target

iscsiadm -m node -T iqn.2025-10.com.san:target01 -p <SAN_Target_IP> --login

Enable automatic login at boot:

iscsiadm -m node -T iqn.2025-10.com.san:target01 -p <SAN_Target_IP> --op update -n node.startup -v automatic
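
You can confirm the setting by printing the node record and checking for node.startup = automatic:

iscsiadm -m node -T iqn.2025-10.com.san:target01 -p <SAN_Target_IP> | grep node.startup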

Step 3: Verify the Connected Disk

After successful login:

lsblk

You should see a new device such as /dev/sdb representing your SAN LUN.
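
To confirm which block device belongs to the iSCSI session rather than a local disk, you can also check the by-path symlinks or the session details:

ls -l /dev/disk/by-path/ | grep iscsi
iscsiadm -m session -P 3 | grep -i "attached scsi disk"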


Step 4: Add iSCSI Storage in Proxmox GUI

  1. Navigate to Datacenter → Storage → Add → iSCSI

  2. Enter:

    • ID: iscsi-san

    • Portal: SAN IP address

    • Target: Select from the discovered list

    • Use LUNs directly: leave unchecked (the LUN will back an LVM volume group in the next step)

  3. Click Add
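
The same storage can also be added from the shell; a minimal sketch, assuming the target discovered earlier and the storage ID iscsi-san:

pvesm add iscsi iscsi-san --portal <SAN_Target_IP> --target iqn.2025-10.com.san:target01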

 


Step 5: Create LVM on iSCSI Device

Once connected, create an LVM group to manage VM disks:

  1. Go to Datacenter → Storage → Add → LVM

  2. Choose:

    • Base storage: the iSCSI storage added above (iscsi-san)

    • Base volume: the LUN presented by the SAN

    • Volume group name: vg_san_iscsi

    • Shared: enabled (required so every cluster node can use the storage)

  3. Click Add

Now all nodes in the cluster can use this shared LVM-based iSCSI SAN storage.
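
For reference, the GUI step roughly corresponds to creating the volume group manually and registering it as shared storage; a minimal sketch, assuming the LUN appeared as /dev/sdb and the storage ID san-lvm:

pvcreate /dev/sdb
vgcreate vg_san_iscsi /dev/sdb
pvesm add lvm san-lvm --vgname vg_san_iscsi --shared 1

If you plan to enable multipathing (next section), it is cleaner to create the volume group on the /dev/mapper/mpathX device instead.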


4. Configuring Multipath I/O (MPIO)

Multipathing provides redundancy and load balancing by allowing multiple network paths to the same storage target. If one path fails, I/O automatically switches to another — ensuring uninterrupted service.

Step 1: Install Multipath Tools

On all Proxmox nodes:

apt install multipath-tools
systemctl enable multipathd
systemctl start multipathd

Step 2: Configure Multipath

Create or edit /etc/multipath.conf:

defaults {
    user_friendly_names yes
    path_grouping_policy multibus
    failback immediate
    no_path_retry 5
}

blacklist {
    # exclude the local system disk from multipath handling
    devnode "^sd[a]"
}
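
Optionally, instead of relying on the blacklist alone, you can give the SAN LUN an explicit alias by its WWID; a sketch, assuming the LUN is currently visible as /dev/sdb:

/lib/udev/scsi_id -g -u -d /dev/sdb

Then reference the reported WWID in a multipaths section of /etc/multipath.conf:

multipaths {
    multipath {
        wwid <WWID_from_scsi_id>
        alias san_lun1
    }
}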

Then restart the service:

systemctl restart multipathd

 


Step 3: Verify Multipath Configuration

Check that multiple paths are active:

multipath -ll

You should see your LUN listed as something like /dev/mapper/mpathX with multiple active paths.


Step 4: Update Proxmox Storage Configuration

The LVM entry in /etc/pve/storage.cfg references the volume group rather than a raw device path, so the entry itself usually does not change. What matters is that LVM now addresses the physical volume through /dev/mapper/mpathX instead of a single /dev/sdX path; with multipath component detection enabled (the LVM default), this happens automatically once multipathd claims the paths. A typical entry looks like:

lvm: san-lvm
        vgname vg_san_iscsi
        shared 1

Changes to /etc/pve/storage.cfg propagate across the cluster automatically; refreshing the GUI is enough to see them.
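
To verify, check that the physical volume is now reported on the multipath device and that the storage is online:

pvs -o pv_name,vg_name
pvesm status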


5. Setting Up Proxmox High Availability (HA)

With iSCSI SAN and MPIO configured, you can now enable Proxmox High Availability. Keep in mind that HA relies on cluster quorum, so plan for at least three nodes (or two nodes plus a QDevice).

Step 1: Create a Proxmox Cluster

On the first node:

pvecm create proxmox-cluster

On the remaining nodes:

pvecm add <Cluster_Master_IP>

Confirm the cluster:

pvecm status

Step 2: Enable the HA Manager

systemctl enable pve-ha-lrm pve-ha-crm
systemctl start pve-ha-lrm pve-ha-crm

On a standard Proxmox VE installation these services are already enabled and running, so this step mainly serves as a verification.

Step 3: Add Shared Storage for HA

The iSCSI LVM you configured earlier should be marked as shared in /etc/pve/storage.cfg.

Example:

lvm: san-lvm
        vgname vg_san_iscsi
        shared 1

Step 4: Create an HA Group

In the Proxmox Web UI:

  1. Go to Datacenter → HA → Groups → Create

  2. Define a group name (e.g., ha-group1)

  3. Select nodes that will participate in HA.

 


Step 5: Assign VMs to the HA Group

  1. Select a VM → Options → HA Settings

  2. Enable “Managed by HA”

  3. Assign it to the created HA group.

Now, if one node fails, Proxmox automatically restarts the affected VMs on another available node using the shared iSCSI SAN storage.
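
The same can be done from the CLI; a minimal sketch, assuming a VM with ID 100 (the group assignment can then be set in the GUI):

ha-manager add vm:100 --state started
ha-manager status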


6. Best Practices for iSCSI SAN + HA

  • Use dedicated 10GbE NICs or bonded interfaces for iSCSI traffic.

  • Always enable MPIO for redundancy.

  • Regularly test HA failover scenarios.

  • Monitor network latency and storage I/O regularly (pveperf, iostat, ping; see the example below).

  • Back up /etc/iscsi, /etc/multipath.conf, and /etc/pve/storage.cfg frequently.
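
For example, a quick health check of the storage path from any node (iostat is provided by the sysstat package):

ping -c 20 <SAN_Target_IP>
iostat -x 2 5
pveperf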


7. Pros and Cons of iSCSI SAN with HA

 

| Feature | Advantages | Disadvantages |
|---|---|---|
| Performance | High, especially on 10/25GbE | Depends on network design |
| Redundancy | MPIO ensures fault tolerance | Complex to configure initially |
| Scalability | Easy to expand with LUNs | Requires SAN expertise |
| Cost | Cheaper than Fibre Channel SAN | Slightly higher CPU overhead |
| Compatibility | Works with all major SAN vendors | Limited snapshot support on raw LUNs |

 


8. iSCSI SAN vs. Ceph and NFS

| Feature | iSCSI SAN | Ceph | NFS |
|---|---|---|---|
| Storage Type | Block | Distributed Object | File |
| HA Support | Yes (via SAN + MPIO) | Native | Yes |
| Performance | High | Very High | Medium |
| Complexity | Medium | High | Low |
| Best Use Case | Enterprise shared LUNs | Hyper-converged clusters | Backups, ISO storage |

 


Conclusion

By combining iSCSI SAN, Multipath I/O, and Proxmox High Availability, you can build an enterprise-grade virtualization cluster that delivers both performance and fault tolerance without requiring proprietary SAN hardware.

This setup is ideal for medium to large IT environments looking for centralized, resilient, and scalable storage to power mission-critical workloads — all using open-source technology.