Fibre Channel (FC) has long been a trusted choice in enterprise datacenters for its reliability, performance, and low latency. While Proxmox VE (Virtual Environment) is best known for its support of Ceph, iSCSI, and NFS, it also works seamlessly with Fibre Channel when configured correctly at the Linux level.

In this guide, we’ll walk through how to configure Fibre Channel storage on Proxmox VE 9 — step by step.


Understanding Fibre Channel with Proxmox

Proxmox VE doesn’t have a dedicated Fibre Channel storage plugin. Instead, it leverages Linux’s native block device management. Once the FC-attached LUNs are visible to the Proxmox nodes, you can use them as LVM, LVM-Thin, or even ZFS storage pools — just like any local disk.

In other words, Proxmox sees FC storage as a local or shared block device, depending on your setup.


Typical Use Cases

  • Shared LVM over FC for clustered environments (multiple nodes accessing the same LUNs)
  • Local ZFS or LVM on FC for single-node high-speed storage
  • SAN-based storage exported over FC, presented to all Proxmox nodes

Prerequisites

Before starting, make sure you have:

  • A working Fibre Channel SAN with configured zoning and LUNs
  • Supported FC HBAs (Host Bus Adapters) in each Proxmox node
  • Proxmox VE 9 installed (Debian 13 “Trixie” base)
  • Administrative access to all nodes (root or sudo)

Step-by-Step Configuration

Step 1: Verify FC HBA Detection

After connecting your FC cables and zoning the SAN, check if the Proxmox node detects the HBA:

lspci | grep -i fibre

You should see entries similar to:

Fibre Channel: QLogic Corp. QLE2562 8Gb FC HBA
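
You can also confirm that the HBA driver module is loaded; QLogic adapters use the qla2xxx kernel module and Emulex adapters use lpfc:

lsmod | grep -E 'qla2xxx|lpfc'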

Then check the host and port information:

systool -c fc_host -v

If you see WWNs (World Wide Names) and a port_state of “Online”, the HBA is communicating with the fabric correctly.
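
If systool is not available (it is provided by the sysfsutils package), the same details can be read directly from sysfs:

cat /sys/class/fc_host/host*/port_name
cat /sys/class/fc_host/host*/port_state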


Step 2: Install Multipath Tools

Multipathing merges the multiple physical paths to the same LUN into a single block device, providing redundancy and automatic failover when a path goes down.

apt update
apt install multipath-tools
systemctl enable --now multipathd

Edit the multipath configuration file:

nano /etc/multipath.conf

A basic config might look like:

defaults {
    user_friendly_names yes
    find_multipaths yes
}
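
Optionally, you can pin a stable alias to each LUN's WWID so device names match across nodes. The snippet below is only a sketch: the WWID is a placeholder you would replace with the value shown by multipath -ll for your LUN, and fc-lun0 is an example alias.

multipaths {
    multipath {
        wwid  36000000000000000000000000000000a   # placeholder WWID, replace with your LUN's
        alias fc-lun0                              # example alias
    }
}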

Then restart the service:

systemctl restart multipathd

List discovered multipath devices:

multipath -ll

Step 3: Confirm LUN Visibility

Run:

lsblk

or

fdisk -l

You should see your SAN LUNs as /dev/mapper/mpathX devices.

Example:

NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
mpatha      253:0    0  500G  0 mpath
└─mpatha1   253:1    0  500G  0 part
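
If a LUN that was just presented on the SAN does not show up, you can trigger a rescan of all SCSI hosts without rebooting (the rescan-scsi-bus.sh script from the sg3-utils package does the same thing):

for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done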

Step 4: Create an LVM Volume Group

Now that the LUN is visible, initialize it as a physical volume and create a volume group on it.

pvcreate /dev/mapper/mpatha
vgcreate fc-vg /dev/mapper/mpatha
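
Verify the result with pvs and vgs before moving on. In a shared setup, the other cluster nodes should see the same volume group once their multipath devices are up; a pvscan there refreshes the LVM cache if needed.

pvs
vgs fc-vg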

Step 5: Add FC Storage to Proxmox

From the Proxmox web interface:

  1. Navigate to Datacenter → Storage → Add → LVM
  2. Set:
    • ID: fc-storage
    • Volume group: fc-vg
    • Shared: Yes (if used across multiple nodes)
  3. Click Add

Alternatively, edit /etc/pve/storage.cfg manually:

lvm: fc-storage
    vgname fc-vg
    content images,rootdir
    shared 1
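
Note that the LVM plugin only stores VM disk images and container volumes (content images,rootdir); ISOs and backups need a file-based storage. If you prefer the command line, pvesm can create the same entry:

pvesm add lvm fc-storage --vgname fc-vg --content images,rootdir --shared 1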

Either way, the new FC-based LVM storage appears under your storage list.


Step 6: (Optional) Use LVM-Thin for Thin Provisioning

You can also carve a thin pool out of the FC volume group for thin provisioning. Keep in mind that Proxmox cannot share LVM-Thin storage between cluster nodes, so only do this on a LUN used by a single node:

lvcreate -L 400G -T fc-vg/fc-thinpool

Then add it in the UI as LVM-Thin, selecting fc-vg as the volume group and fc-thinpool as the thin pool.
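
If you prefer the config file, the equivalent storage.cfg entry would look like this (fc-thin is just an example ID):

lvmthin: fc-thin
    vgname fc-vg
    thinpool fc-thinpool
    content images,rootdir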


Step 7: Test VM Storage

Create a new VM or migrate an existing one to the FC storage:

  • In the VM’s Hardware tab, select the disk, then choose Disk Action → Move Storage
  • Select fc-storage as the target storage
  • Monitor performance using iostat or pvesh (see the quick read test below).
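
For a quick sanity check from the Proxmox host itself, a direct sequential read of the multipath device is harmless and gives a rough throughput figure (mpatha is the example device from above; proper benchmarking is better done with fio inside a test VM):

dd if=/dev/mapper/mpatha of=/dev/null bs=1M count=4096 iflag=direct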

You should now see blazing-fast I/O backed by your FC SAN.


Troubleshooting Tips

Issue                            Possible Fix
LUNs not visible                 Check SAN zoning or HBA firmware
Multipath shows “ghost” paths    Use multipath -F to flush old paths
LVM not refreshing               Run pvscan, vgscan, lvscan
Storage not appearing in UI      Ensure /etc/pve/storage.cfg syncs across cluster nodes

Performance Tips

  • Enable multipathing and test failover (see the commands after this list).
  • Avoid ZFS on LUNs shared between nodes: ZFS is not cluster-aware, and importing the same pool on two hosts at once will corrupt it.
  • Use separate FC LUNs per Proxmox cluster for simplicity.
  • Consider enabling write cache on your SAN for better performance.
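
During a failover test (for example, disabling one fabric path in a maintenance window), these two commands let you watch path states and I/O in real time; iostat comes from the sysstat package:

watch -n 2 multipath -ll
iostat -xm 2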

Summary

While Fibre Channel may not be as “plug-and-play” in Proxmox as Ceph or NFS, it’s a rock-solid option for enterprises with existing SAN infrastructure. By leveraging Linux’s multipath and LVM tools, Proxmox VE 9 can easily integrate with your FC storage to deliver high-performance, reliable virtualization.


Final Thoughts

Fibre Channel remains a key technology in many datacenters, and Proxmox VE 9 fully supports it through Linux’s native stack. With the right configuration, you can achieve enterprise-grade performance and stability — all while keeping the simplicity and flexibility that Proxmox is known for.