A two-node Proxmox cluster is a special case: it cannot support native high availability (HA) without a third quorum vote. However, you can still build a resilient and practical storage setup depending on your budget and goals.

Key Principles for a 2-Node Proxmox Setup

  • No quorum = no HA failover. With only two votes, losing either node drops the cluster below quorum, so you must add a QDevice or a third small node (it can be low-power) to act as an extra quorum voter (a setup sketch follows this list).
  • For HA failover, both nodes must be able to access the same storage backend.
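
A minimal sketch of adding a QDevice, assuming the external voter runs Debian or Raspberry Pi OS and is reachable at 192.168.1.50 (a placeholder address):

    # On the external quorum device (Raspberry Pi, small VM, etc.):
    apt install corosync-qnetd

    # On both Proxmox nodes:
    apt install corosync-qdevice

    # On one Proxmox node, register the QDevice with the cluster:
    pvecm qdevice setup 192.168.1.50

    # Confirm the cluster now expects three votes:
    pvecm status

Note that pvecm qdevice setup needs root SSH access to the QDevice host while it configures the service.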

Ideal Storage Architectures for 2 Nodes

Option 1: Add QDevice + Ceph (Best Long-Term)

  • Add a third small machine (e.g., Raspberry Pi, Intel NUC, or a VM hosted outside the cluster) as a QDevice.
  • Set up Ceph across the 2 Proxmox nodes (see the command sketch below).
    • Use 2x replication (pool size = 2, min_size = 2), since only the two Proxmox nodes carry OSDs.
    • The third machine is only for quorum, not storage.
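
A rough command sketch for the Ceph side, assuming a dedicated storage network on 10.10.10.0/24 and a spare disk at /dev/nvme1n1 in each node (all placeholder values):

    # On both Proxmox nodes: install the Ceph packages
    pveceph install

    # On the first node: initialise Ceph on the storage network
    pveceph init --network 10.10.10.0/24

    # On each node: create a monitor and an OSD on the spare disk
    pveceph mon create
    pveceph osd create /dev/nvme1n1

    # On one node: create an RBD pool with 2 replicas (only two hosts carry OSDs)
    pveceph pool create vmpool --size 2 --min_size 2

If the third machine is capable enough, it can additionally host a third Ceph monitor (no OSDs), which keeps Ceph's own monitor quorum intact when one main node is down; with monitors on only the two main nodes, Ceph stops serving I/O as soon as either node fails.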

Pros:

  • HA works (with QDevice or 3rd node)
  • Fully distributed, resilient storage (no single point of failure)
  • Great for VM disk storage

Cons:

  • Complexity
  • Needs 3 machines for full HA (even if one is small)

Option 2: External ZFS Storage via NFS or iSCSI

  • Run TrueNAS, ZFS-on-Linux, or similar on a dedicated NAS/server.
  • Share the storage with both Proxmox nodes over NFS or iSCSI (see the example below).
  • Add a third device as a QDevice for quorum.
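
A minimal sketch of attaching an NFS export as shared VM storage, assuming the NAS answers at 192.168.1.40 and exports /mnt/tank/proxmox (placeholder values). Storage definitions live in the cluster-wide /etc/pve/storage.cfg, so running this once makes the share available to both nodes:

    # Add the NFS export as shared storage for disk images and containers
    pvesm add nfs nas-vmstore \
        --server 192.168.1.40 \
        --export /mnt/tank/proxmox \
        --content images,rootdir \
        --options vers=4.2

    # Verify it is online from both nodes
    pvesm status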

Pros:

  • Simpler than Ceph
  • Easy ZFS snapshots & replication
  • Compatible with HA if quorum is achieved

Cons:

  • Shared storage becomes a single point of failure (unless the NAS is clustered)
  • NFS generally adds more protocol overhead (and often latency) than block-level iSCSI

Option 3: ZFS Replication (No Real HA, but Good Redundancy)

  • Each node has its own local ZFS pool
  • Use Proxmox's built-in storage replication (pvesr) to copy VM disks to the other node on a schedule (see the sketch below)
  • Live migration is limited with local storage and there is no automatic HA; failover is manual but fast, since a recent replica already exists on the other node
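
A minimal sketch, assuming the two nodes are named pve1 and pve2, both have a ZFS storage with the same name, and VM 100 currently runs on pve1 (all placeholder names):

    # On pve1: replicate VM 100 to pve2 every 15 minutes
    pvesr create-local-job 100-0 pve2 --schedule "*/15"

    # Inspect configured jobs and their last runs
    pvesr list
    pvesr status

The same jobs can also be managed in the GUI under Datacenter → Replication; storage replication requires ZFS storages with identical names on both nodes.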

Pros:

  • Simple, no shared storage
  • ZFS benefits: checksums, compression, snapshots
  • Works well for small setups or labs

Cons:

  • No automatic HA
  • Replication is asynchronous, so data written since the last replication run can be lost on failover

Example Configurations

Setup                 Nodes   Storage Type   HA Capable?   Notes
Ceph + QDevice        2 + 1   Ceph RBD       Yes           Best choice for future scalability
NFS/iSCSI + QDevice   2 + 1   Shared NAS     Yes           Simpler, cheaper
ZFS Replication       2       Local ZFS      No            Manual failover only

Networking and Disk Recommendations

Component   Spec
Network     Dual 1–10 GbE, bonded or split into separate cluster/storage networks (example below)
Disks       SSD/NVMe for VM pools; RAIDZ1 or RAID10 on ZFS
QDevice     Small VM or Raspberry Pi running corosync-qdevice
Backup      Proxmox Backup Server (PBS) recommended
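
For the network row, a sketch of bonding two NICs beneath the default Proxmox bridge, assuming interfaces named eno1/eno2, an LACP-capable switch, and the ifupdown2 syntax Proxmox uses in /etc/network/interfaces (names and addresses are placeholders):

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.11/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

Keeping corosync traffic on a separate link or VLAN is common practice, so cluster heartbeats are not starved by storage or VM traffic.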

Summary: What’s Ideal for You?

  • For real HA: Use shared storage (Ceph or NFS/iSCSI) + QDevice
  • For budget-friendly: Use ZFS on each node + ZFS replication
  • To enable HA at all: Always have a 3rd node or QDevice for quorum


Get in touch with Saturn ME today for a free Proxmox consulting session—no strings attached.