Introduction

If you’re building a two-node Proxmox VE 9 cluster, you might wonder what happens when one of the nodes goes offline.
Does the remaining node keep running virtual machines?
Can it take over automatically?

This guide explains exactly what happens inside a two-node Proxmox cluster when one node fails — and how to prevent downtime using a QDevice for quorum.

Note: A two-node Proxmox VE cluster is NOT recommended in production environments.


Understanding Quorum in a Proxmox Cluster

Proxmox uses Corosync for cluster communication and quorum management. Each node has one vote, and the cluster must have more than 50% of votes to make configuration changes safely.

Quorum protects your cluster from “split-brain” — a dangerous situation where two nodes believe they are both in charge and might run the same VM simultaneously.

Number of Nodes | Votes Needed for Quorum | If One Node Fails
1               | 1                       | Cluster offline — no redundancy
2               | 2                       | Quorum lost — limited functionality
3               | 2                       | Quorum maintained — cluster stays operational
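
Each node's vote and the quorum provider are defined in the Corosync configuration, which Proxmox keeps at /etc/pve/corosync.conf. A trimmed excerpt from a two-node cluster typically looks like the sketch below (node names and addresses are examples that match the setup table later in this guide):

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.0.1
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.0.0.2
  }
}

quorum {
  provider: corosync_votequorum
}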


What Happens When One Node Goes Down (Without QDevice)

In a two-node Proxmox VE 9 cluster without a QDevice, if one node goes down:

  • The surviving node stays physically online

  • Virtual machines already running on that node continue to run

  • But the cluster loses quorum, so:

    • You cannot start or migrate VMs

    • You cannot make configuration changes

    • High Availability (HA) failover is disabled

Typical CLI message:

cluster not quorate - unable to write data

Essentially, the node becomes isolated — it still runs workloads, but the cluster behaves as read-only until quorum is restored.
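
You can see this on the surviving node itself. A quick way to confirm the read-only state (VM ID 100 is just a placeholder) is to attempt an operation that needs a write to the cluster filesystem:

qm start 100
# refused while the cluster is not quorate

touch /etc/pve/test
# also fails, because /etc/pve (pmxcfs) goes read-only without quorum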


Why Quorum Loss Happens

With only two nodes, each node has 1 vote (2 in total), and a majority means 2 votes.
When one node goes down, only 1 vote remains, which is not enough for a majority.

Proxmox intentionally pauses all cluster operations to prevent data corruption.
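
The arithmetic shows up directly in the votequorum section of pvecm status on the surviving node. The exact layout varies slightly between versions, but it will look something like this:

Expected votes:   2
Total votes:      1
Quorum:           2 Activity blocked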


The Solution: Add a QDevice (Quorum Device)

A QDevice (or quorum device) acts as a third vote in the cluster.
It doesn’t host VMs — it only participates in quorum decisions.

When one Proxmox node fails, the QDevice and the remaining node together keep quorum alive, so your cluster continues to function normally.

Typical QDevice Setup

Node    | Role          | IP Address | Vote
pve1    | Proxmox Node  | 10.0.0.1   | 1
pve2    | Proxmox Node  | 10.0.0.2   | 1
qdevice | Quorum Device | 10.0.0.3   | 1

Total votes = 3
If one node fails → 2 votes remain → quorum is still valid.
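
Adding one is a short procedure. The sketch below assumes the QDevice host at 10.0.0.3 runs Debian (or another apt-based distro) and is reachable over SSH as root from both nodes:

# On the QDevice host:
apt install corosync-qnetd

# On both Proxmox nodes:
apt install corosync-qdevice

# On either Proxmox node, register the QDevice with the cluster:
pvecm qdevice setup 10.0.0.3

After the setup completes, pvecm status should report three expected votes.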


Benefits of Using a QDevice

  • Maintains quorum when one node goes down
  • Enables automatic HA failover
  • Prevents split-brain
  • Simple to configure — can run on a lightweight VM, Raspberry Pi, or small Linux host


Checking Cluster and Quorum Status

Use these commands to check your cluster health:

pvecm status

Output example:

Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 1
Quorate: No

List all nodes:

pvecm nodes

If quorum is lost, VMs that are already running keep running and you can still inspect them from the node shell, but anything that needs a write to the cluster configuration (starting or migrating VMs, config changes, HA actions) stays blocked until quorum returns.
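
For comparison, once a QDevice has joined the cluster, the votequorum section of pvecm status should show the extra vote and a Qdevice flag, roughly like this (abridged):

Expected votes:   3
Total votes:      3
Quorum:           2
Flags:            Quorate Qdevice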


Temporary Emergency Fix (Single Node Mode)

If one node is permanently down and you need to regain control temporarily:

pvecm expected 1

⚠️ Use only when you’re 100% sure the other node is offline, not just disconnected — otherwise you risk split-brain.

After repairs:

pvecm expected 2
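
In both cases, verify the result before moving on by checking the relevant lines of the status output:

pvecm status | grep -E 'Expected votes|Total votes|Quorate'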

Best Practices for Two-Node Proxmox Clusters

Component        | Best Practice
Cluster Size     | Use 2 nodes + 1 QDevice
QDevice Location | External VM or physical host
Network          | Dedicated Corosync link (1 GbE or faster)
Storage          | Shared (Ceph, NFS, iSCSI) or ZFS replication
HA               | Enable only if QDevice is configured
Monitoring       | Use pvecm status and journalctl -u corosync
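
The dedicated Corosync link in the table deserves a concrete illustration. Corosync supports multiple links per node, so a node entry in /etc/pve/corosync.conf can carry a second address on a separate network; a sketch (the 10.10.0.x subnet is a placeholder):

node {
  name: pve1
  nodeid: 1
  quorum_votes: 1
  # general/management network
  ring0_addr: 10.0.0.1
  # dedicated Corosync network
  ring1_addr: 10.10.0.1
}

If you edit corosync.conf by hand, remember to increase config_version in the totem section so the change is picked up cleanly.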


Comparison: With vs Without QDevice

Event                 | Without QDevice      | With QDevice
One node fails        | Cluster loses quorum | Cluster stays operational
HA restart            | Disabled             | Works automatically
Configuration changes | Blocked              | Allowed
Risk of split-brain   | Medium               | Very Low
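
With a QDevice providing the third vote, HA resources can be enabled safely. A minimal example (VM ID 100 is a placeholder) of putting a VM under HA management and checking the manager state:

ha-manager add vm:100 --state started
ha-manager status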


Conclusion

In a two-node Proxmox VE 9 cluster, losing one node means losing quorum — your running VMs survive, but the cluster becomes read-only. To achieve true high availability, always deploy a QDevice or a lightweight third quorum node. It’s a simple setup that ensures your cluster stays functional and safe even if one node fails.