High Availability (HA) in Proxmox VE 9 becomes tricky when you only have two nodes. In a normal 3-node cluster, quorum survives the loss of any single node, but with two nodes, if one goes offline, the survivor loses quorum: the cluster configuration filesystem (/etc/pve) turns read-only and HA operations halt.

The solution? A QDevice (Quorum Device): a lightweight external system that acts as a tie-breaking voter. It doesn’t host VMs or storage, but it lets your 2-node cluster keep quorum during failures.


What is a QDevice?

A QDevice is a small service that integrates with the Proxmox cluster stack (which is built on Corosync). It contributes an additional vote used to determine cluster quorum when one of the two nodes goes down.

Think of it as the referee between the two nodes — it ensures the surviving node can continue operating safely.
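
The underlying logic is simple majority voting: a cluster is quorate only while more than half of the expected votes are present. A quick sketch with the default one vote per member:

Without QDevice:  2 votes expected, majority = 2
                  one node down -> 1 vote left -> quorum lost, cluster blocks

With QDevice:     3 votes expected, majority = 2
                  one node down -> surviving node + QDevice = 2 votes -> quorate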


Prerequisites

Before starting, make sure you have:

  • Two Proxmox VE 9 nodes already configured in a cluster.
    Example:
    • Node1: pve1.example.local
    • Node2: pve2.example.local
  • A third lightweight Linux system (VM, LXC, or small server) to act as the QDevice.
    • Example: qdevice.example.local
  • SSH access from the Proxmox nodes to the QDevice system. The pvecm qdevice setup command logs in over SSH as root to distribute keys and configuration.
  • Root privileges on all three machines.

Recommended: use a recent Debian or Ubuntu release for the QDevice node; both ship the corosync-qnetd package in their standard repositories.
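
A quick pre-flight check from each Proxmox node can save time later (hostname and user below follow the examples above; adjust to your environment):

ping -c 1 qdevice.example.local
ssh root@qdevice.example.local true

If both commands succeed, name resolution and root SSH access are in place.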


Step-by-Step Installation Guide

Step 1: Install the Required Packages on the QDevice Node

Run the following commands on the QDevice node:

apt update
apt install corosync-qnetd -y

This installs corosync-qnetd, the daemon that provides the external vote for the cluster. The companion corosync-qdevice package is the client side and is only needed on the Proxmox nodes themselves, as covered next.

Install the Required Packages on Both PVE Nodes

Run the following commands on both PVE nodes:

apt update
apt install corosync-qdevice -y
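
To confirm the client package landed on each node, a quick dpkg query works (the version shown will vary with the packaged release):

dpkg -s corosync-qdevice | grep -E 'Status|Version'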

Step 2: Enable and Start the QNetd Service on the QDevice Node

systemctl enable corosync-qnetd
systemctl start corosync-qnetd

You can verify the service is active using:

systemctl status corosync-qnetd
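
If the daemon is healthy, it should also be listening on its default TCP port, 5403. One way to confirm, assuming the ss tool from iproute2 is present (it is on stock Debian):

ss -tlnp | grep 5403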


Step 3: Configure QDevice on the Proxmox Cluster

Now, move to any one of your Proxmox nodes (e.g., pve1).

Run:

pvecm qdevice setup qdevice.example.local

This command will:

  • Connect to the QDevice node
  • Configure it as a voter for the Proxmox cluster
  • Automatically distribute necessary keys and configs
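
After a successful run, the quorum section of /etc/pve/corosync.conf gains a device stanza. A representative excerpt is shown below; exact contents may differ slightly between versions:

quorum {
  device {
    model: net
    net {
      algorithm: ffsplit
      host: qdevice.example.local
      tls: on
    }
    votes: 1
  }
  provider: corosync_votequorum
}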

⚠️ Make sure the hostname or IP of qdevice.example.local is resolvable and reachable from both cluster nodes.
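
If DNS is not an option, a static hosts entry on both Proxmox nodes also works (the IP below is a placeholder from the documentation range; substitute your QDevice's real address):

echo "192.0.2.50 qdevice.example.local" >> /etc/hosts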


Step 4: Verify QDevice Status

After setup, check the QDevice status from any Proxmox node:

pvecm status

You should see output similar to:

Cluster information
-------------------
Name:             proxcluster
Config Version:   4
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Sat Oct 18 22:05:13 2025
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          1
Ring ID:          1/123
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
-----------------------
 Nodeid      Votes Name
      1          1 pve1 (local)
      2          1 pve2
      0          1 Qdevice

If you see the QDevice listed and the cluster status is “Quorate”, everything is working correctly.
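
You can also verify each side individually. On the QDevice node, corosync-qnetd-tool lists the clusters currently connected; on a Proxmox node, corosync-qdevice-tool reports the local client's view. Both tools ship with the packages installed earlier:

# On the QDevice node
corosync-qnetd-tool -l

# On either Proxmox node
corosync-qdevice-tool -s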


Step 5: Test the Setup

You can now test the resilience of your setup.

  1. Shut down one of the Proxmox nodes.
  2. Run pvecm status on the remaining node.

If quorum is maintained and the cluster remains quorate, your QDevice setup is successful.
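
For example, on the surviving node you can filter the status output down to the relevant lines. With the layout above, total votes drop from 3 to 2 (surviving node + QDevice), which still meets the quorum of 2:

pvecm status | grep -E 'Quorate|Total votes|Quorum:'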



Troubleshooting Tips

  • If the QDevice setup fails, ensure:
    • Port 5403/tcp is open between Proxmox nodes and QDevice node.
    • Hostnames resolve correctly.
    • Time synchronization (NTP) is enabled on all nodes.
  • Check logs for details:
    journalctl -u corosync-qnetd      # on the QDevice node
    journalctl -u corosync            # on the Proxmox nodes
    journalctl -u corosync-qdevice    # on the Proxmox nodes
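  • To rule out a firewall problem, you can probe the qnetd port directly from a Proxmox node (this assumes a netcat variant is installed; any TCP port checker does the job):
    nc -zv qdevice.example.local 5403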

Conclusion

Setting up a QDevice is essential for a 2-node Proxmox VE 9 cluster if you want real HA capabilities. It provides an additional vote, ensuring your cluster remains operational even if one node fails — without needing a full third Proxmox host.

With this simple setup, you can achieve reliable quorum and a stable high-availability environment for small clusters.