Clusters & HA

Proxmox Two-Node Cluster: Setup, QDevice, and Failover Strategies

Learn how to build a reliable Proxmox VE two-node cluster with QDevice, shared storage options, HA configuration, and practical failover scenarios.


Is a Two-Node Cluster Right for You?

A two-node Proxmox VE cluster is a popular choice for small businesses, home labs, and branch offices that want redundancy without the cost of a third server. While it comes with some trade-offs compared to a three-node setup, a properly configured two-node cluster can provide reliable VM migration and high availability.

Pros and Cons of Two-Node Clusters

Before committing to a two-node design, understand what you gain and what you give up:

  • Pro: Lower hardware cost than three nodes
  • Pro: Live migration between nodes for maintenance
  • Pro: Centralized management via single web interface
  • Con: No natural quorum majority (both nodes have equal votes)
  • Con: Requires a QDevice or manual quorum workaround
  • Con: Less capacity headroom if one node fails
  • Con: Cannot use Ceph (minimum 3 nodes required)

Setting Up the Two-Node Cluster

The initial setup follows the standard cluster creation process. Create the cluster on the first node, then join the second.

# On node 1: create the cluster
pvecm create mycluster

# On node 2: join the cluster
pvecm add 192.168.1.10

# Verify both nodes are in the cluster
pvecm nodes
pvecm status

At this point, your cluster works but has a quorum problem. With 2 expected votes and quorum requiring 2, losing either node leaves the survivor without quorum: the cluster filesystem (/etc/pve) becomes read-only, and you can no longer start, migrate, or reconfigure VMs until quorum is restored.

Adding a QDevice for Reliable Quorum

A QDevice provides a third vote from a lightweight external service, solving the quorum problem without adding a full third node. The QDevice host can be any Debian-based machine, even a small VM or Raspberry Pi on a separate network segment.

# On the QDevice host (NOT a cluster node)
apt update
apt install corosync-qnetd

# On EACH Proxmox cluster node
apt install corosync-qdevice

# From one cluster node, configure the QDevice
pvecm qdevice setup 192.168.1.50

# Verify QDevice is active
pvecm qdevice status

# Check the new vote count
pvecm status
# Expected votes: 3 (2 nodes + 1 QDevice)
# Quorum: 2

# Now if one node goes down:
# Remaining node (1 vote) + QDevice (1 vote) = 2 votes
# Quorum (2) is maintained!

Place the QDevice host on independent infrastructure. If it shares a switch or power source with one of your cluster nodes, it cannot provide truly independent quorum arbitration.
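Before relying on the QDevice in a real failure, it is worth confirming that each node can actually reach it over the network. A minimal sketch, assuming the example QDevice address 192.168.1.50 from above and the OpenBSD netcat that ships with Debian; corosync-qnetd listens on TCP port 5403 by default:

```shell
# Run on EACH Proxmox node: check the qnetd port on the QDevice host.
# 192.168.1.50 is the example address used earlier in this guide.
if nc -z -w 3 192.168.1.50 5403; then
    echo "qnetd reachable"
else
    echo "qnetd NOT reachable - check routing and firewall rules"
fi
```

If the port check succeeds but `pvecm qdevice status` still shows the QDevice as inactive, look at the corosync-qdevice service logs on the node rather than the network.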

Shared Storage Options for Two Nodes

For live migration and HA to work, both nodes need access to the same storage. Since Ceph requires a minimum of three nodes, you need alternative shared storage solutions.

# Option 1: NFS server (external NAS or dedicated server)
# Add NFS storage via CLI
pvesm add nfs shared-nfs \
    --server 192.168.1.100 \
    --export /mnt/data/proxmox \
    --content images,iso,vztmpl,backup

# Option 2: iSCSI with LVM on top
pvesm add iscsi iscsi-target \
    --portal 192.168.1.100 \
    --target iqn.2024-01.com.storage:proxmox
# Create an LVM volume group on the exported LUN first, then add it
# as shared LVM storage (the VG name below is an example)
pvesm add lvm lvm-iscsi \
    --vgname vg-iscsi \
    --shared 1

# Option 3: Proxmox Backup Server for backups (separate machine)
pvesm add pbs pbs-storage \
    --server 192.168.1.200 \
    --datastore main \
    --username backup@pbs \
    --password yourpassword \
    --content backup

# Option 4: DRBD (replicated block storage between 2 nodes)
# Requires additional setup but provides storage redundancy
# without external hardware
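The DRBD option deserves a slightly longer sketch, since "additional setup" hides a few concrete steps. The following is a minimal two-node outline, not a full DRBD guide: the hostnames pve1/pve2, the node IPs, and the backing device /dev/sdb1 are assumptions you must adapt to your hardware.

```shell
# Minimal DRBD resource definition on BOTH nodes
# (hostnames, IPs, and /dev/sdb1 are example values)
cat > /etc/drbd.d/r0.res <<'EOF'
resource r0 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    meta-disk internal;
    on pve1 { address 192.168.1.10:7789; }
    on pve2 { address 192.168.1.11:7789; }
}
EOF

# On BOTH nodes: create metadata and bring the resource up
drbdadm create-md r0
drbdadm up r0

# On ONE node only: force the initial sync and promote to primary
drbdadm primary --force r0
```

Once /dev/drbd0 is replicating, you can layer LVM on top of it and add that as Proxmox storage. Test DRBD's own split-brain handling before trusting it with production VMs.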

Configuring HA with Two Nodes

With a QDevice providing the third vote, you can use Proxmox HA to automatically restart VMs on the surviving node if one node fails.

# Add a VM to HA management
# Create an HA group containing both nodes (must exist before
# you reference it when adding resources)
ha-manager groupadd mygroup --nodes pve1,pve2 --nofailback 1

# Add a VM to HA management, restricted to that group
ha-manager add vm:100 --state started --group mygroup

# The --nofailback flag prevents VMs from automatically migrating
# back to the original node after it recovers

# Check HA status
ha-manager status

# Important: ensure your surviving node has enough resources
# to run ALL VMs from both nodes simultaneously
# Plan for at most 50% resource utilization per node
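The 50% rule above can be checked rather than guessed. A rough sketch using the Proxmox API via pvesh, assuming jq is installed and pve1 is one of your node names: it sums the memory assigned to every VM in the cluster and compares it with a single node's RAM.

```shell
# Sum maxmem (bytes) across all VMs in the cluster
total_vm_mem=$(pvesh get /cluster/resources --type vm --output-format json \
    | jq '[.[] | .maxmem] | add')

# Total RAM of one node (pve1 is an example node name)
node_mem=$(pvesh get /nodes/pve1/status --output-format json \
    | jq '.memory.total')

echo "All VMs: $total_vm_mem bytes, one node: $node_mem bytes"
if [ "$total_vm_mem" -gt "$node_mem" ]; then
    echo "WARNING: a single node cannot host every VM after failover"
fi
```

This only checks configured memory; repeat the exercise for CPU and storage, and leave headroom for the host OS itself.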

Failover Scenarios

Understanding how your cluster behaves during failures helps you plan capacity and set expectations. If you manage your Proxmox environment with ProxmoxR, you can monitor resource utilization across both nodes to ensure failover capacity is always available.

# Scenario 1: Node 2 fails, QDevice is healthy
# - Node 1 (1 vote) + QDevice (1 vote) = 2 = quorum maintained
# - HA restarts Node 2's VMs on Node 1
# - Result: All VMs running on Node 1

# Scenario 2: QDevice fails, both nodes healthy
# - Node 1 (1 vote) + Node 2 (1 vote) = 2 = quorum maintained
# - Cluster operates normally, but has no failover safety net
# - Fix: restore QDevice ASAP

# Scenario 3: Node 2 AND QDevice fail simultaneously
# - Node 1 has only 1 vote out of 3 expected
# - Quorum (2) is NOT met -> cluster stops
# - Manual intervention required: pvecm expected 1

# Scenario 4: Network partition (both nodes up, cannot communicate)
# - QDevice votes for one side (based on connectivity)
# - One node gets quorum, the other stops
# - This is the correct behavior - prevents split-brain
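Scenario 3 mentions `pvecm expected 1` as the manual escape hatch, and it is worth spelling out how to use it safely on the surviving node:

```shell
# Emergency recovery on the lone surviving node (scenario 3).
# ONLY run this when you are certain the other node is powered off;
# if it is merely partitioned, you risk split-brain.
pvecm expected 1

# /etc/pve becomes writable again and VMs can be started manually.
# The override is a runtime setting, not persistent - restore the
# QDevice or the second node as soon as possible.
```

If HA is active, be aware that a node that loses quorum will fence itself (watchdog reset), so investigate why quorum was lost before forcing it.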

Best Practices for Two-Node Clusters

  • Always use a QDevice on a separate machine for quorum
  • Keep resource utilization below 50% per node so either can absorb the other's workload
  • Use redundant network links between nodes (bonding or dual corosync rings)
  • Test failover regularly by gracefully rebooting one node
  • Have a plan for the scenario where both the QDevice and a node fail
  • Monitor cluster health and QDevice connectivity continuously
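The redundant-links practice above can be implemented with a second corosync link. A sketch of the relevant fragment, assuming a dedicated 10.10.10.x cluster network (an example subnet); the file is edited via /etc/pve/corosync.conf, and config_version must be incremented for the change to propagate:

```shell
# Fragment of /etc/pve/corosync.conf with a second link per node.
# The 10.10.10.x addresses are assumptions for a dedicated network.
#
#   nodelist {
#     node {
#       name: pve1
#       ring0_addr: 192.168.1.10
#       ring1_addr: 10.10.10.10
#       ...
#     }
#     node {
#       name: pve2
#       ring0_addr: 192.168.1.11
#       ring1_addr: 10.10.10.11
#       ...
#     }
#   }

# After editing, verify both links are up:
corosync-cfgtool -s
```

With two links, corosync (kronosnet) keeps cluster communication alive if either network fails, which removes a common single point of failure in two-node setups.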

A two-node Proxmox cluster with a QDevice provides a cost-effective, reliable platform for small to medium workloads. The key is proper planning around quorum, storage, and failover capacity.
