Proxmox Cluster Network Requirements: The Complete Checklist
Detailed guide to Proxmox cluster networking including dedicated cluster networks, corosync link redundancy, latency requirements, MTU settings, and firewall port configuration.
Why Network Design Matters for Proxmox Clusters
A Proxmox cluster depends entirely on reliable, low-latency communication between nodes. If corosync packets are delayed or dropped, nodes get fenced, VMs restart unexpectedly, and your cluster becomes more of a liability than an asset. Getting the network right from the start saves you from painful troubleshooting later.
Separate Your Cluster Network from VM Traffic
The single most important rule: never run corosync traffic on the same network as your VM or storage traffic. Corosync needs almost no bandwidth, but it is unforgiving about latency: a busy VM saturating a 1 Gbps link introduces queueing delay and packet loss, which trigger corosync token timeouts and false node failures.
Use a dedicated VLAN or, ideally, a physically separate network interface for cluster communication:
# /etc/network/interfaces - Dedicated cluster network
auto ens19
iface ens19 inet static
    address 10.10.10.1/24
    # This interface carries ONLY corosync traffic
    # No gateway needed - cluster network is local only

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports ens18
    bridge-stp off
    bridge-fd 0
    # This bridge carries VM traffic and management
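Cluster joins and the checks later in this guide refer to nodes by name, so make sure hostnames resolve to the cluster-network addresses consistently on every node. A minimal /etc/hosts sketch, assuming three nodes named pve1 through pve3 (names and addresses are examples):

```
# /etc/hosts - resolve cluster hostnames to the dedicated
# cluster network, identically on every node (example addresses)
10.10.10.1  pve1
10.10.10.2  pve2
10.10.10.3  pve3
```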
Latency and Bandwidth Requirements
Corosync is extremely sensitive to latency. The hard requirements are:
- Round-trip latency: under 2 ms – Anything above this and you risk frequent token losses and node ejections.
- Bandwidth: 1 Gbps minimum – Corosync itself uses very little bandwidth, but live migration and Ceph replication will need the headroom.
- Jitter: minimal – Consistent latency matters as much as low latency. Repeated latency spikes can trigger token timeouts even when the average looks healthy.
Test your network before joining nodes to the cluster:
# Measure latency between nodes
ping -c 100 10.10.10.2 | tail -1
# rtt min/avg/max/mdev = 0.102/0.145/0.312/0.028 ms
# Test bandwidth with iperf3
# On node 2 (server):
iperf3 -s
# On node 1 (client):
iperf3 -c 10.10.10.2
# Look for at least 900 Mbits/sec on 1Gbps links
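To automate the latency check across many nodes, the ping summary line can be parsed and compared against the thresholds above. A small POSIX shell sketch, assuming the iputils ping output format; the 1 ms jitter cutoff is an illustrative choice, not an official Proxmox number:

```shell
# check_latency: parse an iputils ping summary line and succeed
# only if avg latency is under 2 ms and jitter (mdev) under 1 ms
check_latency() {
    stats=${1#*= }                         # "0.102/0.145/0.312/0.028 ms"
    avg=$(echo "$stats" | cut -d/ -f2)     # average rtt in ms
    mdev=$(echo "$stats" | cut -d/ -f4 | cut -d' ' -f1)  # jitter in ms
    # The 1 ms jitter limit is illustrative, not an official Proxmox number
    awk -v a="$avg" -v m="$mdev" 'BEGIN { exit !(a < 2 && m < 1) }'
}

# Feed it the summary line of a real run:
#   check_latency "$(ping -c 100 -q 10.10.10.2 | tail -1)"
check_latency "rtt min/avg/max/mdev = 0.102/0.145/0.312/0.028 ms" \
    && echo "latency OK"
```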
Corosync Link Redundancy: link0 and link1
Proxmox VE supports multiple corosync links (link0 through link7 with kronosnet); two are enough for solid redundancy. If link0 fails, corosync automatically fails over to link1 without losing quorum. Always configure at least two links in production:
# When creating a cluster with two links:
pvecm create my-cluster --link0 10.10.10.1 --link1 10.10.20.1
# When joining with two links:
pvecm add 10.10.10.1 --link0 10.10.10.2 --link1 10.10.20.2
The resulting corosync configuration will contain both links:
# Excerpt from /etc/corosync/corosync.conf
nodelist {
  node {
    name: pve1
    nodeid: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }
  node {
    name: pve2
    nodeid: 2
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }
}

totem {
  cluster_name: my-cluster
  config_version: 3
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
}
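If a cluster was originally created with only link0, a second link can be retrofitted by editing /etc/pve/corosync.conf (the cluster-synced copy): add a ring1_addr to each node entry, add a second interface block, and increment config_version so corosync picks up the change. The addresses below are examples; double-check the current Proxmox admin guide before editing, since a broken corosync.conf can cost you quorum:

```
# /etc/pve/corosync.conf - adding link1 to an existing cluster
node {
  name: pve1
  nodeid: 1
  ring0_addr: 10.10.10.1
  ring1_addr: 10.10.20.1   # new second-link address for this node
}
totem {
  config_version: 4        # must be incremented on every edit
  interface {
    linknumber: 1          # new interface block for link1
  }
}
```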
Check the status of both links at any time:
corosync-cfgtool -s
# Each configured link should show as enabled and connected (status OK)
MTU Configuration
If your switches support jumbo frames, increasing the MTU to 9000 reduces per-packet overhead and improves throughput for live migration and Ceph replication. Apply it to the network that carries that bulk traffic; corosync itself sends small packets and gains nothing from jumbo frames on a corosync-only link:
# /etc/network/interfaces
auto ens19
iface ens19 inet static
address 10.10.10.1/24
mtu 9000
Make sure every device in the path supports the same MTU — the switch ports, any intermediate switches, and all node interfaces. Verify end-to-end:
# Test jumbo frames between nodes (don't fragment flag set)
ping -M do -s 8972 10.10.10.2
# If this fails, something in the path doesn't support MTU 9000
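The 8972-byte payload comes from the MTU minus the 20-byte IPv4 header and the 8-byte ICMP header. A quick loop to verify every node, assuming the example cluster addresses used in this guide:

```shell
# ICMP payload = MTU - 20 (IPv4 header) - 8 (ICMP header)
mtu=9000
payload=$((mtu - 20 - 8))    # 8972 for MTU 9000

# Node addresses below are the example cluster IPs from this guide
for node in 10.10.10.2 10.10.10.3; do
    if ping -M do -c 1 -W 1 -s "$payload" "$node" >/dev/null 2>&1; then
        echo "$node: MTU $mtu OK"
    else
        echo "$node: MTU $mtu FAILED (check switch ports and interfaces)"
    fi
done
```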
Firewall Ports You Must Open
If you run a firewall on your Proxmox nodes (including the built-in Proxmox firewall), these ports must be allowed between all cluster nodes:
# Corosync cluster communication
UDP 5405-5412
# Proxmox web interface
TCP 8006
# Spice proxy for console access
TCP 3128
# Live migration ports
TCP 60000-60050
# SSH (used during cluster join)
TCP 22
# Ceph (if used)
TCP 6789, 3300 # Ceph monitors
TCP 6800-7300 # Ceph OSDs
If you use the Proxmox built-in firewall, add cluster-wide rules in the datacenter firewall configuration:
# /etc/pve/firewall/cluster.fw
[RULES]
IN ACCEPT -source +cluster -p udp -dport 5405:5412 -log nolog
IN ACCEPT -source +cluster -p tcp -dport 22 -log nolog
IN ACCEPT -source +cluster -p tcp -dport 8006 -log nolog
IN ACCEPT -source +cluster -p tcp -dport 3128 -log nolog
IN ACCEPT -source +cluster -p tcp -dport 60000:60050 -log nolog
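Note that these rules only take effect once the datacenter firewall is enabled, either in the GUI (Datacenter -> Firewall -> Options) or in the same file:

```
# /etc/pve/firewall/cluster.fw
[OPTIONS]
enable: 1
```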
Verifying Your Network Configuration
After setting everything up, run through this checklist on each node:
# 1. Verify cluster status
pvecm status
# 2. Check both corosync links
corosync-cfgtool -s
# 3. Test DNS/hostname resolution
for node in pve1 pve2 pve3; do
    echo -n "$node: "; ping -c 1 -W 1 $node | grep time=
done
# 4. Verify firewall isn't blocking traffic
iptables-save | grep -E "5405|8006|60000"
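For scripted monitoring, the pvecm status output can be parsed for the quorum flag. A minimal sketch; the "Quorate: Yes" line matches current pvecm output, but that format is not a stable API, so treat it as an assumption:

```shell
# is_quorate: succeed if a `pvecm status` dump reports quorum
is_quorate() {
    echo "$1" | grep -q 'Quorate:[[:space:]]*Yes'
}

# On a live node:  is_quorate "$(pvecm status)" && echo quorate
is_quorate "Quorate:          Yes" && echo "cluster is quorate"
```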
Monitoring tools like ProxmoxR can alert you immediately when a node drops out of the cluster, which is often the first sign of a network issue. Getting a push notification on your phone beats discovering the problem when a user complains about downtime.
Summary
A healthy Proxmox cluster starts with a healthy network. Use a dedicated interface for corosync, configure link redundancy with link0 and link1, ensure sub-2ms latency, open the required firewall ports, and consider jumbo frames for migration and storage traffic. Taking these steps during initial setup prevents the vast majority of cluster communication issues you would otherwise encounter in production.
Take Proxmox management mobile
All the features discussed in this guide — accessible from your phone with ProxmoxR. Real-time monitoring, power control, firewall management, and more.