
Network Bonding in Proxmox VE: LACP, Failover, and Performance

Configure network bonding in Proxmox VE with LACP 802.3ad, active-backup failover, and balance-alb for increased throughput and redundancy.


Why Bond Network Interfaces?

Network bonding (also called NIC teaming or link aggregation) combines two or more physical network interfaces into a single logical interface. This gives you two key benefits: redundancy and increased throughput. If one cable or switch port fails, traffic automatically moves to the remaining links. With the right bond mode, you can also aggregate bandwidth across multiple NICs.

In a Proxmox VE environment, bonding is especially valuable for storage networks (Ceph, NFS, iSCSI), live migration traffic, and any workload where a single point of network failure is unacceptable.

Bond Modes Explained

Linux supports several bonding modes. The most relevant for Proxmox deployments are:

  • balance-rr (mode 0) – Round-robin across interfaces. Simple, but striping a single flow across links can deliver packets out of order, which degrades TCP performance.
  • active-backup (mode 1) – Only one interface active at a time; the other is standby. No switch configuration required. Best for pure failover.
  • balance-xor (mode 2) – Transmits based on a hash of source/destination MAC. Requires static EtherChannel on the switch.
  • 802.3ad (mode 4) – IEEE LACP. Requires switch support for LACP. Provides true link aggregation with negotiated failover.
  • balance-tlb (mode 5) – Adaptive transmit load balancing. No special switch support needed.
  • balance-alb (mode 6) – Adaptive load balancing for both transmit and receive. No special switch support needed.
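Both forms of the mode name appear in practice: /etc/network/interfaces takes the textual name, while sysfs and some older documentation use the number. A small helper to translate between them (a sketch; the mapping itself is fixed by the kernel bonding driver — mode 3, broadcast, exists as well but is rarely used in Proxmox setups):

```shell
# Map a numeric bonding mode to its kernel name.
bond_mode_name() {
    case "$1" in
        0) echo "balance-rr" ;;
        1) echo "active-backup" ;;
        2) echo "balance-xor" ;;
        3) echo "broadcast" ;;
        4) echo "802.3ad" ;;
        5) echo "balance-tlb" ;;
        6) echo "balance-alb" ;;
        *) echo "unknown" ;;
    esac
}

bond_mode_name 4   # prints: 802.3ad
```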

Configuring LACP 802.3ad Bonding

LACP is the most common choice in data center environments because it provides negotiated aggregation with proper failover. Both the Proxmox host and the switch must be configured.

Proxmox Host Configuration

Edit /etc/network/interfaces:

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

Key parameters:

  • bond-miimon 100 – checks link status every 100ms for fast failover detection.
  • bond-mode 802.3ad – enables LACP negotiation.
  • bond-xmit-hash-policy layer3+4 – hashes based on IP addresses and ports, providing better traffic distribution than the default layer 2 hash.
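One optional addition, if your switch supports it, is a faster LACPDU interval: the default slow rate sends one LACPDU every 30 seconds, while fast sends one per second, so a misbehaving partner is detected sooner. This is an extension of the config above, not a requirement:

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-lacp-rate fast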

Switch Configuration (Example: Cisco IOS)

On the switch, the ports connected to the Proxmox host must be configured as an LACP port channel:

interface range GigabitEthernet0/1-2
  channel-group 1 mode active
  no shutdown

interface Port-channel1
  switchport mode trunk
  switchport trunk allowed vlan all

For other switch vendors, the concept is the same: group the ports and enable LACP (active or passive mode).

Active-Backup for Simple Failover

If your switch does not support LACP, or you simply want no-fuss failover, use active-backup mode:

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode active-backup
    bond-primary eno1

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

The bond-primary option designates which interface is preferred when both are up. No switch configuration is needed since only one link is active at any time.
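In active-backup mode, /proc/net/bonding/bond0 includes a "Currently Active Slave:" line. A quick helper to pull it out (a sketch, assuming the standard bonding status format):

```shell
# Print which slave is currently carrying traffic (active-backup mode).
# Reads bonding status on stdin, e.g.:
#   active_slave < /proc/net/bonding/bond0
active_slave() {
    awk -F': ' '/^Currently Active Slave:/ { print $2 }'
}
```

If the output switches from eno1 to eno2 after you unplug eno1, failover is working as intended.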

Applying and Verifying the Bond

After editing the interfaces file, apply it live with ifreload (provided by ifupdown2, the default network stack on current Proxmox VE releases):

ifreload -a

Verify the bond status:

cat /proc/net/bonding/bond0

You should see output showing both slave interfaces, the active bond mode, and the LACP partner information (for 802.3ad). A healthy bond looks like this:

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100
LACP rate: slow
...
Slave Interface: eno1
MII Status: up
Link Failure Count: 0
...
Slave Interface: eno2
MII Status: up
Link Failure Count: 0

Testing Failover

To confirm failover works, start a continuous ping from or to the host, then disconnect one cable or administratively disable a port:

# On the Proxmox host, watch bond events
journalctl -f -u networking

# From another machine, start a ping
ping -i 0.2 192.168.1.10

With bond-miimon 100, you should see at most a few hundred milliseconds of interruption before traffic shifts to the remaining interface. When the cable is reconnected, the interface rejoins the bond automatically.
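Each unplug should also show up as exactly one increment in that slave's Link Failure Count. A small awk summary of the status output (assuming the /proc/net/bonding format shown earlier):

```shell
# Summarize per-slave link failures from bonding status (read on stdin):
#   link_failures < /proc/net/bonding/bond0
link_failures() {
    awk '/^Slave Interface:/   { slave = $3 }
         /^Link Failure Count:/ { print slave ": " $4 }'
}
```

A steadily climbing failure count on one slave with no cable pulls usually points to a flaky cable, SFP, or switch port.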

Bonding with VLANs

Bonded interfaces work seamlessly with VLAN-aware bridges. Simply bridge on top of the bond and tag VLANs as you normally would:

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 10 20 30 100
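With the VLAN-aware bridge in place, each guest selects its VLAN via the tag on its virtual NIC. For example, to place a VM's first NIC on VLAN 10 (the VM ID 100 here is illustrative):

qm set 100 --net0 virtio,bridge=vmbr0,tag=10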

Network bonding is a foundational building block for any production Proxmox deployment. When managing bonded interfaces across multiple cluster nodes, tools like ProxmoxR let you verify bond health and network status from a single interface, so you can spot a degraded link before it becomes a full outage.

Take Proxmox management mobile

All the features discussed in this guide — accessible from your phone with ProxmoxR. Real-time monitoring, power control, firewall management, and more.
