
Proxmox VE SDN Setup: Zones, VNets, and Overlay Networks

A practical guide to configuring Software-Defined Networking in Proxmox VE, covering SDN zones, VNets, subnets, VXLAN tunnels, and EVPN routing.


What Is Proxmox SDN?

Software-Defined Networking (SDN) in Proxmox VE lets you create and manage virtual networks across your entire cluster from a single pane of glass. Instead of manually editing /etc/network/interfaces on every node, SDN gives you a centralized way to define zones, VNets, and subnets that automatically propagate to all cluster members. This is especially powerful in multi-node environments where you need consistent network segmentation without repetitive manual configuration.

Proxmox SDN supports several zone types, each suited to different use cases:

  • Simple – isolated layer 2 bridges on each node, no cross-node connectivity.
  • VLAN – traditional 802.1Q VLAN tagging on a shared physical bridge.
  • VXLAN – layer 2 overlay tunnels across layer 3 boundaries using UDP encapsulation.
  • EVPN – combines VXLAN transport with BGP-based control plane for scalable routing.
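
Every zone you define ends up in the cluster-wide configuration file /etc/pve/sdn/zones.cfg, which is replicated to all nodes via the Proxmox cluster filesystem. As a rough sketch (zone names here are illustrative, and exact keys vary by version), entries follow the usual Proxmox section-config format:

```
simple: demozone
        ipam pve

vxlan: vxzone
        peers 10.0.0.1,10.0.0.2,10.0.0.3
        ipam pve
        mtu 1450
```

VNets and subnets get analogous files (vnets.cfg, subnets.cfg) in the same directory, which is useful to know when troubleshooting or versioning your SDN setup.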

Enabling SDN in Proxmox VE

SDN has been a fully integrated core feature since Proxmox VE 8.1; the 6.x and 7.x releases shipped it as a technology preview. The core package (libpve-network-perl) is already present on recent installations, but dnsmasq (for the integrated DHCP) and frr-pythontools (for EVPN) must be installed on every cluster node:

apt update
apt install libpve-network-perl dnsmasq frr-pythontools -y

Disable the default dnsmasq service since SDN manages its own instances:

systemctl disable --now dnsmasq

After installation, the SDN section appears under Datacenter > SDN in the web UI.

Creating a Simple Zone and VNet

Start with a Simple zone to understand the workflow. In the web UI, navigate to Datacenter > SDN > Zones and click Add. Choose Simple as the type:

# Equivalent CLI command
pvesh create /cluster/sdn/zones --zone myzone --type simple

Next, create a VNet attached to this zone:

pvesh create /cluster/sdn/vnets --vnet myvnet01 --zone myzone

Add a subnet so VMs on this VNet receive IP addresses via the integrated DHCP. Note that automatic DHCP only works if the zone has it enabled (the --dhcp dnsmasq zone option, or the DHCP dropdown in the zone creation dialog):

pvesh create /cluster/sdn/vnets/myvnet01/subnets \
    --subnet 10.100.0.0/24 \
    --gateway 10.100.0.1 \
    --type subnet \
    --dhcp-range start-address=10.100.0.100,end-address=10.100.0.200

Finally, apply the configuration so it takes effect across the cluster:

pvesh set /cluster/sdn

You can now assign myvnet01 as the bridge for any VM or container network interface.
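For example, attaching the VNet from the command line works the same way as any other bridge. The VM ID 100 and container ID 101 below are placeholders for illustration:

```shell
# Attach the SDN VNet as the first NIC of an existing VM
qm set 100 --net0 virtio,bridge=myvnet01

# Same idea for an LXC container, requesting an address from the SDN DHCP
pct set 101 --net0 name=eth0,bridge=myvnet01,ip=dhcp
```

These commands require a live Proxmox node; in the web UI the VNet simply appears in the bridge dropdown of the VM's network device dialog.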

Setting Up VXLAN Overlay Networks

VXLAN zones let VMs on different nodes communicate at layer 2, even if the nodes sit on different IP subnets. This is ideal for stretched clusters or environments where adding VLANs to the physical switch fabric is impractical.

Create a VXLAN zone specifying the peer node IPs and the UDP port for tunnel traffic:

pvesh create /cluster/sdn/zones \
    --zone vxzone1 \
    --type vxlan \
    --peers 10.0.0.1,10.0.0.2,10.0.0.3 \
    --mtu 1450

Note the reduced MTU. VXLAN adds a 50-byte header, so if your physical network uses an MTU of 1500, set the VXLAN zone MTU to 1450. If your switches support jumbo frames (MTU 9000), set the zone MTU to 8950 for better performance.
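The 50-byte figure comes from the headers VXLAN wraps around each inner frame on an IPv4 underlay. A quick sanity check of the arithmetic:

```shell
# VXLAN encapsulation overhead per frame (IPv4 underlay, no outer VLAN tag):
# outer Ethernet (14) + outer IPv4 (20) + outer UDP (8) + VXLAN header (8) = 50 bytes
PHYS_MTU=1500
OVERHEAD=$((14 + 20 + 8 + 8))
echo $((PHYS_MTU - OVERHEAD))    # 1450

JUMBO_MTU=9000
echo $((JUMBO_MTU - OVERHEAD))   # 8950
```

If your underlay uses IPv6 or adds an outer VLAN tag, the overhead grows further (IPv6 adds 20 bytes, a VLAN tag 4), so budget the MTU accordingly.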

Create a VNet inside this zone, giving it a VXLAN Network Identifier (VNI) as the tag:

pvesh create /cluster/sdn/vnets --vnet prod-net --zone vxzone1 --tag 100100

EVPN for Scalable Routing

EVPN zones build on VXLAN but add a BGP control plane via FRRouting (FRR). This eliminates the need for static peer lists and enables inter-VNet routing at the fabric level.

First, create a BGP controller under Datacenter > SDN > Controllers:

pvesh create /cluster/sdn/controllers \
    --controller evpnctl \
    --type evpn \
    --asn 65001 \
    --peers 10.0.0.1,10.0.0.2,10.0.0.3

Then create the EVPN zone referencing this controller:

pvesh create /cluster/sdn/zones \
    --zone evpnzone \
    --type evpn \
    --controller evpnctl \
    --vrf-vxlan 4000 \
    --exitnodes pve1,pve2

The exitnodes parameter specifies which nodes act as gateways between the EVPN fabric and the external network. The vrf-vxlan value sets the VNI used for the layer 3 VRF that routes traffic between VNets in the zone.
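When you apply the SDN configuration, Proxmox generates an FRR configuration on each node (written to /etc/frr/frr.conf). The exact output varies by Proxmox and FRR version, but the BGP EVPN core of the generated config looks roughly like this sketch:

```
router bgp 65001
 neighbor VTEP peer-group
 neighbor 10.0.0.2 peer-group VTEP
 neighbor 10.0.0.3 peer-group VTEP
 !
 address-family l2vpn evpn
  neighbor VTEP activate
  advertise-all-vni
 exit-address-family
```

Knowing this shape helps when reading vtysh output or comparing a misbehaving node against a healthy one.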

After applying the SDN config, verify that BGP sessions are established:

vtysh -c "show bgp l2vpn evpn summary"

You should see each peer in the Established state with routes being exchanged.

Troubleshooting SDN

If VMs cannot communicate across nodes, check a few common issues:

  • Verify the SDN config has been applied: click Apply in the SDN panel or run pvesh set /cluster/sdn.
  • Check that UDP port 4789 (VXLAN default) is not blocked between nodes.
  • Inspect the generated network config: cat /etc/network/interfaces.d/sdn.
  • For EVPN, confirm FRR is running: systemctl status frr.
  • Verify the VXLAN interface exists: ip -d link show type vxlan.
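Two further commands that help narrow down cross-node problems (run as root on a node; interface and VNet names will differ per setup):

```shell
# Watch for encapsulated VXLAN traffic leaving or arriving on the node;
# silence here while VMs ping each other points at a tunnel/firewall problem
tcpdump -ni any udp port 4789

# Confirm the VNet bridge (named after the VNet, e.g. myvnet01) is up
ip -br link show
```

If tcpdump shows traffic leaving one node but never arriving on the other, focus on the physical network and firewall between the nodes rather than the SDN configuration itself.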

Proxmox SDN is a powerful feature that simplifies network management across clusters of any size. Tools like ProxmoxR can further streamline multi-node management by providing remote visibility into your SDN topology and VM network assignments from a single dashboard. Whether you choose simple VLANs or full EVPN fabric, centralizing your network definitions through SDN reduces configuration drift and makes your infrastructure more maintainable.
