VXLAN (Virtual Extensible LAN) is a network virtualization technology that extends Layer 2 segments over Layer 3 infrastructure. By encapsulating Ethernet frames in UDP packets, VXLAN enables you to build overlay networks across geographically distributed data centers, connect Kubernetes clusters across subnets, and isolate tenant traffic without physical VLAN limitations.
This guide compares three open-source implementations for self-hosted VXLAN tunneling: Open vSwitch (the dominant software switch), FRRouting (VXLAN EVPN control plane), and OVN (Open Virtual Network, the SDN overlay built on OVS). We cover deployment with Docker, configuration patterns, and when to choose each tool.
What Is VXLAN and Why Use It?
VXLAN solves a fundamental problem in modern infrastructure: the 4,096 VLAN limit. By using a 24-bit VXLAN Network Identifier (VNI), it supports up to 16 million logical networks, enough for large multi-tenant environments. Each VXLAN segment carries encapsulated Ethernet frames between VTEPs (VXLAN Tunnel Endpoints) over UDP (default port 4789), with remote peers reached either through point-to-point unicast tunnels or via multicast groups for flooded traffic.
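As a concrete illustration, a static VTEP can be created on any modern Linux host with the `ip` tool. This is a minimal sketch; the VNI, addresses, and device names are example values you would replace for your environment:

```shell
# Create a VXLAN interface with VNI 100, tunneling to one remote VTEP.
# Requires root; addresses and device names are placeholders.
ip link add vxlan100 type vxlan id 100 \
    dstport 4789 \
    local 192.0.2.1 \
    remote 192.0.2.2 \
    dev eth0
ip link set vxlan100 up

# Attach it to a bridge so local workloads share the stretched L2 segment:
ip link add br100 type bridge
ip link set vxlan100 master br100
ip link set br100 up
```

Frames entering `br100` on one host now emerge from the corresponding bridge on the remote VTEP, exactly as if both sat on the same physical switch.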
Key use cases include:
- Multi-datacenter L2 extension — stretch Layer 2 networks across sites without dark fiber
- Kubernetes pod networking — overlay networks for cross-node pod communication
- Cloud-native migration — lift-and-shift VMs to new subnets without re-IPing
- Network segmentation — isolate tenant workloads without physical switch reconfiguration
- Hybrid cloud connectivity — connect on-premises infrastructure to cloud VPCs
Unlike traditional VLAN trunking, VXLAN tunnels traverse any IP network — including the public internet (with encryption) — making it ideal for distributed infrastructure.
Open vSwitch (OVS)
Open vSwitch (3,946+ GitHub stars) is the most widely deployed software switch for VXLAN. It supports both static VXLAN tunnels (configured manually per peer) and EVPN-based VXLAN (with BGP control plane). OVS runs in the Linux kernel datapath for high throughput or in userspace for portability.
Strengths:
- Industry-standard VXLAN implementation, used by OpenStack, Kubernetes (via OVN-Kubernetes), and VMware
- Hardware offload support (SmartNICs, DPDK) for line-rate performance
- Rich OpenFlow integration for advanced traffic engineering
- Mature Docker integration via `--network=host` or macvlan
Limitations:
- Static VXLAN configuration doesn’t scale beyond a few peers
- EVPN requires a separate BGP daemon (FRRouting or BIRD)
- Kernel module dependency limits portability to non-Linux systems
Docker Deployment
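A minimal container-based deployment might look like the following. This is a sketch using the image named in the comparison table below; verify the image and tag for your registry, and note that the kernel datapath module must be loaded on the host:

```shell
# Load the OVS kernel datapath module on the host first.
modprobe openvswitch

# Run Open vSwitch privileged on the host network (image per the
# comparison table; confirm availability before relying on it).
docker run -d --name openvswitch \
  --network host \
  --privileged \
  -v /lib/modules:/lib/modules:ro \
  linuxserver/openvswitch
```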
VXLAN Tunnel Configuration
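A static point-to-point tunnel is then a few `ovs-vsctl` commands; the remote IP and VNI below are example values:

```shell
# Create a bridge and add a VXLAN port pointing at the remote VTEP.
ovs-vsctl add-br br-vxlan
ovs-vsctl add-port br-vxlan vxlan0 -- set interface vxlan0 \
  type=vxlan \
  options:remote_ip=203.0.113.2 \
  options:key=100 \
  options:dst_port=4789

# Verify the bridge and tunnel port:
ovs-vsctl show
```

Each additional peer needs its own `vxlan` port with a different `remote_ip`, which is exactly why static configuration stops scaling past a handful of sites.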
FRRouting (VXLAN EVPN)
FRRouting (4,127+ GitHub stars) provides the BGP EVPN control plane for VXLAN networks. Instead of manually configuring every VTEP peer, FRR uses BGP to automatically distribute MAC and IP reachability information, enabling scalable multi-tenant VXLAN overlays.
Strengths:
- BGP EVPN eliminates manual VTEP peer configuration
- Supports MAC/IP advertisement routes (Type 2, Type 5)
- Integrated with IRB (Integrated Routing and Bridging) for L3 VXLAN
- Active development with strong enterprise adoption
- Works alongside OVS or Linux native VXLAN interfaces
Limitations:
- Requires BGP peering infrastructure (iBGP or eBGP)
- Steeper learning curve than static VXLAN
- Needs a separate data plane (OVS, Linux bridge, or VRF)
Docker Deployment
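A typical deployment runs FRR with host networking so BGP can reach its peers directly, with a prepared configuration directory mounted in. A sketch, assuming the official `frrouting/frr` image and example paths:

```shell
# Run FRRouting on the host network with a mounted config directory.
# /etc/frr on the host should contain frr.conf and daemons files.
docker run -d --name frr \
  --network host \
  --privileged \
  -v /etc/frr:/etc/frr \
  frrouting/frr:latest
```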
EVPN Configuration
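An illustrative `frr.conf` fragment for iBGP EVPN peering; the AS number, peer address, and source interface are placeholders:

```
router bgp 65001
 neighbor 192.0.2.10 remote-as 65001
 neighbor 192.0.2.10 update-source lo
 !
 address-family l2vpn evpn
  neighbor 192.0.2.10 activate
  advertise-all-vni
 exit-address-family
```

`advertise-all-vni` makes FRR originate EVPN Type 2 (MAC/IP) and Type 3 routes for every local VNI, so remote VTEPs learn reachability over BGP instead of relying on flood-and-learn.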
OVN (Open Virtual Network)
OVN (695+ GitHub stars) is a full SDN overlay solution built on top of Open vSwitch. It provides a centralized control plane (OVN Northbound/Southbound databases) that automatically manages VXLAN tunnels, logical switches, and logical routers across all hypervisor nodes.
Strengths:
- Complete SDN stack — logical switches, routers, ACLs, load balancers
- Automatic VXLAN tunnel management (no manual VTEP config)
- Native integration with OpenStack, Kubernetes (OVN-Kubernetes), and Kubernetes Gateway API
- Supports distributed logical routing (no traffic hairpinning)
- Built-in security groups and ACLs
Limitations:
- More complex architecture (3 databases + multiple daemons)
- Requires central controller (ovn-northd) for policy distribution
- Heavier resource footprint than standalone OVS
- Best suited for cloud/VM orchestration environments
Docker Deployment
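A single-node sketch, assuming the `ovn/ovn-central` image from the comparison table; in production the northbound/southbound databases are normally clustered across three nodes:

```shell
# Central node: OVN NB/SB databases plus ovn-northd.
docker run -d --name ovn-central \
  --network host \
  --privileged \
  ovn/ovn-central

# Each hypervisor node additionally runs ovn-controller alongside OVS,
# pointed at the central southbound database.
```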
OVN Logical Network Setup
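Logical topology is declared through `ovn-nbctl`, and OVN programs the tunnels between chassis automatically (Geneve by default; VXLAN can be selected via the chassis encap type). The switch, port, MAC, and IP values below are examples:

```shell
# Create a logical switch with two logical ports.
ovn-nbctl ls-add tenant-a
ovn-nbctl lsp-add tenant-a vm1-port
ovn-nbctl lsp-set-addresses vm1-port "00:00:00:00:00:01 10.0.0.1"
ovn-nbctl lsp-add tenant-a vm2-port
ovn-nbctl lsp-set-addresses vm2-port "00:00:00:00:00:02 10.0.0.2"

# Inspect the logical topology:
ovn-nbctl show
```

Binding each logical port to a real OVS interface on a chassis is then done per node; no tunnel endpoints are ever configured by hand.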
Comparison: OVS vs FRRouting vs OVN
| Feature | Open vSwitch | FRRouting EVPN | OVN |
|---|---|---|---|
| Primary Role | Software switch / data plane | BGP EVPN control plane | Full SDN overlay |
| VXLAN Support | Native (static + EVPN) | EVPN control plane only | Native (auto-managed) |
| Control Plane | Manual or external BGP | BGP EVPN (built-in) | Centralized (OVN NB/SB) |
| L3 Routing | Via external router | Integrated IRB | Distributed logical router |
| Scalability | Limited (static) | High (BGP-based) | Very high (centralized) |
| Docker Image | linuxserver/openvswitch | frrouting/frr | ovn/ovn-central |
| GitHub Stars | 3,946 | 4,127 | 695 |
| Best For | Simple point-to-point tunnels | Multi-site EVPN fabrics | Cloud/VM orchestration |
Choosing the Right VXLAN Solution
For simple point-to-point or hub-and-spoke tunnels between a few sites, Open vSwitch with static VXLAN configuration is the quickest path. A few ovs-vsctl commands establish the tunnel, and no control plane infrastructure is needed.
For multi-site EVPN fabrics with dozens of VTEPs, FRRouting’s BGP EVPN control plane is essential. It automates MAC learning, eliminates flooding, and supports multi-tenant segmentation with separate VNIs per tenant. Pair FRR with OVS or Linux native VXLAN interfaces for the data plane.
For cloud-native environments running OpenStack or Kubernetes, OVN provides the most complete solution. Its centralized control plane automatically provisions VXLAN tunnels as workloads are scheduled, and its logical router eliminates the need for external routing daemons.
For related reading on overlay networking, see our ZeroTier vs Nebula vs Netmaker guide for alternative approaches to cross-site connectivity, and our SDN controller comparison for broader SDN orchestration options.
FAQ
What is the difference between VXLAN and a VPN?
VXLAN operates at Layer 2 (Ethernet) and extends broadcast domains across IP networks, while VPNs operate at Layer 3 (IP) or above and provide encrypted point-to-point tunnels. VXLAN is designed for data center overlay networking; VPNs are designed for secure remote access. You can combine both by running VXLAN over an IPsec tunnel.
Do I need BGP for VXLAN?
No. VXLAN works in two modes: (1) Head-end replication (static VTEP configuration) for small deployments, and (2) EVPN control plane (BGP-based) for larger multi-site fabrics. BGP EVPN is recommended when you have more than 4-6 VTEPs or need multi-tenant isolation.
What UDP port does VXLAN use?
VXLAN uses UDP port 4789 by default (IANA-assigned). Some implementations allow custom ports. Ensure your firewalls allow UDP 4789 between all VTEP endpoints.
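For example, with nftables a rule permitting VXLAN between VTEPs might look like this, assuming an existing `inet filter` table with an `input` chain and a placeholder VTEP subnet:

```shell
# Allow VXLAN (UDP 4789) from the VTEP subnet; adjust table/chain/subnet.
nft add rule inet filter input ip saddr 10.10.0.0/24 udp dport 4789 accept
```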
Can VXLAN run over the public internet?
Yes, but with caveats. VXLAN adds 50 bytes of overhead relative to the underlay MTU (20-byte outer IP + 8-byte UDP + 8-byte VXLAN header + 14-byte inner Ethernet header), which causes fragmentation or black-holing on links with a standard 1500-byte MTU. For internet transit, run VXLAN over IPsec/WireGuard for encryption (which adds further overhead), and ensure the path MTU is at least 1550 bytes or reduce the MTU inside the overlay accordingly.
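The arithmetic behind the 50-byte figure and the 1550-byte requirement is worth checking explicitly:

```shell
# VXLAN overhead relative to the underlay IP MTU, in bytes.
outer_ip=20    # outer IPv4 header
outer_udp=8    # outer UDP header
vxlan_hdr=8    # VXLAN header
inner_eth=14   # original Ethernet header, carried as payload
overhead=$((outer_ip + outer_udp + vxlan_hdr + inner_eth))
underlay_mtu=$((1500 + overhead))
echo "overhead=${overhead} bytes, required underlay MTU=${underlay_mtu}"
```

Note that an inner 802.1Q VLAN tag adds 4 more bytes, and IPv6 underlays use a 40-byte outer header instead of 20.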
How does OVN compare to Kubernetes CNI plugins?
OVN is itself a Kubernetes CNI plugin (OVN-Kubernetes). It provides more features than simpler CNIs like Flannel — including network policies, distributed logical routing, and load balancing. If you need advanced networking beyond basic pod connectivity, OVN-Kubernetes is a strong choice over Flannel or Calico.
What is the maximum number of VXLAN segments?
VXLAN uses a 24-bit VNI field, supporting 16,777,216 (2^24) logical segments. This far exceeds the 4,096 limit of 802.1Q VLANs and is sufficient for any practical multi-tenant deployment.