Container networking is the foundation of any self-hosted infrastructure. The Container Network Interface (CNI) defines a standard for connecting containers to networks, and the choice of CNI plugin determines how your containers communicate with each other, the host, and external networks.
This guide dives deep into three fundamental CNI plugin types — Linux Bridge, IPvLAN, and Macvlan — comparing their architecture, performance, isolation properties, and ideal use cases for self-hosted container deployments.
Understanding CNI Plugin Types
| Feature | Linux Bridge | IPvLAN | Macvlan |
|---|---|---|---|
| Kernel Module | bridge | ipvlan | macvlan |
| Layer | L2 (Ethernet) | L3 (IP) / L2 | L2 (Ethernet) |
| MAC Addresses | 1 per container (veth pair) | Shared (host MAC) | 1 per container |
| Performance | Good (software bridge) | Excellent (no bridge) | Excellent (no bridge) |
| IPAM Support | ✅ host-local, DHCP | ✅ host-local, DHCP | ✅ host-local, DHCP |
| Network Policy | ✅ Full support | ✅ L3 mode only | ❌ Limited |
| Hairpin Mode | Required for host access | Not needed | Required for host access |
| VLAN Support | Via vlan plugin | Via VLAN parent subinterface (e.g. eth0.10) | Via VLAN parent subinterface (e.g. eth0.10) |
| Best For | General container networking | High-performance, L3 isolation | Legacy app compatibility, direct L2 |
Linux Bridge CNI
The Linux Bridge plugin creates a virtual Ethernet bridge on the host and connects each container via a veth (virtual Ethernet) pair. One end of the veth pair lives inside the container’s network namespace, and the other end attaches to the bridge. This is the most common and widely supported CNI plugin, forming the foundation for more complex networking setups.
IPvLAN CNI
IPvLAN creates virtual network interfaces that share the host’s MAC address but have unique IP addresses. Unlike Bridge mode, IPvLAN can operate at Layer 3 (IP level), meaning the kernel routes packets directly without a software bridge. This eliminates bridge overhead and provides near-native network performance. IPvLAN has three modes: L2 (slaves share the parent’s broadcast domain), L3 (all traffic between slaves is routed), and L3S (L3 “symmetric”, which keeps traffic visible to netfilter so iptables rules and connection tracking still apply).
Macvlan CNI
Macvlan creates virtual interfaces that each have their own unique MAC address, making containers appear as separate physical devices on the network. This is useful for legacy applications that expect direct network access or for environments where containers need to be individually addressable by external firewalls, load balancers, or monitoring systems.
Deploying Linux Bridge CNI
The Linux Bridge plugin is part of the standard CNI plugins package:
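A typical installation fetches the reference plugin bundle from the containernetworking/plugins releases and unpacks it into the standard plugin directory (the version pinned below is illustrative; check the releases page for the current one):

```shell
# Install the reference CNI plugins into /opt/cni/bin
CNI_VERSION="v1.4.0"   # illustrative; use the latest release
sudo mkdir -p /opt/cni/bin
curl -sSL "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-amd64-${CNI_VERSION}.tgz" \
  | sudo tar -xz -C /opt/cni/bin

# Verify the bridge plugin landed alongside ipvlan, macvlan, host-local, etc.
ls /opt/cni/bin/bridge
```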
Bridge CNI configuration (/etc/cni/net.d/10-bridge.conf):
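A minimal sketch of such a configuration (the network name, bridge name, and subnet below are illustrative):

```json
{
  "cniVersion": "1.0.0",
  "name": "homelab-bridge",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "hairpinMode": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```

`isGateway` puts the subnet’s gateway address on the bridge itself, and `ipMasq` adds a masquerade (NAT) rule so containers can reach external networks.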
Docker Compose with custom bridge network:
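A minimal Compose sketch (service image, network name, and subnet are illustrative):

```yaml
services:
  web:
    image: nginx:alpine
    networks:
      - appnet

networks:
  appnet:
    driver: bridge
    ipam:
      config:
        - subnet: 172.28.0.0/24
          gateway: 172.28.0.1
```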
Deploying IPvLAN CNI
IPvLAN configuration (/etc/cni/net.d/10-ipvlan.conf):
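A representative configuration might look like this (the parent interface `eth0`, the subnet, and the address range are assumptions for your environment):

```json
{
  "cniVersion": "1.0.0",
  "name": "ipvlan-net",
  "type": "ipvlan",
  "master": "eth0",
  "mode": "l2",
  "ipam": {
    "type": "host-local",
    "subnet": "192.168.1.0/24",
    "rangeStart": "192.168.1.200",
    "rangeEnd": "192.168.1.250",
    "gateway": "192.168.1.1"
  }
}
```

`master` names the physical interface the ipvlan slaves attach to; switch `mode` to `l3` for routed isolation.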
Test IPvLAN with a temporary container:
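One way to test without a container runtime is a bare network namespace plus `cnitool` (shipped with the CNI project). This sketch assumes the `ipvlan-net` configuration above is in /etc/cni/net.d and the plugins are in /opt/cni/bin:

```shell
# Create a throwaway namespace and attach it to the ipvlan network
sudo ip netns add cni-test
sudo CNI_PATH=/opt/cni/bin NETCONFPATH=/etc/cni/net.d \
  cnitool add ipvlan-net /var/run/netns/cni-test

# Inspect the assigned address and ping the gateway (gateway IP is an example)
sudo ip netns exec cni-test ip addr show
sudo ip netns exec cni-test ping -c 3 192.168.1.1

# Tear down
sudo CNI_PATH=/opt/cni/bin NETCONFPATH=/etc/cni/net.d \
  cnitool del ipvlan-net /var/run/netns/cni-test
sudo ip netns del cni-test
```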
Docker native IPvLAN network:
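Docker ships an ipvlan driver of its own; a sketch using it (parent interface, subnet, and gateway are assumptions):

```shell
# Create an ipvlan network bound to the host's eth0
docker network create -d ipvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o ipvlan_mode=l2 \
  -o parent=eth0 \
  ipvlan-net

# Launch a throwaway container on it and inspect its address
docker run --rm --network ipvlan-net alpine ip addr show
```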
Deploying Macvlan CNI
Macvlan configuration (/etc/cni/net.d/10-macvlan.conf):
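A representative configuration (parent interface, subnet, and range are assumptions for your environment):

```json
{
  "cniVersion": "1.0.0",
  "name": "macvlan-net",
  "type": "macvlan",
  "master": "eth0",
  "mode": "bridge",
  "ipam": {
    "type": "host-local",
    "subnet": "192.168.1.0/24",
    "rangeStart": "192.168.1.100",
    "rangeEnd": "192.168.1.150",
    "gateway": "192.168.1.1"
  }
}
```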
Docker native Macvlan network:
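The equivalent with Docker’s built-in macvlan driver (values are illustrative):

```shell
# Create a macvlan network on eth0; each container gets its own MAC and LAN IP
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  macvlan-net

# Pin a container to a specific LAN address
docker run --rm --network macvlan-net --ip 192.168.1.120 alpine ip addr show
```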
Macvlan with VLAN tagging:
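Pointing `parent` at a VLAN subinterface tags all container traffic with that VLAN ID; Docker creates the subinterface if it does not already exist (VLAN ID and subnet are examples):

```shell
# Containers on this network send 802.1Q-tagged traffic on VLAN 10
docker network create -d macvlan \
  --subnet=10.10.0.0/24 \
  --gateway=10.10.0.1 \
  -o parent=eth0.10 \
  macvlan-vlan10
```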
Performance Comparison
Throughput and Latency
| Metric | Linux Bridge | IPvLAN (L2) | IPvLAN (L3) | Macvlan |
|---|---|---|---|---|
| TCP Throughput | ~90% native | ~98% native | ~97% native | ~98% native |
| UDP Latency | ~50μs overhead | ~5μs overhead | ~8μs overhead | ~5μs overhead |
| CPU Overhead | Moderate (bridge processing) | Minimal | Minimal | Minimal |
| Context Switches | Higher (bridge traversal) | Lower (direct route) | Lower (direct route) | Lower (direct route) |
| Scalability | Good (1000s of containers) | Excellent | Excellent | Limited (MAC table) |
Network Isolation
Linux Bridge provides L2 isolation — containers on the same bridge can communicate directly. Network policies are enforced via iptables/nftables rules on the bridge interface.
IPvLAN L3 mode provides the strongest isolation — containers can only communicate through the host’s routing stack. This is ideal for multi-tenant environments where containers must not reach each other directly.
Macvlan provides minimal isolation — all containers appear as regular network devices. External firewalls and switches see each container’s MAC address independently, which can be both a feature and a security concern.
Troubleshooting Common Issues
Bridge: Hairpin Mode
When containers need to reach themselves through the host IP, enable hairpin mode:
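With iproute2 this is a per-port flag on the bridge (the veth name below is an example; list the ports first to find yours). The bridge CNI plugin can also do this automatically via `"hairpinMode": true` in its configuration:

```shell
# List the veth ports attached to the bridge (bridge name is an example)
bridge link show | grep cni0

# Enable hairpin on a specific port
sudo bridge link set dev veth1a2b3c hairpin on

# Verify: the detailed output should show "hairpin on"
bridge -d link show dev veth1a2b3c
```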
IPvLAN: Host Communication
Containers on an IPvLAN network cannot reach the host through the parent interface by default, because the ipvlan driver drops traffic between the parent and its slave interfaces. Fix this by creating a second ipvlan interface on the host and routing the container addresses through it:
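A sketch of the shim-interface workaround (parent interface, shim address, and the routed range are assumptions matching the earlier example subnet):

```shell
# Create an ipvlan slave on the host itself
sudo ip link add ipvlan-shim link eth0 type ipvlan mode l2
sudo ip addr add 192.168.1.254/32 dev ipvlan-shim
sudo ip link set ipvlan-shim up

# Route the container address range via the shim instead of eth0
sudo ip route add 192.168.1.200/29 dev ipvlan-shim
```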
Macvlan: Host Communication
Same issue as IPvLAN — Macvlan containers cannot reach the host directly:
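The fix is the same pattern with a macvlan shim (interface name, shim address, and routed range are illustrative):

```shell
# Create a macvlan slave on the host in bridge mode
sudo ip link add macvlan-shim link eth0 type macvlan mode bridge
sudo ip addr add 192.168.1.253/32 dev macvlan-shim
sudo ip link set macvlan-shim up

# Send traffic for the container range through the shim
sudo ip route add 192.168.1.100/27 dev macvlan-shim
```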
Choosing the Right CNI Plugin
Choose Linux Bridge if:
- You need general-purpose container networking with broad compatibility
- You want Kubernetes NetworkPolicy support
- You’re running Docker or Kubernetes without specialized networking requirements
- You need NAT for containers to reach external networks
Choose IPvLAN if:
- You need maximum network performance (minimal overhead)
- You want L3 isolation between container groups (multi-tenant)
- You don’t want MAC address proliferation on your network switches
- Your network infrastructure limits the number of MAC addresses per port
Choose Macvlan if:
- Your containers need to appear as individual devices on the physical network
- You have legacy applications that require direct network access
- External firewalls or monitoring systems need to see individual container MAC addresses
- You’re migrating VMs to containers and need the same network behavior
Why Care About CNI Plugin Choice?
Performance matters at scale: For small deployments with a few containers, any CNI plugin works fine. But at hundreds or thousands of containers, the difference between Bridge and IPvLAN becomes measurable: because IPvLAN skips the software bridge, it delivers near-native throughput and can cut per-packet latency overhead by roughly 90% compared to bridge-based networking.
Security and isolation: Different plugins offer different isolation guarantees. IPvLAN L3 mode provides strong tenant isolation by default, while Macvlan exposes all containers directly to the physical network. Understanding these differences is critical for compliance and security.
Network infrastructure compatibility: Some network switches limit the number of MAC addresses per port (port security). Macvlan creates one MAC per container, which can exhaust switch MAC tables. IPvLAN shares the host MAC, avoiding this limitation entirely.
Operational complexity: Linux Bridge is the simplest to understand and troubleshoot — it behaves like a physical Ethernet switch. IPvLAN and Macvlan operate at a lower level and require understanding of kernel networking internals for advanced troubleshooting.
For teams managing container networks at scale, see our Kubernetes CNI comparison and sidecar proxy guide. If you need advanced network policies, our Kubernetes network policies deep dive covers policy enforcement patterns.
FAQ
Can I use multiple CNI plugins on the same host?
Yes. CNI supports chaining multiple plugins together. For example, you can use the Bridge plugin for basic connectivity and chain it with the Portmap plugin for port forwarding, or with the Bandwidth plugin for rate limiting. The CNI specification defines how plugins are chained in a single network configuration file.
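A chained configuration lives in a `.conflist` file with a `plugins` array; a minimal sketch combining bridge and portmap (names and subnet are illustrative):

```json
{
  "cniVersion": "1.0.0",
  "name": "bridged-with-portmap",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.22.0.0/16" }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```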
What is the difference between IPvLAN L2 and L3 modes?
IPvLAN L2 mode allows containers to communicate at Layer 2 — they can see each other’s ARP traffic and communicate directly within the same subnet. IPvLAN L3 mode routes all traffic through the host’s IP stack, providing stronger isolation — containers cannot see each other’s Layer 2 traffic and must go through routing for all communication.
Does Macvlan work with WiFi interfaces?
Generally no. Most WiFi drivers and access points do not support multiple MAC addresses on a single interface (802.11 standard limitation). Macvlan requires the underlying interface to support multiple MAC addresses, which is true for Ethernet but not for most WiFi adapters.
How do I monitor CNI plugin performance?
Use standard Linux networking tools: ip -s link shows byte/packet counts per interface, tc -s qdisc shows queueing discipline statistics, and ethtool -S shows driver-level statistics. For Kubernetes deployments, Cilium Hubble and Weave Scope provide network observability on top of CNI plugins.
Can containers on different CNI networks communicate?
Not by default. Containers on separate CNI networks are isolated from each other. To enable cross-network communication, you need a router or gateway between the networks. In Kubernetes, this is handled by the kube-proxy and Service abstraction.
What happens to existing connections when I change CNI plugins?
Changing CNI plugins requires restarting containers — the network namespace is recreated with the new plugin. Existing TCP connections will be dropped. Plan CNI changes during maintenance windows and use rolling updates to minimize disruption.