Virtual Router Redundancy Protocol (VRRP) is the backbone of high-availability IP failover in self-hosted infrastructure. When a primary server goes down, VRRP ensures a virtual IP address seamlessly migrates to a standby node — keeping services reachable without DNS propagation delays or manual intervention.
This guide compares three open-source VRRP implementations: Keepalived (the industry standard), FRRouting VRRPd (part of the FRR protocol suite), and UCARP (a lightweight CARP alternative). We’ll cover installation, Docker Compose deployments, configuration patterns, and help you choose the right tool for your HA setup.
What Is VRRP and Why Self-Host It?
VRRP (RFC 5798) allows multiple routers or servers to share a virtual IP address. One node acts as the master, holding the VIP and responding to traffic. Backup nodes monitor the master via periodic VRRP advertisements. If advertisements stop, the highest-priority backup takes over the VIP — typically within 1-3 seconds.
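The detection timing above follows directly from RFC 5798: a backup declares the master dead after Master_Down_Interval = 3 × Advertisement_Interval + Skew_Time, where Skew_Time = ((256 − Priority) × Advertisement_Interval) / 256, so higher-priority backups time out first and win the election. A quick sketch of the arithmetic:

```python
def master_down_interval(adver_interval_s: float, backup_priority: int) -> float:
    """Worst-case time (seconds) before a backup declares the master dead,
    per RFC 5798. Skew_Time biases higher-priority backups to time out
    sooner, so the best candidate takes over the VIP."""
    skew_time = ((256 - backup_priority) * adver_interval_s) / 256
    return 3 * adver_interval_s + skew_time

# With 1-second advertisements and a backup priority of 100,
# failover is detected after roughly 3.6 seconds.
print(round(master_down_interval(1.0, 100), 3))  # 3.609
```

This is why the commonly quoted failover window is 1-3+ seconds: it scales linearly with the advertisement interval.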
Self-hosting VRRP is essential for:
- Zero-downtime failover — services bound to a VIP remain accessible during server maintenance or hardware failure
- No single point of failure — eliminates dependency on a single load balancer or gateway
- Cloud-agnostic — works on bare metal, VMs, and any cloud provider without proprietary HA services
- Cost savings — avoids expensive managed load balancer or cloud HA offerings
- Full control — tune priorities, preemption, health checks, and notification scripts to your exact needs
For infrastructure teams running PostgreSQL clusters, Kubernetes control planes, or reverse proxy fleets, VRRP provides the network-layer redundancy that application-level health checks can’t replace.
Keepalived: The Industry Standard
Keepalived (4,500+ GitHub stars) is the most widely deployed VRRP implementation. Originally designed for Linux Virtual Server (LVS) load balancing, it has evolved into a standalone VRRP daemon with extensive health checking capabilities.
Key features:
- Full VRRPv2 and VRRPv3 (IPv4/IPv6) support
- Built-in TCP/HTTP/SMTP/SSL health checks
- Notification scripts on state transitions
- Multiple VIP support per instance
- Preemption control and priority-based election
- LVS integration for load balancer HA
Keepalived Docker Compose Setup
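A minimal Docker Compose sketch using the community osixia/keepalived image (the image tag and in-container config path are assumptions; verify them against the image's documentation before deploying):

```yaml
services:
  keepalived:
    image: osixia/keepalived:2.0.20        # pin a tag you have tested
    network_mode: host                     # VRRP multicast needs the host NIC
    cap_add:
      - NET_ADMIN
      - NET_BROADCAST
      - NET_RAW
    volumes:
      # Mount path follows the osixia image layout; adjust if yours differs.
      - ./keepalived.conf:/container/service/keepalived/assets/keepalived.conf:ro
    restart: unless-stopped
```

Run the same compose file on the backup node, pointing at a keepalived.conf with a lower priority.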
Keepalived Configuration Example
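A representative keepalived.conf for the master node, showing a health-checked VIP with notification scripts (the interface name, VIP, password, and script paths are placeholders):

```
# Health check: fail over if nginx is not running
vrrp_script chk_nginx {
    script "/usr/bin/pgrep nginx"
    interval 2        # run every 2 seconds
    fall 2            # 2 consecutive failures => mark down
    rise 2            # 2 consecutive successes => mark up
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0              # interface carrying VRRP advertisements
    virtual_router_id 51        # must match on all peers
    priority 150                # backups use a lower value, e.g. 100
    advert_int 1                # advertisement interval in seconds
    authentication {
        auth_type PASS
        auth_pass changeme      # placeholder; use a strong secret
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
        chk_nginx
    }
    notify_master "/etc/keepalived/on-master.sh"
    notify_backup "/etc/keepalived/on-backup.sh"
}
```

The backup node uses the same file with `state BACKUP` and a lower `priority`.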
FRRouting VRRPd: VRRP as Part of a Full Routing Suite
FRRouting (FRR) (4,100+ GitHub stars) is a comprehensive routing protocol suite supporting BGP, OSPF, IS-IS, RIP, and VRRP through its vrrpd daemon. If you’re already running FRR for dynamic routing, VRRPd provides native VRRP without deploying a separate tool.
Key features:
- Integrated with BGP, OSPF, and other FRR protocols
- VRRPv3 support (IPv4 and IPv6)
- Configuration via FRR’s vtysh CLI or config files
- Consistent operational model across all protocols
- Active development with quarterly releases
FRRouting VRRPd Docker Compose Setup
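A minimal sketch with the official frrouting/frr image (the tag is an assumption; pin a release you have tested). The vrrpd daemon must be enabled in the mounted daemons file:

```yaml
services:
  frr:
    image: frrouting/frr:v8.4.2     # pin a tested release
    network_mode: host              # vrrpd must see the real interfaces
    cap_add:
      - NET_ADMIN
      - NET_RAW
      - SYS_ADMIN
    volumes:
      # Directory containing frr.conf, vtysh.conf, and the daemons file
      # (set "vrrpd=yes" in daemons so the VRRP daemon starts).
      - ./frr:/etc/frr
    restart: unless-stopped
```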
FRRouting VRRPd Configuration
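A representative frr.conf fragment (the VRID, interface, and address are placeholders). FRR attaches VRRP state to an interface, and its advertisement interval is expressed in milliseconds; note that FRR's vrrpd also expects a macvlan subinterface carrying the virtual router MAC, per the FRR VRRP documentation:

```
interface eth0
 vrrp 51 version 3
 vrrp 51 priority 150
 vrrp 51 advertisement-interval 1000
 vrrp 51 ip 192.168.1.100
exit
```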
VRRPd configuration is managed through FRR’s unified config system, accessible via vtysh for runtime adjustments:
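For example, to inspect state and adjust a priority at runtime (VRID 51 and the interface are placeholders):

```
$ vtysh -c "show vrrp"
$ vtysh
frr# configure terminal
frr(config)# interface eth0
frr(config-if)# vrrp 51 priority 120
frr(config-if)# end
frr# write memory
```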
UCARP: Lightweight CARP Implementation
UCARP (170+ GitHub stars) implements the Common Address Redundancy Protocol (CARP) — OpenBSD’s patent-free alternative to VRRP. CARP uses cryptographic authentication and is designed for simplicity over feature richness.
Key features:
- Patent-free CARP protocol (not VRRP)
- Cryptographic HMAC-SHA1 authentication
- Minimal resource footprint
- Simple configuration with command-line arguments
- Suitable for embedded systems and lightweight deployments
UCARP Docker Compose Setup
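A minimal sketch; there is no widely maintained official UCARP image, so the image name below is a placeholder for any image with the ucarp binary on its PATH (build your own or use a community one):

```yaml
services:
  ucarp:
    image: your-registry/ucarp:latest   # placeholder; needs the ucarp binary
    network_mode: host
    cap_add:
      - NET_ADMIN
      - NET_BROADCAST
      - NET_RAW
    command: >
      ucarp --interface=eth0 --srcip=192.168.1.11
            --vhid=1 --pass=changeme --addr=192.168.1.100
            --upscript=/etc/ucarp/vip-up.sh
            --downscript=/etc/ucarp/vip-down.sh
    volumes:
      - ./vip-up.sh:/etc/ucarp/vip-up.sh:ro
      - ./vip-down.sh:/etc/ucarp/vip-down.sh:ro
    restart: unless-stopped
```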
UCARP Command-Line Configuration
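Running ucarp directly on the host looks like this (addresses and the password are placeholders). In CARP, the node advertising most frequently wins the election, so a backup runs the same command with a higher `--advskew`:

```sh
# Lowest --advbase/--advskew becomes master; --preempt lets this node
# reclaim the VIP after it recovers.
ucarp --interface=eth0 --srcip=192.168.1.11 --vhid=1 \
      --pass=changeme --addr=192.168.1.100 \
      --advbase=1 --preempt \
      --upscript=/etc/ucarp/vip-up.sh \
      --downscript=/etc/ucarp/vip-down.sh
```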
VIP Up/Down Scripts
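UCARP never moves the address itself; it only invokes these scripts on state changes, passing the interface name as the first argument. Minimal examples (the VIP is a placeholder):

```sh
#!/bin/sh
# vip-up.sh — called when this node becomes master; $1 is the interface.
/sbin/ip addr add 192.168.1.100/24 dev "$1"
```

```sh
#!/bin/sh
# vip-down.sh — called when this node loses master; $1 is the interface.
/sbin/ip addr del 192.168.1.100/24 dev "$1"
```

Anything else the failover needs (gratuitous ARP, service restarts, alerts) goes in these scripts too.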
Comparison: Keepalived vs FRRouting VRRPd vs UCARP
| Feature | Keepalived | FRRouting VRRPd | UCARP (CARP) |
|---|---|---|---|
| Protocol | VRRPv2/v3 | VRRPv3 | CARP |
| GitHub Stars | 4,500+ | 4,100+ (FRR) | 170+ |
| IPv6 Support | Yes | Yes | Limited |
| Health Checks | Built-in (HTTP/TCP/SMTP) | Via FRR tracking | External scripts |
| Configuration | Declarative config file | FRR vtysh/config | Command-line args |
| Preemption | Configurable | Configurable | Yes (default) |
| Multiple VIPs | Yes | Yes | Per-instance |
| Notification Scripts | Yes | Limited | Up/down scripts |
| LVS Integration | Native | No | No |
| BGP/OSPF Integration | No | Native (FRR suite) | No |
| Resource Usage | Low | Medium | Minimal |
| Docker Support | Yes (osixia image) | Yes (frrouting image) | Yes (linuxserver image) |
| Active Development | Yes (regular releases) | Yes (quarterly) | Sporadic (2019) |
| Learning Curve | Moderate | High (FRR knowledge) | Low |
Choosing the Right VRRP Implementation
Choose Keepalived if:
- You need the most battle-tested VRRP implementation with proven production track records
- Built-in health checks (HTTP, TCP, SSL) are essential for your failover logic
- You’re running LVS load balancers and want tight integration
- You need notification scripts for automated remediation on state changes
Choose FRRouting VRRPd if:
- You already run FRR for BGP, OSPF, or other routing protocols
- You want unified configuration and monitoring across all network protocols
- VRRPv3 IPv6 support is a requirement
- You prefer a single daemon managing your entire routing stack
Choose UCARP if:
- You need the simplest possible VIP failover with minimal configuration
- Patent-free CARP protocol is a legal requirement
- You’re deploying on resource-constrained hardware or embedded systems
- You prefer explicit up/down scripts over declarative health checks
Best Practices for VRRP Deployments
Use dedicated interfaces — Run VRRP on a separate network interface from production traffic to prevent split-brain scenarios during network congestion.
Secure VRRP traffic — Keepalived supports simple-password (PASS) authentication for VRRPv2, and CARP authenticates advertisements with HMAC-SHA1; VRRPv3 (including FRR's VRRPd) dropped in-protocol authentication, so isolate that traffic on a trusted network segment instead. Use strong passwords where supported and rotate them periodically.
Monitor failover events — Log all state transitions and set up alerting. Use Keepalived's notify_* scripts or VRRPd's show vrrp to track election history.
Test failover regularly — Schedule periodic failover tests by stopping the VRRP daemon on the master node. Verify the backup takes over within the expected timeframe (1-3 seconds).
Avoid split-brain — Ensure only one node is MASTER at any time. Use preempt_delay in Keepalived to prevent flapping during brief network partitions.
Coordinate with application health — VRRP handles network-layer failover, but application-level health checks should complement it. Combine VRRP with health check scripts that verify the actual service state.
Why Self-Host VRRP?
Running your own VRRP infrastructure gives you complete control over failover behavior, priorities, and health checks — without relying on cloud provider-specific HA solutions. For organizations operating across multiple data centers or on bare metal, VRRP provides a standardized, vendor-neutral approach to IP redundancy.
Combined with load balancers like HAProxy or NGINX, VRRP creates a robust HA architecture that handles both network-layer and application-layer failures. For database clusters, pairing VRRP with PostgreSQL streaming replication or MySQL Group Replication ensures both network access and data consistency during failover events.
For related infrastructure topics, see our Keepalived vs Corosync HA clustering guide, LVS load balancing guide, and HAProxy load balancing guide.
FAQ
What is the difference between VRRP and CARP?
VRRP (Virtual Router Redundancy Protocol, RFC 5798) is an IETF standard supported by most vendors. CARP (Common Address Redundancy Protocol) is OpenBSD’s patent-free alternative that uses cryptographic HMAC authentication. UCARP implements CARP, while Keepalived and FRRouting implement VRRP.
How fast is VRRP failover?
With default settings (advert_int=1), failover typically occurs within 3 seconds. Reducing advert_int to 0.5-1 seconds and adjusting the master down interval can achieve sub-second failover, but may increase false positives during brief network glitches.
Can I run VRRP in a Docker container?
Yes, but the container requires NET_ADMIN, NET_RAW, and NET_BROADCAST capabilities, and typically needs network_mode: host to receive VRRP multicast traffic. All three tools in this guide support Docker deployment.
Does VRRP work across different subnets?
No. VRRP operates at Layer 2 and requires all participating nodes to be on the same broadcast domain (subnet). For cross-subnet HA, consider BGP anycast or DNS-based failover instead.
Should I use VRRP or a cloud load balancer?
For cloud deployments, managed load balancers (AWS ALB, GCP LB, Azure LB) are often simpler. For bare metal, on-premises, or multi-cloud setups, VRRP provides a cost-effective, self-managed alternative with no vendor lock-in.
What happens if both VRRP nodes think they’re MASTER?
This is a split-brain condition, typically caused by network partition or misconfigured authentication. Both nodes will hold the VIP, causing duplicate IP conflicts. Use dedicated VRRP interfaces, strong authentication, and preempt delay to prevent this.