DNS anycast is one of the most powerful yet underutilized techniques for building resilient, low-latency infrastructure. By advertising the same IP address from multiple locations, anycast ensures that DNS queries are automatically routed to the nearest healthy server — no load balancer, no geo-DNS provider, no SaaS dependency required.
In this guide, we compare three self-hosted tools for building DNS anycast: BIRD, FRRouting (FRR), and Keepalived. We cover how each works, when to use which, and provide complete Docker and bare-metal deployment configurations.
Why Self-Host DNS Anycast?
Commercial DNS providers like Cloudflare, AWS Route 53, and Google Cloud DNS offer anycast out of the box. But they come with trade-offs: vendor lock-in, unpredictable pricing at scale, and limited control over routing policies.
Self-hosted DNS anycast gives you:
- Full control over routing decisions, BGP communities, and failover behavior
- Predictable costs — no per-query fees, no egress charges between your own servers
- Data sovereignty — DNS data never leaves your infrastructure
- Resilience — no single provider outage can take down your entire DNS stack
- Learning — deep understanding of how BGP, VRRP, and anycast actually work
For organizations running authoritative DNS (with PowerDNS, BIND9, or Knot DNS) across multiple data centers, anycast is the gold standard for availability. Let’s look at the tools that make it possible.
Comparison Overview
| Feature | BIRD | FRRouting (FRR) | Keepalived |
|---|---|---|---|
| Primary protocol | BGP, OSPF, RIP, BFD | BGP, OSPF, IS-IS, RIP, BFD | VRRP, BFD |
| Anycast method | BGP route advertisement | BGP route advertisement | VRRP virtual IP |
| License | GPLv2 | GPLv2 | GPLv2+ |
| Language | C | C | C |
| Latest release | 2.16.x | 10.2+ | 2.3.x |
| GitHub stars | N/A (GitLab) | 4,100+ | 4,500+ |
| Last updated | Active (GitLab) | Apr 2026 | Nov 2025 |
| Docker support | Community images | Official + community | Official + community |
| Configuration style | Declarative config | Cisco/Juniper-like CLI | Declarative config |
| Learning curve | Medium | High | Low |
| Best for | Simple anycast, edge routers | Full routing suite, ISPs | VRRP failover, simple HA |
When to Use Each Tool
BIRD is the simplest option for pure anycast. If your upstream provider already peers with you and you just need to advertise an anycast prefix, BIRD does it in ~20 lines of configuration. It is lightweight, fast, and has been the go-to for DNS anycast since the early 2000s.
FRRouting is the heavyweight choice. It supports the most protocols (BGP, OSPF, IS-IS, RIP, BFD, PIM, EIGRP) and is actively developed by a large community. Choose FRR if you need advanced BGP features like route filtering, communities, or multipath load balancing across multiple upstreams.
Keepalived takes a different approach. Instead of BGP, it uses VRRP to elect a master router that owns a virtual IP. This works well within a single data center but does not provide true anycast across geographic locations. Use Keepalived for local high availability rather than cross-site anycast.
BIRD: Minimalist BGP Daemon
BIRD (the BIRD Internet Routing Daemon — a recursive acronym) has been a staple of DNS anycast deployments for over two decades. Its strength is simplicity — it speaks BGP and a handful of IGPs, and it does them well.
BGP Configuration for DNS Anycast
Here is a production-ready BIRD configuration for advertising a /32 anycast prefix:
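A minimal BIRD 2.x sketch — the anycast address 192.0.2.53, local AS 64512, and upstream peer 10.0.1.254 in AS 64511 are all placeholders to substitute with your own values:

```
log syslog all;
router id 10.0.1.1;

# The anycast /32 is injected as a static route so it can be
# exported to BGP; the address itself is bound to the loopback
# interface so the local DNS server can answer on it.
protocol static anycast_routes {
    ipv4;
    route 192.0.2.53/32 blackhole;
}

protocol device { }

protocol bgp upstream {
    description "Transit upstream";
    local as 64512;
    neighbor 10.0.1.254 as 64511;
    ipv4 {
        import none;                            # accept nothing back
        export where proto = "anycast_routes";  # advertise only the anycast /32
    };
}
```

Before starting BIRD, bind the address locally so the DNS daemon can listen on it: `ip addr add 192.0.2.53/32 dev lo`.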
Docker Compose for BIRD
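A compose sketch — BIRD has no official Docker image, so the image name below is a placeholder for whichever community build you trust:

```yaml
services:
  bird:
    image: bird:2.16            # placeholder — substitute a community BIRD image
    network_mode: host          # BIRD needs the host's interfaces for peering
    cap_add:
      - NET_ADMIN               # required to manipulate kernel routing tables
    volumes:
      - ./bird.conf:/etc/bird/bird.conf:ro
    restart: unless-stopped
```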
The `network_mode: host` setting is required because BIRD needs direct access to the host's network interfaces for BGP peering. The `NET_ADMIN` capability allows it to manipulate kernel routing tables.
Health Checking with BIRD
BIRD supports BFD (Bidirectional Forwarding Detection) for fast failure detection:
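A BFD sketch for BIRD 2.x, reusing the same assumed peer and AS numbers as above (`eth0` is the interface facing the upstream):

```
protocol bfd {
    interface "eth0" {
        min rx interval 100 ms;
        min tx interval 100 ms;
        multiplier 3;           # declare failure after 3 missed packets (~300 ms)
    };
}

protocol bgp upstream {
    local as 64512;
    neighbor 10.0.1.254 as 64511;
    bfd on;                     # tear the BGP session down as soon as BFD fails
    ipv4 {
        import none;
        export where proto = "anycast_routes";
    };
}
```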
With BFD configured, BIRD detects upstream failures in under 1 second and withdraws the anycast route, causing traffic to shift to the next closest node.
FRRouting: Full-Featured Routing Suite
FRRouting (FRR) is a fork of Quagga that has become one of the most popular open-source routing platforms. With over 4,100 GitHub stars and commits as recent as April 2026, it is under active development.
BGP Configuration for DNS Anycast
FRR uses a Cisco-like CLI, which is familiar to network engineers:
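A minimal FRR sketch using the same illustrative addresses as the BIRD examples (AS 64512, upstream 10.0.1.254 in AS 64511); the static Null0 route makes the prefix eligible for the `network` statement:

```
! /etc/frr/frr.conf
ip route 192.0.2.53/32 Null0
!
router bgp 64512
 bgp router-id 10.0.1.1
 neighbor 10.0.1.254 remote-as 64511
 !
 address-family ipv4 unicast
  network 192.0.2.53/32
 exit-address-family
```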
Advanced BGP: Communities and Route Maps
FRR shines when you need advanced routing policies:
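A sketch of a community-tagging route map — the community value 64512:100 is illustrative; its meaning must be agreed with your upstream:

```
ip prefix-list ANYCAST-PFX seq 5 permit 192.0.2.53/32
!
route-map ANYCAST-OUT permit 10
 match ip address prefix-list ANYCAST-PFX
 set community 64512:100       # e.g. "anycast prefix" — per your upstream's policy
route-map ANYCAST-OUT deny 20  # advertise nothing else
!
router bgp 64512
 address-family ipv4 unicast
  neighbor 10.0.1.254 route-map ANYCAST-OUT out
 exit-address-family
```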
This configuration attaches BGP communities to the anycast prefix, allowing upstream peers to apply traffic engineering policies based on those communities.
Docker Compose for FRR
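A compose sketch using the FRR project's container image (pin a tag you have tested against the current release):

```yaml
services:
  frr:
    image: quay.io/frrouting/frr:10.2.1   # pin a tested tag
    network_mode: host
    cap_add:
      - NET_ADMIN
      - NET_RAW
      - SYS_ADMIN
    volumes:
      - ./frr:/etc/frr          # frr.conf, daemons, vtysh.conf
    restart: unless-stopped
```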
The daemons file controls which routing protocols FRR starts:
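A sketch of the daemons file for this anycast setup — only BGP and BFD enabled (zebra and staticd are core components that start regardless):

```
# /etc/frr/daemons
bgpd=yes
bfdd=yes
ospfd=no
ospf6d=no
ripd=no
ripngd=no
isisd=no
pimd=no
```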
Only enable the daemons you need. Running fewer protocols reduces memory usage and attack surface.
Keepalived: VRRP-Based High Availability
Keepalived takes a fundamentally different approach. Rather than advertising routes via BGP, it uses VRRP (Virtual Router Redundancy Protocol) to elect a master that owns a shared virtual IP. This works within a single broadcast domain — ideal for data center local HA, not cross-site anycast.
Keepalived Configuration
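A keepalived sketch for the master node — the VIP, interface name, and priorities are placeholders; the backup node uses `state BACKUP` and a lower priority such as 100:

```
# /etc/keepalived/keepalived.conf
vrrp_script check_dns {
    script "/etc/keepalived/check_dns.sh"
    interval 5      # run the check every 5 seconds
    fall 2          # 2 consecutive failures -> mark FAULT
    rise 2          # 2 consecutive successes -> recover
    weight -20      # subtract 20 from priority while failing
}

vrrp_instance DNS_VIP {
    state MASTER
    interface eth0
    virtual_router_id 53
    priority 150
    advert_int 1
    virtual_ipaddress {
        192.0.2.53/32
    }
    track_script {
        check_dns
    }
}
```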
DNS Health Check Script
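A minimal health-check sketch — `example.com` stands in for a zone your server is actually authoritative for; the script's exit status is what Keepalived evaluates:

```sh
#!/bin/sh
# /etc/keepalived/check_dns.sh
# Exit 0 only if the local DNS server answers an SOA query within 2 seconds.
dig +time=2 +tries=1 @127.0.0.1 example.com SOA > /dev/null 2>&1
```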
Make the script executable with `chmod +x /etc/keepalived/check_dns.sh`. When the health check fails, Keepalived reduces the node's priority and triggers a failover to the backup node.
Docker Compose for Keepalived
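A compose sketch — there is no canonical official Keepalived image, so the image name is a placeholder for a community or self-built one:

```yaml
services:
  keepalived:
    image: keepalived:2.3       # placeholder — community or self-built image
    network_mode: host          # VRRP multicast does not cross bridge networks
    cap_add:
      - NET_ADMIN
      - NET_BROADCAST
      - NET_RAW
    volumes:
      - ./keepalived.conf:/etc/keepalived/keepalived.conf:ro
    restart: unless-stopped
```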
Note: Keepalived in Docker requires `network_mode: host` because VRRP uses multicast traffic that does not traverse Docker bridge networks.
Comparison: BGP Anycast vs VRRP
| Criterion | BGP Anycast (BIRD/FRR) | VRRP (Keepalived) |
|---|---|---|
| Geographic scope | Global — works across data centers | Local — single L2 domain |
| Failover speed | 1-30s (BGP convergence) | < 3s (VRRP advertisement) |
| Upstream dependency | Requires BGP-speaking upstream | No upstream changes needed |
| IP address model | Multiple nodes share same /32 | Master-backup, single active |
| Traffic distribution | All nodes active simultaneously | Only master handles traffic |
| Configuration complexity | Medium-High | Low-Medium |
| Protocol support | BGP, OSPF, BFD | VRRP, BFD |
For true multi-site DNS anycast, BGP (via BIRD or FRR) is the only viable option. Keepalived is best used as a complement — for example, running Keepalived for local HA within each data center, and BIRD/FRR for anycast between data centers.
Step-by-Step Deployment: Two-Site DNS Anycast
Here is a practical example of deploying DNS anycast across two data centers using BIRD.
Network Topology
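A sketch of the assumed two-site topology (addresses and AS numbers are illustrative):

```
                  Internet (BGP)
                 /              \
     Upstream A (AS 64511)   Upstream B (AS 64521)
               |                  |
       +-------+------+   +-------+------+
       |  DC1 node    |   |  DC2 node    |
       |  10.0.1.1    |   |  10.0.2.1    |
       |  BIRD + DNS  |   |  BIRD + DNS  |
       +--------------+   +--------------+
        both advertise 192.0.2.53/32
```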
Step 1: Configure Authoritative DNS
Deploy PowerDNS on both nodes:
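A compose sketch per node — the image tag should be checked against current PowerDNS releases, and the anycast IP must first be bound to the loopback (`ip addr add 192.0.2.53/32 dev lo`) with `local-address=192.0.2.53` set in the referenced config:

```yaml
services:
  pdns:
    image: powerdns/pdns-auth-49   # check Docker Hub for the current tag
    network_mode: host             # so it can bind the anycast IP directly
    volumes:
      - ./pdns.conf:/etc/powerdns/pdns.conf:ro
    restart: unless-stopped
```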
For related reading, see our authoritative DNS comparison and DNS load balancing guide for complementary strategies.
Step 2: Configure BIRD on Both Nodes
DC1 (/etc/bird/bird.conf):
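A sketch of the DC1 config, using the same placeholder addressing as earlier examples:

```
log syslog all;
router id 10.0.1.1;

protocol static anycast_routes {
    ipv4;
    route 192.0.2.53/32 blackhole;
}

protocol device { }

protocol bgp upstream {
    local as 64512;
    neighbor 10.0.1.254 as 64511;   # DC1's upstream router
    ipv4 {
        import none;
        export where proto = "anycast_routes";
    };
}
```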
DC2 uses identical config with router id 10.0.2.1 and its own upstream neighbor IP.
Step 3: Verify Anycast is Working
From a client machine, run:
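A few verification commands, assuming `example.com` is a zone the nodes serve:

```sh
# Does the anycast address answer at all?
dig @192.0.2.53 example.com SOA +short

# Which node answered? (works if the server exposes its identity via CH TXT)
dig @192.0.2.53 version.bind CH TXT +short

# Which path does traffic take from this vantage point?
mtr -rn -c 10 192.0.2.53
```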
When both nodes are healthy, clients in different geographic regions receive responses from different nodes — automatically, with no client-side configuration.
Monitoring and Alerting
Anycast DNS requires monitoring at multiple layers:
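A minimal multi-layer probe sketch (names and the zone are assumptions to adapt to your setup); external vantage-point probing covers the layer this script cannot see:

```sh
#!/bin/sh
# Layer 1: is the BGP session Established? (BIRD)
birdc show protocols | grep -q 'upstream.*Established' \
    || echo "ALERT: BGP session down"

# Layer 2: does the local DNS daemon answer?
dig +time=2 +tries=1 @127.0.0.1 example.com SOA > /dev/null 2>&1 \
    || echo "ALERT: local DNS not answering"

# Layer 3: is the anycast address still bound locally?
ip addr show dev lo | grep -q '192.0.2.53' \
    || echo "ALERT: anycast address missing from loopback"
```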
For deeper network monitoring, pair this with tools like Gatus for blackbox probing or network observability stacks.
Troubleshooting Common Issues
BGP Session Not Establishing
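Some diagnostic commands, assuming the peer addressing from earlier examples. Common causes are a wrong remote AS, a firewall blocking TCP/179, or a mismatched peer address:

```sh
# Is the TCP session even attempted? (BGP runs over TCP port 179)
tcpdump -ni eth0 tcp port 179

# BIRD: protocol state and last error
birdc show protocols all upstream

# FRR: session summary and per-neighbor detail
vtysh -c 'show bgp summary'
vtysh -c 'show bgp neighbors 10.0.1.254'
```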
Anycast IP Not Responding
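Checks that walk the path from route advertisement down to the listening daemon:

```sh
# Is the route actually being exported to the upstream? (BIRD)
birdc show route export upstream

# Is the anycast IP bound locally, and is something listening on port 53?
ip addr show dev lo
ss -ulpn 'sport = :53'

# Does the server answer when queried directly on the anycast IP?
dig @192.0.2.53 example.com SOA
```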
Traffic Not Distributing
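A sketch for diagnosing skewed distribution — query from several vantage points to see which node answers, then steer traffic with standard BGP tools such as AS-path prepending (FRR syntax shown; the prepend count is illustrative):

```sh
# From each vantage point, identify the answering node:
dig @192.0.2.53 version.bind CH TXT +short

# If one node absorbs too much traffic, make its advertisement less
# attractive, e.g. with AS-path prepending in its route map (FRR):
#   route-map ANYCAST-OUT permit 10
#    set as-path prepend 64512 64512
```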
FAQ
What is DNS anycast and how does it work?
DNS anycast works by advertising the same IP address from multiple geographic locations using BGP. Internet routers automatically direct queries to the topologically nearest node based on BGP path metrics. If one node goes offline, BGP withdraws its route and traffic shifts to the next nearest node — typically within seconds.
Can I use DNS anycast with a single data center?
Technically yes, but it defeats the purpose. Anycast’s value comes from geographic distribution — placing DNS servers close to your users worldwide. With a single data center, you get redundancy but not latency improvement. For single-site setups, VRRP (Keepalived) for local HA is more appropriate.
Does DNS anycast require my own ASN?
Most transit ISPs will accept BGP advertisements from customers without their own ASN using private AS numbers (64512-65534) that the ISP strips before propagating. However, for full control and direct peering, you will need your own public ASN from your regional internet registry (RIR).
How fast is failover with BGP anycast?
BGP convergence time depends on your configuration. With BFD enabled, failure detection happens in 300-900ms, and route withdrawal propagates in 1-5 seconds. Without BFD, standard BGP hold timers (typically 90-180 seconds) mean much slower failover. Always use BFD for production anycast.
Can I combine BGP anycast and VRRP?
Yes, this is a common production pattern. Use VRRP (Keepalived) for high availability within each data center — so if one DNS server fails, another takes over the local anycast VIP. Then use BIRD or FRR to advertise that VIP via BGP from each data center. This gives you both local HA and geographic anycast.
What is the minimum number of nodes for DNS anycast?
You need at least 2 nodes for anycast to provide redundancy. With 2 nodes, if one fails, all traffic goes to the other. With 3+ nodes, you get geographic distribution — clients in different regions hit different servers. Major DNS operators run dozens or hundreds of anycast nodes globally.
Conclusion
Building self-hosted DNS anycast infrastructure is more accessible than ever. BIRD remains the simplest choice for straightforward anycast deployments — a few lines of configuration and you are advertising your DNS prefix globally. FRRouting is the go-to when you need advanced BGP features, protocol flexibility, and the backing of an active open-source community. Keepalived serves a different but complementary role, providing local high availability within each data center through VRRP.
For most organizations, the winning pattern is: Keepalived for local HA + BIRD or FRR for cross-site anycast + PowerDNS or BIND9 for authoritative DNS. This combination delivers the resilience and performance of commercial anycast DNS, entirely self-hosted.