SQLite is the world’s most deployed database engine, embedded in billions of devices. Its simplicity, zero-configuration setup, and single-file architecture make it ideal for self-hosted applications. But SQLite alone does not handle replication, high availability, or multi-node clustering.
Three open-source projects solve this problem in fundamentally different ways: rqlite, LiteFS, and dqlite. This guide compares their architectures, trade-offs, and deployment patterns so you can choose the right distributed SQLite solution for your infrastructure.
Why Use a Distributed SQLite Database
SQLite stores data in a single file on disk. This is brilliant for local applications but becomes a liability when you need:
- High availability — if the server crashes, the database goes offline with it
- Read scaling — all queries hit a single machine, and SQLite permits only one writer at a time
- Automatic failover — without it, a crashed node stays down until someone manually restores it
- Multi-region access — latency becomes unacceptable when clients connect across data centers
Distributed SQLite solutions address these by replicating data across multiple nodes. Instead of replacing SQLite, they layer consensus or replication on top of it, preserving SQLite’s SQL compatibility while adding fault tolerance. For self-hosters running lightweight infrastructure, this means you get the simplicity of SQLite with the resilience of a clustered database.
If you need more traditional distributed SQL databases with full PostgreSQL or MySQL compatibility, see our CockroachDB vs YugabyteDB vs TiDB comparison and PostgreSQL vs MySQL vs MariaDB guide.
Architecture Comparison
Each project takes a different approach to distribution:
| Feature | rqlite | LiteFS | dqlite |
|---|---|---|---|
| Approach | Raft consensus on SQLite commands | FUSE filesystem replication | Raft consensus on SQLite internals |
| Language | Go | Go | C |
| Consensus | Raft (hashicorp/raft) | Primary lease (Consul or static) | Raft (custom implementation) |
| Write Model | Leader only, replicated via Raft log | Primary node writes, replicas stream the WAL | Leader only, replicated via Raft log |
| Read Model | Weak reads on any node; strong reads from leader | Local reads on any replica | Strong reads from any node |
| Storage | SQLite file per node | FUSE-mounted SQLite file per node | In-memory SQLite state + disk WAL |
| Transactions | Full SQLite transactions | Full SQLite transactions | Full SQLite transactions |
| GitHub Stars | 17,429 | 4,743 | 4,303 |
| License | MIT | Apache-2.0 | AGPL-3.0 (with commercial option) |
| Maintained by | Independent community | Superfly (Fly.io) | Canonical (Ubuntu) |
rqlite — Raft on Top of SQLite
rqlite wraps a standard SQLite database with a Raft consensus layer. Every SQL command (INSERT, UPDATE, DELETE) is written to the Raft log and replicated to all nodes before being applied. This guarantees that all nodes converge to the same state.
The key insight: rqlite does not modify SQLite itself. It uses the standard mattn/go-sqlite3 driver and treats SQLite as a black box. The Raft layer ensures command ordering and replication.
Strengths:
- Simple deployment — single binary with no external dependencies
- Strong consistency with Raft consensus
- Built-in HTTP/REST API for easy integration
- Automatic leader election and failover
- Snapshot support for fast node recovery
Limitations:
- Write throughput limited by Raft consensus latency
- Not a drop-in replacement for SQLite (you must use the HTTP API or a client library)
- No native connection pooling
LiteFS — FUSE Filesystem Replication
LiteFS, developed by Superfly (the company behind Fly.io), uses a FUSE (Filesystem in Userspace) mount to intercept SQLite file operations. The primary node writes to the database normally, while LiteFS streams the Write-Ahead Log (WAL) to replica nodes. Replicas receive and replay the WAL, keeping their local SQLite files in sync.
Unlike rqlite, LiteFS does not use a consensus algorithm. Instead, it relies on a single primary node for writes, with replicas pulling changes asynchronously.
Strengths:
- Transparent to SQLite — any SQLite application works without modification
- Low read latency — replicas serve reads from local files
- Simpler architecture — no consensus overhead for writes
- Designed for edge deployments with Fly.io integration
Limitations:
- Single primary node — no automatic failover without external orchestration
- Requires FUSE support on the host OS (not available on all platforms)
- Replicas have eventual consistency, not strong consistency
- Relies on Consul (self-hosted or Fly.io’s managed instance) or a static lease for primary election
dqlite — Raft Embedded in SQLite
dqlite, developed by Canonical, embeds the Raft consensus layer directly into SQLite’s internal architecture. Rather than treating SQLite as a black box (like rqlite), dqlite hooks into SQLite’s VFS (Virtual File System) layer. This allows it to replicate at a lower level — Raft log entries contain individual page writes rather than full SQL commands.
Strengths:
- Very low latency — page-level replication is faster than SQL-level
- Near drop-in for applications — dqlite drivers expose a standard SQL interface, so code changes are minimal
- Used in production by Canonical (MicroK8s, LXD, Juju)
- Strong consistency with automatic failover
Limitations:
- C codebase — harder to modify and contribute to compared to Go alternatives
- AGPL-3.0 license may not suit all use cases
- Requires a C compiler and libuv for building from source
- Less community adoption than rqlite
Installation and Deployment
Installing rqlite
rqlite ships as a single binary. Download a release directly or use the Docker image (replace vX.Y.Z with the latest version from the releases page):
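```bash
# Download and extract a release binary; substitute vX.Y.Z with the
# current version from https://github.com/rqlite/rqlite/releases
curl -L "https://github.com/rqlite/rqlite/releases/download/vX.Y.Z/rqlite-vX.Y.Z-linux-amd64.tar.gz" | tar xz

# Or pull the official Docker image
docker pull rqlite/rqlite
```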
Start a 3-node cluster (node IDs, ports, and data directories below are examples; the -join address format differs slightly between rqlite versions):
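```bash
# Node 1 bootstraps the cluster
rqlited -node-id 1 -http-addr localhost:4001 -raft-addr localhost:4002 ~/rqlite/node1

# Nodes 2 and 3 join via node 1 (recent rqlite versions join on the
# Raft address; older releases used the HTTP address)
rqlited -node-id 2 -http-addr localhost:4003 -raft-addr localhost:4004 \
  -join localhost:4002 ~/rqlite/node2
rqlited -node-id 3 -http-addr localhost:4005 -raft-addr localhost:4006 \
  -join localhost:4002 ~/rqlite/node3
```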
Query the cluster over the built-in HTTP API:
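```bash
# Writes go to /db/execute; followers transparently forward to the leader
curl -XPOST 'localhost:4001/db/execute' -H 'Content-Type: application/json' -d '[
  "CREATE TABLE foo (id INTEGER NOT NULL PRIMARY KEY, name TEXT)",
  "INSERT INTO foo(name) VALUES(\"fiona\")"
]'

# Reads go to /db/query; add level=strong to force a linearizable
# read through the leader
curl -G 'localhost:4001/db/query' --data-urlencode 'q=SELECT * FROM foo'
```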
Docker Compose for rqlite
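A minimal sketch of a three-node cluster on a Compose network. Service names and advertised addresses are examples; add named volumes if you want data to survive container recreation.

```yaml
services:
  rqlite-1:
    image: rqlite/rqlite
    # Arguments are passed through to rqlited
    command: ["-node-id", "1",
              "-http-addr", "0.0.0.0:4001", "-http-adv-addr", "rqlite-1:4001",
              "-raft-addr", "0.0.0.0:4002", "-raft-adv-addr", "rqlite-1:4002"]
    ports:
      - "4001:4001"
  rqlite-2:
    image: rqlite/rqlite
    command: ["-node-id", "2",
              "-http-addr", "0.0.0.0:4001", "-http-adv-addr", "rqlite-2:4001",
              "-raft-addr", "0.0.0.0:4002", "-raft-adv-addr", "rqlite-2:4002",
              "-join", "rqlite-1:4002"]
  rqlite-3:
    image: rqlite/rqlite
    command: ["-node-id", "3",
              "-http-addr", "0.0.0.0:4001", "-http-adv-addr", "rqlite-3:4001",
              "-raft-addr", "0.0.0.0:4002", "-raft-adv-addr", "rqlite-3:4002",
              "-join", "rqlite-1:4002"]
```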
Installing LiteFS
LiteFS requires FUSE support on the host. On Debian/Ubuntu, install FUSE and grab a release binary (replace vX.Y.Z with the latest version from the releases page):
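```bash
# Install FUSE (fuse3 on current Debian/Ubuntu releases)
sudo apt-get update && sudo apt-get install -y fuse3

# Download a release binary; substitute vX.Y.Z with the current
# version from https://github.com/superfly/litefs/releases
curl -L "https://github.com/superfly/litefs/releases/download/vX.Y.Z/litefs-vX.Y.Z-linux-amd64.tar.gz" | tar xz
sudo mv litefs /usr/local/bin/
```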
LiteFS is driven by a configuration file (/etc/litefs.yml). A minimal sketch using a self-hosted Consul lease follows; the mount path, data directory, and Consul details are examples:
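```yaml
# Directory where applications see the replicated database (FUSE mount)
fuse:
  dir: "/litefs"

# Directory where LiteFS stores its internal copy of the data
data:
  dir: "/var/lib/litefs"

# Primary election via a Consul lease; use type "static" to designate
# the primary by hand instead
lease:
  type: "consul"
  advertise-url: "http://this-node.example.com:20202"
  candidate: true
  consul:
    url: "http://localhost:8500"
    key: "litefs/primary"
```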
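With the configuration in place, mount the filesystem. litefs mount runs in the foreground, acquires (or waits for) the lease, and serves the database under the fuse.dir path:

```bash
# Requires FUSE privileges; point applications at /litefs/<dbname>
sudo litefs mount -config /etc/litefs.yml
```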
Installing dqlite
dqlite is a C library that you embed in your application, but you can also experiment with a standalone cluster using the dqlite-demo tool that ships with go-dqlite (package source and addresses below are examples):
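```bash
# Pre-built packages from Canonical's PPA (Ubuntu)
sudo add-apt-repository ppa:dqlite/dev
sudo apt-get install libdqlite-dev

# go-dqlite ships a dqlite-demo binary for experimentation;
# start a 3-node cluster (addresses are examples)
dqlite-demo --api 127.0.0.1:8001 --db 127.0.0.1:9001 &
dqlite-demo --api 127.0.0.1:8002 --db 127.0.0.1:9002 --join 127.0.0.1:9001 &
dqlite-demo --api 127.0.0.1:8003 --db 127.0.0.1:9003 --join 127.0.0.1:9001 &
```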
Example usage from a Go application, sketched with the go-dqlite app package (the data directory, bind address, and database name are placeholders; check go-dqlite's README for the current import path):
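```go
// A minimal sketch using the go-dqlite "app" package; the data
// directory, bind address, and database name are placeholders.
package main

import (
	"context"
	"log"

	"github.com/canonical/go-dqlite/app"
)

func main() {
	// Each node gets its own data directory and network address; later
	// nodes would add app.WithCluster([]string{"127.0.0.1:9001"}) to join.
	node, err := app.New("/var/lib/dqlite", app.WithAddress("127.0.0.1:9001"))
	if err != nil {
		log.Fatal(err)
	}
	defer node.Close()

	ctx := context.Background()
	if err := node.Ready(ctx); err != nil {
		log.Fatal(err)
	}

	// Open returns a standard *sql.DB backed by the dqlite cluster.
	db, err := node.Open(ctx, "app.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)`); err != nil {
		log.Fatal(err)
	}
	log.Println("connected to dqlite cluster")
}
```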
Performance Characteristics
Understanding performance trade-offs is critical for choosing the right tool:
| Metric | rqlite | LiteFS | dqlite |
|---|---|---|---|
| Write Latency | ~5-20ms (Raft roundtrip) | ~1-5ms (local write) | ~1-10ms (Raft, page-level) |
| Read Latency (local) | ~0.1ms | ~0.1ms | ~0.1ms |
| Read Latency (remote) | ~0.5ms (weak consistency) | ~0.1ms (local file) | ~0.5ms (strong) |
| Throughput | ~1,000 writes/sec (3-node) | ~10,000 writes/sec (primary) | ~5,000 writes/sec (3-node) |
| Failover Time | < 1 second (automatic) | Manual or via Consul lease handoff | < 1 second (automatic) |
| Storage Overhead | ~2x (SQLite + Raft log) | ~1x per replica | ~2x (SQLite + Raft log) |
When to choose each:
rqlite: Best for general-purpose distributed SQLite with strong consistency. Ideal when you want a simple, self-contained solution with automatic leader election. The HTTP API makes it easy to integrate with any programming language.
LiteFS: Best when you want zero application changes. Since LiteFS sits at the filesystem level, any existing SQLite application works without modification. However, the FUSE dependency and eventual consistency model may not suit all use cases.
dqlite: Best for performance-critical applications that need low-latency replication. The page-level Raft replication is faster than SQL-level approaches. Canonical’s production use in MicroK8s and LXD proves its maturity.
Use Case Recommendations
Microservices with Shared State
For microservices that need a shared, lightweight database, rqlite is the strongest choice. Its HTTP REST API makes it easy for services written in different languages to query the same data without custom drivers.
Edge Deployments
For edge computing scenarios where nodes may have intermittent connectivity, LiteFS shines. Its primary-replica model allows edge nodes to serve reads locally from cached SQLite files, even when disconnected from the primary.
Kubernetes Operators
For Kubernetes-native deployments, dqlite integrates seamlessly. Canonical uses it as the backing store for MicroK8s, and its Go bindings make it straightforward to embed in custom operators.
High-Availability Web Applications
For web applications requiring HA, rqlite provides the simplest deployment model. Drop it in behind a load balancer, and the built-in leader election handles failover automatically. For more complete database high availability patterns, see our Patroni vs Galera Cluster vs repmgr guide.
Migration from Single-Node SQLite
Migrating an existing single-node SQLite database to a distributed setup varies by tool. The commands below sketch each project's documented path; file names and addresses are examples:
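```bash
# rqlite: upload the existing SQLite file through the HTTP API;
# the /db/load endpoint accepts a raw SQLite database
curl -XPOST 'localhost:4001/db/load' \
  -H 'Content-Type: application/octet-stream' \
  --data-binary @/path/to/existing.db

# LiteFS: import the file into a running cluster, after which it
# appears as /litefs/app.db on every node
litefs import -name app.db /path/to/existing.db

# dqlite: there is no file-level import; dump to SQL and replay the
# statements through your application's dqlite connection
sqlite3 /path/to/existing.db .dump > dump.sql
```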
For any of these approaches, always back up your original SQLite file before migration. If you’re also running a self-hosted backup strategy, see our Restic vs Borg vs Kopia comparison for database backup tools.
FAQ
Which distributed SQLite solution is the easiest to set up?
rqlite is the easiest to set up. It ships as a single binary, requires no external dependencies, and starts a cluster with just a few command-line flags. You can have a 3-node cluster running in under a minute. LiteFS requires FUSE support plus a Consul instance (or a static lease) for primary election, while dqlite means installing C libraries or building from source.
Can I use LiteFS without Fly.io?
Yes, but with limitations. LiteFS supports two lease types: consul and static. The consul lease type works with any self-hosted Consul instance, making it usable outside of Fly.io’s infrastructure. The static lease type allows you to manually designate a primary node. However, the most polished experience (automatic primary election backed by Fly.io’s built-in Consul service) is Fly.io-specific.
Is dqlite a drop-in replacement for SQLite?
Close, but not exactly. dqlite exposes its own network protocol, and applications connect through a dqlite driver rather than a stock SQLite library; the Go driver plugs into database/sql, so code written against SQLite typically needs only minimal changes. The other practical difference is that a dqlite node is network-addressable and participates in a Raft cluster, whereas standard SQLite is a purely in-process, embedded library. rqlite, by contrast, requires using its HTTP API or a client library.
How many nodes can each solution support?
rqlite supports clusters of 3 to 7 nodes (odd numbers for Raft quorum). LiteFS supports any number of replicas, limited only by network bandwidth for WAL streaming. dqlite also supports 3 to 7 node clusters. For large-scale deployments with more nodes, consider traditional distributed databases like CockroachDB or TiDB.
What happens when the leader/primary node fails?
In rqlite, the Raft consensus automatically elects a new leader within ~1 second. In LiteFS, failover is not automatic — you need external orchestration (Consul, Kubernetes, or manual intervention) to promote a replica to primary. In dqlite, Raft consensus handles automatic leader election similarly to rqlite, with failover completing in under a second.
Which solution has the best write performance?
LiteFS has the best raw write performance because it avoids consensus overhead — writes go directly to the local SQLite file on the primary node, with WAL streaming happening asynchronously. However, this comes at the cost of eventual consistency. dqlite offers the best write performance among consensus-based solutions due to its page-level Raft replication, which is faster than rqlite’s SQL-level replication.
Are these solutions production-ready?
Yes. rqlite has been in production since 2015 with 17,000+ GitHub stars and active community development. LiteFS is used in production by Fly.io to power their edge database infrastructure. dqlite is the database layer for Canonical’s MicroK8s, LXD, and Juju, serving millions of deployments worldwide.