When running Redis in production, a single-node deployment is a single point of failure. If that node goes down, your cache layer vanishes, session stores evaporate, and application performance collapses. High availability (HA) for Redis is not optional — it is essential.

This guide compares three approaches to Redis high availability: Redis Sentinel (the official automated failover system), Redis Cluster (native sharding with built-in HA), and KeyDB Multi-Active (a Redis-compatible fork with active-active replication). We will cover architecture, deployment with Docker Compose, failover behavior, and trade-offs to help you choose the right HA strategy for your infrastructure.

Redis Sentinel: Automated Failover with Master-Replica

Redis Sentinel is the official high availability solution for Redis. It does not handle data sharding — instead, it monitors your master and replica instances and automatically promotes a replica to master if the master fails.

Architecture

Sentinel runs as a separate process (or set of processes) alongside your Redis instances. A typical deployment uses three or five Sentinel nodes for quorum-based decision making. The Sentinel nodes monitor the master, detect failures through periodic pings, and coordinate a failover election when the master is unreachable.

┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│  Sentinel 1  │────▶│  Sentinel 2  │────▶│  Sentinel 3  │
└──────┬───────┘     └──────┬───────┘     └──────┬───────┘
       │                    │                    │
       ▼                    ▼                    ▼
  ┌───────────┐        ┌───────────┐        ┌───────────┐
  │ Master    │        │ Replica 1 │        │ Replica 2 │
  │ :6379     │        │ :6380     │        │ :6381     │
  └───────────┘        └───────────┘        └───────────┘

Docker Compose Deployment

version: "3.8"
services:
  redis-master:
    image: redis:7-alpine
    command: ["redis-server", "--appendonly", "yes", "--requirepass", "YourStrongPassword123"]
    ports:
      - "6379:6379"
    volumes:
      - redis-master-data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "YourStrongPassword123", "ping"]
      interval: 10s
      timeout: 5s
      retries: 3

  redis-replica-1:
    image: redis:7-alpine
    command:
      - "redis-server"
      - "--replicaof"
      - "redis-master"
      - "6379"
      - "--masterauth"
      - "YourStrongPassword123"
      - "--requirepass"
      - "YourStrongPassword123"
      - "--appendonly"
      - "yes"
    depends_on:
      - redis-master

  redis-replica-2:
    image: redis:7-alpine
    command:
      - "redis-server"
      - "--replicaof"
      - "redis-master"
      - "6379"
      - "--masterauth"
      - "YourStrongPassword123"
      - "--requirepass"
      - "YourStrongPassword123"
      - "--appendonly"
      - "yes"
    depends_on:
      - redis-master

  sentinel-1:
    image: redis:7-alpine
    command: >
      redis-sentinel /usr/local/etc/redis/sentinel.conf
    volumes:
      - ./sentinel-1.conf:/usr/local/etc/redis/sentinel.conf
    depends_on:
      - redis-master
      - redis-replica-1
      - redis-replica-2

  sentinel-2:
    image: redis:7-alpine
    command: >
      redis-sentinel /usr/local/etc/redis/sentinel.conf
    volumes:
      - ./sentinel-2.conf:/usr/local/etc/redis/sentinel.conf
    depends_on:
      - redis-master
      - redis-replica-1
      - redis-replica-2

  sentinel-3:
    image: redis:7-alpine
    command: >
      redis-sentinel /usr/local/etc/redis/sentinel.conf
    volumes:
      - ./sentinel-3.conf:/usr/local/etc/redis/sentinel.conf
    depends_on:
      - redis-master
      - redis-replica-1
      - redis-replica-2

volumes:
  redis-master-data:

Sentinel configuration (the same content is used for sentinel-1.conf, sentinel-2.conf, and sentinel-3.conf):

port 26379
sentinel resolve-hostnames yes
sentinel announce-hostnames yes
sentinel monitor mymaster redis-master 6379 2
sentinel auth-pass mymaster YourStrongPassword123
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1

The sentinel monitor line defines the master name, address, port, and quorum (2 of the 3 Sentinels must agree that the master is down before a failover triggers). The resolve-hostnames and announce-hostnames directives are needed because we reference the master by its Compose service name; since Redis 6.2, Sentinel supports hostnames but rejects them unless resolution is explicitly enabled.
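
You can ask any Sentinel directly which node it currently considers the master. A minimal sketch with redis-py, assuming one Sentinel's port 26379 is reachable from where this runs (the Compose file above does not publish the Sentinel ports, so add a ports mapping or run this inside the network):

import redis

# Connect to one Sentinel process (port 26379, as configured above).
sentinel = redis.Redis(host="localhost", port=26379, decode_responses=True)

# SENTINEL GET-MASTER-ADDR-BY-NAME returns [host, port] for the named master.
host, port = sentinel.execute_command(
    "SENTINEL", "GET-MASTER-ADDR-BY-NAME", "mymaster"
)
print(f"Current master: {host}:{port}")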

Failover Behavior

When the master becomes unreachable, Sentinels detect the failure after the configured down-after-milliseconds period (the default is 30 seconds; set to 5 seconds above). A leader Sentinel is elected to coordinate the failover. It selects the best candidate replica (the highest replication offset, i.e., the most up-to-date) and promotes it. When the old master comes back online, it is reconfigured as a replica of the new master.

Failover typically completes in 10-30 seconds depending on configuration. Clients must support Sentinel-aware connection pooling (e.g., Jedis, Lettuce, redis-py with Sentinel support) to automatically redirect to the new master.
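
With redis-py, for example, a Sentinel-aware connection looks roughly like this. It is a sketch that assumes your application runs on the same Compose network, so the sentinel-1/2/3 hostnames resolve:

from redis.sentinel import Sentinel

# Every Sentinel is listed; the client asks them for the current master.
sentinel = Sentinel(
    [("sentinel-1", 26379), ("sentinel-2", 26379), ("sentinel-3", 26379)],
    socket_timeout=0.5,
)

# master_for() returns a client that re-resolves the master after a failover.
master = sentinel.master_for("mymaster", password="YourStrongPassword123")
replica = sentinel.slave_for("mymaster", password="YourStrongPassword123")

master.set("session:42", "alice")  # writes always target the current master
print(replica.get("session:42"))   # reads can be offloaded to a replica

Because replication is asynchronous, a read from a replica immediately after a write may briefly return stale data.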

Redis Cluster: Native Sharding with Built-In HA

Redis Cluster distributes data across multiple nodes using hash slots (16,384 slots total). Each node handles a subset of slots, and replicas provide HA for each shard. Cluster combines horizontal scaling with automatic failover in a single system.

Architecture

A minimum Redis Cluster requires six nodes: three masters (each handling ~5,461 hash slots) and three replicas (one per master). Data is partitioned using CRC16 hashing of keys modulo 16,384.

┌──────────────┐  ┌──────────────┐  ┌──────────────┐
│ Master A     │  │ Master B     │  │ Master C     │
│ Slots 0-5460 │  │ Slots 5461-  │  │ Slots 10923- │
│              │  │ 10922        │  │ 16383        │
└──────┬───────┘  └──────┬───────┘  └──────┬───────┘
       │                 │                 │
┌──────┴───────┐  ┌──────┴───────┐  ┌──────┴───────┐
│ Replica A    │  │ Replica B    │  │ Replica C    │
│ (backs up A) │  │ (backs up B) │  │ (backs up C) │
└──────────────┘  └──────────────┘  └──────────────┘
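
You can compute the slot for any key client-side. A minimal sketch in Python: binascii.crc_hqx implements the same CRC16-CCITT (XModem) polynomial Redis uses, and the hash-tag handling mirrors the Cluster specification:

import binascii

def key_slot(key: bytes) -> int:
    # If the key contains a hash tag like {user:1000}, only the part
    # between the first '{' and the next '}' is hashed, provided it
    # is non-empty. This lets related keys share a slot.
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:
            key = key[start + 1 : end]
    # CRC16-CCITT (XModem) of the key, modulo the 16,384 slots.
    return binascii.crc_hqx(key, 0) % 16384

print(key_slot(b"user:1000"))         # some slot in 0-16383
print(key_slot(b"{user:1000}:cart"))  # same slot as b"user:1000" via the tag

This is why hash tags matter: wrapping the shared part of related keys in braces forces them onto the same slot, which makes multi-key operations across them possible.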

Docker Compose Deployment

version: "3.8"
services:
  redis-node-0:
    image: redis:7-alpine
    command: >
      redis-server --port 6379 --cluster-enabled yes
      --cluster-config-file nodes.conf --cluster-node-timeout 5000
      --appendonly yes --protected-mode no --requirepass YourStrongPassword123
      --masterauth YourStrongPassword123
    ports:
      - "7000:6379"
      - "17000:16379"
    volumes:
      - redis-node-0-data:/data

  redis-node-1:
    image: redis:7-alpine
    command: >
      redis-server --port 6379 --cluster-enabled yes
      --cluster-config-file nodes.conf --cluster-node-timeout 5000
      --appendonly yes --protected-mode no --requirepass YourStrongPassword123
      --masterauth YourStrongPassword123
    ports:
      - "7001:6379"
      - "17001:16379"
    volumes:
      - redis-node-1-data:/data

  redis-node-2:
    image: redis:7-alpine
    command: >
      redis-server --port 6379 --cluster-enabled yes
      --cluster-config-file nodes.conf --cluster-node-timeout 5000
      --appendonly yes --protected-mode no --requirepass YourStrongPassword123
      --masterauth YourStrongPassword123
    ports:
      - "7002:6379"
      - "17002:16379"
    volumes:
      - redis-node-2-data:/data

  redis-node-3:
    image: redis:7-alpine
    command: >
      redis-server --port 6379 --cluster-enabled yes
      --cluster-config-file nodes.conf --cluster-node-timeout 5000
      --appendonly yes --protected-mode no --requirepass YourStrongPassword123
      --masterauth YourStrongPassword123
    ports:
      - "7003:6379"
      - "17003:16379"
    volumes:
      - redis-node-3-data:/data

  redis-node-4:
    image: redis:7-alpine
    command: >
      redis-server --port 6379 --cluster-enabled yes
      --cluster-config-file nodes.conf --cluster-node-timeout 5000
      --appendonly yes --protected-mode no --requirepass YourStrongPassword123
      --masterauth YourStrongPassword123
    ports:
      - "7004:6379"
      - "17004:16379"
    volumes:
      - redis-node-4-data:/data

  redis-node-5:
    image: redis:7-alpine
    command: >
      redis-server --port 6379 --cluster-enabled yes
      --cluster-config-file nodes.conf --cluster-node-timeout 5000
      --appendonly yes --protected-mode no --requirepass YourStrongPassword123
      --masterauth YourStrongPassword123
    ports:
      - "7005:6379"
      - "17005:16379"
    volumes:
      - redis-node-5-data:/data

  cluster-init:
    image: redis:7-alpine
    depends_on:
      - redis-node-0
      - redis-node-1
      - redis-node-2
      - redis-node-3
      - redis-node-4
      - redis-node-5
    # redis:7-alpine is Alpine-based and ships BusyBox sh, not bash
    entrypoint: >
      sh -c "
      sleep 5 &&
      redis-cli -a YourStrongPassword123 --cluster create
      redis-node-0:6379 redis-node-1:6379 redis-node-2:6379
      redis-node-3:6379 redis-node-4:6379 redis-node-5:6379
      --cluster-replicas 1 --cluster-yes"

volumes:
  redis-node-0-data:
  redis-node-1-data:
  redis-node-2-data:
  redis-node-3-data:
  redis-node-4-data:
  redis-node-5-data:

Cluster Operations

Redis Cluster requires the --cluster-enabled yes flag and uses a bus port (client port + 10000) for node-to-node communication. The cluster-init service runs redis-cli --cluster create to assign hash slots and set up master-replica relationships.

Clients must be Cluster-aware. The Redis Cluster protocol returns MOVED or ASK redirects when a key maps to a different node, and the client library handles the redirect transparently.
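
With redis-py (4.1 or later), for instance, seeding the cluster client with a single node is enough; it discovers the rest of the topology itself. A sketch assuming you connect via the published port 7000 (note that from outside the Compose network, redirects may point at internal container IPs unless cluster-announce-ip is configured on each node):

from redis.cluster import RedisCluster

# Seeding with one node is enough; the client fetches the slot map from it.
rc = RedisCluster(host="localhost", port=7000, password="YourStrongPassword123")

rc.set("user:1000", "alice")  # routed to the node that owns this key's slot
print(rc.get("user:1000"))    # MOVED/ASK redirects are followed transparently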

KeyDB Multi-Active: Active-Active Replication

KeyDB is a high-performance fork of Redis that introduces Multi-Active Replication — a feature that allows multiple instances to accept writes simultaneously and replicate changes bidirectionally. Unlike Sentinel (active-passive) or Cluster (sharded), Multi-Active provides true active-active operation.

Architecture

All KeyDB nodes in a Multi-Active ring accept reads and writes. Changes propagate asynchronously between nodes using a last-writer-wins conflict resolution strategy based on timestamps.

┌──────────┐     ┌──────────┐     ┌──────────┐
│ KeyDB A  │◀───▶│ KeyDB B  │◀───▶│ KeyDB C  │
│ (active) │     │ (active) │     │ (active) │
└──────────┘     └──────────┘     └──────────┘
     ▲                ▲                ▲
     └────────────────┴────────────────┘
         Multi-Active Replication Ring

Docker Compose Deployment

version: "3.8"
services:
  keydb-1:
    image: eqalpha/keydb:latest
    command: >
      keydb-server --active-replica yes
      --replicaof keydb-2 6379
      --requirepass YourStrongPassword123
      --masterauth YourStrongPassword123
      --appendonly yes
    ports:
      - "6380:6379"
    volumes:
      - keydb-1-data:/data

  keydb-2:
    image: eqalpha/keydb:latest
    command: >
      keydb-server --active-replica yes
      --replicaof keydb-3 6379
      --requirepass YourStrongPassword123
      --masterauth YourStrongPassword123
      --appendonly yes
    ports:
      - "6381:6379"
    volumes:
      - keydb-2-data:/data

  keydb-3:
    image: eqalpha/keydb:latest
    command: >
      keydb-server --active-replica yes
      --replicaof keydb-1 6379
      --requirepass YourStrongPassword123
      --masterauth YourStrongPassword123
      --appendonly yes
    ports:
      - "6382:6379"
    volumes:
      - keydb-3-data:/data

volumes:
  keydb-1-data:
  keydb-2-data:
  keydb-3-data:

KeyDB uses the --active-replica yes flag to enable bidirectional replication. Each node replicates to the next node in the ring, creating a circular replication topology. Any node can accept writes, and changes propagate around the ring.
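
Because every node accepts writes, standard Redis clients work unchanged. A quick sketch that verifies propagation around the ring, assuming the published ports from the Compose file above:

import time
import redis

node_a = redis.Redis(port=6380, password="YourStrongPassword123")  # keydb-1
node_c = redis.Redis(port=6382, password="YourStrongPassword123")  # keydb-3

node_a.set("greeting", "hello from A")  # write to one active node
time.sleep(0.5)                         # replication is asynchronous
print(node_c.get("greeting"))           # b'hello from A' read from another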

Comparison: Redis Sentinel vs Cluster vs KeyDB

| Feature | Redis Sentinel | Redis Cluster | KeyDB Multi-Active |
|---------|----------------|---------------|--------------------|
| Topology | Master + replicas (active-passive) | Sharded masters + replicas | Active-active ring |
| Write nodes | 1 (master only) | 3+ (each shard master) | All nodes |
| Read scaling | Yes (replicas serve reads) | Yes (any node serves reads) | Yes (all nodes serve reads) |
| Failover time | 10-30 seconds | 5-15 seconds | Near-instant |
| Data sharding | No | Yes (16,384 hash slots) | No (full dataset on each node) |
| Max dataset size | Limited by single node memory | Scales with number of masters | Limited by single node memory |
| Cross-slot operations | Supported | Limited (hash tags required) | Supported |
| Client support | Sentinel-aware clients required | Cluster-aware clients required | Standard Redis clients work |
| Conflict resolution | N/A (single writer) | N/A (sharded) | Last-writer-wins (timestamps) |
| Memory overhead | Low (replicas mirror master) | Moderate (shard metadata) | High (full dataset on each node) |
| Operational complexity | Low | High (slot management, rebalancing) | Low |
| Docker image | redis:7-alpine (official) | redis:7-alpine (official) | eqalpha/keydb:latest |
| License | BSD 3-Clause | BSD 3-Clause | BSD 3-Clause |
| GitHub stars | N/A (part of redis/redis) | N/A (part of redis/redis) | 14,000+ |
| Last update | Active (redis/redis repo) | Active (redis/redis repo) | Active (2025) |

Choosing the Right Redis HA Strategy

Use Redis Sentinel when:

  • Your dataset fits in a single node’s memory
  • You need simple active-passive failover with minimal complexity
  • Your application already uses Redis and you need HA without architectural changes
  • You want to avoid the operational overhead of managing hash slots

Sentinel is the right choice for session stores, cache layers, and work queues where the total data size stays below the memory capacity of a single server. It adds HA with minimal changes to your application code.

Use Redis Cluster when:

  • Your dataset exceeds the memory capacity of a single server
  • You need horizontal write scaling across multiple nodes
  • You can tolerate the complexity of slot management and rebalancing
  • Your application can handle or avoid cross-slot operations (using HASH tags)

Cluster is the right choice for high-throughput caching, real-time analytics, and large-scale session management where data growth requires horizontal scaling. The complexity trade-off is worth it when a single node cannot hold your dataset.

Use KeyDB Multi-Active when:

  • You need active-active writes for geographic distribution
  • Your application cannot tolerate even brief failover windows
  • You want multi-threaded performance (KeyDB’s other key feature)
  • Your dataset fits in memory and you can accept full replication on each node

KeyDB Multi-Active is ideal for globally distributed applications where write latency matters, or for environments where the failover window of Sentinel/Cluster is unacceptable. The last-writer-wins conflict resolution works well for cache and session data but requires careful consideration for data with concurrent writes.

Why Self-Host Your Redis Infrastructure?

Running your own Redis infrastructure gives you full control over data residency, security policies, and performance tuning. Managed Redis services are convenient, but self-hosting eliminates vendor lock-in, reduces costs at scale, and allows deep customization of replication, persistence, and eviction policies.

For organizations managing sensitive session data or caching layers, self-hosted Redis ensures that no third party has access to your in-memory data. You control TLS configuration, password rotation, network isolation, and backup schedules. Combined with proper monitoring and alerting, self-hosted Redis HA deployments can achieve five-nines availability.

For infrastructure automation and configuration management of your Redis clusters, see our Ansible vs SaltStack vs Puppet comparison. For container orchestration options that can manage Redis deployments at scale, our K3s vs K0s vs Talos Linux guide covers lightweight Kubernetes. If you are managing Redis GUI tools for administration, our Redis Commander vs Redis Insight vs ARDM comparison provides operational insights.

FAQ

What happens to in-flight requests during Redis Sentinel failover?

During the failover window (typically 10-30 seconds), clients connected to the old master receive connection errors. Sentinel-aware clients automatically reconnect to the new master once failover completes. Because replication is asynchronous, writes that the old master acknowledged but had not yet replicated to the promoted replica are lost.
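
A common mitigation is a short retry loop, so writes issued during the failover window are retried against the newly promoted master. A minimal sketch with redis-py, assuming the Sentinel setup from this guide:

import time
import redis
from redis.sentinel import Sentinel

sentinel = Sentinel(
    [("sentinel-1", 26379), ("sentinel-2", 26379), ("sentinel-3", 26379)]
)
master = sentinel.master_for("mymaster", password="YourStrongPassword123")

for attempt in range(10):
    try:
        master.set("counter", 1)
        break  # write succeeded
    except (redis.ConnectionError, redis.TimeoutError):
        time.sleep(2)  # back off while Sentinel completes the failover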

Can Redis Cluster run with fewer than 6 nodes?

Not with high availability. Redis Cluster requires a minimum of 3 master nodes for quorum and at least 1 replica per master for HA, totaling 6 nodes. You can run a 3-node cluster without replicas, but it provides no HA — if any master fails, its hash slots become unavailable until it recovers.

Does KeyDB Multi-Active resolve write conflicts automatically?

Yes, using a last-writer-wins strategy based on logical timestamps. The key with the most recent write wins across all replicas. This works well for cache and session data but can cause data loss if two clients write to the same key simultaneously. For critical data, consider using Redis Cluster or Sentinel instead.

How do I monitor Redis HA health?

Use the INFO replication command on any Redis instance to check replication status, connected replicas, and replication offset. For Sentinel, use SENTINEL master <name> to check the monitored master’s status. Redis Cluster provides CLUSTER INFO and CLUSTER NODES commands for cluster-wide health checks.
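
These commands are easy to wire into a health probe. A sketch with redis-py against the master from the Sentinel deployment above:

import redis

r = redis.Redis(host="localhost", port=6379, password="YourStrongPassword123")

# INFO replication reports the role, attached replicas, and offsets.
info = r.info("replication")
print(info["role"])                    # 'master' or 'slave'
if info["role"] == "master":
    print(info["connected_slaves"])    # number of attached replicas
    print(info["master_repl_offset"])  # bytes written to the replication stream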

Can I mix Redis Sentinel and Cluster?

No. Sentinel and Cluster are mutually exclusive. Sentinel manages master-replica relationships for a single dataset, while Cluster manages sharding across multiple nodes. If you need both sharding and automated failover, use Redis Cluster (which includes built-in failover for each shard).

What is the memory overhead of KeyDB Multi-Active vs Redis Sentinel?

KeyDB Multi-Active stores the full dataset on every node in the ring. With 3 nodes, you need 3x the total memory. Redis Sentinel also stores the full dataset on the master and each replica, so the memory overhead is identical. The difference is that all KeyDB nodes accept writes, while only the Sentinel master does.