TL;DR — Quick Summary

Deploy Redis Sentinel for automatic failover and high availability. Configure quorum, master-replica topology, Sentinel-aware clients, and production security.

Redis Sentinel is the built-in high availability solution for Redis, providing automatic failover, topology monitoring, and service discovery without the complexity of Redis Cluster. This guide walks through the complete architecture, a production-ready three-node deployment, Sentinel-aware client configuration, quorum concepts, and the security controls you need before going live.

Prerequisites

  • Three Linux servers (physical or virtual) reachable by each other on ports 6379 (Redis) and 26379 (Sentinel).
  • Redis 7.0 or later installed on all nodes.
  • Root or sudo access on each server.
  • Basic familiarity with Redis configuration files and redis-cli.

Sentinel Architecture

Redis Sentinel runs as a separate process alongside your Redis instances. Each Sentinel performs four roles:

  • Monitoring: Checks master and replica health every second using PING/INFO.
  • Notification: Publishes alerts on +switch-master, +slave, and other channels via pub/sub.
  • Automatic failover: When quorum is reached, Sentinel elects a leader, promotes the best replica, and reconfigures remaining replicas.
  • Configuration provider: Clients query Sentinel with SENTINEL get-master-addr-by-name to discover the current master address.
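The configuration-provider role can be exercised directly from code. A minimal redis-py sketch, assuming the three-server layout used later in this guide (the `to_addr` and `current_master` helpers, like the IPs, are illustrative, not part of any Redis API):

```python
# Example Sentinel addresses from this guide's deployment; adjust to yours.
SENTINELS = [
    ("192.168.1.101", 26379),
    ("192.168.1.102", 26379),
    ("192.168.1.103", 26379),
]

def to_addr(reply):
    """Normalize a raw SENTINEL get-master-addr-by-name reply, e.g.
    [b"192.168.1.101", b"6379"], into a ("host", port) tuple."""
    host, port = reply
    if isinstance(host, bytes):
        host = host.decode()
    return host, int(port)

def current_master(service="mymaster"):
    """Ask the Sentinels for the current master (needs a live deployment)."""
    from redis.sentinel import Sentinel  # pip install redis
    return Sentinel(SENTINELS, socket_timeout=0.5).discover_master(service)
```

Health checks and deploy scripts can call `current_master()` instead of carrying a hardcoded master address.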

Why Three Sentinels?

A single Sentinel is a single point of failure — if it crashes, failover stops. Two Sentinels create a tie-vote problem: neither can reach the majority needed to elect a leader. Three Sentinels with quorum 2 means:

  • One Sentinel can fail and failover still works.
  • Any two Sentinels can agree to promote a new master.
  • The minority cannot trigger a split-brain promotion.

Always deploy an odd number of Sentinels (3, 5, or 7). More than 7 rarely adds value and increases coordination overhead.
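The arithmetic behind the odd-number rule can be sketched in a few lines of Python (a throwaway helper, not part of any Redis API):

```python
def majority(n: int) -> int:
    """Smallest number of Sentinels that constitutes a majority of n."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """Sentinels that can fail while a majority can still elect a leader."""
    return n - majority(n)

# 1 and 2 Sentinels tolerate 0 failures; 3 tolerates 1;
# 4 still tolerates only 1; 5 tolerates 2. Even sizes buy nothing.
for n in (1, 2, 3, 4, 5):
    print(n, majority(n), tolerated_failures(n))
```

An even-sized group raises the majority threshold without raising fault tolerance, which is exactly why 3, 5, or 7 are the sensible sizes.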


Step 1 — Configure the Redis Master

On Server 1 (192.168.1.101), edit /etc/redis/redis.conf:

bind 0.0.0.0
port 6379
protected-mode no

# Same credentials on every node: after a failover this master may rejoin as a replica
requirepass "StrongRedisPassword123"
masterauth "StrongRedisPassword123"

# Persistence (AOF recommended for Sentinel deployments)
appendonly yes
appendfsync everysec

# Split-brain protection: reject writes if no replica has synced within 10s
min-replicas-to-write 1
min-replicas-max-lag 10

Start and enable Redis:

sudo systemctl start redis-server
sudo systemctl enable redis-server

Step 2 — Configure Redis Replicas

On Server 2 (192.168.1.102) and Server 3 (192.168.1.103), edit /etc/redis/redis.conf:

bind 0.0.0.0
port 6379
protected-mode no

requirepass "StrongRedisPassword123"
masterauth "StrongRedisPassword123"

# Point to the initial master
replicaof 192.168.1.101 6379

# Replicas refuse write commands from clients (safe default)
replica-read-only yes

# Persistence
appendonly yes
appendfsync everysec

# Replica promotion priority (lower = preferred; 0 = never promote)
replica-priority 100

Start Redis on both replicas:

sudo systemctl start redis-server
sudo systemctl enable redis-server

Verify replication from the master:

redis-cli -a "StrongRedisPassword123" INFO replication

You should see role:master, connected_slaves:2, and two slaveN entries with state=online.
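If you script this verification, the INFO output is line-oriented key:value text. A sketch of a checker (the function name and field handling are this guide's own, but the slaveN/state=online format follows INFO replication output):

```python
def replication_ok(info_text: str, expected_replicas: int = 2) -> bool:
    """Return True if INFO replication output shows a master with at
    least the expected number of online replicas."""
    fields = {}
    slaves_online = 0
    for line in info_text.splitlines():
        if line.startswith("#") or ":" not in line:
            continue  # skip section headers and blank lines
        key, value = line.split(":", 1)
        fields[key] = value
        # Replica lines look like: slave0:ip=...,port=6379,state=online,...
        if key.startswith("slave") and "state=online" in value:
            slaves_online += 1
    return fields.get("role") == "master" and slaves_online >= expected_replicas
```

Feed it the raw output of `redis-cli -a ... INFO replication` from a cron job or readiness probe.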


Step 3 — Configure Redis Sentinel

Create /etc/redis/sentinel.conf on all three servers:

port 26379
daemonize yes
logfile "/var/log/redis/sentinel.log"
dir /var/lib/redis

# Monitor the master named "mymaster" with quorum 2
sentinel monitor mymaster 192.168.1.101 6379 2

# Password to authenticate with the master (and replicas after failover)
sentinel auth-pass mymaster StrongRedisPassword123

# Mark master as SDOWN after 5 seconds of no response
sentinel down-after-milliseconds mymaster 5000

# Failover attempt timeout (ms)
sentinel failover-timeout mymaster 60000

# Only one replica syncs from the new master at a time
sentinel parallel-syncs mymaster 1

Key configuration options explained:

  • sentinel monitor ... 2: sets the quorum; 2 of 3 Sentinels must agree before failover.
  • down-after-milliseconds: how long (ms) before a non-responding instance is marked SDOWN.
  • failover-timeout: maximum time allowed for the entire failover process.
  • parallel-syncs: number of replicas that re-sync simultaneously after failover (keep at 1).
  • sentinel auth-pass: password Sentinel uses to authenticate with Redis nodes.

Start Sentinel on all three servers:

sudo redis-sentinel /etc/redis/sentinel.conf

# Or as a systemd service
sudo systemctl start redis-sentinel
sudo systemctl enable redis-sentinel

Verify topology discovery:

redis-cli -p 26379 SENTINEL masters
redis-cli -p 26379 SENTINEL replicas mymaster
redis-cli -p 26379 SENTINEL sentinels mymaster
redis-cli -p 26379 SENTINEL ckquorum mymaster

The Failover Process Step by Step

Understanding the failover sequence helps you tune timeouts and diagnose problems:

  1. SDOWN (Subjectively Down): One Sentinel stops receiving PONG from the master within down-after-milliseconds. That Sentinel alone marks it as SDOWN.
  2. ODOWN (Objectively Down): The Sentinel broadcasts SENTINEL is-master-down-by-addr to its peers. When the quorum count replies that they also cannot reach the master, the state becomes ODOWN.
  3. Leader election: Sentinels that agree on ODOWN vote for a leader using the Raft-like algorithm built into Sentinel. A Sentinel must collect votes from a majority of all Sentinels (not merely the quorum) to become the failover coordinator.
  4. Replica selection: The leader ranks replicas by: disconnection time from master → replica-priority (lower wins) → replication offset (most data wins) → lexicographic run ID.
  5. Promotion: The leader sends REPLICAOF NO ONE to the chosen replica, making it the new master.
  6. Reconfiguration: All other replicas receive REPLICAOF <new-master-ip> 6379. Sentinels update their sentinel.conf files with the new master address. Clients that query Sentinel now receive the new master address.

The entire process typically completes in 10–30 seconds with default settings.
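Step 6 is announced on the +switch-master pub/sub channel, whose payload is a single space-separated line: master name, old address, new address. A sketch that parses it, with a subscriber helper for a live deployment (function names and the Sentinel IP are illustrative):

```python
def parse_switch_master(payload):
    """Parse a +switch-master message body:
    '<master-name> <old-ip> <old-port> <new-ip> <new-port>'."""
    name, old_ip, old_port, new_ip, new_port = payload.split()
    return {"name": name,
            "old": (old_ip, int(old_port)),
            "new": (new_ip, int(new_port))}

def watch_failovers(sentinel_host="192.168.1.101"):
    """Block forever, printing each promotion (needs a live Sentinel)."""
    import redis  # pip install redis
    pubsub = redis.Redis(host=sentinel_host, port=26379).pubsub()
    pubsub.subscribe("+switch-master")
    for msg in pubsub.listen():
        if msg["type"] == "message":
            print(parse_switch_master(msg["data"].decode()))
```

Wiring `watch_failovers` into an alerting hook gives you a timestamped record of every promotion.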


Client Connection Through Sentinel

Never hardcode the Redis master IP in your application. Use a Sentinel-aware client that queries Sentinel for the current master before connecting.

Python (redis-py)

from redis.sentinel import Sentinel

sentinel = Sentinel(
    [
        ("192.168.1.101", 26379),
        ("192.168.1.102", 26379),
        ("192.168.1.103", 26379),
    ],
    socket_timeout=0.5,
    password="StrongRedisPassword123",
)

master = sentinel.master_for("mymaster", socket_timeout=0.5)
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)

master.set("session:abc", "user-data")
value = replica.get("session:abc")

Node.js (ioredis)

const Redis = require("ioredis");

const redis = new Redis({
  sentinels: [
    { host: "192.168.1.101", port: 26379 },
    { host: "192.168.1.102", port: 26379 },
    { host: "192.168.1.103", port: 26379 },
  ],
  name: "mymaster",
  password: "StrongRedisPassword123",
  sentinelPassword: "StrongRedisPassword123",
});

// Top-level await needs an async context in CommonJS
(async () => {
  await redis.set("key", "value");
})();

Java (Jedis)

Set<String> sentinels = new HashSet<>();
sentinels.add("192.168.1.101:26379");
sentinels.add("192.168.1.102:26379");
sentinels.add("192.168.1.103:26379");

JedisSentinelPool pool = new JedisSentinelPool(
    "mymaster", sentinels,
    new JedisPoolConfig(),
    "StrongRedisPassword123"
);

try (Jedis jedis = pool.getResource()) {
    jedis.set("key", "value");
}

Security Hardening

Protect Sentinel Itself

Add requirepass to Sentinel — otherwise any client can issue Sentinel commands:

# In sentinel.conf
requirepass "SentinelAdminPassword"

# Sentinel also needs to authenticate with other Sentinels
sentinel sentinel-pass SentinelAdminPassword

Enable TLS

Redis 6.0+ supports TLS for both Redis and Sentinel connections:

# In sentinel.conf
tls-port 26380
port 0
tls-cert-file /etc/redis/tls/sentinel.crt
tls-key-file /etc/redis/tls/sentinel.key
tls-ca-cert-file /etc/redis/tls/ca.crt
tls-replication yes

Use ACLs (Redis 6.0+)

Define granular permissions instead of relying on requirepass alone:

# In redis.conf
aclfile /etc/redis/users.acl

# In /etc/redis/users.acl
user default off
# Broad for brevity; narrow to the commands Sentinel actually needs in production
user sentinel on >SentinelPass ~* &* +@all
user app on >AppPass ~session:* ~cache:* +GET +SET +DEL +EXPIRE

Monitoring Sentinel

# Check all masters monitored by this Sentinel
redis-cli -p 26379 SENTINEL masters

# List known replicas for a specific master
redis-cli -p 26379 SENTINEL replicas mymaster

# List other Sentinels for this master
redis-cli -p 26379 SENTINEL sentinels mymaster

# Verify quorum is reachable
redis-cli -p 26379 SENTINEL ckquorum mymaster

# Get current master address (use this in health checks)
redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster

# Live Sentinel INFO
redis-cli -p 26379 INFO sentinel

Integrate SENTINEL ckquorum into your monitoring system (Prometheus, Nagios, Zabbix) as a health check. Alert if it returns NOQUORUM.
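A health-check sketch suitable for wrapping in a Prometheus textfile collector or Nagios plugin (the function names and host default are this guide's assumptions; CKQUORUM replies and the NOQUORUM error form follow Sentinel's documented behavior):

```python
def quorum_ok(reply):
    """CKQUORUM replies start with OK when quorum and failover
    authorization are reachable, NOQUORUM when they are not."""
    return str(reply).strip().upper().startswith("OK")

def check_sentinel(host="192.168.1.101", master="mymaster"):
    """Return True if this Sentinel reports a usable quorum
    (needs a live Sentinel; NOQUORUM may arrive as an error reply)."""
    import redis  # pip install redis
    r = redis.Redis(host=host, port=26379, decode_responses=True)
    try:
        return quorum_ok(r.execute_command("SENTINEL", "CKQUORUM", master))
    except redis.exceptions.ResponseError:
        return False
```

Run `check_sentinel` against each of the three Sentinels; a single False is your early warning, all three False means failover is impossible.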


Redis Sentinel vs Alternatives Comparison

| Feature | Sentinel | Redis Cluster | KeyDB | Dragonfly | Managed Redis |
| --- | --- | --- | --- | --- | --- |
| Automatic failover | Yes | Yes | Yes | Yes | Yes |
| Data sharding | No | Yes | Yes | No | Varies |
| Setup complexity | Medium | High | Low | Low | None |
| Client requirement | Sentinel-aware | Cluster-aware | Standard | Standard | Standard |
| Max dataset size | RAM of one node | RAM × shards | RAM of one node | RAM of one node | Plan-limited |
| Cost | Free (self-hosted) | Free | Free | Free | $$/month |
| Best for | HA without sharding | Large datasets | Redis drop-in HA | High throughput | No ops burden |

Use Sentinel when your entire dataset fits on one node and you want automatic failover without the operational complexity of Cluster. Choose Redis Cluster when you need horizontal scaling across multiple nodes. Use a managed Redis (ElastiCache, Upstash, Redis Cloud) when you want HA without managing infrastructure.


Common Issues and Gotchas

Sentinel rewrites its config file. Never edit sentinel.conf manually while Sentinel is running. Sentinel continuously writes discovered topology back to the file. Your edits will be overwritten or corrupt the file.

DNS vs IP addresses. Sentinel stores master and replica addresses by IP at discovery time. If you change a server’s IP, Sentinel won’t follow DNS — you must update sentinel.conf manually and restart Sentinel.

Stale reads during failover. In the 10–30 seconds between master failure and replica promotion, reads from replicas may return stale data. Design your application to tolerate brief stale reads or retry with a timeout.
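One way to ride out that window is a bounded retry around writes. This sketch is generic: the timings and the builtin exception tuple are illustrative, and in practice you would add redis-py's own ConnectionError to the tuple:

```python
import time

def with_retry(fn, retries=5, delay=0.5, exceptions=(ConnectionError, TimeoutError)):
    """Call fn(); on a transient error, wait and retry a bounded number
    of times. With growing delays the total wait can span a typical
    10-30 s failover. Re-raises the last error when retries run out."""
    for attempt in range(retries):
        try:
            return fn()
        except exceptions:
            if attempt == retries - 1:
                raise
            time.sleep(delay * (attempt + 1))  # linear backoff

# Example use with a Sentinel-managed connection:
#   with_retry(lambda: master.set("session:abc", "user-data"))
```

Pair this with a Sentinel-aware client so the retried call lands on the newly promoted master rather than the dead one.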

requirepass without masterauth. If you set requirepass on the master but forget masterauth on replicas, replication authentication fails and replicas disconnect silently. Always set both.

Sentinel failover-timeout is not a detection timeout. failover-timeout controls how long Sentinel retries a failed failover attempt — not how quickly it detects failure. Detection speed is controlled by down-after-milliseconds.


Production Checklist

  • Three Sentinel instances on separate physical hosts or availability zones.
  • Quorum set to (N/2)+1 where N is the number of Sentinels.
  • requirepass and masterauth identical on master and all replicas.
  • sentinel auth-pass set in sentinel.conf.
  • min-replicas-to-write 1 and min-replicas-max-lag 10 on master to prevent split-brain.
  • replica-read-only yes on all replicas.
  • appendonly yes for durability.
  • Application uses Sentinel-aware client library pointing to all Sentinel addresses.
  • SENTINEL ckquorum mymaster returns OK from all three Sentinel nodes.
  • Failover test performed in staging before production go-live.
  • Monitoring alert on NOQUORUM and +switch-master Sentinel pub/sub events.

Summary

  • Redis Sentinel provides monitoring, automatic failover, and service discovery for Redis without requiring Cluster.
  • Deploy at least 3 Sentinel instances on separate hosts; set quorum to 2 for a 3-node setup.
  • The failover sequence moves from SDOWN → ODOWN → leader election → replica promotion → reconfiguration.
  • Use min-replicas-to-write and min-replicas-max-lag on the master to prevent split-brain data loss.
  • Always use a Sentinel-aware client — never hardcode the master IP.
  • Secure Sentinel with requirepass, ACLs, and TLS in production environments.