# Client Architecture

This guide explains the different client types available in `async-redis` and when to use each one.

## Redis Deployment Patterns

Redis can be deployed in several configurations, each serving different scalability and availability needs:

### Single Instance

A single Redis server handles all operations. Simple to set up and manage, but limited by the capacity of one machine.

**Use when:**
- **Development**: Local development and testing.
- **Small applications**: Low-traffic applications with simple caching needs.
- **Prototyping**: Getting started quickly without infrastructure complexity.

**Limitations:**
- **Single point of failure**: If Redis goes down, your application loses caching.
- **Memory constraints**: Limited by the memory of one machine.
- **CPU bottlenecks**: All operations are processed by one Redis instance.

Use {ruby Async::Redis::Client} to connect to a single Redis instance.

``` ruby
require "async/redis"

endpoint = Async::Redis.local_endpoint
client = Async::Redis::Client.new(endpoint)

Async do
	begin
		client.set("cache:page", "cached content")
		content = client.get("cache:page")
		puts "Retrieved: #{content}"
	ensure
		client.close
	end
end
```
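
A common pattern on a single instance is read-through ("cache-aside") caching: check the cache, and on a miss compute and store the value with an expiry. The sketch below is illustrative: the `fetch_cached` helper is our own, not part of the async-redis API, and works with any client that responds to `get` and `set`. The `seconds:` expiry keyword follows the `set` signature exposed by async-redis (via `protocol-redis`).

``` ruby
# Read-through ("cache-aside") caching: return the cached value if present;
# otherwise compute it, store it with an expiry, and return it.
# Note: `fetch_cached` is our own helper, not part of the async-redis API.
def fetch_cached(client, key, ttl: 60)
	value = client.get(key)
	return value if value
	
	value = yield
	# The `seconds:` keyword sets the expiry (SET key value EX ttl):
	client.set(key, value, seconds: ttl)
	value
end
```

Inside the `Async` block above, `fetch_cached(client, "cache:page", ttl: 300) {render_page}` would serve the cached copy until it expires.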

### Cluster (Sharded)

Multiple Redis nodes work together, with data automatically distributed across nodes based on key hashing. Provides horizontal scaling and high availability.

**Use when:**
- **Large datasets**: Data doesn't fit in a single Redis instance's memory.
- **High throughput**: Need to distribute load across multiple machines.
- **Horizontal scaling**: Want to add capacity by adding more nodes.

**Benefits:**
- **Automatic sharding**: Data is distributed across 16384 hash slots, which are assigned to nodes.
- **High availability**: The cluster continues operating if some nodes fail.
- **Linear scaling**: Add nodes to increase capacity and throughput.

Use {ruby Async::Redis::ClusterClient} to connect to a Redis cluster.

``` ruby
require "async/redis"

cluster_endpoints = [
	Async::Redis::Endpoint.new(hostname: "redis-1.example.com", port: 7000),
	Async::Redis::Endpoint.new(hostname: "redis-2.example.com", port: 7001),
	Async::Redis::Endpoint.new(hostname: "redis-3.example.com", port: 7002)
]

cluster_client = Async::Redis::ClusterClient.new(cluster_endpoints)

Async do
	begin
		# Data automatically distributed across nodes:
		cluster_client.set("cache:user:123", "user data")
		cluster_client.set("cache:user:456", "other user data")
		
		data = cluster_client.get("cache:user:123")
		puts "Retrieved from cluster: #{data}"
	ensure
		cluster_client.close
	end
end
```

Note that the cluster client automatically routes requests to the correct shard where possible.

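Routing is determined by the key: Redis Cluster hashes each key to one of 16384 slots, and if the key contains a `{...}` section, only the text inside the first braces (the "hash tag") is hashed. Keys sharing a hash tag therefore always land on the same node, which multi-key operations require. A simplified sketch of the tag extraction (the `hash_tag` helper is our own, for illustration only):

``` ruby
# Extract the cluster hash tag from a key: the content between the first
# "{" and the next "}". Keys with the same tag map to the same hash slot.
# (Simplified: real Redis ignores an empty "{}" and hashes the whole key.)
def hash_tag(key)
	match = key.match(/\{(.+?)\}/)
	match && match[1]
end

hash_tag("cache:{user:123}:profile")  # => "user:123"
hash_tag("cache:{user:123}:settings") # => "user:123" (same node)
hash_tag("cache:user:123")            # => nil (the whole key is hashed)
```

This is why the example above may place `cache:user:123` and `cache:user:456` on different nodes, while `cache:{user:123}:profile` and `cache:{user:123}:settings` are guaranteed to share one.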
### Sentinel (Master/Replica with Failover)

One master handles writes and multiple replicas handle reads, with sentinel processes monitoring the deployment and performing automatic failover.

**Use when:**
- **High availability**: Cannot tolerate Redis downtime.
- **Read scaling**: Many read operations, fewer writes.
- **Automatic failover**: Want replicas promoted to master automatically.

**Benefits:**
- **Automatic failover**: Sentinels promote a replica when the master fails.
- **Read/write separation**: Distribute read load across replica instances.
- **Monitoring**: Built-in health checks and failure detection.

Use {ruby Async::Redis::SentinelClient} to connect through Redis Sentinel to the current master.

``` ruby
require "async/redis"

sentinel_endpoints = [
	Async::Redis::Endpoint.new(hostname: "sentinel-1.example.com", port: 26379),
	Async::Redis::Endpoint.new(hostname: "sentinel-2.example.com", port: 26379),
	Async::Redis::Endpoint.new(hostname: "sentinel-3.example.com", port: 26379)
]

sentinel_client = Async::Redis::SentinelClient.new(
	sentinel_endpoints,
	master_name: "mymaster"
)

Async do
	begin
		# Automatically connects to current master:
		sentinel_client.set("cache:critical", "important data")
		data = sentinel_client.get("cache:critical")
		puts "Retrieved from master: #{data}"
	ensure
		sentinel_client.close
	end
end
```
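
During a failover there is a brief window in which the old master is unreachable and commands fail. For idempotent operations, a common pattern is to retry a few times with backoff, giving the sentinels time to promote a replica. A minimal sketch (the helper name and the rescued exception classes are illustrative, not prescribed by async-redis):

``` ruby
# Retry an idempotent operation across a failover window, backing off
# between attempts. The rescued exception classes are illustrative; adjust
# them to the errors your connection layer actually raises.
def with_failover_retry(attempts: 3, delay: 0.5)
	tries = 0
	begin
		yield
	rescue Errno::ECONNREFUSED, IOError
		tries += 1
		raise if tries >= attempts
		sleep(delay * tries)
		retry
	end
end
```

For example, `with_failover_retry {sentinel_client.set("cache:critical", "important data")}` would ride out a short promotion delay rather than failing immediately.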