# RedisInMemory

An in-memory implementation of the .NET StackExchange.Redis library.

This library implements the `StackExchange.Redis.IDatabase` interface to provide a test double for .NET applications that use Redis.

It runs purely in memory, without any Redis instance, which makes it useful for simulating Redis in test cases.

### Why?

It makes integration tests faster, more reliable, and more isolated, because they no longer depend on a running Redis instance.

### Usage

If you are already using StackExchange.Redis, integration into your project is a one-line change:

    IDatabase database = new RedisDatabaseInMemory(); // Create the in-memory Redis database

Since `RedisDatabaseInMemory` implements `IDatabase`, it's a simple swap.
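
To make the swap concrete, here is a minimal sketch of application code written against `IDatabase` that can take either a real Redis database or the in-memory one (the `GreetingStore` class and its key name are hypothetical; only `RedisDatabaseInMemory` comes from this library, and the rest is the standard StackExchange.Redis API):

    using StackExchange.Redis;

    // Hypothetical application code that depends only on IDatabase.
    public class GreetingStore
    {
        private readonly IDatabase _db;

        public GreetingStore(IDatabase db) => _db = db;

        // StringSet/StringGet are standard IDatabase members.
        public void Save(string name) => _db.StringSet("greeting", $"Hello, {name}!");

        public string Load() => _db.StringGet("greeting");
    }

    // Against real Redis:   new GreetingStore(connection.GetDatabase());
    // In an isolated test:  new GreetingStore(new RedisDatabaseInMemory());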

If you wish to clear your in-memory state, simply `Dispose` it and re-create it.
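
In a test suite, that typically means keeping a reference to the concrete type so it can be disposed and rebuilt between tests. A rough sketch, assuming an xUnit-style test class (the test and key names are illustrative):

    using System;
    using StackExchange.Redis;
    using Xunit;

    public class CacheTests : IDisposable
    {
        // Hold the concrete type so Dispose() is available for clearing state.
        private readonly RedisDatabaseInMemory _database = new RedisDatabaseInMemory();

        [Fact]
        public void Stores_and_reads_a_value()
        {
            IDatabase db = _database;
            db.StringSet("answer", 42);
            Assert.Equal(42, (int)db.StringGet("answer"));
        }

        // xUnit creates a fresh instance per test, so disposing here
        // means every test starts from an empty in-memory database.
        public void Dispose() => _database.Dispose();
    }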

### Project State

Not all Redis functionality is implemented, so some methods will throw `NotImplementedException`.

**String**

The `String` type is mostly implemented.
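
For example, the basic string commands can be exercised entirely in memory (a sketch, assuming `StringSet` and `StringGet` are among the implemented methods; the key names are illustrative):

    using System;
    using StackExchange.Redis;

    IDatabase db = new RedisDatabaseInMemory();

    db.StringSet("user:1:name", "Ada");
    Console.WriteLine(db.StringGet("user:1:name"));     // Ada
    Console.WriteLine(db.StringGet("missing").IsNull);  // True, like a miss on real Redis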

**Hash**

The `Hash` type is mostly implemented.
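
A sketch of hash usage, assuming the common `HashSet`/`HashGet`/`HashGetAll` methods are among those implemented (key and field names are illustrative):

    using StackExchange.Redis;

    IDatabase db = new RedisDatabaseInMemory();

    db.HashSet("user:1", new[]
    {
        new HashEntry("name", "Ada"),
        new HashEntry("role", "admin"),
    });

    RedisValue role = db.HashGet("user:1", "role"); // admin
    HashEntry[] all = db.HashGetAll("user:1");      // both fields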

**Set**

The `Set` type is mostly implemented.
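
A sketch of set usage, assuming `SetAdd`, `SetContains` and `SetLength` are among the implemented methods (key and member values are illustrative):

    using StackExchange.Redis;

    IDatabase db = new RedisDatabaseInMemory();

    db.SetAdd("tags", "redis");
    db.SetAdd("tags", "dotnet");
    db.SetAdd("tags", "redis");   // duplicates are ignored, as in Redis

    bool hasRedis = db.SetContains("tags", "redis"); // true
    long count = db.SetLength("tags");               // 2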

**SortedSet**

The `SortedSet` type is mostly not implemented.

**List**

The `List` type is partially implemented.