fmt.Println(v)
```
`Updater` provides a `ListByPrefix` function, but it can be used only if the underlying cache supports it (is a `KV` wrapper). Otherwise it will panic.
### Sharding
If you intend to use the cache in a *highly* concurrent manner (16+ cores and 100k+ RPS), it may make sense to shard it.
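The idea behind sharding can be sketched as follows. Note this is a minimal illustration of the technique, not the library's actual `Sharded` wrapper: the `shardedCache` type and its hashing scheme here are invented for the example.

```go
package main

import (
	"fmt"
	"hash/maphash"
	"sync"
)

// shardedCache hashes each key to one of N independent shards,
// so concurrent writers contend on N locks instead of one.
type shardedCache struct {
	seed   maphash.Seed
	shards []*shard
}

type shard struct {
	mu   sync.RWMutex
	data map[string]string
}

func newShardedCache(n int) *shardedCache {
	s := &shardedCache{seed: maphash.MakeSeed(), shards: make([]*shard, n)}
	for i := range s.shards {
		s.shards[i] = &shard{data: make(map[string]string)}
	}
	return s
}

// shardFor picks the shard deterministically from the key's hash.
func (s *shardedCache) shardFor(key string) *shard {
	return s.shards[maphash.String(s.seed, key)%uint64(len(s.shards))]
}

func (s *shardedCache) Set(key, value string) {
	sh := s.shardFor(key)
	sh.mu.Lock()
	defer sh.mu.Unlock()
	sh.data[key] = value
}

func (s *shardedCache) Get(key string) (string, bool) {
	sh := s.shardFor(key)
	sh.mu.RLock()
	defer sh.mu.RUnlock()
	v, ok := sh.data[key]
	return v, ok
}

func main() {
	c := newShardedCache(16)
	c.Set("foo", "bar")
	v, ok := c.Get("foo")
	fmt.Println(v, ok) // bar true
}
```

Because a key always hashes to the same shard, per-key operations stay consistent while unrelated keys mostly land on different locks.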
Internally `KV` maintains a trie structure over the keys to be able to quickly find them by prefix.
This wrapper has some limitations:
* `KV` only supports keys of type `string`.
* Lexicographical order is maintained on the byte level, so it will work as expected for ASCII strings, but may not work for other encodings.
* `Updater` and `Locker` wrappers provide a `ListByPrefix` function that calls the underlying `KV` implementation. But if you wrap `KV` with the `Sharded` wrapper, you will lose this functionality; in other words, it does not make sense to combine `KV` with `Sharded`.
```go
cache := NewMapCache[string, string]()
// Output: [bar bar1 bar2 bar3]
```
### Locker
This wrapper is useful when you need to make several operations on the cache atomically. For example, you store account balances in the cache and want to transfer some amount from one account to another:
The `Locker` itself does not implement the `Geche` interface, but the `Tx` object returned by the `Lock` or `RLock` method does.
Be careful to follow these rules (violating them will lead to panics):

* do not use `Set` and `Del` on a read-only `Tx` acquired with `RLock`.
* do not use a `Tx` after the `Unlock` call.
* do not `Unlock` a `Tx` that was already unlocked.

And do not forget to `Unlock` the `Tx` object, otherwise the lock will be held forever.

The returned `Tx` object is not a transaction in the sense that it does not allow rollback, but it provides atomicity and isolation guarantees.

`Locker` provides a `ListByPrefix` function, but it can only be used if the underlying cache implementation supports it (is a `KV` wrapper). Otherwise it will panic.
## Benchmarks
The test suite contains a couple of benchmarks to compare the speed difference between an old-school implementation using `interface{}` (`any`) to hold cache values and one using generics.
There are two types of benchmarks:
* `BenchmarkSet` only times the `Set` operation, which allocates all the memory and is usually the most resource intensive.
* `BenchmarkEverything` repeatedly does one of three operations (Get/Set/Del). The probabilities for the operations are 0.9/0.05/0.05 respectively. Each operation is executed on a randomly generated key; there are 1 million distinct keys in total, so the total cache size is limited too.
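The operation mix described above can be sketched roughly like this. This is a simplified illustration of the technique, not the actual benchmark code; the plain-map cache and `pickOp` helper are placeholders.

```go
package main

import (
	"fmt"
	"math/rand"
)

const distinctKeys = 1_000_000

// pickOp maps a uniform random number in [0,1) to an operation with
// probabilities 0.9 (Get), 0.05 (Set) and 0.05 (Del).
func pickOp(p float64) string {
	switch {
	case p < 0.9:
		return "get"
	case p < 0.95:
		return "set"
	default:
		return "del"
	}
}

func main() {
	cache := make(map[string]string) // stand-in for the cache under test
	rnd := rand.New(rand.NewSource(42))
	counts := map[string]int{}

	for i := 0; i < 100_000; i++ {
		// Keys are drawn from a fixed pool, bounding the cache size.
		key := fmt.Sprintf("key-%d", rnd.Intn(distinctKeys))
		op := pickOp(rnd.Float64())
		counts[op]++
		switch op {
		case "get":
			_ = cache[key]
		case "set":
			cache[key] = "value"
		case "del":
			delete(cache, key)
		}
	}
	// Roughly 90% of the operations should be reads.
	fmt.Println(counts["get"] > counts["set"]+counts["del"])
}
```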
Another benchmark, `BenchmarkKVListByPrefix`, times the `KV` wrapper's `ListByPrefix` operation: getting all values matching a particular prefix in a cache with 1 million keys. The benchmark is arranged so that each call returns 10 records.
Benchmarking four simple cache implementations shows that the generic cache (`MapCache`) is faster than a cache that uses an empty interface to store any type of values (`AnyCache`), but slower than implementations that use concrete types (`StringCache`) and skip thread safety (`UnsafeCache`).
Generic `MapTTLCache` is on par with `AnyCache`, but that is to be expected as it does more work, keeping a linked list for fast invalidation. `RingBuffer` performs the best because all the space it needs is preallocated during initialization, and the actual cache size is limited.
Note that the `stringCache`, `unsafeCache`, and `anyCache` implementations are unexported.
The results below are not to be treated as absolute values. Actual cache operation latency will depend on many variables such as CPU speed, key cardinality, number of concurrent operations, whether the allocation happens during the operation or the underlying structure already has the allocated space, and so on.
On a 32 CPU machine we clearly see performance degradation due to lock contention. Sharded implementations are about 4 times faster.
Notice the Imcache result. Crazy fast! 😅
The `KV` wrapper result is worse than the other caches, but that is expected: it maintains a key index that allows prefix search with deterministic order, which the other caches do not provide. It updates the trie structure on `Set` and does extra work to clean up the key on `Del`.
Concurrent comparison benchmark is located in a [separate repository](https://github.com/C-Pro/cache-benchmarks) to avoid pulling unnecessary dependencies in the library.