clear without limit #30
Comments
Optionally if we could add a […]
Why is […]
Because it's non-blocking: it can apply the delete lazily while still applying it to subsequent reads in memory. Also, you can do a delete range in a batch.
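
To make the difference concrete, here is a rough sketch of what a range delete has to look like without a native DeleteRange, assuming a level-style binding (the `level` module and key names below are only for illustration): read the keys in the range, then delete them in a single batch. DeleteRange would instead record one range tombstone without visiting the keys first.

```js
const { Level } = require('level') // module used for illustration only

const db = new Level('./example-db')

// Emulating a range delete today: read every key in [gte, lt),
// then delete them in one batch of individual 'del' operations.
async function clearRange (gte, lt) {
  const keys = await db.keys({ gte, lt }).all()
  await db.batch(keys.map((key) => ({ type: 'del', key })))
}

clearRange('a', 'n').catch(console.error)
```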
Not sure I understand what you mean by non-blocking.
It doesn't block the thread on IO.
From the JS side that doesn't matter. Unless your concern is not having enough threads in the async worker pool (for things other than level)?
I'm actually trying to do all the write operations in the main JS thread for a few different reasons: […]
In rocksdb (and I believe also leveldb) writes are non-blocking as long as the background write threads can keep up.
Not sure if there's anything left to answer here. I'll address some specific points:
That simply makes it incompatible with the […]
See #38 (comment) - which was a duplicate issue?
I don't want to add RocksDB-specific features that are a limited subset of existing functionality. Your implementation is free to have additional methods of course (like deleteRange). Perhaps a better way forward is to not follow the […]
You have a fundamentally different idea of what the order of operations should be. Which is fine, but it's not level. Level promises consistency in a simple way: want to know that a write finished? Await it. Subsequent reads are then guaranteed to include that written data.
Same applies here: userland code can decide (whether to care about) the order, by awaiting.
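
A minimal sketch of that consistency model, again using the `level` module purely for illustration (the binding discussed in this issue may differ): awaiting the write or clear is what guarantees that a later read observes it.

```js
const { Level } = require('level') // illustration only

async function main () {
  const db = new Level('./example-db')

  await db.put('a', '1')         // awaiting the write...
  console.log(await db.get('a')) // ...guarantees this read sees '1'

  await db.clear({ gte: 'a', lte: 'a' })                  // same rule for clear()
  console.log(await db.get('a').catch(() => undefined))   // undefined: the key is gone
}

main().catch(console.error)
```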
rocksdb has a very nice DeleteRange API which is non-blocking and would be great for clear(). However, there are two problems: […] Any chance of making these optional in the level API?
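
For reference, a hedged sketch of the two call shapes under discussion, using `level`-style clear() options for illustration: a key range with no limit is the case that could, in principle, be served by a single DeleteRange, whereas a limit forces key-by-key iteration, since a range tombstone cannot express "only the first N keys".

```js
const { Level } = require('level') // module name used for illustration only

async function main () {
  const db = new Level('./example-db')

  // The "clear without limit" case: a pure key range with no limit.
  // This is the shape that could map to a single RocksDB DeleteRange.
  await db.clear({ gte: 'a', lt: 'n' })

  // With a limit, only the first N keys in the range are deleted, so the
  // store has to visit keys in order rather than drop the whole range at once.
  await db.clear({ gte: 'a', lt: 'n', limit: 1000 })
}

main().catch(console.error)
```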