Add RwLock
#2082
base: master
Conversation
At the moment, there is a deadlocking problem when multiple requests for write access are made.
With this implementation, starvation occurs here: futures-rs/futures-util/src/lock/rwlock.rs, lines 101 to 106 in 1430b05.
See also:
@parasyte The function
@parasyte At the moment what I have is this: futures-rs/futures-util/src/lock/rwlock.rs Lines 417 to 431 in 1430b05
This only wakes waiting readers if there are no currently waiting writers. Though, this is flawed at the moment, since the read counter isn't counting the number of waiting readers; it counts the number of active readers, which in this case is always going to be 0.
I'm confident the issue is in the read-acquisition path. A second reader can come along at any time and also acquire the lock, because nothing makes new readers wait while a writer is queued. This is where read-heavy workloads become a problem: a writer will be blocked forever.
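To make the failure mode concrete, here is a minimal sketch of that read path. This is illustrative only, not the PR's actual code; the names and bit layout are hypothetical. The key flaw it reproduces: readers only check whether a writer currently *holds* the lock, never whether one is *waiting*, so a steady stream of readers starves any queued writer.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

const WRITER: usize = 1 << 0; // low bit: a writer currently holds the lock
const READER: usize = 1 << 1; // reader count occupies the remaining bits

// Naive state word: there is no "writer waiting" bit at all.
struct NaiveState(AtomicUsize);

impl NaiveState {
    fn try_read(&self) -> bool {
        let s = self.0.load(Ordering::Acquire);
        if s & WRITER == 0 {
            // No *active* writer, so a new reader always slips in,
            // even if a writer has been queued for a long time.
            self.0.fetch_add(READER, Ordering::AcqRel);
            true
        } else {
            false
        }
    }
}
```

Under this scheme, as long as at least one reader always holds the lock, the writer bit can never be set and a waiting writer is blocked forever.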
I see your point; so what you're arguing is that we establish more complex rules for acquiring read locks? I think that's doable and could work out fine.
Although, from the sound of it, it seems like we will then be starving readers in write-heavy applications.
The docs I pointed to describe two fairness properties.
The second property is termed "eventual fairness", which is maybe closer to what you have been trying to address? They describe a situation where a single task may starve other tasks by repeatedly locking and unlocking without yielding to other tasks. In other words, fairness is based on the order in which tasks attempt to acquire the lock: https://github.com/Amanieu/parking_lot/blob/fa294cd677936bf365afa0497039953b10c722f5/src/raw_rwlock.rs#L21-L32
// This reader-writer lock implementation is based on Boost's upgrade_mutex:
// https://github.com/boostorg/thread/blob/fc08c1fe2840baeeee143440fba31ef9e9a813c8/include/boost/thread/v2/shared_mutex.hpp#L432
//
// This implementation uses 2 wait queues, one at key [addr] and one at key
// [addr + 1]. The primary queue is used for all new waiting threads, and the
// secondary queue is used by the thread which has acquired WRITER_BIT but is
// waiting for the remaining readers to exit the lock.
//
// This implementation is fair between readers and writers since it uses the
// order in which threads first started queuing to alternate between read phases
// and write phases. In particular is it not vulnerable to write starvation
// since readers will block if there is a pending writer.
Nonetheless, I think it would suffice for now to allow the pull request to merge once it is deemed suitable. Dealing with fairness policies seems beyond the scope of this pull request and should be left to a dedicated issue.
It's just an opinion, but I wouldn't merge a feature with known issues. Might as well make it as correct as possible before merging.

Since it's async, I would assume that you have a queue of things waiting to obtain the lock. Once a writer is in first place, you wait until everything else releases the lock and give it to them. You can make a queue system where it is FIFO, but when a reader asks for the lock and there is already a reader in the queue, just bundle them all together and wake them all together. After you wake them, if a reader comes in and the queue is empty, they can immediately join the party; but if a writer comes in, that goes in first place, and further readers coming in all get grouped in the next slot. That way I don't think anything can starve.

That's just a really basic design off the top of my head, just to say I don't think it's an insurmountable problem. But I'm sure people have written white papers and theses about how you can do this in really smart ways. And if all else fails, tokio seems to have an impl that is fair, based on a semaphore, so that can provide inspiration as well.
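The FIFO queue-bundling design sketched above can be modeled in a few lines. This is a hypothetical illustration, not code from the PR: real code would store a Waker per waiter and wake whole reader groups at once; plain counts are used here to keep the sketch self-contained.

```rust
use std::collections::VecDeque;

// One slot per "turn": either a bundle of readers woken together,
// or a single exclusive writer.
#[derive(Debug, PartialEq)]
enum Slot {
    Readers(usize), // how many readers are bundled in this slot
    Writer,
}

#[derive(Default)]
struct WaitQueue {
    slots: VecDeque<Slot>,
}

impl WaitQueue {
    fn push_reader(&mut self) {
        match self.slots.back_mut() {
            // A reader arriving behind other readers joins their bundle.
            Some(Slot::Readers(n)) => *n += 1,
            // Behind a writer (or into an empty queue), it opens a new group.
            _ => self.slots.push_back(Slot::Readers(1)),
        }
    }

    fn push_writer(&mut self) {
        self.slots.push_back(Slot::Writer);
    }

    // Wake strictly in FIFO order: either one writer, or an entire
    // group of readers at once. Nothing can be starved.
    fn wake_next(&mut self) -> Option<Slot> {
        self.slots.pop_front()
    }
}
```

Because slots are served strictly in arrival order, a writer is never overtaken by later readers, and readers are never overtaken by later writers.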
@najamelan The point I was trying to make is that this pull request is focused on the introduction of new data types that provide asynchronous read-write locks. The improvement and optimization of these locks is, in my opinion, a separate matter. If there is an issue with the code itself such that this request at the moment would fail to provide correct read-write locks, then that is something to fix before merging. Otherwise, it would be more efficient to merge just a basic implementation and then improve it in separate branches.
So I've managed to get an implementation together that can support fairness for read/write locks. However, I'm currently blocked. I believe that I will need to create custom waker functions, and while I considered using ArcWake for this, I don't know if it would be wise to require types that are stored in these locks to be thread-safe. @taiki-e @cramertj Would it be possible to add additional helper traits/methods in futures-task for creating waker functions that don't require Arc?
Alright, I'll see if I can make some adjustments. To clarify though, am I correct in avoiding the additional Edit: Nevermind, the route I was going with was nonsense. |
Force-pushed from a2d2459 to 8df0e55.
So, what I have is what I think should produce a fair read/write lock. It works using a ticketing system: each request for a lock takes a ticket that determines under what circumstances the lock can be granted. Fairness is managed by allocating locks in an alternating style between readers and writers. If there are no write requests, readers are granted locks upon each request; however, when a write request does come along, all read requests that come after it are put on hold until the next read phase. Once the current read phase completes, the write request is given the lock, marking the start of a write phase. Once the write phase completes, the next read phase begins and the cycle repeats from there.

However, the solution I have here is pretty airtight; and by airtight, I mean it's exceptionally inflexible. As of now, I don't really know how to alter it to allow for methods such as
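The alternating read/write phases described above can be modeled as a small ticket dispenser. This is an illustrative reconstruction of the idea, not the PR's implementation; here even-numbered phases are read phases, odd-numbered phases are write phases, and each request is stamped with the earliest phase in which it may run.

```rust
#[derive(Default)]
struct TicketLock {
    phase: u64,           // even = read phase, odd = write phase
    pending_writers: u64, // writers queued ahead of the next read phase
}

impl TicketLock {
    // A reader may run in the current read phase if no writer is queued;
    // otherwise it is deferred to the read phase after all queued writers.
    fn reader_ticket(&self) -> u64 {
        if self.pending_writers == 0 {
            self.phase
        } else {
            self.phase + 2 * self.pending_writers
        }
    }

    // Each writer gets its own exclusive (odd) write phase, in order.
    fn writer_ticket(&mut self) -> u64 {
        self.pending_writers += 1;
        self.phase + 2 * self.pending_writers - 1
    }
}
```

A lock request is granted when its ticket matches the current phase; advancing the phase when a read phase drains (or a write guard drops) is what makes readers and writers alternate instead of starving one another.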
@taiki-e Could you provide any help setting up a test environment? I'm unfamiliar with the general procedure for adding a new unstable feature to a package like this.
Force-pushed from cdbf2c9 to 9799578.
Alright, from what I managed to see, this should work and provide fair read/write locks. I managed to add in support for the additional methods as well. Let me know when you guys are available so we can get this going. @cramertj and @taiki-e, I'll need someone to review the code and to help integrate this as an unstable feature.
    state: AtomicUsize,
    blocked: WaiterSet,
    writers: WaiterSet,
    value: UnsafeCell<T>,
If we use the spin crate, and replace the AtomicUsize/UnsafeCell pair with a spin::RwLock, we may deduplicate a lot of code.
pub struct RwLock<T: ?Sized> {
    state: AtomicUsize,
    blocked: WaiterSet,
    writers: WaiterSet,
IIRC this is generally implemented using another mutex instead of just another waiter list, but I'm not sure.
futures-util/src/lock/rwlock.rs (outdated)
/// Attempt to acquire a write access lock synchronously.
pub fn try_write(&self) -> Option<RwLockWriteGuard<'_, T>> {
    todo!()
If you use the spin types here, this should be easier to implement.
Force-pushed from ace22e8 to d99ab6d.
fixed missing state variable for waiting readers
fixed deadlocking problem with new WaiterSet type
added ticketing system
added tests
renamed ticket to phase in RwLockReadFuture
Resolves #2015
This branch adds an asynchronous version of std::sync::RwLock to futures_util::lock.

New public data types:
futures::lock::RwLock
futures::lock::RwLockReadFuture
futures::lock::RwLockWriteFuture
futures::lock::RwLockReadGuard
futures::lock::RwLockWriteGuard
Feel free to check over the additions, though I mostly based the code off of the implementations of futures::lock::Mutex and async_std::sync::RwLock.