Feature or enhancement
Proposal:
At the moment, collections.deque is implemented via a linked list of 64-element arrays. That means it can quickly access elements at the beginning and end, and has only a modest amount of space overhead.
However, collections.deque also supports indexing via my_deque[index], but that takes time linear in the distance from the nearest end.
Inspired by Rust's VecDeque, I have implemented a double-ended queue backed by a growable ring buffer.
The new implementation is simpler, and I ran a few simple benchmarks. Most notably, random access is now O(1), and slicing can be done in O(k), where k is the length of the resulting slice. Just like lists.
- Accessing the ends (append either side, pop either side) is about as fast as before.
- Deleting an arbitrary element in the middle, inserting an element in the middle, rotation, etc. are still O(n), but the constant factors are better: the bulk of the work is essentially just a memmove.
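To make the proposal concrete, here is a minimal pure-Python sketch of the ring-buffer design described above. The class name RingDeque and all method bodies are illustrative only (the real prototype is C inside CPython); the point is to show why indexing becomes O(1): it is just arithmetic on a head offset, modulo the capacity.

```python
# Illustrative sketch of a deque backed by a growable ring buffer.
# RingDeque is a hypothetical name, not the actual CPython implementation.

class RingDeque:
    def __init__(self):
        self._buf = [None] * 8   # backing array; capacity grows geometrically
        self._head = 0           # buffer index of the first (leftmost) element
        self._len = 0

    def __len__(self):
        return self._len

    def _grow(self):
        # Double the capacity and unwrap the ring into a fresh array.
        new = [None] * (2 * len(self._buf))
        for i in range(self._len):
            new[i] = self._buf[(self._head + i) % len(self._buf)]
        self._buf, self._head = new, 0

    def append(self, x):
        if self._len == len(self._buf):
            self._grow()
        self._buf[(self._head + self._len) % len(self._buf)] = x
        self._len += 1

    def appendleft(self, x):
        if self._len == len(self._buf):
            self._grow()
        self._head = (self._head - 1) % len(self._buf)
        self._buf[self._head] = x
        self._len += 1

    def pop(self):
        if not self._len:
            raise IndexError("pop from an empty deque")
        i = (self._head + self._len - 1) % len(self._buf)
        x, self._buf[i] = self._buf[i], None
        self._len -= 1
        return x

    def popleft(self):
        if not self._len:
            raise IndexError("pop from an empty deque")
        x, self._buf[self._head] = self._buf[self._head], None
        self._head = (self._head + 1) % len(self._buf)
        self._len -= 1
        return x

    def __getitem__(self, i):
        # O(1): pure index arithmetic, no traversal of blocks.
        if not -self._len <= i < self._len:
            raise IndexError("deque index out of range")
        if i < 0:
            i += self._len
        return self._buf[(self._head + i) % len(self._buf)]
```

Slicing (not shown) falls out the same way: copying k contiguous logical elements is k buffer reads, i.e. O(k).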
One downside: just like with lists, my prototype grows the allocated space geometrically, so the worst-case append time is linear, but it is amortised constant. (There are well-known ways to give both lists and my deque prototype O(1) worst-case appends, but if they aren't worth it for lists, they probably aren't worth it for the deque either. They come with a constant-factor overhead.)
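The amortised-constant claim is easy to check with back-of-the-envelope counting: under doubling growth, n appends trigger reallocations that copy a geometric series of elements, totalling fewer than 2n copies, i.e. O(1) per append on average. A small hypothetical counter (total_copies is not part of the prototype) illustrates this:

```python
# Count how many element copies geometric (doubling) growth performs
# over n appends. Hypothetical helper for illustration only.

def total_copies(n, start=8):
    cap, size, copies = start, 0, 0
    for _ in range(n):
        if size == cap:
            copies += size   # a grow copies every existing element
            cap *= 2
        size += 1
    return copies

# For any n, the copies form a geometric series (start + 2*start + ...),
# summing to less than 2*n, so the average cost per append is constant.
```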
The new implementation should also be cache-friendlier, because it is one big array rather than a linked list of blocks, but I haven't yet devised a benchmark to measure that impact.
Inserting into a ring-deque with a fixed maximum length is still O(1) worst case (not just amortised), like in the original implementation. That's what the ring-buffer buys us.
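For the bounded case, a sketch makes the O(1) worst-case claim visible: with a fixed maxlen the buffer never reallocates, so a full deque simply overwrites its oldest slot and advances the head. BoundedRing is again an illustrative name, not the actual implementation:

```python
# Sketch of a fixed-capacity ring buffer: appends never reallocate,
# so every append is O(1) worst case. Illustrative only.

class BoundedRing:
    def __init__(self, maxlen):
        self._buf = [None] * maxlen
        self._head = 0   # buffer index of the oldest element
        self._len = 0

    def __len__(self):
        return self._len

    def append(self, x):
        cap = len(self._buf)
        if self._len < cap:
            self._buf[(self._head + self._len) % cap] = x
            self._len += 1
        else:
            # Full: overwrite the oldest element and advance the head.
            self._buf[self._head] = x
            self._head = (self._head + 1) % cap

    def __getitem__(self, i):
        if not 0 <= i < self._len:
            raise IndexError("index out of range")
        return self._buf[(self._head + i) % len(self._buf)]
```

This mirrors the maxlen behaviour of collections.deque, where appending to a full bounded deque discards an item from the opposite end.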
Has this already been discussed elsewhere?
No response
Links to previous discussion of this feature:
No response