Commit 329b93b

Rollup merge of rust-lang#138161 - HeroicKatora:heap-peek-mut-refresh, r=dtolnay
Add PeekMut::refresh

I'm not sure if this should go through an ACP or not. BinaryHeap is not the most critical data structure in the standard library, and it would be understandable if maintainer throughput is too limited to accept this PR without a proper design phase that builds the required understanding of the consequences over a longer time period.

This aims to improve the usability of heaps for priority-based work queues. In certain scenarios, modifications to the most relevant or critical items are performed until a condition determines that the work items have been sufficiently addressed. For instance, the criticality could be a deadline that is relaxed whenever part of a work item is completed. Such a loop repeatedly accesses the most critical item and puts it back into sorted position when it is done with it. Crucially, due to the ordering invariant, we know that all necessary work has been performed when the completed item remains the most critical. Getting this information from the heap position avoids a (potentially more costly) check on the item's own state.

A customized `drop` with a boolean result would avoid up to two more comparisons, performed in both the final no-op refresh and the Drop code, but that cost is paid once per execution of the scenario above, whereas `refresh` runs any number of times. Also note that the comparison overhead of Drop is only incurred if the element is mutably inspected to determine the end condition, i.e. not when `refresh` itself is the break condition.
2 parents 2d97a21 + 436959e commit 329b93b
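The work-queue pattern the commit message describes can be sketched on stable Rust today, since `PeekMut` already sifts the element back into place when the guard is dropped. This is only an approximation of the new API: the helper name `relax_until_bounded` is ours, and unlike `refresh` it must re-peek on every iteration instead of keeping one guard alive and learning whether the maximum changed.

```rust
use std::collections::BinaryHeap;

/// Repeatedly relax the most critical item (here: the largest deadline)
/// until the front of the queue is below `bound`. On stable Rust the
/// sift-down happens when the `PeekMut` guard is dropped, so we re-peek
/// each iteration; `refresh` would instead report whether the maximum
/// changed while keeping a single guard alive.
fn relax_until_bounded(heap: &mut BinaryHeap<u32>, bound: u32, step: u32) {
    while let Some(mut peek) = heap.peek_mut() {
        if *peek < bound {
            // Ordering invariant: if the maximum is below the bound,
            // every element in the heap is below the bound.
            break;
        }
        // "Complete part of the work item": reduce its criticality.
        *peek -= step;
        // Dropping `peek` here sifts the element back into sorted position.
    }
}

fn main() {
    let mut heap: BinaryHeap<u32> = (0..10).map(|i| i * 25).collect();
    relax_until_bounded(&mut heap, 100, 30);
    assert!(heap.iter().all(|&x| x < 100));
}
```

Compared to `refresh`, this pays the guard's drop-time comparisons on every loop iteration, which is exactly the overhead the PR is trying to make optional.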

File tree

1 file changed (+78 −4 lines): library/alloc/src/collections/binary_heap/mod.rs
@@ -361,6 +361,74 @@ impl<T: Ord, A: Allocator> DerefMut for PeekMut<'_, T, A> {
 }
 
 impl<'a, T: Ord, A: Allocator> PeekMut<'a, T, A> {
+    /// Sifts the current element to its new position.
+    ///
+    /// Afterwards refers to the new element. Returns if the element changed.
+    ///
+    /// ## Examples
+    ///
+    /// The condition can be used to upper bound all elements in the heap. When only few elements
+    /// are affected, the heap's sort ensures this is faster than a reconstruction from the raw
+    /// element list and requires no additional allocation.
+    ///
+    /// ```
+    /// #![feature(binary_heap_peek_mut_refresh)]
+    /// use std::collections::BinaryHeap;
+    ///
+    /// let mut heap: BinaryHeap<u32> = (0..128).collect();
+    /// let mut peek = heap.peek_mut().unwrap();
+    ///
+    /// loop {
+    ///     *peek = 99;
+    ///
+    ///     if !peek.refresh() {
+    ///         break;
+    ///     }
+    /// }
+    ///
+    /// // Post condition, this is now an upper bound.
+    /// assert!(*peek < 100);
+    /// ```
+    ///
+    /// When the element remains the maximum after modification, the peek remains unchanged:
+    ///
+    /// ```
+    /// #![feature(binary_heap_peek_mut_refresh)]
+    /// use std::collections::BinaryHeap;
+    ///
+    /// let mut heap: BinaryHeap<u32> = [1, 2, 3].into();
+    /// let mut peek = heap.peek_mut().unwrap();
+    ///
+    /// assert_eq!(*peek, 3);
+    /// *peek = 42;
+    ///
+    /// // When we refresh, the peek is updated to the new maximum.
+    /// assert!(!peek.refresh(), "42 is even larger than 3");
+    /// assert_eq!(*peek, 42);
+    /// ```
+    #[unstable(feature = "binary_heap_peek_mut_refresh", issue = "138355")]
+    #[must_use = "is equivalent to dropping and getting a new PeekMut except for return information"]
+    pub fn refresh(&mut self) -> bool {
+        // The length of the underlying heap is unchanged by sifting down. The value stored for leak
+        // amplification thus remains accurate. We erase the leak amplification firstly because the
+        // operation is then equivalent to constructing a new PeekMut and secondly this avoids any
+        // future complication where original_len being non-empty would be interpreted as the heap
+        // having been leak amplified instead of checking the heap itself.
+        if let Some(original_len) = self.original_len.take() {
+            // SAFETY: This is how many elements were in the Vec at the time of
+            // the BinaryHeap::peek_mut call.
+            unsafe { self.heap.data.set_len(original_len.get()) };
+
+            // The length of the heap did not change by sifting, upholding our own invariants.
+
+            // SAFETY: PeekMut is only instantiated for non-empty heaps.
+            (unsafe { self.heap.sift_down(0) }) != 0
+        } else {
+            // The element was not modified.
+            false
+        }
+    }
+
     /// Removes the peeked value from the heap and returns it.
     #[stable(feature = "binary_heap_peek_mut_pop", since = "1.18.0")]
     pub fn pop(mut this: PeekMut<'a, T, A>) -> T {
@@ -672,6 +740,8 @@ impl<T: Ord, A: Allocator> BinaryHeap<T, A> {
     /// # Safety
     ///
     /// The caller must guarantee that `pos < self.len()`.
+    ///
+    /// Returns the new position of the element.
     unsafe fn sift_up(&mut self, start: usize, pos: usize) -> usize {
         // Take out the value at `pos` and create a hole.
         // SAFETY: The caller guarantees that pos < self.len()
@@ -698,10 +768,12 @@ impl<T: Ord, A: Allocator> BinaryHeap<T, A> {
     /// Take an element at `pos` and move it down the heap,
     /// while its children are larger.
     ///
+    /// Returns the new position of the element.
+    ///
     /// # Safety
     ///
     /// The caller must guarantee that `pos < end <= self.len()`.
-    unsafe fn sift_down_range(&mut self, pos: usize, end: usize) {
+    unsafe fn sift_down_range(&mut self, pos: usize, end: usize) -> usize {
         // SAFETY: The caller guarantees that pos < end <= self.len().
         let mut hole = unsafe { Hole::new(&mut self.data, pos) };
         let mut child = 2 * hole.pos() + 1;
@@ -721,7 +793,7 @@ impl<T: Ord, A: Allocator> BinaryHeap<T, A> {
             // SAFETY: child is now either the old child or the old child+1
             // We already proven that both are < self.len() and != hole.pos()
             if hole.element() >= unsafe { hole.get(child) } {
-                return;
+                return hole.pos();
             }
 
             // SAFETY: same as above.
@@ -736,16 +808,18 @@ impl<T: Ord, A: Allocator> BinaryHeap<T, A> {
             // child == 2 * hole.pos() + 1 != hole.pos().
             unsafe { hole.move_to(child) };
         }
+
+        hole.pos()
     }
 
     /// # Safety
     ///
     /// The caller must guarantee that `pos < self.len()`.
-    unsafe fn sift_down(&mut self, pos: usize) {
+    unsafe fn sift_down(&mut self, pos: usize) -> usize {
        let len = self.len();
         // SAFETY: pos < len is guaranteed by the caller and
         // obviously len = self.len() <= self.len().
-        unsafe { self.sift_down_range(pos, len) };
+        unsafe { self.sift_down_range(pos, len) }
     }
 
     /// Take an element at `pos` and move it all the way down the heap,
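The core of the change is that `sift_down` now reports where the element landed, and `refresh` compares that position against 0. The idea can be illustrated with a minimal standalone sketch over a plain slice-backed max-heap; this is our own toy re-implementation for illustration, not std's hole-based `sift_down`:

```rust
/// Sift the element at `pos` down a max-heap stored in `heap`, returning
/// its final index. A return value of 0 means the element is still the
/// maximum -- exactly the signal `PeekMut::refresh` derives its boolean from.
fn sift_down<T: Ord>(heap: &mut [T], mut pos: usize) -> usize {
    loop {
        let left = 2 * pos + 1;
        if left >= heap.len() {
            return pos; // no children: the element is in place
        }
        // Pick the larger child (the right one, if it exists and is larger).
        let right = left + 1;
        let child = if right < heap.len() && heap[right] > heap[left] {
            right
        } else {
            left
        };
        if heap[pos] >= heap[child] {
            return pos; // heap property restored
        }
        heap.swap(pos, child);
        pos = child;
    }
}

fn main() {
    // A valid max-heap, whose root we then overwrite with a small value.
    let mut heap = vec![9, 7, 8, 3, 4, 5, 6];
    heap[0] = 1;
    let new_pos = sift_down(&mut heap, 0);
    assert_ne!(new_pos, 0, "the modified root was displaced");
    assert_eq!(heap[0], 8); // a different element is now the maximum
}
```

Returning the position costs nothing extra: the sift already knows the hole's final index, so `refresh` gets its "did the maximum change?" answer for free.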