
Commit 229f883

Avoid parallel fork choice runs during sync (#3217)
## Issue Addressed

Fixes an issue that @paulhauner found with the v2.3.0 release candidate whereby the fork choice runs introduced by #3168 tripped over each other during sync:

```
May 24 23:06:40.542 WARN Error signalling fork choice waiter slot: 3884129, error: ForkChoiceSignalOutOfOrder { current: Slot(3884131), latest: Slot(3884129) }, service: beacon
```

This can occur because fork choice is called from the state advance _and_ the per-slot task. When one of these runs takes a long time it can end up finishing after a run from a later slot, tripping the error above. The problem is resolved by not running either of these fork choice calls during sync.

Additionally, these parallel fork choice runs were causing issues in the database:

```
May 24 07:49:05.098 WARN Found a chain that should already have been pruned, head_slot: 92925, head_block_root: 0xa76c7bf1b98e54ed4b0d8686efcfdf853484e6c2a4c67e91cbf19e5ad1f96b17, service: beacon
May 24 07:49:05.101 WARN Database migration failed error: HotColdDBError(FreezeSlotError { current_split_slot: Slot(92608), proposed_split_slot: Slot(92576) }), service: beacon
```

In this case, two fork choice calls triggering the finalization processing were being processed out of order due to differences in their processing time, causing the background migrator to try to advance finalization _backwards_ :flushed:.

Removing the parallel fork choice runs from sync effectively addresses the issue, because these runs are most likely to have different finalized checkpoints (because of the speed at which fork choice advances during sync). In theory it's still possible to process updates out of order if any other fork choice runs end up completing out of order, but this should be much less common. Fixing out-of-order fork choice runs in general is difficult, as it requires architectural changes like serialising fork choice updates through a single thread, or locking fork choice along with the head when it is mutated (#3175).

## Proposed Changes

* Don't run per-slot fork choice during sync (if head is older than 4 slots).
* Don't run state-advance fork choice during sync (if head is older than 4 slots).
* Check for monotonic finalization updates in the background migrator. This is a good defensive check to have, and I'm not sure why we didn't have it before (we may have had it and wrongly removed it).
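To make the signalling race concrete, here is a minimal, self-contained sketch of the failure mode. The `ForkChoiceSignal` type and `notify_complete` method are hypothetical stand-ins, not Lighthouse's actual signalling API, and `Slot` is reduced to a plain `u64`:

```rust
/// Hypothetical stand-in for the fork choice completion signal: the waiter
/// only accepts completion notifications for monotonically non-decreasing
/// slots.
struct ForkChoiceSignal {
    latest_completed: u64,
}

impl ForkChoiceSignal {
    /// Record a fork choice run as complete, rejecting late arrivals from
    /// earlier slots (the shape of the error in the log above).
    fn notify_complete(&mut self, slot: u64) -> Result<(), String> {
        if slot < self.latest_completed {
            return Err(format!(
                "ForkChoiceSignalOutOfOrder {{ current: Slot({}), latest: Slot({}) }}",
                self.latest_completed, slot
            ));
        }
        self.latest_completed = slot;
        Ok(())
    }
}

fn main() {
    let mut signal = ForkChoiceSignal { latest_completed: 0 };
    // The per-slot task for slot 3884131 finishes first...
    signal.notify_complete(3884131).unwrap();
    // ...then a slow state-advance run for slot 3884129 completes late and is
    // rejected.
    assert!(signal.notify_complete(3884129).is_err());
}
```

During sync both callers fire every slot and routinely take different amounts of time, which is why the fix simply skips both calls until the head is near the wall-clock slot.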
1 parent 6044984

3 files changed: +65 -7 lines

Diff for: beacon_node/beacon_chain/src/beacon_chain.rs

+18 -3

```diff
@@ -123,6 +123,12 @@ const EARLY_ATTESTER_CACHE_HISTORIC_SLOTS: u64 = 4;
 /// If the head block is older than this value, don't bother preparing beacon proposers.
 const PREPARE_PROPOSER_HISTORIC_EPOCHS: u64 = 4;
 
+/// If the head is more than `MAX_PER_SLOT_FORK_CHOICE_DISTANCE` slots behind the wall-clock slot, DO NOT
+/// run the per-slot tasks (primarily fork choice).
+///
+/// This prevents unnecessary work during sync.
+const MAX_PER_SLOT_FORK_CHOICE_DISTANCE: u64 = 4;
+
 /// Reported to the user when the justified block has an invalid execution payload.
 pub const INVALID_JUSTIFIED_PAYLOAD_SHUTDOWN_REASON: &str =
     "Justified block has an invalid execution payload.";
@@ -4412,6 +4418,18 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
     pub fn per_slot_task(self: &Arc<Self>) {
         trace!(self.log, "Running beacon chain per slot tasks");
         if let Some(slot) = self.slot_clock.now() {
+            // Always run the light-weight pruning tasks (these structures should be empty during
+            // sync anyway).
+            self.naive_aggregation_pool.write().prune(slot);
+            self.block_times_cache.write().prune(slot);
+
+            // Don't run heavy-weight tasks during sync.
+            if self.best_slot().map_or(true, |head_slot| {
+                head_slot + MAX_PER_SLOT_FORK_CHOICE_DISTANCE < slot
+            }) {
+                return;
+            }
+
             // Run fork choice and signal to any waiting task that it has completed.
             if let Err(e) = self.fork_choice() {
                 error!(
@@ -4434,9 +4452,6 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
                     );
                 }
             }
-
-            self.naive_aggregation_pool.write().prune(slot);
-            self.block_times_cache.write().prune(slot);
         }
     }
```
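The skip condition above is easy to misread, so here is a standalone sketch of the predicate, assuming `Slot` arithmetic reduces to `u64` (the `should_skip_fork_choice` helper is illustrative, not part of the codebase). Note that `map_or(true, ..)` means a node with no known head is also treated as syncing:

```rust
const MAX_PER_SLOT_FORK_CHOICE_DISTANCE: u64 = 4;

/// Illustrative helper: returns true when the heavy per-slot tasks (fork
/// choice) should be skipped because the node looks like it is syncing.
fn should_skip_fork_choice(head_slot: Option<u64>, current_slot: u64) -> bool {
    // No known head counts as syncing, hence `map_or(true, ..)`.
    head_slot.map_or(true, |head_slot| {
        head_slot + MAX_PER_SLOT_FORK_CHOICE_DISTANCE < current_slot
    })
}

fn main() {
    // Head exactly 4 slots behind: 100 + 4 < 104 is false, so fork choice
    // still runs.
    assert!(!should_skip_fork_choice(Some(100), 104));
    // Head 5 slots behind: considered syncing, skip the heavy tasks.
    assert!(should_skip_fork_choice(Some(100), 105));
    // No head at all: skip.
    assert!(should_skip_fork_choice(None, 105));
}
```

The pruning of the naive aggregation pool and block times cache is deliberately moved before the early return, so the light-weight maintenance still happens every slot even while syncing.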

Diff for: beacon_node/beacon_chain/src/migrate.rs

+37 -4

```diff
@@ -55,7 +55,13 @@ pub enum PruningOutcome {
     Successful {
         old_finalized_checkpoint: Checkpoint,
     },
-    DeferredConcurrentMutation,
+    /// The run was aborted because the new finalized checkpoint is older than the previous one.
+    OutOfOrderFinalization {
+        old_finalized_checkpoint: Checkpoint,
+        new_finalized_checkpoint: Checkpoint,
+    },
+    /// The run was aborted due to a concurrent mutation of the head tracker.
+    DeferredConcurrentHeadTrackerMutation,
 }
 
 /// Logic errors that can occur during pruning, none of these should ever happen.
@@ -68,6 +74,10 @@ pub enum PruningError {
     MissingInfoForCanonicalChain {
         slot: Slot,
     },
+    FinalizedStateOutOfOrder {
+        old_finalized_checkpoint: Checkpoint,
+        new_finalized_checkpoint: Checkpoint,
+    },
     UnexpectedEqualStateRoots,
     UnexpectedUnequalStateRoots,
 }
@@ -223,16 +233,29 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Hot, Cold>
             Ok(PruningOutcome::Successful {
                 old_finalized_checkpoint,
             }) => old_finalized_checkpoint,
-            Ok(PruningOutcome::DeferredConcurrentMutation) => {
+            Ok(PruningOutcome::DeferredConcurrentHeadTrackerMutation) => {
                 warn!(
                     log,
                     "Pruning deferred because of a concurrent mutation";
                     "message" => "this is expected only very rarely!"
                 );
                 return;
             }
+            Ok(PruningOutcome::OutOfOrderFinalization {
+                old_finalized_checkpoint,
+                new_finalized_checkpoint,
+            }) => {
+                warn!(
+                    log,
+                    "Ignoring out of order finalization request";
+                    "old_finalized_epoch" => old_finalized_checkpoint.epoch,
+                    "new_finalized_epoch" => new_finalized_checkpoint.epoch,
+                    "message" => "this is expected occasionally due to a (harmless) race condition"
+                );
+                return;
+            }
             Err(e) => {
-                warn!(log, "Block pruning failed"; "error" => format!("{:?}", e));
+                warn!(log, "Block pruning failed"; "error" => ?e);
                 return;
             }
         };
@@ -347,6 +370,16 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Hot, Cold>
                 .into());
             }
 
+            // The new finalized state must be newer than the previous finalized state.
+            // I think this can happen sometimes currently due to `fork_choice` running in parallel
+            // with itself and sending us notifications out of order.
+            if old_finalized_slot > new_finalized_slot {
+                return Ok(PruningOutcome::OutOfOrderFinalization {
+                    old_finalized_checkpoint,
+                    new_finalized_checkpoint,
+                });
+            }
+
             debug!(
                 log,
                 "Starting database pruning";
@@ -523,7 +556,7 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Hot, Cold>
         // later.
         for head_hash in &abandoned_heads {
             if !head_tracker_lock.contains_key(head_hash) {
-                return Ok(PruningOutcome::DeferredConcurrentMutation);
+                return Ok(PruningOutcome::DeferredConcurrentHeadTrackerMutation);
             }
         }
```
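The defensive check added above reduces to a simple monotonicity guard on the finalized slot. A minimal sketch, assuming checkpoints are reduced to plain `u64` slots (the `Outcome` enum and `check_monotonic` function are illustrative, not the crate's API), using the split slots from the log excerpt in the commit message:

```rust
/// Illustrative outcome type mirroring `PruningOutcome::OutOfOrderFinalization`.
#[derive(Debug, PartialEq)]
enum Outcome {
    Proceed,
    OutOfOrderFinalization { old_slot: u64, new_slot: u64 },
}

/// Abort instead of moving finalization backwards, which previously surfaced
/// as a `FreezeSlotError` when the migrator tried to lower the split slot.
fn check_monotonic(old_finalized_slot: u64, new_finalized_slot: u64) -> Outcome {
    if old_finalized_slot > new_finalized_slot {
        Outcome::OutOfOrderFinalization {
            old_slot: old_finalized_slot,
            new_slot: new_finalized_slot,
        }
    } else {
        Outcome::Proceed
    }
}

fn main() {
    // The failure from the commit message: split at 92608, proposed 92576.
    assert_eq!(
        check_monotonic(92608, 92576),
        Outcome::OutOfOrderFinalization { old_slot: 92608, new_slot: 92576 }
    );
    // Equal or advancing finalization proceeds as before.
    assert_eq!(check_monotonic(92608, 92608), Outcome::Proceed);
}
```

Because the out-of-order case is logged at `WARN` and treated as harmless, a late finalization notification is simply dropped rather than escalated to an error.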

Diff for: beacon_node/beacon_chain/src/state_advance_timer.rs

+10
```diff
@@ -37,6 +37,9 @@ use types::{AttestationShufflingId, EthSpec, Hash256, RelativeEpoch, Slot};
 /// for some period of time.
 const MAX_ADVANCE_DISTANCE: u64 = 4;
 
+/// Similarly for fork choice: avoid the fork choice lookahead during sync.
+const MAX_FORK_CHOICE_DISTANCE: u64 = 4;
+
 #[derive(Debug)]
 enum Error {
     BeaconChain(BeaconChainError),
@@ -212,6 +215,13 @@ async fn state_advance_timer<T: BeaconChainTypes>(
     let next_slot = current_slot + 1;
     executor.spawn_blocking(
         move || {
+            // Don't run fork choice during sync.
+            if beacon_chain.best_slot().map_or(true, |head_slot| {
+                head_slot + MAX_FORK_CHOICE_DISTANCE < current_slot
+            }) {
+                return;
+            }
+
             if let Err(e) = beacon_chain.fork_choice_at_slot(next_slot) {
                 warn!(
                     log,
```
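This mirrors the `per_slot_task` change: both call sites use the same head-more-than-four-slots-behind-the-wall-clock test, and placing the check inside the spawned blocking closure means a syncing node returns before `fork_choice_at_slot(next_slot)` is ever invoked. Note the distance is compared against `current_slot` even though fork choice is run for the upcoming `next_slot`, matching how `MAX_ADVANCE_DISTANCE` gates the state advance itself.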
