Remove INITIALIZE and MARK_AGGREGATED from ReportsProcessed #415
Conversation
Force-pushed from 1bc774f to f2d01de
Looks reasonable enough given the transitional phase we're in. I've left some suggestions and pointed out places that could use more clarification.
daphne/src/lib.rs (Outdated)
@@ -484,7 +485,7 @@ impl DapTaskConfig {
         &self,
         part_batch_sel: &'sel PartialBatchSelector,
         consumed_reports: impl Iterator<Item = &'rep EarlyReportStateConsumed<'rep>>,
-    ) -> Result<HashMap<DapBatchBucket, Vec<&'rep EarlyReportStateConsumed<'rep>>>, DapError> {
+    ) -> Result<HashMap<DapBatchBucket, Vec<ReportMetadata>>, DapError> {
Does the caller actually need `ReportMetadata`, or would `(ReportId, Time)` suffice? If so, consider returning a `DapAggregateSpan<()>` so that we can start making sense of these types.
Passing around metadata might be kind of expensive. Note that the "extensions" field of report metadata could be fairly big, say a few hundred bytes. However, we only really need this for draft-02 (extensions have been moved out of the shared report metadata in draft-07).
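To make the cost concern concrete, here is a minimal sketch of the two return shapes, using simplified stand-in types. The real `ReportMetadata`, `ReportId`, `Time`, and `DapBatchBucket` live in daphne and carry more detail; the function names below are purely illustrative.

```rust
use std::collections::HashMap;

// Simplified stand-ins for daphne's types (illustrative only).
#[derive(Clone)]
pub struct ReportId(pub [u8; 16]);
pub type Time = u64;

#[derive(Clone)]
pub struct ReportMetadata {
    pub id: ReportId,
    pub time: Time,
    // In draft-02 this can be a few hundred bytes per report.
    pub extensions: Vec<u8>,
}

#[derive(Clone, Copy, Hash, PartialEq, Eq)]
pub struct DapBatchBucket(pub u64);

// Current shape: every bucketed entry carries the full metadata,
// including the draft-02 extensions blob.
pub fn bucket_by_metadata(
    reports: impl Iterator<Item = (DapBatchBucket, ReportMetadata)>,
) -> HashMap<DapBatchBucket, Vec<ReportMetadata>> {
    let mut out: HashMap<DapBatchBucket, Vec<ReportMetadata>> = HashMap::new();
    for (bucket, metadata) in reports {
        out.entry(bucket).or_default().push(metadata);
    }
    out
}

// Suggested shape: only the (ReportId, Time) pair per report, which keeps
// the per-report cost fixed regardless of extension size.
pub fn bucket_by_id_and_time(
    reports: impl Iterator<Item = (DapBatchBucket, ReportMetadata)>,
) -> HashMap<DapBatchBucket, Vec<(ReportId, Time)>> {
    let mut out: HashMap<DapBatchBucket, Vec<(ReportId, Time)>> = HashMap::new();
    for (bucket, metadata) in reports {
        out.entry(bucket).or_default().push((metadata.id, metadata.time));
    }
    out
}
```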
@@ -185,6 +186,17 @@ impl<'req> EarlyReportStateConsumed<'req> {
             input_share: input_share.payload,
         })
     }
Add a doc comment.
            Self::Ready { metadata, .. } => metadata,
            Self::Rejected { metadata, .. } => metadata,
        };
        EarlyReportStateInitialized::Rejected { metadata, failure }
If `self` was `Self::Rejected`, then we'll end up overwriting the `failure` here. Is this intended behavior? If so, we should document why this is the behavior we want.
My gut feeling is that we probably don't want to override the failure reason, but I could be wrong.
I think we should override, per the principle of least surprise: if I pass a parameter, I expect it to be used.
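To make the two options concrete, here is a minimal sketch using a simplified stand-in for `EarlyReportStateInitialized`; the type, its fields, and the method names below are illustrative rather than daphne's exact API.

```rust
// Simplified stand-ins (illustrative only).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum TransitionFailure {
    ReportReplayed,
    BatchCollected,
}

pub enum ReportState {
    Ready { metadata: u64 },
    Rejected { metadata: u64, failure: TransitionFailure },
}

impl ReportState {
    // Option A (what the PR currently does): always use the caller-supplied
    // `failure`, overwriting any earlier rejection reason.
    pub fn into_rejected_overwrite(self, failure: TransitionFailure) -> Self {
        let metadata = match self {
            Self::Ready { metadata } | Self::Rejected { metadata, .. } => metadata,
        };
        Self::Rejected { metadata, failure }
    }

    // Option B: keep the first rejection reason if the report was already
    // rejected, and only reject reports that are still ready.
    pub fn into_rejected_keep_first(self, failure: TransitionFailure) -> Self {
        match self {
            Self::Ready { metadata } => Self::Rejected { metadata, failure },
            already_rejected @ Self::Rejected { .. } => already_rejected,
        }
    }
}
```

Whichever option is chosen, a doc comment stating the behavior would resolve the ambiguity raised above.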
@@ -307,6 +319,18 @@ impl<'req> EarlyReportStateInitialized<'req> {
         };
         Ok(early_report_state_initialized)
     }

+    /// Turn this report into a rejected report, using `failure` as the reason for its rejection.
Here again: just want to double-check that overwriting the failure reason when the report was already rejected is intended.
            .collect::<Result<Vec<_>, DapError>>()
            .map_err(|e| fatal_error!(err = ?e, "failed to initialize a report"))?;

        let replayed_reports_check = futures::stream::iter(reports_processed_request_data)
Since replay protection has now moved to AggregateStore, I don't think there's any sense in doing replay protection here as well. We will eventually delete this code, so is there any reason not to do so now?
These are different replay-protection steps. The first one checks whether the reports have been collected (that is, whether they have already made it to the last stage of the protocol); the second checks whether they have already made it to this very step of the protocol.
As such, I think this check is needed: a report can be a replay before it has ever been collected.
Good catch. I suppose this will get simpler when we move the collected check to AggregateStore as well?
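To spell out the distinction, here is a minimal sketch of the two checks, with hypothetical in-memory stand-ins for AggregateStore (the collected check) and ReportsProcessed (the per-step replay check); the names and signatures below are illustrative, not the actual storage APIs.

```rust
use std::collections::HashSet;

// Simplified stand-ins (illustrative only).
#[derive(Clone, Hash, PartialEq, Eq)]
pub struct ReportId(pub [u8; 16]);

#[derive(Default)]
pub struct AggregateStore {
    // Reports whose batch has already been collected, i.e. reports that
    // made it to the last stage of the protocol.
    collected: HashSet<ReportId>,
}

#[derive(Default)]
pub struct ReportsProcessed {
    // Reports that have already passed the initialization step.
    initialized: HashSet<ReportId>,
}

pub enum Rejection {
    BatchCollected,
    ReportReplayed,
}

pub fn check_report(
    id: &ReportId,
    agg_store: &AggregateStore,
    processed: &mut ReportsProcessed,
) -> Result<(), Rejection> {
    // Check 1: has this report's batch already been collected?
    if agg_store.collected.contains(id) {
        return Err(Rejection::BatchCollected);
    }
    // Check 2: has this exact report already been through initialization?
    // A report can be a replay long before its batch is collected, which is
    // why this check is still needed alongside the collected check.
    if !processed.initialized.insert(id.clone()) {
        return Err(Rejection::ReportReplayed);
    }
    Ok(())
}
```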
Force-pushed from f2d01de to 85c9c24
The initial idea was to remove ReportsProcessed entirely, but it's still needed in order to detect replays in the initialization step.