
feat: user batch support #1846

Open
Mirko-von-Leipzig wants to merge 8 commits into mirko/mempool-tx-reverting from mirko/mempool-user-batches

Conversation


Mirko-von-Leipzig (Collaborator) commented Mar 26, 2026

This PR is the third and final part of the mempool refactoring PR stack. Part 1 (#1820) performs the broad mempool refactoring to simplify this PR. Builds on part 2 (#1832).

Batch submissions must include their transaction inputs, since we currently require these for the validator to verify the batch before inclusion in a block. This PR takes advantage of that by treating the batch as a set of normal transactions at the mempool level. This simplifies the mempool implementation, which is currently built around a DAG of transactions -- inserting a batch directly would be more complex. This will need to change once we stop requiring transaction inputs in the validator, but it won't be too bad.

The way this is implemented here is that the transaction DAG tracks user batches and ensures that, when a batch is selected, transactions from user batches are not mixed with conventional transactions. That is, select_batch outputs either a user batch or a conventional batch.

Effectively, the transaction DAG internally ensures that a user batch's transactions remain coherent even though the batch has been deconstructed into individual transactions. The benefit is that this doesn't require any major structural changes to the mempool. The rest of the mempool then treats the user batch as normal.
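To make the selection behavior concrete, here is a minimal standalone sketch of the "either a user batch or a conventional batch" contract described above. The type and method names mirror those mentioned in the PR (`SelectedBatch`, `select_batch`, `BatchBudget`), but the struct layouts and bodies are illustrative assumptions, not the PR's actual implementation.

```rust
/// A batch selected from the mempool: either a complete user-submitted batch,
/// whose transactions must stay together, or a conventional batch assembled
/// from individual transactions under a budget.
#[derive(Debug, PartialEq)]
enum SelectedBatch {
    User(Vec<u64>),         // transaction ids of an intact user batch
    Conventional(Vec<u64>), // ids picked greedily under the budget
}

struct BatchBudget {
    max_txs: usize,
}

struct Mempool {
    user_batches: Vec<Vec<u64>>, // user batches kept coherent
    unbatched: Vec<u64>,         // conventional transactions
}

impl Mempool {
    /// User batches take priority; otherwise a conventional batch is
    /// assembled under the budget. The budget only constrains the
    /// conventional path, since a user batch is returned whole.
    fn select_batch(&mut self, budget: BatchBudget) -> Option<SelectedBatch> {
        self.select_user_batch()
            .or_else(|| self.select_conventional_batch(budget))
    }

    fn select_user_batch(&mut self) -> Option<SelectedBatch> {
        self.user_batches.pop().map(SelectedBatch::User)
    }

    fn select_conventional_batch(&mut self, budget: BatchBudget) -> Option<SelectedBatch> {
        if self.unbatched.is_empty() {
            return None;
        }
        let take = budget.max_txs.min(self.unbatched.len());
        Some(SelectedBatch::Conventional(self.unbatched.drain(..take).collect()))
    }
}
```

The key property is that the two batch kinds never mix: a single call returns transactions from exactly one source.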

Closes #1112

Mirko-von-Leipzig force-pushed the mirko/mempool-user-batches branch from bf86aec to 51e74e4 Compare March 26, 2026 16:29
Mirko-von-Leipzig force-pushed the mirko/mempool-user-batches branch from 51e74e4 to 6dd5f53 Compare March 26, 2026 16:31
// Encoded using [winter_utils::Serializable] implementation for
// [miden_protocol::transaction::proven_tx::ProvenTransaction].
bytes encoded = 1;
message TransactionBatch {
Collaborator Author

I think the suggestion here was to re-add the proven_batch property and make the others optional, so we can drop them at some point.

Mirko-von-Leipzig marked this pull request as ready for review March 26, 2026 16:34
Collaborator Author

@PhilippGackstatter could you cast an eye over the process here to ensure I'm checking the correct things?

The state itself is checked in the mempool, so here we really just want to ensure that the batch and its transactions are valid and the reference block is correct, if I understand correctly.

Comment on lines +440 to +445
let reference_commitment: Word = reference_header
.chain_commitment
.expect("store should always fill block header")
.try_into()
.expect("store Word should be okay");
if reference_commitment != proof.reference_block_commitment() {
Collaborator Author

Is this correct?

.into_iter()
.map(|tx| tx.id())
.collect();
let x = self.inner.revert_node_and_descendants(transaction);
Collaborator

Suggested change
let x = self.inner.revert_node_and_descendants(transaction);
let x = self.inner.revert_node_and_descendants(revert);

Should this be revert instead of transaction?

Collaborator Author

Yes, thank you!

&mut self,
txs: &[Arc<AuthenticatedTransaction>],
) -> Result<BlockNumber, MempoolSubmissionError> {
assert!(!txs.is_empty(), "Cannot have a batch with no transactions");
Collaborator

Just checking that we want to crash here instead of returning an error.

Collaborator Author

I assumed that one cannot build a ProvenBatch without any transactions, so this would indicate an internal bug somewhere. But maybe that's a poor assumption.
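If the empty-batch case should be treated as a recoverable error rather than an internal invariant, the guard could return early instead of asserting. This is a hypothetical sketch: the `EmptyBatch` variant and the standalone helper are inventions for illustration, not part of the PR's `MempoolSubmissionError`.

```rust
/// Hypothetical error type; `EmptyBatch` is an assumed variant, not one
/// that exists in the PR.
#[derive(Debug, PartialEq)]
enum MempoolSubmissionError {
    EmptyBatch,
}

/// Alternative to `assert!(!txs.is_empty(), ...)`: surface the condition
/// as an error the caller can handle instead of crashing the node.
fn check_batch_nonempty(txs: &[u64]) -> Result<(), MempoolSubmissionError> {
    if txs.is_empty() {
        return Err(MempoolSubmissionError::EmptyBatch);
    }
    Ok(())
}
```

The trade-off is the usual one: an assert documents "this cannot happen by construction", while an error variant is safer if the invariant is only enforced by convention.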

}

pub fn select_batch(&mut self, budget: BatchBudget) -> Option<SelectedBatch> {
self.select_user_batch().or_else(|| self.select_conventional_batch(budget))
Collaborator

Might want some doc comments to make it clear that budget is intended to be relevant only for conventional batches.

Collaborator

Also are we OK with user batches always taking priority over conventional here?

Collaborator

Also wondering if we need to prevent user batches of size 1 (or some other limit). Unsure if that is relevant to this PR just a general thought.

Collaborator Author

Also are we OK with user batches always taking priority over conventional here?

I'm unsure, but at the moment it doesn't matter much. If it's a concern we can make it random -- I was thinking maybe that's best.

Also wondering if we need to prevent user batches of size 1 (or some other limit). Unsure if that is relevant to this PR just a general thought.

Good question. I'm unsure 😬 I wonder if that makes some user workflows more difficult, i.e. they always submit user batches, but sometimes they don't have many transactions to bundle.

Probably we would want some limit even in the future? cc @bobbinth
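The randomized alternative floated above (so user batches don't always win) could look something like the following dependency-free sketch. The `Selector` type and the xorshift generator are illustrative assumptions; a real implementation would more likely reach for the `rand` crate.

```rust
/// Hypothetical selector that flips a coin per selection instead of always
/// prioritizing user batches.
struct Selector {
    state: u64, // xorshift64 state; must be seeded nonzero
}

impl Selector {
    /// Tiny xorshift64 PRNG: adequate for load-spreading between batch
    /// kinds, not for anything security-sensitive.
    fn coin_flip(&mut self) -> bool {
        self.state ^= self.state << 13;
        self.state ^= self.state >> 7;
        self.state ^= self.state << 17;
        self.state & 1 == 0
    }

    /// On heads try the user batch first, on tails the conventional batch
    /// first; either way fall back to the other kind so no work is dropped.
    fn pick<T>(&mut self, user: Option<T>, conventional: Option<T>) -> Option<T> {
        if self.coin_flip() {
            user.or(conventional)
        } else {
            conventional.or(user)
        }
    }
}
```

Note this sketch evaluates both candidates eagerly; wiring it into `select_batch` would want lazy closures so the losing path isn't assembled for nothing.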

) -> Result<BlockNumber, MempoolSubmissionError> {
assert!(!txs.is_empty(), "Cannot have a batch with no transactions");

if self.unbatched_transactions_count() + txs.len() >= self.config.tx_capacity.get() {
Collaborator

nit: we reject if we are at capacity, rather than over capacity
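To spell out the nit: with `>=`, a batch that would exactly fill the remaining capacity is rejected. A standalone sketch of the two conditions (hypothetical helper names, with the counts pulled out as plain parameters):

```rust
/// Current behavior: rejects even when the batch would exactly reach capacity.
fn rejects_at_capacity(unbatched: usize, incoming: usize, capacity: usize) -> bool {
    unbatched + incoming >= capacity
}

/// Suggested behavior: allows filling up to capacity exactly, rejects only
/// when the batch would exceed it.
fn rejects_over_capacity(unbatched: usize, incoming: usize, capacity: usize) -> bool {
    unbatched + incoming > capacity
}
```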

.store
.get_tx_inputs(tx)
.await
.map_err(MempoolSubmissionError::StoreConnectionFailed)?;
Collaborator

Would it be worth doing these queries concurrently with something like futures::future::try_join_all?

Collaborator

Or a batch endpoint. Probably not worth it based on the number of txns in a batch?

Collaborator Author

Good point, tbh I didn't even consider it. I think simple is better for now; there's also the thought that this makes it equivalent to normal transactions, i.e. submitting a batch of N transactions isn't unfairly faster, and doesn't consume more resources, than submitting N single ones.

I don't feel strongly about this.
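For reference, the fan-out shape the reviewer suggests would use `futures::future::try_join_all` over the async store calls. Since that needs an async runtime, this dependency-free sketch shows the same fan-out/first-error shape with threads; `get_tx_inputs` here is a stand-in for the store call, and unlike `try_join_all` this version lets all lookups run to completion before reporting the first error.

```rust
use std::thread;

/// Stand-in for the async store lookup; id 0 simulates a store failure.
fn get_tx_inputs(tx: u64) -> Result<u64, String> {
    if tx == 0 {
        Err("store connection failed".into())
    } else {
        Ok(tx * 10)
    }
}

/// Fetch inputs for all transactions in parallel, returning the first
/// error if any lookup fails (the `try_join_all` result shape).
fn fetch_all_inputs(txs: &[u64]) -> Result<Vec<u64>, String> {
    let handles: Vec<_> = txs
        .iter()
        .copied()
        .map(|tx| thread::spawn(move || get_tx_inputs(tx)))
        .collect();
    handles
        .into_iter()
        .map(|h| h.join().expect("worker panicked"))
        .collect()
}
```

As the thread concludes, whether the concurrency is worth it depends on the typical number of transactions per batch; sequential lookups also keep a batch of N on the same resource footing as N individual submissions.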
