Conversation

Contributor

@drahnr drahnr commented Dec 4, 2025

Migration of #1158

Targets #1394

Outstanding work:

Split #1178 into a serialization piece and a usage piece; the serialization piece can replace the current naive serialization to a byte blob with a full PartialSmt protobuf representation


Scope

Implements the API to query partial storage maps based on a naive approach (collecting SmtProofs, one per leaf).

Out of scope

Any optimization as outlined in #1178; this is follow-up material.

@drahnr drahnr force-pushed the bernhard-partial-storage-map-queries branch 2 times, most recently from c5f05b6 to 5391d24 Compare December 4, 2025 23:03
@drahnr drahnr force-pushed the bernhard-partial-storage-map-queries branch 2 times, most recently from a994ba4 to 7ff90a7 Compare December 19, 2025 16:30
@drahnr drahnr marked this pull request as ready for review December 19, 2025 18:24
Comment on lines 224 to 227
// A flag that is set to `true` if the number of to-be-returned entries in the
// storage map would exceed a threshold. This indicates to the user that `SyncStorageMaps`
// endpoint should be used to get all storage map data.
bool too_many_entries = 2;
Collaborator


Should this not be an error instead? I think we do have specific error codes now as part of the RPC "spec".

Contributor Author


@bobbinth this came from the original API sketch. I agree with @Mirko-von-Leipzig if we have a size error and a pattern to use it, we should stick to it.

Comment on lines +550 to +554
storage_forest: &SmtForest,
smt_root: Word,
Collaborator


I'm sort of surprised there isn't an SmtTree<'a> which you get from SmtForest::open_tree(&self, root: Word) -> Option<SmtTree<'_>>

Contributor Author

@drahnr drahnr Dec 22, 2025


Even if there was one, I wouldn't try to use it, since it would become expensive once we move to LargeSmtForest, which would hit IO a lot for larger accounts.

@drahnr drahnr force-pushed the bernhard-partial-storage-map-queries branch from 268f34a to efaa685 Compare December 22, 2025 23:34
@drahnr drahnr changed the title feat: partial storage map queries feat: [2/3] partial storage map queries Dec 23, 2025
@bobbinth
Contributor

I'm a bit unclear about the purpose of this PR: it introduces some new code but this code doesn't seem to be used anywhere - but maybe I'm missing something?

It is also not clear to me if we need these changes now. Would it make sense to first finish the refactoring and then make these type improvements afterwards? Or does this help with the refactoring somehow?

@drahnr drahnr force-pushed the bernhard-partial-storage-map-queries branch from ad32795 to bd2709c Compare December 29, 2025 21:52
@drahnr drahnr changed the title feat: [2/3] partial storage map queries feat: [3/4] partial storage map queries Dec 30, 2025
@drahnr drahnr force-pushed the bernhard-partial-storage-map-queries branch from 5ed7cb2 to 199a5cb Compare December 30, 2025 01:21
Contributor

@bobbinth bobbinth left a comment


Thank you! Looks good. Not a full review, but I left some questions/comments inline.

Comment on lines +1058 to +1060
// Load storage header from DB (map entries come from forest)
let storage_header =
self.db.select_account_storage_header_at_block(account_id, block_num).await?;
Contributor


select_account_header_at_block() on line 1032 above should bring back the storage header as well now, right? If so, we can avoid going to the DB for the storage header again.

Comment on lines +90 to +102
/// Returns the storage forest and the root for a specific account storage slot at a block.
///
/// This allows callers to query specific keys from the storage map using `SmtForest::open()`.
/// Returns `None` if no storage root is tracked for this account/slot/block combination.
pub(crate) fn storage_map_forest_with_root(
&self,
account_id: AccountId,
slot_name: &StorageSlotName,
block_num: BlockNumber,
) -> Option<(&SmtForest, Word)> {
let root = self.storage_roots.get(&(account_id, slot_name.clone(), block_num))?;
Some((&self.forest, *root))
}
Contributor


My understanding is that we use this to get SmtProofs from the forest later on, right? If so, this approach feels backwards to me. Is there a reason not to return all the proofs from here? For example, could this method be something like:

pub fn open_storage_map(
    &self,
    account_id: AccountId,
    slot_name: &StorageSlotName,
    block_num: BlockNumber,
    keys: Vec<Word>,
) -> Vec<SmtProof> {
    ...
}

Comment on lines +104 to +115
/// Returns all key-value entries for a specific account storage slot at a block.
///
/// Returns `None` if no entries are tracked for this account/slot/block combination.
pub(crate) fn storage_map_entries(
&self,
account_id: AccountId,
slot_name: &StorageSlotName,
block_num: BlockNumber,
) -> Option<Vec<(Word, Word)>> {
let entries = self.storage_entries.get(&(account_id, slot_name.clone(), block_num))?;
Some(entries.iter().map(|(k, v)| (*k, *v)).collect())
}
Contributor


Is this a temporary solution? I don't think we can put all entries for all storage maps into memory - how are you thinking of handling this in the future?

If we are able to provide this functionality in a sustainable way - we probably don't need database table to keep historical data - and this would simplify things quite a bit.
