Genesis block hash mapping #279
Conversation
@@ -149,6 +154,22 @@ impl<B, I, C> BlockImport<B> for FrontierBlockImport<B, I, C> where
				}
			},
		}
		// On importing block 1 we also map the genesis block in the auxiliary.
This approach of doing the mapping has the one disadvantage that the node won't be able to answer queries about the genesis block (by hash) until it has imported block 1. That being noted, this approach is certainly good enough in practice, and is the only way I see to do it from the import pipeline.
One maybe workable alternative I see is putting the ethereum genesis block hash in the chain spec's extension field, and having the node write it to aux storage when it starts a new chain. This alternative path has some uncertainty and added complexity for very little practical payoff, so I support keeping it in the import pipeline.
This approach of doing the mapping has the one disadvantage that the node won't be able to answer queries about the genesis block (by hash) until it has imported block 1.
Yes, unfortunately that's the case.
One maybe workable alternative I see is putting the ethereum genesis block hash in the chain spec's extension field, and having the node write it to aux storage when it starts a new chain.
Seems a more natural alternative than the current proposal tbh, but as you said the immediate benefit we gain from it is almost nothing. Although I'd be very interested in how to use the extension field :)
		if block.header.number().clone() == One::one() {
			let id = BlockId::Number(Zero::zero());
			if let Ok(Some(header)) = client.header(id) {
				let block = self.client.runtime_api().current_block(&id)
I'm not too worried about this because it only happens once, or at most a small number of times.
But out of curiosity, will we be able to avoid the runtime api once #199 lands?
Yes, we can use the `StorageProvider` from the import pipeline.
@@ -149,6 +154,22 @@ impl<B, I, C> BlockImport<B> for FrontierBlockImport<B, I, C> where
				}
			},
		}
		// On importing block 1 we also map the genesis block in the auxiliary.
		if block.header.number().clone() == One::one() {
			let id = BlockId::Number(Zero::zero());
If there is a fork at height one, this could be triggered multiple times. I'm not too worried about the performance impact, but would we be overwriting aux data in that case?
Basically, is this logic sufficiently fork-friendly?
The auxiliary mapping for blocks is still one-to-many, so it will not overwrite.
When we switch to one-to-one (the switch is possible as of today, but requires a migration strategy), each substrate block will have a different calculated ethereum hash, so I'd say yes, it's fork-friendly.
One more thought about doing this from the import pipeline. We could avoid accessing storage altogether (either through the runtime API or the storage provider) by adding a second kind of digest other than
I like that idea way more! I will try it at some point this week. Thanks 💯
@sorpaas could you please review this? I wanted to know your opinion on Joshy's comment #279 (comment) and if you also agree that's a better approach I can take care of updating.
One issue is backwards compatibility. Chains that have already started won't have the
I think this approach is good enough for us right now. It also only modifies the auxiliary so we can easily change it to something else in the future.
This PR proposes a solution for #78, mapping Ethereum <> Substrate genesis block hashes on the 1st block import using the runtime api.
Currently we depend on `ConsensusLog` digest items to do the mapping on the import pipeline. For genesis this is a problem because the genesis block is not imported through the pipeline and its header is only reachable through `HeaderBackend`. So I ended up using the runtime api.