pbss implementation #161
base: main
Conversation
core/state/state_object.go
Outdated
```go
addrHash := s.addrHash
if db.TrieDB().IsMorphZk() {
	addr_s, _ := zkt.ToSecureKey(s.address.Bytes())
	addrHash = common.BigToHash(addr_s)
```
If `addrHash` is only used for storage, use `s.addrHash` directly without reassigning.
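For illustration only, the check could live in a small helper so the call site never reassigns the hash; this is a hedged sketch reusing the names visible in the diff (`IsMorphZk`, `zkt.ToSecureKey`, `s.addrHash`), which may not match the surrounding code exactly:

```go
// storageTrieKey picks the hash to open the storage trie with, without mutating
// a shared addrHash variable. Sketch only; field and method names are assumptions
// taken from the diff context.
func (s *stateObject) storageTrieKey(db Database) common.Hash {
	if db.TrieDB().IsMorphZk() {
		if secure, err := zkt.ToSecureKey(s.address.Bytes()); err == nil {
			return common.BigToHash(secure)
		}
	}
	return s.addrHash
}
```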
Walkthrough

The pull request introduces a wide range of updates across the repository. A new GitHub Actions workflow automates Docker image builds and pushes upon versioned tag pushes. Multiple new build targets are added for EC2 artifact generation and upload. Several CLI commands and flags are enhanced within the geth commands, and significant modifications improve trie state management, database access, and proof verification. In addition, extensive changes in Ethereum's EVM tracing, logging, and configuration further refine state initialization and diagnostic capabilities across various components.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant GitHub
    participant ActionRunner
    participant DockerRegistry
    GitHub->>ActionRunner: Push tag (morph-v*)
    ActionRunner->>Repo: Checkout code
    ActionRunner->>Docker: Build Docker image (tag: go-ethereum)
    ActionRunner->>GitHubPackages: Login using secrets.PACKAGE_TOKEN
    ActionRunner->>DockerRegistry: Push image with version and latest tags
```
```mermaid
sequenceDiagram
    participant User
    participant CLI
    participant GethInitializer
    participant Database
    User->>CLI: Run init command with --datadir and --state-scheme flags
    CLI->>GethInitializer: Pass global flag for genesisPath
    GethInitializer->>Database: Initialize state using provided configuration
    Database-->>GethInitializer: Return genesis state / error if missing
    GethInitializer->>CLI: Output initialization result
```
Actionable comments posted: 12
🔭 Outside diff range comments (1)
triedb/pathdb/difflayer.go (1)
180-188: ⚠️ Potential issue: Reset is destructive.
This approach simply reinitializes `dl.nodes`. If partial data was needed, it's lost. Confirm that use case is correct.
🧹 Nitpick comments (89)
go.mod (1)

130-130: Add Replace Directive for ZKTrie Dependency.
A new replace directive has been introduced for the ZKTrie dependency:

```
replace github.com/scroll-tech/zktrie v0.8.4 => github.com/morph-l2/zktrie v0.8.5-alpha
```

This change redirects to an alpha release from a different repository. Given that alpha versions can be unstable, please ensure:

- Thorough testing of any trie-related functionality (e.g., in `path_zk_trie.go` and `zk_trie_database.go`).
- That this dependency change is aligned with your new feature requirements.

If possible, consider additional safeguards or version pinning to mitigate risks associated with pre-release software.

common/types.go (2)
450-487: Consider a more concise implementation for bit reversal.
While the explicit switch-case approach ensures correct mapping for each nibble value, it's verbose and could be replaced with a more compact bitwise operation. Additionally, the function lacks commentary explaining why this approach was chosen over a mathematical implementation.

A more concise implementation could use bitwise operations:

```diff
-func bitReverseForNibble(b byte) byte {
-	switch b {
-	case 0:
-		return 0
-	case 1:
-		return 8
-	case 2:
-		return 4
-	case 3:
-		return 12
-	case 4:
-		return 2
-	case 5:
-		return 10
-	case 6:
-		return 6
-	case 7:
-		return 14
-	case 8:
-		return 1
-	case 9:
-		return 9
-	case 10:
-		return 5
-	case 11:
-		return 13
-	case 12:
-		return 3
-	case 13:
-		return 11
-	case 14:
-		return 7
-	case 15:
-		return 15
-	default:
-		panic("unexpected input")
-	}
+// bitReverseForNibble returns the bit-reversed value of a nibble (4 bits).
+// For example, input 0001 (1) becomes 1000 (8).
+func bitReverseForNibble(b byte) byte {
+	if b > 15 {
+		panic("unexpected input: nibble value must be 0-15")
+	}
+	return ((b&1)<<3 | (b&2)<<1 | (b&4)>>1 | (b&8)>>3)
 }
```

However, if the current implementation was chosen for performance reasons after benchmarking, please add a comment explaining this decision.
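If the switch table stays for performance, a tiny table-driven test can pin both forms against each other across all sixteen nibbles; this is a hedged sketch for the existing test package (the test name is new, everything else comes from the suggestion above):

```go
func TestBitReverseForNibbleMatchesBitwise(t *testing.T) {
	for b := byte(0); b < 16; b++ {
		want := bitReverseForNibble(b)                   // existing switch-based table
		got := (b&1)<<3 | (b&2)<<1 | (b&4)>>1 | (b&8)>>3 // proposed bitwise form
		if got != want {
			t.Fatalf("nibble %#x: bitwise %#x != table %#x", b, got, want)
		}
	}
}
```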
489-499: BitReverse function needs additional documentation.
The `BitReverse` function would benefit from a documentation comment explaining its purpose, the expected input format, and how it differs from `ReverseBytes`. Additionally, there's an opportunity to optimize by pre-allocating the exact output size.

```diff
+// BitReverse performs a bit-level reversal of a byte slice, returning a new slice
+// with bytes in reverse order and bits within each byte reversed.
+// This is useful for certain cryptographic operations in zkTrie implementations.
 func BitReverse(inp []byte) (out []byte) {
 	l := len(inp)
 	out = make([]byte, l)
 	for i, b := range inp {
 		out[l-i-1] = bitReverseForNibble(b&15)<<4 + bitReverseForNibble(b>>4)
 	}
 	return
 }
```

core/rawdb/schema.go (1)
349-356: Function for compacting storage trie node keys lacks documentation.
The `CompactStorageTrieNodeKey` function appears to optimize storage trie node keys by hashing them in certain conditions, but lacks documentation explaining its purpose, when it should be used, and the rationale behind the implementation.

```diff
+// CompactStorageTrieNodeKey creates a more efficient key for storage trie nodes by hashing
+// the original key when it has the storage prefix and is longer than 33 bytes.
+// This reduces key size while maintaining uniqueness, improving database efficiency.
 func CompactStorageTrieNodeKey(key []byte) []byte {
 	if key[0] == TrieNodeStoragePrefix[0] && len(key) > 33 {
 		h := crypto.Keccak256Hash(key[1:]).Bytes()
 		newKey := append(TrieNodeStoragePrefix, h...)
 		return newKey
 	}
 	return key
 }
```

eth/tracers/native/prestate.go (1)
18-18: Ensure the generation tool is documented.
The `//go:generate` directive will require the `gencodec` tool to be installed. It may be helpful to document any prerequisites or steps for properly regenerating the code.

triedb/types/database_types.go (1)
33-40: Shallow copy approach for `KvMap`.
Using `Put` in a loop is a clean way to ensure each key-value is rehashed consistently, though keep in mind the hash overhead for very large maps.

triedb/hashdb/zk_trie_database.go (5)
18-31: Validate configuration completeness and boundaries.
Currently, `Cache` can be zero or an arbitrary integer, but there is no explicit validation or boundary check (e.g., negative values). A misconfigured cache size might cause unexpected memory usage or silently disable the clean cache.

43-62: Return an error when diskdb is nil.
`NewZkDatabaseWithConfig` silently accepts a nil diskdb, which could lead to nil-pointer panics later. Explicitly returning an error can prevent cryptic runtime failures.

```diff
 func NewZkDatabaseWithConfig(diskdb ethdb.KeyValueStore, config *Config) *ZktrieDatabase {
+	if diskdb == nil {
+		log.Error("diskdb is nil in NewZkDatabaseWithConfig")
+		return nil
+	}
 	if config == nil {
 		config = Defaults
 	}
```

77-104: Consider partial or batched commits for large dirty sets.
`CommitState` calls `commitAllDirties()`, which locks all dirties for the entire write. For very large sets, this can block other operations. Consider segmenting large commits or reducing lock contention.

126-133: Implement or remove the Node() method.
`Node` currently panics with "ZktrieDatabase not implement Node()". This might cause a runtime failure if invoked. If not intended, consider removing it or adding logic to retrieve node data. Want me to stub or fully implement it?

182-220: Consider adding rate limiting or concurrency control to the periodic cache save.
`SaveCachePeriodically` continuously triggers `saveCache` at `interval`s. If the environment is resource-constrained or if writing is slow, consider adding concurrency checks or deferring if a previous write is still in progress.
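For illustration, one lightweight way to defer a save while a previous write is still in flight is an atomic guard around the periodic trigger; this is a sketch with assumed names (`save` stands in for `saveCache`), not the PR's actual API:

```go
// saveLoop runs save() on every tick but skips a tick when the previous save has
// not finished yet, so slow writes never stack up.
func saveLoop(interval time.Duration, stop <-chan struct{}, save func()) {
	var inFlight atomic.Bool
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			if !inFlight.CompareAndSwap(false, true) {
				continue // previous save still running; defer to the next tick
			}
			go func() {
				defer inFlight.Store(false)
				save()
			}()
		case <-stop:
			return
		}
	}
}
```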
cmd/geth/main.go (1)

161-162: Document newly added flags.
`StateSchemeFlag` and `PathDBSyncFlag` introduce additional CLI parameters. Provide descriptions or usage examples in the help text to clarify their purpose and valid values.

cmd/geth/snapshot.go (1)

226-228: Downgrade or handle the pruning mismatch more gracefully.
This code abruptly aborts with a critical log. Consider warning or returning a standard error instead, so the user has the option to proceed or correct the configuration.

```diff
-if rawdb.ReadStateScheme(chaindb) != rawdb.HashScheme {
-	log.Crit("Offline pruning is not required for path scheme")
+if scheme := rawdb.ReadStateScheme(chaindb); scheme != rawdb.HashScheme {
+	log.Error("Offline pruning is not required for non-Hash scheme", "foundScheme", scheme)
+	return fmt.Errorf("pruning is incompatible with scheme: %v", scheme)
 }
```

core/vm/logger_json.go (1)
102-104: Implement transaction lifecycle tracing methods.
The implementation adds empty methods for capturing transaction start and end events, which appear to be part of a transaction tracing interface implementation. Note that there's a receiver variable naming inconsistency: the new methods use `t` while existing methods use `l`.

Consider using consistent receiver variable naming throughout the file to improve readability:

```diff
-func (t *JSONLogger) CaptureTxStart(gasLimit uint64) {}
-func (t *JSONLogger) CaptureTxEnd(restGas uint64) {}
+func (l *JSONLogger) CaptureTxStart(gasLimit uint64) {}
+func (l *JSONLogger) CaptureTxEnd(restGas uint64) {}
```

.github/workflows/docker_release.yml (5)
5-6: Fix trailing whitespace.
There's a trailing space on line 5 that should be removed.

```diff
-  tags: 
+  tags:
```

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 5-5: trailing spaces (trailing-spaces)

17-19: Fix indentation and trailing whitespace.
There are indentation inconsistencies and trailing whitespace in the checkout step.

```diff
-    - uses: actions/checkout@v4
-
+      - uses: actions/checkout@v4
```

🧰 Tools
🪛 YAMLlint (1.35.1)
[warning] 18-18: wrong indentation: expected 6 but found 4 (indentation)
[error] 19-19: trailing spaces (trailing-spaces)

23-24: Consider using the built-in GitHub token for registry authentication.
Instead of using a custom `PACKAGE_TOKEN`, consider using the built-in `GITHUB_TOKEN`, which has permissions to access the GitHub Container Registry. This reduces the need for maintaining separate secrets.

```diff
-      run: echo "${{ secrets.PACKAGE_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
+      run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
```

26-41: Consider adding error handling for Docker operations.
The script doesn't check if Docker operations succeed before moving on to subsequent steps. Consider adding error checking to make the workflow more robust.

```diff
       run: |
+        set -e
         IMAGE_ID=ghcr.io/${{ github.repository }}
         # Change all uppercase to lowercase
         IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')
         # Strip git ref prefix from version
         VERSION=$(echo "${{ github.ref }}" | sed -e 's,.*/\(.*\),\1,')
         # Strip "morph-v" prefix from tag name
         [[ "${{ github.ref }}" == "refs/tags/"* ]] && VERSION=$(echo $VERSION | sed -e 's/^morph-v//')
         echo IMAGE_ID=$IMAGE_ID
         echo VERSION=$VERSION
         docker tag $IMAGE_NAME $IMAGE_ID:$VERSION
         docker tag $IMAGE_NAME $IMAGE_ID:latest
         docker push $IMAGE_ID:$VERSION
         docker push $IMAGE_ID:latest
```

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 29-29: trailing spaces (trailing-spaces)

29-29: Remove trailing whitespace.
There's trailing whitespace on line 29 that should be removed.

```diff
-          IMAGE_ID=ghcr.io/${{ github.repository }} 
+          IMAGE_ID=ghcr.io/${{ github.repository }}
```

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 29-29: trailing spaces (trailing-spaces)
core/blockchain_l2.go (1)

110-127: Consider error propagation from state commit.
The `CommitState` method may return an error, but it's not clear if this error is properly handled or propagated back to the caller. Ensure that errors from state commit operations are appropriately handled.

```diff
 if bc.cacheConfig.TrieDirtyDisabled {
 	if triedb.Scheme() == rawdb.PathScheme {
 		// If node is running in path mode, skip explicit gc operation
 		// which is unnecessary in this mode.
-		return triedb.CommitState(root, origin, current, false)
+		if err := triedb.CommitState(root, origin, current, false); err != nil {
+			return err
+		}
+		return nil
 	}
 	return triedb.Commit(root, false, nil)
 }
```

MakefileEc2.mk (2)
24-31: Consider centralizing shared build logic.
All these build steps are repeated for different environments (holesky, qanet, etc.). You could centralize common tasks (directory creation, building `geth`, creating tarballs) into a single Makefile function or pattern rule to avoid duplication.

40-47: Echo statement consistency.
Most build targets don't have an echo statement indicating completion. For consistency and clarity, consider printing a success message uniformly across all targets.

eth/tracers/native/call.go (2)
76-96: Centralized output handling.
`processOutput` neatly centralizes error, revert reason, and output handling. Consider extracting revert-reason-specific logic into a separate helper method for clarity, especially if new revert details are introduced in the future.

158-205: CaptureState enhancements.
Selective logging, memory copying, stack topic retrieval, and error checks look solid. Keep an eye on performance if contracts emit extremely large logs.

core/rawdb/accessors_trie.go (2)
110-135: Handle partial or inconsistent databases with warning logs.
`ReadStateScheme` attempts multiple checks to conclude the scheme. If these checks fail (e.g., missing root node or inconsistent DB entries), the function returns an empty string without logging. Consider adding a warning log or comment to highlight possible database corruption or partial state scenarios.

146-181: Enhance mismatch error details.
When a user-provided scheme is incompatible with the stored one, the error message is concise but might benefit from additional detail about potential resolutions (e.g., re-initializing or forcing archives). Adding a short hint can ease debugging.
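As an illustration of the kind of hint meant here, the mismatch error could name both schemes and a way out; the function and parameter names below are hypothetical, not the PR's identifiers:

```go
// schemeMismatchError builds a descriptive error when the requested state scheme
// conflicts with the one already stored on disk.
func schemeMismatchError(requested, stored string) error {
	return fmt.Errorf("state scheme %q is incompatible with the stored scheme %q; "+
		"rerun with the stored scheme or re-initialize the database", requested, stored)
}
```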
core/blockchain.go (1)
840-884: Refine log level on journal failures.
In the `Stop()` method, if journaling in-memory trie nodes fails for the path-based scheme, the code logs an informational message (`log.Info`). Consider raising this to a warning or error level, as failing to journal might have operational consequences (e.g., incomplete state upon restart).

```diff
-		log.Info("Failed to journal in-memory trie nodes", "err", err)
+		log.Warn("Failed to journal in-memory trie nodes", "err", err)
```

core/state/state_prove.go (1)
40-40: Update panic message for naming consistency.
Line 43 still references "zktrie" in the panic message. To maintain clarity, consider updating it to "ZkTrie".

```diff
-	panic("unexpected trie type for zktrie")
+	panic("unexpected trie type for ZkTrie")
```

trie/zk_trie_test.go (3)
174-174: Validate disk database type casting.
`zkTrie.db.diskdb.(*leveldb.Database)` asserts a specific implementation. If you switch to an alternative database, this cast will fail. Keep an eye on potential issues if the underlying DB changes in the future.

201-203: Capture or handle NewZkTrie instantiation errors.
As with earlier feedback, ignoring errors from `NewZkTrie` may hide initialization problems.

265-303: Inactive test code.
`TestMorphAccountTrie` immediately returns if `dir` is empty, leaving the test effectively disabled. If this is intentional, mark it as skipped with `t.Skip`, or add parameterization to specify a valid directory path for full coverage.

trie/hbss2pbss.go (3)
54-94: Concurrent traversal design.
The logic spawning a goroutine for traversal and using `errChan` is good. Ensure that any external references to `h2p` fields remain thread-safe. For instance, if new fields are added later, re-check concurrency locks.

117-136: Conditional compaction logic.
Looping in increments of 0x10 is a valid approach for incremental compaction. Ensure you handle boundary cases if the database layout or keys significantly change in the future.

148-194: Check for large recursion depth.
`concurrentTraversal` is recursively invoked for each node. For extremely large tries, this could risk hitting stack limits. Consider an iterative approach or tail recursion if this becomes an issue at scale.
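If recursion depth ever becomes a problem, the traversal can be rewritten with an explicit stack; this is a hedged sketch with hypothetical node and children types (the real hbss2pbss structures will differ):

```go
// trieRef is an illustrative stand-in for whatever identifies a node to visit.
type trieRef struct {
	hash common.Hash
	path []byte
}

// traverseIteratively visits every node reachable from root using a heap-allocated
// stack instead of goroutine-stack recursion.
func traverseIteratively(root trieRef, children func(trieRef) []trieRef, visit func(trieRef) error) error {
	stack := []trieRef{root}
	for len(stack) > 0 {
		n := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		if err := visit(n); err != nil {
			return err
		}
		stack = append(stack, children(n)...)
	}
	return nil
}
```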
triedb/pathdb/metrics.go (1)

21-47: Validate metric naming patterns and usage.
All looks consistent and straightforward, but consider grouping related metrics together or ensuring the naming patterns remain consistent across the codebase for clarity.

triedb/pathdb/nodebuffer.go (3)

34-42: Clarify struct naming.
While `nodebuffer` is descriptive, consider an exported name like `NodeBuffer` or `TrieNodeBuffer` to denote usage and maintain Go idiomatic naming consistency.

71-105: Efficient merges in commit.
Merging incoming nodes is straightforward and updates size deltas properly. Keep an eye on concurrency if multiple goroutines use `commit` simultaneously without external synchronization.

107-117: Handle negative buffer size gracefully.
Logging an error in `updateSize` for negative deltas is good. Consider whether the code should also throw or propagate an error if this occurs repeatedly, indicating a size tracking problem.

trie/database.go (15)
23-23: Clarify usage of big.Int.
Consider adding a brief note or doc comment on why we're importing big.Int here, so future maintainers understand its necessity.

295-297: Add doc comments for new config references.
HashDB and PathDB are important references to their respective configs. Document them for discoverability.

299-299: Consider scoping for global variable.
`GenesisStateInPathZkTrie` might be better contained within a smaller scope if only used by a few functions.

301-306: Reader interface usage.
This interface is minimal. Adding an example or linking it to usage in the codebase can help new contributors understand its purpose.

323-324: Ensure safe repeated calls.
`CommitGenesis` might be called more than once if there's a reorg. Confirm if it's idempotent or if repeated calls are disallowed.

326-327: Audit resource release.
Confirm that `Close()` thoroughly cleans up any state. A more robust approach might log partial closes.

329-330: Potential for context usage.
If writes are large, consider a context parameter in `Put` for cancellation. Helps handle timeouts gracefully.

562-573: Increase debug logs.
`Dereference` can cascade large changes. Additional per-child logs or metrics might help diagnose memory usage issues.

653-663: Code duplication risk.
`Cap` partially mirrors commit logic. If possible, unify them or refactor to reduce duplication and complexity.

801-801: Enhance error detail.
When an error occurs, including more context like the current root can aid debugging.

805-805: Track repeated commit failures.
In addition to logging, consider a gauge or counter metric if repeated commits fail.

1001-1010: Unknown scheme error.
Returning an error if the scheme is unknown is correct. Keep the error message consistent across all unknown-scheme branches.

1023-1029: Defaulting to hash scheme.
This is a safe fallback, but confirm user config won't be silently ignored if they specifically request a path-based scheme.

1069-1071: Reduce fragmentation.
`IsZkTrie` and `IsPathZkTrie` could be consolidated into a single `Scheme()` approach to reduce branching logic, as sketched below.
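To illustrate the consolidation, the two booleans can collapse into one enumerated value that callers switch on; the type and constant names here are illustrative only, not identifiers from the PR:

```go
// trieFlavor folds IsZkTrie/IsPathZkTrie-style checks into a single value.
type trieFlavor int

const (
	flavorMPT trieFlavor = iota
	flavorZkTrie
	flavorPathZkTrie
)

// flavorOf maps the pair of boolean checks onto one switchable flavor.
func flavorOf(isZk, isPathZk bool) trieFlavor {
	switch {
	case isPathZk:
		return flavorPathZkTrie
	case isZk:
		return flavorZkTrie
	default:
		return flavorMPT
	}
}
```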
1102-1127: Improve error clarity.
When keys or DB states aren't found, consider a more descriptive error message specifying the unavailability reason.

triedb/pathdb/difflayer.go (4)
28-46: Struct concurrency approach.
`diffLayer` uses an RWMutex to guard `parent` and `origin`. Consider adding doc comments explaining how concurrency interacts with these fields.

47-81: Memory usage approximations.
You're summing the lengths of keys and values as `dl.memory`. That helps, but be aware of overhead from maps and references.

115-135: Recursive node lookup.
Be mindful of the maximum depth if multiple diff layers stack up. Could cause deep recursion.

143-148: Layer explosion risk.
Each update creates a new `diffLayer`. If there are many updates, you might accumulate layers quickly.

trie/path_zk_trie.go (9)
32-33: Unambiguous naming.
`magicHashPathTrie` should be unique enough, but consider scoping it to avoid collisions or confusion.

44-53: Enhanced error context.
We return the raw error from `pathtrie.NewZkTrieWithPrefix`. Possibly wrap the error with additional context about prefix or root.

74-84: Update silently logs errors.
Similar to `Get`, any error from `TryUpdate` is logged but not returned. Evaluate whether the caller should handle errors.

93-99: Delete logs but doesn't return errors.
Maintaining a consistent pattern might mean returning errors for these calls as well.

101-113: Preimage fallback.
We rely on `db.preimages` if available. If many keys are stored, we might want a more dynamic retrieval approach.

115-127: Commit finalization.
Stating that all updates are already written to the DB is fine, but confirm there's no partial write scenario.

142-147: Unimplemented NodeIterator.
Currently panics. Provide a partial stub or clarify its future usage. Would you like help creating a placeholder implementation?

149-165: Remove or revisit commented-out code.
The `hashKey` function is commented out. If it's not needed, consider deleting it to reduce clutter.

199-230: VerifyProofSMT2.
While this depends heavily on zktrie, consider building additional negative tests to confirm it reliably detects invalid proofs.

triedb/pathdb/disklayer.go (3)
33-68: Consider clarifying concurrency assumptions in interface documentation.
The `trienodebuffer` interface is extensive and well-defined. A brief note on concurrency expectations (e.g., safe usage under multiple goroutines) could improve clarity.

107-141: Panic for stale disk layer may merit an alternative error handling approach.
Using `panic("triedb disk layer is stale")` at line 138 ensures an immediate abort on unexpected commits. Consider returning an error or using a `log.Crit` for consistent error management instead of a hard panic.

251-263: Optional logging on reset might aid diagnosability.
When `resetCache` clears `dl.cleans`, adding a log statement could help track unexpected cache flushes at runtime.

triedb/pathdb/asyncnodebuffer.go (1)

96-143: Infinite retry loop in background flush could cause hangs.
If `a.background.flush` repeatedly fails (line 133 onward), the goroutine will retry forever. Consider adding a backoff or a final fallback if failures persist.

```diff
 for {
 	err := a.background.flush(db, clean, persistID)
 	if err == nil {
 		...
 	} else {
+		// Suggestion: Add an exponential backoff or bounded retries to avoid indefinite loops.
 		log.Error("Failed to flush background nodecache to disk", "state_id", persistID, "error", err)
 	}
 }
```
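A bounded-retry variant of that loop could look like the sketch below; `flush` stands in for `a.background.flush`, and the attempt and delay limits are illustrative, not values from the PR:

```go
// flushWithBackoff retries the flush a fixed number of times with exponential
// backoff instead of spinning forever on a persistent error.
func flushWithBackoff(flush func() error, maxAttempts int) error {
	delay := 100 * time.Millisecond
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = flush(); err == nil {
			return nil
		}
		log.Error("Failed to flush background nodecache to disk", "attempt", attempt, "error", err)
		time.Sleep(delay)
		if delay < 10*time.Second {
			delay *= 2
		}
	}
	return err
}
```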
core/rawdb/accessors_state.go (9)
86-90: Unify documentation or rename the comment.
The top comment reads "ReadTrieNode retrieves the trie node of the provided hash," but the function is named `ReadTrieNodeByKey` and uses a key slice instead of a hash. Consider adjusting the docstring to match the function's parameters for clarity. Additionally, this method returns an error whereas `ReadTrieNode` does not, so verify consistent handling across both.

98-104: Consider propagating errors instead of using log.Crit.
Calling `log.Crit` will terminate the application on write failures. It might be more flexible to return the error and let the caller decide how to handle it, ensuring better resiliency in production databases.

122-130: Handle account-trie-node write errors.
Similar to other write methods, `WriteAccountTrieNode` calls `log.Crit` on failures. Consider letting the caller handle such errors gracefully instead of terminating.

146-158: Avoid indiscriminate application termination on error.
`WriteStorageTrieNode` and `DeleteStorageTrieNode` call `log.Crit` upon database write/delete failure. If these operations might legitimately fail or need retry logic, returning errors could be more robust than forcing a crash.

167-173: Revisit log.Crit usage for WriteTrieJournal.
If the disk is full (or any I/O error occurs), forcibly stopping the node can be problematic. Returning an error might be preferable in robust scenarios.

175-181: Consider non-terminating error handling for DeleteTrieJournal.
Currently, a failure to delete triggers `log.Crit`. Depending on usage, a softer approach might be more user-friendly.

192-197: Reassess forced termination on WritePersistentStateID failure.
As with the other write methods, consider whether `log.Crit` is appropriate or if returning an error is more flexible for the caller.

209-216: Propagate state ID write errors.
`WriteStateID` terminates the program on any write failure. As before, consider a less severe approach if partial failures are tolerable or need to be recoverable.

218-223: Simplify or unify error handling for DeleteStateID.
Using `log.Crit` for a single key deletion failure might be too severe in most operational scenarios.
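The pattern suggested throughout these comments looks roughly like the sketch below: log for visibility, then hand the error back instead of calling `log.Crit`. The error-returning signature is the suggestion, not the accessor's current shape in the PR:

```go
// WriteTrieJournal stores the serialized in-memory trie journal and reports the
// failure to the caller rather than terminating the process.
func WriteTrieJournal(db ethdb.KeyValueWriter, journal []byte) error {
	if err := db.Put(trieJournalKey, journal); err != nil {
		log.Error("Failed to store trie journal", "err", err)
		return err
	}
	return nil
}
```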
triedb/pathdb/database.go (10)

19-34: Validate layering of imports and DB usage.
This block introduces significant dependencies (`os`, `sort`, `sync`, etc.) for advanced disk operations. Confirm that each import is necessary to keep the module lightweight.

36-63: Ensure constants match real-world workload.
Constants like `maxDiffLayers = 128` and `MaxDirtyBufferSize = 256MB` may need re-tuning under high block rates or large reorg scenarios. Consider making them configurable.

72-101: layer interface design.
The interface includes both public and internal methods. For clarity, consider separating them or documenting which are strictly internal for layering.

102-112: Config structure usage.
Fields like `NoTries` remain unused or unclear. If it's reserved for future expansions, consider marking it explicitly or removing it until needed.

125-134: Check defaults for memory-limited environments.
`Defaults` sets `DirtyCacheSize` to 64 MB and `StateHistory` to `params.FullImmutabilityThreshold`. On systems with limited memory, these defaults might be too large.

209-240: Consolidate reorg logic in CommitState.
`CommitState` enforces a maximum of 128 in-memory layers. In a chain with deep reorgs, consider advanced fallback or partial flush strategies for older states.

257-269: Revisit partial layering in Size calculation.
`Size()` iterates all known layers. If multiple branches or partial layers exist, confirm that the sum of partial layers is intended or if only the main chain should be counted.

297-300: Scheme clarity.
`Scheme()` returns `rawdb.PathScheme` but might be confused with other possible schemes (e.g., Zktrie). Ensure naming patterns remain clear for new contributors.

318-337: Block sorting with string conversions.
`GetAllRootHash` sorts block numbers by converting them to strings. For large block numbers, repeated conversions might degrade performance. Consider an integer-based sort, as sketched below.
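An integer-based version could parse each key once and compare numerically; this is a hedged sketch, with `keys` standing in for whatever `GetAllRootHash` currently iterates over:

```go
// sortedBlockNumbers parses string block numbers once and sorts them numerically,
// avoiding repeated string conversions inside the comparator.
func sortedBlockNumbers(keys []string) ([]uint64, error) {
	nums := make([]uint64, 0, len(keys))
	for _, k := range keys {
		n, err := strconv.ParseUint(k, 10, 64)
		if err != nil {
			return nil, err
		}
		nums = append(nums, n)
	}
	sort.Slice(nums, func(i, j int) bool { return nums[i] < nums[j] })
	return nums, nil
}
```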
348-359: Avoid confusion with empty journal files.
`DetermineJournalTypeForReader` checks if the file path exists. Confirm that empty but existing journal files don't lead to misinterpretation of the journaling method.
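A size check alongside the existence check would rule that case out; a sketch with an assumed helper name, not the PR's function:

```go
// journalFileUsable reports whether the journal file exists, is a regular file,
// and actually contains data.
func journalFileUsable(path string) bool {
	info, err := os.Stat(path)
	return err == nil && info.Mode().IsRegular() && info.Size() > 0
}
```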
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (68)
- .github/workflows/docker_release.yml (1 hunks)
- MakefileEc2.mk (1 hunks)
- cmd/geth/chaincmd.go (2 hunks)
- cmd/geth/dbcmd.go (3 hunks)
- cmd/geth/main.go (1 hunks)
- cmd/geth/snapshot.go (1 hunks)
- cmd/geth/usage.go (1 hunks)
- cmd/utils/flags.go (6 hunks)
- common/types.go (2 hunks)
- core/blockchain.go (5 hunks)
- core/blockchain_l2.go (2 hunks)
- core/genesis.go (3 hunks)
- core/rawdb/accessors_state.go (2 hunks)
- core/rawdb/accessors_trie.go (1 hunks)
- core/rawdb/database.go (1 hunks)
- core/rawdb/schema.go (6 hunks)
- core/state/database.go (6 hunks)
- core/state/iterator.go (1 hunks)
- core/state/pruner/zk-pruner.go (2 hunks)
- core/state/state_object.go (4 hunks)
- core/state/state_prove.go (3 hunks)
- core/state/statedb.go (5 hunks)
- core/state_transition.go (2 hunks)
- core/types/l2trace.go (1 hunks)
- core/vm/access_list_tracer.go (1 hunks)
- core/vm/logger.go (8 hunks)
- core/vm/logger_json.go (1 hunks)
- core/vm/runtime/runtime_test.go (3 hunks)
- eth/backend.go (3 hunks)
- eth/ethconfig/config.go (2 hunks)
- eth/tracers/api.go (4 hunks)
- eth/tracers/internal/tracetest/calltrace_test.go (3 hunks)
- eth/tracers/js/internal/tracers/4byte_tracer_legacy.js (1 hunks)
- eth/tracers/js/internal/tracers/call_tracer_legacy.js (1 hunks)
- eth/tracers/js/tracer.go (2 hunks)
- eth/tracers/js/tracer_test.go (11 hunks)
- eth/tracers/native/4byte.go (3 hunks)
- eth/tracers/native/call.go (3 hunks)
- eth/tracers/native/gen_account_json.go (1 hunks)
- eth/tracers/native/gen_callframe_json.go (1 hunks)
- eth/tracers/native/noop.go (2 hunks)
- eth/tracers/native/prestate.go (5 hunks)
- eth/tracers/native/tracer.go (3 hunks)
- eth/tracers/tracers.go (2 hunks)
- go.mod (3 hunks)
- les/server_requests.go (1 hunks)
- light/trie.go (1 hunks)
- rollup/tracing/mux_tracer.go (1 hunks)
- rollup/tracing/tracing.go (3 hunks)
- trie/database.go (14 hunks)
- trie/hbss2pbss.go (1 hunks)
- trie/path_zk_trie.go (1 hunks)
- trie/proof.go (1 hunks)
- trie/zk_trie.go (3 hunks)
- trie/zk_trie_proof_test.go (4 hunks)
- trie/zk_trie_test.go (6 hunks)
- triedb/hashdb/metrics.go (1 hunks)
- triedb/hashdb/zk_trie_database.go (1 hunks)
- triedb/pathdb/asyncnodebuffer.go (1 hunks)
- triedb/pathdb/database.go (1 hunks)
- triedb/pathdb/difflayer.go (1 hunks)
- triedb/pathdb/disklayer.go (1 hunks)
- triedb/pathdb/errors.go (1 hunks)
- triedb/pathdb/journal.go (1 hunks)
- triedb/pathdb/layertree.go (1 hunks)
- triedb/pathdb/metrics.go (1 hunks)
- triedb/pathdb/nodebuffer.go (1 hunks)
- triedb/types/database_types.go (1 hunks)
✅ Files skipped from review due to trivial changes (5)
- eth/tracers/js/internal/tracers/call_tracer_legacy.js
- eth/tracers/js/internal/tracers/4byte_tracer_legacy.js
- eth/tracers/native/gen_account_json.go
- triedb/hashdb/metrics.go
- core/vm/runtime/runtime_test.go
🧰 Additional context used
🪛 YAMLlint (1.35.1)
.github/workflows/docker_release.yml
[error] 5-5: trailing spaces (trailing-spaces)
[warning] 18-18: wrong indentation: expected 6 but found 4 (indentation)
[error] 19-19: trailing spaces (trailing-spaces)
[error] 29-29: trailing spaces (trailing-spaces)
🔇 Additional comments (266)
go.mod (3)
62-62
: Updategolang.org/x/sync
VersionThe dependency for
golang.org/x/sync
has been updated from v0.5.0 to v0.6.0. This appears to be a straightforward version bump; however, please verify that the new version does not introduce any breaking changes that might affect concurrent primitives used elsewhere in the project.
66-66
: Updategolang.org/x/tools
VersionThe version for
golang.org/x/tools
has been updated from v0.15.0 to v0.18.0. Ensure that any tooling or integration that leverages these tools (such as code generation or analysis) continues to work as expected with the updated version.
120-120
: Updategolang.org/x/mod
VersionThe module
golang.org/x/mod
is now at v0.15.0 instead of v0.14.0. This change should streamline module management; please confirm that the updated functionalities (if any) do not adversely affect dependency resolution or CI workflows.core/types/l2trace.go (1)
88-88
:✅ Verification successful
New ReturnData field enhances EVM tracing capabilities.
The addition of the
ReturnData
field to theStructLogRes
type enables capturing data returned by EVM operations during transaction execution. This is valuable for debugging and analysis.
🏁 Script executed:
#!/bin/bash # Check how ReturnData field is populated and used in the codebase # Find where ReturnData is populated in the StructLogRes rg -A 5 "ReturnData\s*=\s*" --type go # Find where the field is used in JSON marshaling/unmarshaling rg "json.*returnData" --type goLength of output: 1147
Verified: "ReturnData" Field Integration is Correct and Enhances EVM Tracing
- The
ReturnData
field incore/types/l2trace.go
(line 88) is correctly tagged for JSON marshaling.- Verification confirms that in
core/vm/logger.go
, the code populatesReturnData
using the conversionhexutil.Bytes(trace.ReturnData.Bytes()).String()
.- Usage in
core/vm/gen_structlog.go
(i.e.,enc.ReturnData = s.ReturnData.Bytes()
) aligns with the intended tracing logic.These changes provide improved debugging and analysis by capturing data returned during EVM operations.
rollup/tracing/mux_tracer.go (2)
71-75
: Implementation of CaptureTxStart method follows the MuxTracer pattern.The
CaptureTxStart
method properly delegates the call to all underlying tracers, maintaining the multiplexing design pattern consistently with other capture methods.
77-81
:✅ Verification successful
Implementation of CaptureTxEnd method follows the MuxTracer pattern.
The
CaptureTxEnd
method properly delegates the call to all underlying tracers, maintaining the multiplexing design pattern consistently with other capture methods.
🏁 Script executed:
#!/bin/bash # Check that the EVMLogger interface includes these new methods # Find the EVMLogger interface definition rg -B 5 -A 15 "type EVMLogger interface" --type goLength of output: 1591
CaptureTxEnd Delegation Verified Against Interface Specification
The
CaptureTxEnd
method inrollup/tracing/mux_tracer.go
(lines 77-81) correctly iterates over the underlying tracers and delegates the transaction-ending capture, following the multiplexing design pattern. The verification confirms that theEVMLogger
interface incore/vm/logger.go
includesCaptureTxEnd(restGas uint64)
, ensuring consistency across implementations.
- Verified that the method signature matches the interface.
- Confirmed adherence to the established pattern seen in other capture methods.
eth/ethconfig/config.go (2)
93-94
: New configuration defaults for trie journal and sync behavior.The added default settings for
JournalFileName
andPathSyncFlush
provide sensible values that will be used if not explicitly configured otherwise.
217-224
:✅ Verification successful
New configuration options for state storage schemes.
Three new configuration options have been added:
JournalFileName
- Controls the path schema journal file namePathSyncFlush
- Controls the state scheme for Ethereum state and Merkle trie nodesStateScheme
- Controls the state scheme specifically for zktrie state nodesThese options enhance flexibility in state management and storage optimization, but should be properly documented in user guides.
🏁 Script executed:
#!/bin/bash # Check how these new configuration options are used in the codebase # Find usage of JournalFileName rg "JournalFileName" --type go -C 3 # Find usage of PathSyncFlush rg "PathSyncFlush" --type go -C 3 # Find usage of StateScheme rg "StateScheme" --type go -C 3Length of output: 11880
New state storage configuration options verified and integrated consistently.
The new options (
JournalFileName
,PathSyncFlush
, andStateScheme
) are defined ineth/ethconfig/config.go
and their usage across the codebase—in files likeeth/backend.go
,core/blockchain.go
, and various CLI flag definitions (e.g., incmd/utils/flags.go
andcmd/geth/*
)—confirms they function as expected. These changes enhance flexibility in managing state storage, and it is advisable to update the user documentation accordingly.
- JournalFileName: Properly set with default value and referenced in backend configuration.
- PathSyncFlush: Utilized to control disk flushing behavior in multiple modules.
- StateScheme: Routed through parsing and validation logic ensuring consistency across state management components.
core/state/pruner/zk-pruner.go (2)
50-50
: Code simplification by removing unnecessary indirection.Directly passing
stateCache.TrieDB()
totrie.NewZkTrie()
instead of wrapping it in an intermediate function call simplifies the code path and improves readability. This change aligns with good Go idioms by reducing unnecessary abstraction layers.
252-252
:✅ Verification successful
Updated to use common.BitReverse instead of trie.BitReverse.
The
BitReverse
function has been relocated from thetrie
package to thecommon
package. This is a sensible move as bit manipulation functions have utility beyond just the trie implementation and belong in a more general utility package.Also applies to: 254-254
🏁 Script executed:
#!/bin/bash # Check if there are any remaining references to trie.BitReverse in the codebase echo "Checking for any remaining references to trie.BitReverse:" rg "trie\.BitReverse" --type goLength of output: 158
#!/bin/bash # Display a snippet from the file to confirm the update. echo "Extracting lines 240-260 from core/state/pruner/zk-pruner.go to verify BitReverse usage:" sed -n '240,260p' core/state/pruner/zk-pruner.goThe above script will help us manually verify that both occurrences (lines 252 and 254) have been updated to use
common.BitReverse
instead oftrie.BitReverse
. Once you confirm that the snippet shows the correct usage (i.e. only calls tocommon.BitReverse
appear and no references totrie.BitReverse
remain), the refactoring can be considered complete.Refactoring Verified:
common.BitReverse
is now used as expected.
- Confirmed that references to
trie.BitReverse
are completely removed.- The updates in
core/state/pruner/zk-pruner.go
(lines 252 and 254) correctly utilize the utility function from thecommon
package.common/types.go (2)
229-232
: Good addition for address comparison functionality.The
Cmp
method provides a standardized way to compare addresses, which is a common operation in Ethereum-based systems. This implementation correctly usesbytes.Compare
for the underlying comparison, following Go idioms.
442-448
: Useful utility function for byte reversing.The
ReverseBytes
function provides a clean implementation for reversing byte order, which can be useful in various cryptographic and encoding contexts.rollup/tracing/tracing.go (3)
51-53
: Good defensive check for unsupported path trie implementation.Adding an early check to detect and reject path zkTrie usage is a good defensive programming practice. It prevents potentially confusing errors later in the execution path and provides a clear error message about the unsupported feature.
393-398
: Improved account tracing by ensuring sender and receiver addresses are included.This enhancement ensures that both the transaction sender (
from
) and receiver (to
) addresses are always included in the proof accounts, even if they weren't updated during transaction execution. This provides more complete state information for tracing purposes.
316-316
:✅ Verification successful
Updated tracer creation to match new API signature.
The function call to
tracers.New
has been updated to include a thirdnil
parameter, which aligns with changes in the underlying API. This consistency is important for proper function operation.
🏁 Script executed:
#!/bin/bash # Check the current signature of tracers.New to ensure our usage is correct rg -A 3 "func New\(" --include="*.go" --glob="**/tracers/*.go"Length of output: 506
Updated tracer creation to match new API signature – manual verification required
The call to
tracers.New
inrollup/tracing/tracing.go
(line 316) now includes an extranil
parameter. This change follows the intended API update. However, our initial automated check usingrg
with the--include
flag produced an error, so the API signature was not automatically confirmed. Please verify that the updated usage aligns with the actual function definition in the tracers package. You can re-run a corrected verification command such as:Once confirmed, no further changes are needed.
🏁 Script executed:
#!/bin/bash rg -A 3 "func New\(" -g "*tracers*.go"Length of output: 442
Tracer creation API update verified – no further changes required.
The updated call in
rollup/tracing/tracing.go
:callTracer, err := tracers.New("callTracer", &tracerContext, nil)now correctly provides three parameters, matching the current signature defined in
eth/tracers/tracers.go
:func New(code string, ctx *Context, cfg json.RawMessage) (Tracer, error)Please confirm manually if any further adjustments are necessary to accommodate future changes to the API.
core/rawdb/schema.go (4)
47-49
: Added new database keys for state management.The new
persistentStateIDKey
andtrieJournalKey
support path-based state storage and in-memory trie tracking across restarts. These additions enhance the state management capabilities of the system.Also applies to: 77-79
133-137
: Added prefixes for path-based storage scheme.The new prefix constants
TrieNodeAccountPrefix
,TrieNodeStoragePrefix
, andstateIDPrefix
support a path-based storage scheme for the Merkle Patricia trie. This is part of a larger architectural improvement for state storage.
291-293
: Added public error checking function for not found errors.The new
IsNotFoundErr
function exposes a public API for checking database "not found" errors, which is useful for packages that need to distinguish between missing data and other error types.
330-347
: Added key generation functions for path-based storage.The new functions
accountTrieNodeKey
,storageTrieNodeKey
, andstateIDKey
generate database keys for the path-based trie storage scheme. These are well structured and follow consistent naming patterns.eth/tracers/native/prestate.go (10)
11-11
: Use of hexutil import looks appropriate.
This newly introduced import is necessary for serializing balances and code in the next lines.
27-30
: Good addition of JSON tags.
Attachingomitempty
ensures the fields do not appear when empty, which is typically desirable.
38-39
: Use of hexutil for balance and code.
Switching to*hexutil.Big
andhexutil.Bytes
ensures large integer safety and proper hex encoding.
49-50
: New fields for transaction context.
IntroducinggasLimit
andconfig
helps track configuration and gas usage. The naming aligns well with the rest of the tracer’s logic.
57-59
: Configuration struct is concise and future-proof.
Having a dedicatedDiffMode
field paves the way for further configuration options if needed.
61-74
: Initialization flow is clear and robust.
The tracer properly unmarshals JSON intoprestateTracerConfig
and handles potential errors gracefully.
100-102
: Conditionally tracking created contracts.
Storing newly created addresses int.created
only whenDiffMode
is active is in line with the config-driven design.
106-109
: Skipping end-capture logic in diff mode.
Returning early ifDiffMode
is enabled ensures the tracer's standard finalization doesn't interfere with the diff-based approach.
172-174
: Selective post-state calculation.
By returning early whenDiffMode
is false, you avoid unnecessary post-state computation, which is efficient and clear.
237-251
: Conditional JSON output for diff mode.
Returning{post, pre}
in diff mode vs. justpre
otherwise is a clean design choice, aligning well with the tracer’s configuration.triedb/types/database_types.go (8)
1-7
: New package and imports are appropriate.
Establishing a dedicatedtypes
package with standard libraries for hashing and errors looks good.
9-11
: Clear definition of a not-found error.
ErrNotFound
is suitably defined to handle missing key scenarios.
13-17
: Simple KV struct for storing raw key-value pairs.
Design is straightforward and flexible for various byte-based data.
19-20
: Use of a map keyed by SHA-256.
KvMap
employing[sha256.Size]byte]
as the map key is an effective way to ensure consistent hashing.
22-26
: Get method is concise.
Retrieves the value by hash lookup, returning the boolean for existence. Looks correct and idiomatic.
28-31
: Put method integrates well with the hashing scheme.
Storing(k, v)
in theKvMap
under the hashed key remains consistent with the usage inGet
.
42-49
: Concat function is straightforward.
Usingbytes.Buffer
is efficient for assembling byte slices.
51-56
: Clone function for byte arrays.
Makes a defensive copy of input bytes, which is often a best practice in cryptographic contexts.core/vm/logger.go (9)
22-22
: Importingencoding/json
is pivotal for new tracing outputs.
The code below relies on JSON-basedGetResult
returns, so this is necessary.
27-27
:sync/atomic
import for interruption handling.
Ensures safe concurrency when toggling the tracer’s interrupt flag.
132-134
: Added TX-level capture methods.
CaptureTxStart
andCaptureTxEnd
provide a more comprehensive view of transaction lifecycle, clarifying gas usage.
171-174
: New fields to store gas limit/state.
gasLimit
,usedGas
, and theatomic.Bool
approach for interruption are coherent for mid-execution tracking.
396-416
: New GetResult method.
Generates JSON output summarizing gas usage, success/failure state, and logs. This is a solid improvement in trace introspection.
417-421
: Interruption control withStop(err error)
.
This method’s approach to markingreason
and togglinginterrupt
is a clear way to halt tracing gracefully.
543-543
: No-op CaptureTxStart in mdLogger.
Even if it’s empty now, it maintains interface consistency for transaction-level tracing.
545-545
: No-op CaptureTxEnd in mdLogger.
Similar toCaptureTxStart
, it ensures the logger meets the interface requirements.
572-611
:formatLogs
function extends structured logging support.
Marshaling memory, return data, and storage per step provides a clear, detailed record of execution traces.triedb/hashdb/zk_trie_database.go (5)
33-42
: Confirm concurrency safety for both clean and dirty caches.While the
sync.RWMutex
protectsdirties
, the operations oncleans
are not guarded by the same lock. Iffastcache
is not fully concurrency-safe, simultaneous reads/writes could lead to data races.Would you like to check the
fastcache
documentation or run concurrency tests to confirm safe usage here?
64-75
: Looks good!The
Scheme()
andSize()
methods are straightforward and match the expected interface.
106-124
: Well-structured commit process.The
commitAllDirties
function uses batch operations and promptly clearsdirties
, ensuring no extra references remain. The lock usage is suitable to prevent data races.
222-230
: The reader approach is consistent.
Reader
returns a dedicatedzkReader
that wraps the same concurrency logic as the main database. ThezkReader.Node()
delegates todb.Get()
, ensuring a single source of truth.
135-180
:✅ Verification successful
Confirm concurrency usage when modifying clean cache.
Put
locks thedirties
but updatescleans
outside the lock. Iffastcache
is only partially safe under concurrency, a race could occur whenGet
orPut
is called by multiple goroutines.
🏁 Script executed:
#!/bin/bash # Attempt to locate usage of fastcache concurrency notes. rg -A 5 "fastcache.New" rg -A 5 "fastcache.LoadFromFileOrNew"Length of output: 10380
Fastcache Concurrency Usage Confirmed
After verifying the repository, we see that the same pattern for creating and using the clean cache via fastcache is consistently applied in multiple modules (e.g., in
trie/database.go
,triedb/pathdb/disklayer.go
, and various snapshot tests). This indicates that fastcache’s design inherently supports concurrentSet
andGet
calls. As a result, updating the clean cache outside of the explicit lock inPut
is in line with our overall concurrency strategy.• No additional locking is required when operating on the clean cache from fastcache.
• The current implementation aligns with similar patterns elsewhere in the codebase.cmd/geth/usage.go (1)
60-61
:✅ Verification successful
Addition of new configuration flags for state scheme and path DB sync
These new flags extend the Ethereum category with configuration options for state scheme and path DB synchronization. The placement within the ETHEREUM section is appropriate.
🏁 Script executed:
#!/bin/bash # Check how these flags are defined in utils package cd $(git rev-parse --show-toplevel) rg -A 3 "StateSchemeFlag|PathDBSyncFlag" --type go cmd/utilsLength of output: 890
REVIEW APPROVED: Configuration Flags Correctly Integrated
The verification confirms that the new flags are properly defined in
cmd/utils/flags.go
and correctly referenced in the ETHEREUM section ofcmd/geth/usage.go
. There are no issues with their placement or implementation.core/state/iterator.go (1)
112-112
:✅ Verification successful
Updated OpenStorageTrie call with additional origin parameter
The function call has been updated to add an empty hash as the origin parameter, aligning with signature changes to the OpenStorageTrie method.
🏁 Script executed:
#!/bin/bash # Verify the OpenStorageTrie method signature has been updated appropriately cd $(git rev-parse --show-toplevel) rg "func.*OpenStorageTrie" --type goLength of output: 318
Verified update: OpenStorageTrie call correctly includes the origin parameter
The updated call in
core/state/iterator.go
at line 112 now passes an empty hash (common.Hash{}
) as the origin parameter, which aligns with the updated function signature defined in bothcore/state/database.go
andlight/trie.go
. No further action is required.trie/proof.go (1)
113-116
:✅ Verification successful
Enhanced proof verification with path trie support
The change adds a conditional check for path-trie-specific proofs and directs them to an appropriate verification function. This pattern mirrors the existing SMT proof verification structure.
🏁 Script executed:
#!/bin/bash # Checking for the definition of the magicHashPathTrie constant and the VerifyProofSMT2 function cd $(git rev-parse --show-toplevel) rg "magicHashPathTrie|VerifyProofSMT2" --type goLength of output: 567
Enhanced proof verification with path trie support – Verified
The updated conditional in
trie/proof.go
(lines 113-116) now correctly checks for a non-nil value usingmagicHashPathTrie
and delegates proof verification toVerifyProofSMT2
. Verification confirms that bothmagicHashPathTrie
andVerifyProofSMT2
are properly defined intrie/path_zk_trie.go
, and the logic mirrors the existing SMT proof verification structure without introducing any inconsistencies.les/server_requests.go (1)
431-431
:✅ Verification successful
Added third parameter to OpenStorageTrie - ensure state origin is correctly propagated
The change adds a third parameter
root
to theOpenStorageTrie
method call, which represents the state root of the trie. This parameter is crucial for the proper initialization of the storage trie in the context of Proof-Based State Sync (PBSS) implementation.Let's verify that this change is consistent with the method signature in the state package:
🏁 Script executed:
#!/bin/bash # Check for the OpenStorageTrie function signature in the codebase echo "Searching for OpenStorageTrie function definition:" rg -A 5 "func.*OpenStorageTrie" --type goLength of output: 932
OpenStorageTrie Signature Verification Complete
The addition of the third parameter is verified to be correct. The function signature in bothcore/state/database.go
andlight/trie.go
expects three parameters (addrHash, root, origin common.Hash
), confirming that the state origin (passed asroot
in this call) is correctly propagated for PBSS. There are no further concerns.trie/zk_trie_proof_test.go (3)
66-66
: Updated database initialization to use the new unified database APIThe change from
NewZkTrie(common.Hash{}, NewZkDatabase((memorydb.New())))
updates the test to use the refactored database API that likely provides a unified interface for different trie implementations (including ZK tries).
151-151
: Updated database initialization for consistencySimilar to the previous instance, this change updates the test to use the refactored database API. This ensures consistency across the codebase when working with ZK tries.
183-183
: Updated database initialization in randomZktrie helper functionThe function
randomZktrie
is a testing helper that generates random ZK tries. This change ensures it's also using the consistent database API pattern introduced with PBSS.core/rawdb/database.go (1)
472-498
: Added database pruning function to remove legacy trie nodesThis function implements a database maintenance operation to clean up legacy trie nodes after migrating to the new PBSS system. It iterates over all database entries, checks if they are legacy trie nodes, and deletes them. The implementation includes proper progress logging to monitor the pruning process.
A few observations:
- Variable naming should use camelCase (
totalNum
instead oftotal_num
) for consistency with Go conventions- The function assumes
IsLegacyTrieNode
is defined elsewhere in the codebase- The progress logging is well-implemented, with both periodic counts and time-based reporting
Let's verify the existence of
IsLegacyTrieNode
:#!/bin/bash # Check for the IsLegacyTrieNode function definition echo "Searching for IsLegacyTrieNode function:" rg "func IsLegacyTrieNode" --type gocore/vm/access_list_tracer.go (1)
178-180
: Added transaction lifecycle hooks to AccessListTracerThese methods implement the EVMLogger interface for transaction start and end events. While currently empty, they ensure the AccessListTracer adheres to the interface contract and can be used in contexts that expect full transaction lifecycle tracking.
Let's verify this implementation matches the interface requirements:
#!/bin/bash # Check the EVMLogger interface definition echo "Searching for EVMLogger interface that includes Tx lifecycle methods:" rg "type EVMLogger interface" -A 20 --type gocore/state/state_object.go (5)
32-33
: Added proper import for zktrie typesGood addition of the zktrie types import to support the new Poseidon hash field.
72-73
: New field added for Poseidon address hashingThe addition of
addrPoseidonHash
as a separate field is a good approach for maintaining support for both hash types without conditional re-computation.
120-124
: Good conditional initialization of Poseidon hashThe code initializes the new
addrPoseidonHash
field only when ZkTrie is enabled, which is an efficient approach that avoids unnecessary computations.
175-178
: Avoid reassigning addrHash when possibleIf
addrHash
is only used for storage and not modified elsewhere, consider usings.addrHash
ors.addrPoseidonHash
directly in the call todb.OpenStorageTrie
instead of creating a temporary variable.- addrHash := s.addrHash - if db.TrieDB().IsPathZkTrie() { - addrHash = s.addrPoseidonHash - } - s.trie, err = db.OpenStorageTrie(addrHash, s.data.Root, s.db.originalRoot) + s.trie, err = db.OpenStorageTrie( + db.TrieDB().IsPathZkTrie() ? s.addrPoseidonHash : s.addrHash, + s.data.Root, + s.db.originalRoot)
181-181
: Ensure consistent error handling patternWhen attempting to recover from a failed trie opening, the code correctly uses the same address hash that was used in the original attempt, which is important for consistency.
eth/backend.go (3)
125-127
: Good default configuration for JournalFileNameSetting a default value for
JournalFileName
when it's not provided ensures the system will always have a valid configuration for the journal file.
141-144
: Proper error handling for StateScheme parsingThe code correctly parses the StateScheme from the configuration and handles any errors that might occur during parsing, which is essential for maintaining system integrity.
200-202
: Extended cache configuration with new fieldsThe addition of
PathSyncFlush
,StateScheme
, andJournalFilePath
to the cache configuration structure ensures that all necessary configuration parameters are passed to the blockchain component.trie/zk_trie.go (3)
37-37
: Updated database type for more unified structureChanging the type from
*ZktrieDatabase
to*Database
helps unify the database interfaces across the codebase, which is a good design improvement.
53-53
: Updated function signature for database type consistencyThe function signature change ensures consistency with the updated struct field type, which is important for maintaining code correctness.
115-116
: Simplified database access patternThe simplification from
t.db.db.preimages
tot.db.preimages
reduces nesting and improves code readability while maintaining the same functionality.cmd/geth/chaincmd.go (1)
53-53
: Added StateSchemeFlag to init commandAdding the StateSchemeFlag to the init command allows users to specify the state scheme during genesis initialization, which aligns with the broader system enhancements.
.github/workflows/docker_release.yml (2)
1-7
: Workflow configuration looks goodThe workflow is set up to trigger on pushes to tags matching "morph-v*", which is appropriate for a release workflow.
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 5-5: trailing spaces
(trailing-spaces)
32-35
:❓ Verification inconclusive
Verify tag extraction logic
The sed command assumes a specific format for the tag. Ensure this works correctly with your tag naming convention and consider adding validation.
🏁 Script executed:
#!/bin/bash
# Verify the tag extraction logic with a sample tag

# Test with a valid tag
TEST_REF="refs/tags/morph-v1.2.3"
VERSION=$(echo "$TEST_REF" | sed -e 's,.*/\(.*\),\1,')
echo "Extracted version before strip: $VERSION"
VERSION=$(echo $VERSION | sed -e 's/^morph-v//')
echo "Final extracted version: $VERSION"

# Test with an unexpected tag format
TEST_REF="refs/tags/v1.2.3"
VERSION=$(echo "$TEST_REF" | sed -e 's,.*/\(.*\),\1,')
echo "Extracted version (unexpected format) before strip: $VERSION"
VERSION=$(echo $VERSION | sed -e 's/^morph-v//')
echo "Final extracted version (unexpected format): $VERSION"

Length of output: 742
Tag extraction logic verified – Consider validating expected tag formats
The test outputs confirm that the current sed-based logic correctly strips the "morph-v" prefix when present (e.g., converting
"refs/tags/morph-v1.2.3"
to"1.2.3"
). However, when a tag doesn’t include the"morph-v"
prefix (e.g.,"refs/tags/v1.2.3"
), the resulting version remains unchanged. If your versioning strategy requires a consistent format (e.g., always stripping a leading"v"
or enforcing a specific prefix), you might want to add additional validation to handle unexpected formats.
- Current behavior:
- For tags like
"refs/tags/morph-v1.2.3"
, the logic returns"1.2.3"
.- For tags like
"refs/tags/v1.2.3"
, the logic returns"v1.2.3"
as-is.- Suggestion: Consider adding a validation step if a uniform version format is required, or update the extraction logic to handle all expected tag variations consistently.
eth/tracers/js/tracer_test.go (3)
83-83
: Update to support new tracer configuration parameterThe
newJsTracer
function call has been updated to include a newnil
parameter, which is likely for configuration data. This change is consistent with similar updates throughout the file.
133-133
: Consistent API changes across all tracer function callsAll instances of
newJsTracer
have been uniformly updated to include the additionalnil
parameter. This consistency is good for maintainability.Also applies to: 147-147, 171-171, 206-207, 220-220, 231-231, 241-241, 252-252, 262-262, 274-277, 281-281
83-89
:✅ Verification successful
Verify that the nil configuration doesn't affect existing test behavior
The tests are now passing
nil
as the configuration parameter. Ensure that this doesn't change the expected behavior of the tests.
🏁 Script executed:
#!/bin/bash
# Check the signature of newJsTracer function to understand what the third parameter is for
# Find the implementation of newJsTracer to verify the parameter's purpose
rg -A 5 "func newJsTracer" --glob "*.go"

Length of output: 392
Nil configuration safely retains current test behavior
After reviewing the implementation of
newJsTracer
ineth/tracers/js/tracer.go
, we see that while the function defaults a new context whennil
is passed, it does not actively use or modify the configuration parameter (cfg
). This confirms that providingnil
for the configuration does not alter the expected behavior of the tests. Currently, tests ineth/tracers/js/tracer_test.go
continue to pass without issues.core/state/statedb.go (5)
159-161
: Added getter for originalRoot fieldAdded a new accessor method
GetOriginRoot()
to provide access to the pre-state root. This is a good addition for encapsulation.
196-198
: Method renamed for consistencyThe method has been renamed from
IsZktrie
toIsZkTrie
for better naming consistency. The implementation now delegates to the trie database's method.
200-202
: Added method to check for path-based zkTrieNew method
IsPathZkTrie()
facilitates checking if the state is using a path-based zkTrie implementation. This supports the new path-based state scheme being implemented.
340-343
: Updated trie verification logicThe condition now uses the renamed
IsZkTrie()
method and improves variable naming fromaddr_s
toaddressKey
for better readability.
735-735
: Fixed state copying to include originalRootThe
Copy()
method now correctly copies theoriginalRoot
field, which ensures that copied states maintain their reference to the original state root. This is important for operations that rely on the original state for comparison or reference.core/blockchain_l2.go (2)
111-113
: Added state tracking variablesNew variables
current
andorigin
track the block number and original state root, which are used for the path-based state scheme optimization.
123-127
:❓ Verification inconclusive
Optimized state commit for path scheme
Added an optimization to use
CommitState
with path-specific parameters when using the path scheme, which skips unnecessary garbage collection. This should improve performance for path-based state management.Let's check how the path scheme is determined and the differences between commit methods:
🏁 Script executed:
#!/bin/bash
# Check implementation of triedb.Scheme() and related methods
echo "Checking path scheme implementation:"
rg "func \(\w+\) Scheme\(\)" --glob "*.go"
echo "\nChecking CommitState implementation:"
rg -A 10 "func \(\w+\) CommitState" --glob "*.go"

Length of output: 266
Action Required: Manual verification of the commit and scheme logic
It appears that our initial automated searches for the
Scheme()
andCommitState
implementations did not return any output. This lack of evidence means we can’t conclusively verify how the path scheme is determined or confirm that theCommitState
method indeed applies the appropriate optimization.To investigate further, please run the following shell script to get more information about these methods:
#!/bin/bash
echo "Searching for 'Scheme()' definitions across Go files:"
rg "func.*Scheme\\(\\)" --glob "*.go"

echo -e "\nSearching for 'CommitState' definitions across Go files:"
rg "func.*CommitState" --glob "*.go"

echo -e "\nSearching for occurrences of 'PathScheme':"
rg "PathScheme" --glob "*.go"
- How
triedb.Scheme()
is implemented and ensure it correctly distinguishes between path and non-path schemes.- The implementation details of
CommitState
to confirm that calling it with thefalse
flag correctly skips unnecessary garbage collection in path mode.The optimization appears logically sound based on the code snippet, but because our automated queries didn’t yield results, further manual scrutiny is needed to ensure that the intended performance improvements are applied correctly.
cmd/utils/flags.go (6)
856-859
: Good addition of a new state scheme configuration flag.The new
StateSchemeFlag
provides a useful configuration option for specifying how zktrie state is stored, allowing users to choose between 'hash' or 'path' schemes. This aligns well with supporting different storage optimization strategies.
861-864
: Good addition of disk sync configuration for path schema.This flag provides users with control over performance vs. durability tradeoffs when using the path schema, allowing them to enable synchronous flushing of the nodes cache to disk.
1732-1734
: Correctly handles PathDBSyncFlag configuration.This implementation properly applies the PathDBSync flag to the Ethereum configuration, enabling synchronous flushing of nodes to disk when requested.
1736-1743
: Appropriate state scheme validation and configuration.The code properly parses the state scheme from CLI/config and sets the necessary global variables. Setting
trie.GenesisStateInPathZkTrie
when using path scheme ensures consistent behavior throughout the system.
1794-1796
: Critical compatibility check for path schema.This validation ensures that the path scheme is only used when zktrie is enabled, preventing potential inconsistencies or data corruption. This is a necessary safety check for Morph networks.
Also applies to: 1814-1816
2186-2202
: Well-implemented helper function for state scheme validation.This function encapsulates the logic for parsing and validating state schemes from different sources, with good error handling and informative logging. It properly handles potential conflicts between CLI and config values.
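To make the precedence rules concrete, here is a minimal sketch of the kind of resolution such a helper has to perform; `resolveStateScheme` and the bare string values are illustrative stand-ins, not the actual identifiers in `cmd/utils/flags.go`:

```go
package main

import (
	"errors"
	"fmt"
)

// resolveStateScheme is a hypothetical stand-in for the validation helper
// described above: the CLI value wins only if it does not contradict the
// value recorded in the config file.
func resolveStateScheme(cliScheme, configScheme string) (string, error) {
	if cliScheme == "" && configScheme == "" {
		return "hash", nil // neither source set: fall back to the legacy hash scheme
	}
	if cliScheme != "" && configScheme != "" && cliScheme != configScheme {
		return "", fmt.Errorf("state scheme conflict: CLI %q vs config %q", cliScheme, configScheme)
	}
	scheme := cliScheme
	if scheme == "" {
		scheme = configScheme
	}
	if scheme != "hash" && scheme != "path" {
		return "", errors.New("state scheme must be 'hash' or 'path'")
	}
	return scheme, nil
}

func main() {
	s, err := resolveStateScheme("path", "")
	fmt.Println(s, err) // path <nil>
}
```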
core/state_transition.go (3)
358-363
: Good addition of transaction-level tracing.This change appropriately adds transaction start and end tracing when debug mode is enabled, capturing gas information at both points. This will significantly improve debugging capabilities for transaction execution.
378-384
: Improved handling of L1 message transactions with insufficient gas.Instead of failing L1 message transactions with insufficient gas, this change allows them to be included in the block but to fail during execution. This provides better UX by making error handling more predictable and consistent.
389-389
: Proper exclusion of L1 message transactions from balance checks.This change correctly excludes L1 message transactions from the pre-transfer balance check, which is necessary since L1 message transactions have different fund availability semantics.
eth/tracers/native/noop.go (2)
38-39
: Improved tracer initialization with context support.Updating the
newNoopTracer
function to accept a context and return an error aligns with good error handling practices and provides more flexibility for initialization.
71-73
: Added transaction-level tracing methods.The addition of
CaptureTxStart
andCaptureTxEnd
methods completes the tracer interface to support transaction-level tracing, which is consistent with the changes tostate_transition.go
and enhances the overall tracing capabilities.core/genesis.go (3)
210-214
: Effective state initialization optimization.This code efficiently checks if a state is already initialized for the genesis header by verifying the existence of a state ID for the header root when using path-based zktrie. This avoids redundant initialization work.
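A hedged sketch of that short-circuit, with the accessor reduced to a stand-in interface (the real code goes through the rawdb helpers; only the control flow is meant to carry over):

```go
package sketch

import "github.com/morph-l2/go-ethereum/common"

// stateIDReader is a stand-in for the real state-id accessor.
type stateIDReader interface {
	// ReadStateID returns the id assigned to a trie root, or nil if none exists.
	ReadStateID(root common.Hash) *uint64
}

// genesisStateCommitted reports whether the genesis state was already built.
// In the path scheme every persisted root gets a state id, so an existing id
// for the genesis root means initialization can be skipped.
func genesisStateCommitted(db stateIDReader, genesisRoot common.Hash) bool {
	return db.ReadStateID(genesisRoot) != nil
}
```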
216-233
: Robust error handling and fallback mechanism.This block provides proper error handling when state creation fails, with appropriate logging and fallback to genesis block creation. It ensures the system can recover from state initialization issues while maintaining data consistency.
308-311
: Improved flexibility in state database commitment.The addition of the
toDiskDb
flag and conditional use ofCommitGenesis
vsCommit
allows for appropriate state handling based on whether we're writing to a real database or an in-memory one. This distinction is important for performance and correctness.Also applies to: 358-362
cmd/geth/dbcmd.go (4)
73-73
: LGTM: Command registration looks good.The new
dbHbss2PbssCmd
command is appropriately added to the database subcommands list.
258-269
: LGTM: Command definition is comprehensive and follows the established pattern.The command definition properly includes:
- Action handler function
- Name and usage description
- Expected arguments pattern with clear documentation
- Required flags for proper operation
- Descriptive usage information
The command's purpose (converting Hash-Base trie nodes to Path-Base trie nodes) is clearly documented.
723-743
: Implementation looks good but consider validating the database state.The function initializes properly and checks for excess arguments. However, it only performs conversion when
lastStateID == 0
without explanation.Consider adding comments explaining the significance of
lastStateID == 0
condition and what it means if the conversion is skipped. This would improve maintainability.
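For illustration, one way the intent of that guard could be spelled out, with both accessors passed in as hypothetical callbacks rather than the repo's real functions:

```go
package main

import "fmt"

// maybeConvert documents the meaning assumed for lastStateID == 0: a zero
// persistent state id is taken to mean that no path-based state has ever been
// written, so the one-off hash-to-path conversion may run; any other value
// means a path state already exists and must not be overwritten.
func maybeConvert(readPersistentStateID func() uint64, convert func() error) error {
	if id := readPersistentStateID(); id != 0 {
		fmt.Printf("path state already present (state id %d), skipping conversion\n", id)
		return nil
	}
	return convert()
}

func main() {
	_ = maybeConvert(
		func() uint64 { return 0 }, // pretend no path state exists yet
		func() error { fmt.Println("converting"); return nil },
	)
}
```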
744-755
: Error handling for pruning and compaction operations is well-implemented.The function properly:
- Attempts to prune hash trie nodes
- Logs an error if pruning fails
- Attempts to compact the trie nodes
- Logs an error if compaction fails
This ensures that issues during post-conversion cleanup are properly reported.
eth/tracers/js/tracer.go (1)
428-428
: Function signature updated properly to support configuration options.The
newJsTracer
function has been updated to accept ajson.RawMessage
parameter for configuration, aligning with the changes in the tracer framework.eth/tracers/internal/tracetest/calltrace_test.go (3)
187-187
: Test code properly updated to match new tracer API.The test has been correctly updated to pass
nil
as the third parameter totracers.New()
, matching the updated function signature.
305-305
: Benchmark code correctly updated to match new tracer API.The benchmark function has been properly updated to pass
nil
as the third parameter totracers.New()
.
375-375
: Test function correctly updated to match new tracer API.The test function has been properly updated to pass
nil
as the third parameter totracers.New()
.eth/tracers/tracers.go (2)
46-46
: Type signature appropriately updated to support configuration options.The
lookupFunc
type signature has been modified to accept ajson.RawMessage
parameter, enabling the passing of configuration data to tracers.
66-68
:❓ Verification inconclusive
New function updated to pass configuration to tracers correctly.
The
New
function now accepts ajson.RawMessage
parameter and correctly passes it to the lookup functions. This allows tracers to be configured with JSON data at creation time.However, there's no handling for the case where
cfg
is nil. Consider whether some tracers might expect a default empty object rather than nil.
🏁 Script executed:
#!/bin/bash
# Check if there are any tracer implementations that might have issues with nil cfg
rg "if\s+cfg\s*==\s*nil" --type go

Length of output: 174
Action: Verify nil-config handling for tracers
The updatedNew
function ineth/tracers/tracers.go
correctly passes the JSON configuration to the lookup functions, which is a positive change. However, it currently does not address the scenario wherecfg
is nil. I found that some tracer implementations (e.g., incore/rawdb/accessors_metadata.go
andcore/vm/runtime/runtime.go
) include explicit checks for a nil configuration. Please review whether every tracer can safely handle a nilcfg
or if it's preferable to default to an empty JSON object for consistency.eth/tracers/native/gen_callframe_json.go (3)
1-13
: Code appears well-structured with proper imports and package declaration.The file is auto-generated by gencodec which is a standard approach for generating JSON marshaling/unmarshaling code in Go Ethereum. The imports are comprehensive and include all necessary dependencies.
16-48
: The MarshalJSON implementation looks correct and comprehensive.The implementation properly handles all callFrame fields by creating a temporary struct with appropriate JSON tags and converting types as needed. The approach of using a temporary struct for marshaling is a common pattern in Go for controlling the JSON output format.
The exclusion of the
Type
field from JSON output (usingjson:"-"
) while adding a derivedTypeString
field is a good approach for providing readable type information.
50-107
: UnmarshalJSON implementation correctly handles optional fields.The code properly checks for the presence of each field before assigning values, which is important for partial JSON objects. The pointer fields in the temporary struct allow for detecting missing fields.
One small observation is that all fields are treated as optional with their own nil checks, which is a safe approach but could potentially be optimized for required fields.
triedb/pathdb/errors.go (3)
1-16
: License header and package declaration look good.The standard Go Ethereum license header is correctly included, and the package declaration is appropriately set.
19-22
: Appropriate imports.The file only imports the standard errors package, which is sufficient for the error definitions.
23-43
: Error definitions are well documented and follow best practices.The error variables are clearly named with the
err
prefix following Go conventions. Each error has a descriptive comment explaining when it occurs, which is excellent for maintainability.The error messages are concise yet descriptive, making them useful for debugging. The grouping of related errors in a single file is a good organizational choice.
eth/tracers/api.go (8)
23-23
: Addition of json import is appropriate.This import is necessary for the new
json.RawMessage
type used in the TraceConfig struct.
177-180
: Good addition of TracerConfig field with clear documentation.The addition of the
TracerConfig
field as ajson.RawMessage
type provides flexibility for tracer-specific configuration. The comment clearly explains its purpose which helps with code maintenance.
184-186
: Improved struct design through composition.Embedding
TraceConfig
inTraceCallConfig
is a cleaner approach that avoids duplication and makes maintenance easier. This is a good refactoring that follows Go's composition over inheritance principle.
900-903
: Simplified reference to embedded config.The code now directly references the embedded
TraceConfig
which is cleaner than the previous approach of manually assigning each field.
918-930
: Improved tracer initialization with default configuration.The code now ensures a non-nil config by providing a default empty configuration when none is provided. This defensive programming approach prevents nil pointer dereferences.
The default tracer is now explicitly set to the struct logger, making the code's behavior more clear and predictable.
931-935
: Tracer creation now utilizes the new TracerConfig parameter.The code now passes the
TracerConfig
to the tracer constructor, allowing for more flexible and configurable tracers.
939-955
: Robust timeout handling with proper context cancellation.The implementation now correctly handles timeouts by:
- Parsing the timeout duration from config
- Creating a context with timeout
- Using a goroutine to monitor for timeout
- Properly canceling the EVM execution on timeout
- Ensuring cleanup with deferred cancel
This is a significant improvement in robustness for long-running traces.
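The underlying pattern, reduced to the standard library plus a stand-in cancel hook (the real code cancels the EVM and stops the tracer; this sketch only shows the context wiring):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// traceWithDeadline mirrors the pattern described above: wrap the trace in a
// context with a timeout and cancel the execution from a watcher goroutine.
// cancelEVM stands in for the evm.Cancel / tracer stop calls in the real code.
func traceWithDeadline(timeout time.Duration, cancelEVM func(), run func() error) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel() // always release the timer, even on the happy path

	go func() {
		<-ctx.Done()
		if ctx.Err() == context.DeadlineExceeded {
			cancelEVM() // abort the long-running trace instead of leaking it
		}
	}()
	return run()
}

func main() {
	err := traceWithDeadline(50*time.Millisecond,
		func() { fmt.Println("trace aborted") },
		func() error { time.Sleep(10 * time.Millisecond); return nil })
	fmt.Println("done:", err)
}
```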
967-971
: Cleaner tracer result handling.The code now uses a type assertion to check if the tracer is a StructLogger, and if so, sets the ResultL1DataFee field. This is a more straightforward approach than the previous implementation.
eth/tracers/native/tracer.go (6)
17-31
: Documentation comments are well-formatted and informative.The updated comments provide clear guidance on how to add new native tracers, including the registration process. This is helpful for future contributors.
34-35
: Added json import for the new configuration parameter.This is necessary for the
json.RawMessage
type used in the updated constructor function signature.
45-46
: Good addition of a type alias for constructor functions.Creating the
ctorFn
type alias makes the code more readable and maintainable by clearly defining the expected signature for tracer constructors. This is especially useful as the signature now includes the configuration parameter.
60-60
: Updated ctors map type to use the new constructor signature.The map is now correctly typed to use the new
ctorFn
type alias, ensuring consistency throughout the codebase.
63-68
: Updated register function to use the new constructor signature.The function correctly uses the new
ctorFn
type. The associated make call is also updated to use the new type.
71-77
: Lookup function now handles the new configuration parameter.The function signature is updated to accept a
json.RawMessage
configuration parameter, and it correctly passes this to the constructor when creating a tracer. This change is consistent with the overall refactoring to support configurable tracers.MakefileEc2.mk (2)
32-39
: Verify the S3 paths and naming schema
Before merging, confirm that these S3 locations are correct and still valid, especially if they differ from the mainnet paths.
48-54
: Check for potential parallel execution issues
If these targets might be run in parallel (make -j
), there is a risk of clobbering themorph-geth.tar.gz
ormorph-nccc-geth.tar.gz
file if two targets run simultaneously. Consider using unique filenames or concurrency locks to avoid collisions.eth/tracers/native/call.go (19)
28-30
: New ABI and hexutil imports
Imports foraccounts/abi
andcommon/hexutil
are necessary additions for handling revert reasons and hex encoding. The usage is appropriate.
33-33
: Added logging import
Addinggithub.com/morph-l2/go-ethereum/log
is consistent with the rest of the repository. Looks good.
36-36
: Code generation directive
Thego:generate
comment is maintained properly. Ensure that the command is still valid in your CI environment.
42-49
: New callLog struct
IntroducingcallLog
to record address, topics, data, and position is a clean approach. If logs can become large, ensure memory usage and data handling are well-managed.
52-63
: Extended callFrame with revert reason and logs
AddingRevertReason
,Logs
, and makingType
an actualvm.OpCode
provides richer trace information. This makes debugging easier and keeps the data structure cohesive.
68-70
: TypeString method
Neatly provides a string representation of thevm.OpCode
. This is straightforward and helpful for debugging or logging.
72-74
: failed helper method
Checks for any recorded error. This is a simple, direct approach that fits well with the call frame’s state.
98-106
: callFrameMarshaling
These JSON definitions ensure correct marshaling of the newly added fields. No issues noted.
108-114
: callTracer struct
Embedding anoopTracer
, storing config, and using anatomic.Bool
for interruption is clean. The concurrency approach is appropriate.
116-119
: Tracer configuration
OnlyTopCall
andWithLog
flags give fine-grained control over the depth of tracing and logging. Good addition for configurability.
123-134
: Initialize callTracer with config
Gracefully unmarshals JSON intocallTracerConfig
and returns the tracer. Be sure to test with invalid or incomplete JSON to confirm error handling.
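A self-contained sketch of that initialization path; the struct mirrors the two flags named above, and the handling of nil versus malformed JSON is exactly the behaviour worth covering in tests:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// callTracerConfig mirrors the OnlyTopCall/WithLog flags discussed above.
type callTracerConfig struct {
	OnlyTopCall bool `json:"onlyTopCall"`
	WithLog     bool `json:"withLog"`
}

// unmarshalConfig keeps zero defaults for a nil payload and surfaces malformed
// JSON as an error instead of silently ignoring it.
func unmarshalConfig(cfg json.RawMessage) (callTracerConfig, error) {
	var out callTracerConfig
	if cfg == nil {
		return out, nil // no config supplied: fall back to defaults
	}
	if err := json.Unmarshal(cfg, &out); err != nil {
		return out, fmt.Errorf("invalid call tracer config: %w", err)
	}
	return out, nil
}

func main() {
	c, err := unmarshalConfig(json.RawMessage(`{"onlyTopCall":true}`))
	fmt.Println(c, err)
	_, err = unmarshalConfig(json.RawMessage(`{"onlyTopCall":`)) // truncated JSON
	fmt.Println(err)
}
```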
137-145
: CaptureStart
Properly initializes the top-level callFrame. Make sure that the gas limit or block context is properly supplied if needed downstream.
147-147
: CREATE opcode detection
This simple conditional ensures the correct frame type is assigned for contract creation. Straightforward.
153-153
: Finalizing the top-level call
CallingprocessOutput
on the main call frame is consistent with the approach for subcalls.
209-211
: Skipping subcalls
Short-circuiting additional calls whenOnlyTopCall
is true is an elegant way to reduce noise in the trace.
217-224
: CaptureEnter
Appending new call frames to the stack is straightforward, though note that subcall data can grow deep for complex contract calls.
244-245
: Assigning GasUsed & error
Storing results incall.GasUsed
and handling errors viaprocessOutput
parallels the top-level approach. Good consistency.
249-259
: CaptureTxEnd
Conditionally clearing logs on failure helps produce more accurate final traces. This is a good approach to reflect EVM log discarding on revert.
281-292
: Recursive log clearing
clearFailedLogs
visits each callframe insideCalls
to discard logs upon a failure. This is a tidy depth-first approach.eth/tracers/native/4byte.go (4)
49-51
: Embedding noopTracer & using atomic.Bool
Inheriting fromnoopTracer
and usingatomic.Bool
forinterrupt
is a cleaner, more modern approach for concurrency control.
58-62
: newFourByteTracer signature
Accepting a context and a JSON config is consistent with the other tracers. The function properly returns(tracers.Tracer, error)
now.
96-96
: Interruption check
Usingt.interrupt.Load()
is thread-safe and straightforward for an early return if tracing is halted.
127-127
: Stop method
Callingt.interrupt.Store(true)
ensures consistent early-exit behavior throughout tracing.core/rawdb/accessors_trie.go (2)
50-69
: Consider logging retrieval errors.Within
ReadAccountTrieNode
, the errors fromdb.Get
and subsequent parsing are handled by returning defaults (nil and empty hash). For better debuggability, consider logging these errors before returning to provide more context when node retrieval fails.
71-92
: Double-check the reversed hash usage.In
IsLegacyTrieNode
, the key check callsBitReverse
on the computed hash before comparison. Ensure that the reversed hash is indeed the correct format for legacy nodes. If this reversal is intentional, a clarifying comment would help prevent confusion for future maintainers.core/state/database.go (4)
125-130
: Ensure config consistency checks.In
NewDatabaseWithConfig
, bothzktrie
andpathZkTrie
depend onconfig != nil && config.Zktrie
. If user configurations become inconsistent later (e.g., toggling PathZkTrie while Zktrie is off), this could cause unexpected behavior. Consider validating these relationships or documenting that path hashing strictly requires Zktrie to be enabled.
142-148
: Clear separation of path vs. hash-based tries.In
OpenTrie
, the path-based branch is clearly separated from the classic secure trie initialization. This structure is straightforward and easier to maintain. Keep an eye on future expansions (e.g., additional schemes) to ensure the function remains readable.
165-173
: Validate prefix usage for storage tries.When opening storage tries in path mode, the code reverses the address hash to build a prefix. Confirm that this reversed approach matches your upstream protocol. A clarifying comment or reference to a spec explaining why reversal is used can help future maintainers.
195-198
: Good extensibility for CopyTrie.Including the
case *trie.PathZkTrie:
branch inCopyTrie
is a clean, extensible approach for supporting new trie types. No immediate issues spotted.core/blockchain.go (3)
48-49
: New triedb imports utilized appropriately.The imports
hashdb
andpathdb
are introduced to differentiate hashing vs. path-based storage. This separation is an important design pattern for future expansions. No immediate issues spotted.
157-181
: Centralizetrie.Config
construction.The new
triedbConfig
method encapsulates the logic to generate different database configs depending on Zktrie and path-based flags, improving modularity. Ensure future changes (e.g., new fields) remain consistent here to avoid misconfiguration.
275-280
: VerifyMorph.ZktrieEnabled()
alignment.When creating the blockchain,
state.NewDatabaseWithConfig(db, cacheConfig.triedbConfig(chainConfig.Morph.ZktrieEnabled()))
is used. Ensure that the “enabled” logic matches the user’s intendedStateScheme
incacheConfig
to prevent mismatches between command-line flags and code-based toggles.core/state/state_prove.go (3)
56-62
: Ensure correct address hashing logic.You are conditionally hashing the address with Keccak when not using path-based ZkTrie. While this is logically sound, please confirm that all consumers of
GetStorageTrieForProof
are aware of the potential difference in howaddrHash
is derived between path-based and non-path-based modes.
75-75
: Validate new parameter usage.The new third parameter
s.originalRoot
inOpenStorageTrie
is handled correctly via the returned error. Confirm any downstream code that relies on this newly created trie properly verifies or storess.originalRoot
.
90-90
: Consistent naming for ZkTrie checks.Renaming
s.IsZktrie()
tos.IsZkTrie()
ensures consistent naming. Confirm that callers and references in other files are likewise updated.trie/zk_trie_test.go (3)
41-42
: Good practice enabling ZkTrie in the config.Providing
Zktrie: true
while constructing the database ensures the correct environment for ZkTrie tests. No concerns here.
190-190
: Initiate commit with caution.
zkTrie.db.Commit(common.Hash{}, true, nil)
is unconditionally called. Ensure your test environment is set up to handle the final commit state, especially regarding concurrency with any open references.
220-220
: Double-check final commit step.Similar to line 190, confirm the final commit doesn’t conflict with any read operations or partial states.
trie/hbss2pbss.go (4)
1-28
: Struct-driven approach is clear.Using a dedicated struct
Hbss2Pbss
to encapsulate relevant fields improves readability and maintainability. No immediate concerns.
30-52
: Handle potential nil head block gracefully.If
ReadHeadBlock
returnsnil
, the code will panic onheadBlock.Root()
. Consider early checks or error returns to avoid runtime panics.
96-115
: Robust error checks for genesis retrieval.You properly verify the existence of the genesis hash and block. This is crucial to avoid partial state issues. The approach looks correct.
138-146
: Differentiating account vs. storage trie node writes.Using separate rawdb methods is clean. Confirm that code analyzers or tests verify the presence/structure of each node once written.
triedb/pathdb/layertree.go (6)
34-37
: Good concurrency design on layerTree
Using an RWMutex to protect the shared map is appropriate. Ensure all access consistently goes through the locking mechanism.
39-44
: Constructor chaining via reset
Callingtree.reset(head)
insidenewLayerTree
helps maintain a clean initialization pattern.
87-112
: Reinforce repeated layer handling
The code logs “Skip add repeated difflayer” and returns. Verify whether this logic is correct if there's a possibility of new data being inserted under the same root in the future.
114-219
: Check partial flattening safety
If an error occurs duringdiff.persist(false)
, the flattening process might partially update the tree. Consider whether a rollback or a consistent recovery path is needed to avoid mismatched states.
221-239
: Type assertion for diskLayer
bottom()
ends withcurrent.(*diskLayer)
. Ifcurrent
is not adiskLayer
, this will panic. Confirm that the code path guarantees a disk layer at the bottom.
241-282
: Robust fork detection
front()
handles forks by returning early if multiple diff children exist. This approach appears solid for capturing a single chain tip.triedb/pathdb/nodebuffer.go (2)
44-59
: Verify initial size calculation
newNodeBuffer
sums up the byte lengths of each node. Ensure further logic remains consistent with this initial calculation, especially if new nodes are inserted concurrently.
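The accounting convention in question, reduced to a stand-in node type; the point is that the same `len(path) + len(blob)` formula has to be applied on every later insert or eviction for the running total to stay truthful:

```go
package main

import "fmt"

// trienode is a minimal stand-in for the cached node type.
type trienode struct {
	path []byte
	blob []byte
}

// sizeOf charges every entry for both its path (map key) and its RLP blob,
// matching the initial-size calculation referred to above.
func sizeOf(nodes map[string]trienode) uint64 {
	var size uint64
	for _, n := range nodes {
		size += uint64(len(n.path) + len(n.blob))
	}
	return size
}

func main() {
	nodes := map[string]trienode{
		"00": {path: []byte{0x00}, blob: []byte{0xde, 0xad, 0xbe, 0xef}},
		"01": {path: []byte{0x01}, blob: []byte{0xca, 0xfe}},
	}
	fmt.Println("initial buffer size:", sizeOf(nodes)) // 1+4 + 1+2 = 8 bytes
}
```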
138-171
: Flush consistency
flush
writes all data atomically, but if the batch write fails, the in-memory state might differ from disk. Consider how to ensure a rollback or guarantee consistency in failure modes.trie/database.go (18)
37-40
: Verify external library versions.
It's good to see these imports for zktrie support. Ensure that pinned versions in go.mod or vendor match these references to avoid conflicts.
102-103
: Document concurrency strategy.
These new fields (lock
,backend
) are critical for concurrency. A short doc comment can clarify how the lock is expected to be used.
288-294
: Validate default config usage.
The additional fields for Zktrie and PathZkTrie are helpful. Confirm that code reading them handles unset or zero values properly.
309-311
: Plan for partial failures.
When implementing multi-node commits, ensure errors are handled properly—particularly if a portion of the writes fails.
312-317
: Effective caching approach.
Size tracking in the backend is good. Ensure the final implementation factors in concurrency around these metrics.
319-321
: Concurrency consideration.
CommitState
might be called simultaneously for different roots. Confirm your concurrency model or gating logic.
505-507
: Unsupported method.
The error message is clear. If there's a path whereNodes()
is essential, consider either an implementation or a defensive panic in code.
526-533
: Reference handling for Zktrie.
EnsureReference
andDereference
handle Zktrie properly. Test fallback logic heavily for non-zktrie backends.
740-740
: Cap method conclusion looks fine.
No further issues. Just ensure all error paths have test coverage.
742-748
: Mind concurrency for CommitGenesis.
CommitGenesis
callsdb.Commit
, which can change internal structures. Ensure no parallel commits cause data races.
785-785
: Silent early return.
If there's an underlying error possible, consider logging or returning it. Currently it just returns on nil.
793-793
: Preimages concurrency.
db.preimages.commit(true)
may race if another usage is committing. Confirm safe concurrency or add locking in upstream calls.
993-994
: Check zero-hash usage.
Returning an empty hash forZktrie
is fine, but ensure it won't conflict with code that expects a sentinel different from zero.
997-997
: Expected empty root for regular trie.
Returning the canonicalemptyRoot
is consistent with existing code.
1012-1022
: Order of commits for preimages.
If the commit fails, there's a risk of partial writes. Verify that the partial data in preimages doesn't cause orphan references.
1045-1053
: Potential concurrency for Preimage.
Accessingdb.preimages
should follow the same locking patterns as other mutable fields to prevent data races.
1054-1068
: Journal fallback.
This is only supported by a path-based DB. Ensure upstream code checks for and handles the unsupported scenario.
1091-1100
: Graceful fallback.
ForGet
, returning an error for unsupported backends is fine. If there's a default fallback path, document it.triedb/pathdb/difflayer.go (10)
1-16
: License header.
The license usage is correct and consistent.
17-26
: Import set looks good.
Everything is straightforward. Ensure version consistency in go.mod for all imports.
83-88
: Check for nil origin.
originDiskLayer
returnsdl.origin
. If the parent is not a disk layer at all, ensure we handle that gracefully.
89-94
: Safely updating the origin.
A write lock is used properly. This is good practice to avoid race conditions.
95-99
: Simple accessor.
TherootHash()
method is straightforward.
101-105
: stateID accessor
No immediate concerns. Minimal logic is fine.
106-114
: Retrieving parent with read lock.
This is correct. Just ensure the caller doesn't rely on the parent pointer after unlocking.
137-141
: Delegation to node method.
TheNode
method is basically a pass-through. Implementation looks good.
149-168
: Flattening logic.
persist
recurses to the bottom-mostdiffLayer
and merges. For large reorgs or multi-layer diffs, watch out for performance.
170-178
: Minimal error handling
diffToDisk
mainly callsdisk.commit
. If commit fails, consider passing the error upward with context.trie/path_zk_trie.go (9)
1-16
: License header.
All standard disclaimers are present.
17-30
: Import set.
Usingscroll-tech/zktrie
plusposeidon
. Verify these dependencies are pinned in go.mod for consistent builds.
34-38
: Embedded struct.
Embedding*pathtrie.ZkTrie
is convenient. Just confirm method collisions won't occur.
40-42
: Global init side effect.
InitHashScheme
is set globally. If there's a test that changes the scheme, it might conflict.
66-72
: TryUpdateAccount approach.
Straightforward. The code does a quick key check, then updates the ZkTrie. No major concerns.
86-92
: Byte size checks.
sanityCheckByte32Key
helps guard against wrong key sizes, but watch out for value size constraints.
129-135
: Hash function.
We decode the underlyingZkTrie.Hash()
intocommon.Hash
. Straightforward approach.
137-141
: Ensure deep copy.
ZkTrie.Copy()
must produce an actual independent structure if concurrency is possible.
166-198
: Proof logic.
Prove
function writes partial nodes intoproofDb
. Verify that we won't leak data or degrade performance with large proofs.triedb/pathdb/disklayer.go (9)
1-32
: Well-structured licensing and imports.
No issues found in the license header and imports; everything appears consistent and necessary.
70-78
: Initialization logic appears correct.
Nothing critical stands out inNewTrieNodeBuffer
; the choice between sync and async buffers is clearly logged.
79-88
: Struct definition is clear.
ThediskLayer
struct fields and locking strategy look standard for this layer-based design.
90-105
: Constructing diskLayer with conditional cache creation is fine.
Reusing or creating a newfastcache.Cache
based onCleanCacheSize
is appropriate.
143-185
: Node retrieval flow is thorough.
The logic for checking the in-memory buffer, clean cache, and then disk is well-structured and thread-safe. No immediate issues spotted.
187-191
: Update method is straightforward.
Returning a newdiffLayer
is a simple pass-through.
193-226
: Commit transition looks correct.
Merging the diff’s nodes, writing state ID, marking the old layer stale, and returning a freshdiskLayer
aligns with typical layered DB flows. Error handling and locks appear appropriate.
228-237
: Buffer resizing logic is appropriate.
Checks for staleness before delegating to the underlying buffer. No further concerns.
239-249
: Size retrieval is direct and clear.
No issues regarding concurrent usage here, as it’s locked reads.triedb/pathdb/asyncnodebuffer.go (8)
1-30
: Package, imports, and base type assignment all look good.
No evident issues in these initial lines.
31-58
: Async buffer creation is straightforward.
newAsyncNodeBuffer
initializescurrent
andbackground
caches properly. The memory usage calculation is correct.
59-73
: Commit method handles potential immutability conflict gracefully.
Should failures occur, the code logs a critical error and aborts, which matches typical geth patterns.
75-86
:setSize
not supported in async mode.
Returning an error is consistent with the double-buffer design choice.
79-95
: Reset logic properly clears both buffers.
Locks are held for the duration, ensuring memory usage is fully reset.
145-151
: Stopping background flushing is well-handled.
Uses an atomic boolean and a simple wait loop to ensure no concurrency issues.
153-177
: Merging both buffers for getAllNodes.
The approach is correct, though any merge errors lead tolog.Crit
. Typical for irrecoverable states in geth.
178-355
:nodecache
architecture is cohesive.
The fields, commit logic, size tracking, and flush method are generally well-structured. Handling of immutability is consistent with async node buffer design.triedb/pathdb/journal.go (12)
1-39
: Header, imports, and error definitions look standard.
No immediate issues. Errors are properly named and used.
41-49
: Journal version constant is straightforward.
Keeping a separate version constant can help manage future compatibility changes.
50-55
:journalNode
struct naming is clear.
Represents a single persisted node with path and RLP blob.
56-67
: Journal interfaces are minimal and sufficient.
They define essential operations: basic I/O plus size and close methods.
68-151
: File-based and KV-based Journal Writers/Readers.
Implementation is direct and consistent. Non-handling of possibleClose()
errors is typical in go-ethereum code.
152-174
: Factory functions for readers/writers.
newJournalWriter
&newJournalReader
separate file-based from KV-based journaling. Error handling for missing journal is correct.
176-243
:loadJournal
&loadLayers
: robust checks for version & root continuity.
Discarding mismatched journal when the persistent state doesn’t align is sensible. Logging is adequately descriptive.
244-302
:loadDiskLayer
replays nodes from the journal properly.
Careful usage of RLP decoding, plus verifying state ID range. The optional SHA-256 check forJournalFileType
is a solid integrity measure.
304-363
:loadDiffLayer
builds multiple diffs sequentially.
Decodes and merges all diff journals until EOF. Logging the loaded layer is helpful for debugging.
365-414
:diskLayer.journal
ensures the layer is fresh before encoding.
Capturing all “unwritten” nodes into RLP plus an optional hash sum is well-structured.
416-462
:diffLayer.journal
includes parent journaling.
Recursively journals the parent first, then the current layer. The approach is consistent with incremental layering.
464-520
: Final commit inDatabase.Journal
elegantly persists the entire hierarchy.
Stopping any async flush, encoding version & root, then marking the DB read-only is a logical flow to prevent further mutations.core/rawdb/accessors_state.go (7)
20-21
: Good addition of the binary package
Theencoding/binary
import is needed for consistent encoding/decoding of numeric fields.
112-120
: Verify behavior on database error
ExistsAccountTrieNode
returnsfalse
if an error occurs indb.Has
. This can mask underlying DB issues. Consider handling DB errors distinctly or logging them for diagnostic purposes.
136-144
: Differentiate DB errors from non-existence
ExistsStorageTrieNode
returnsfalse
ifdb.Has
fails. This might conflate “not found” with a database failure. Consider broader error reporting as an alternative.
160-166
: Silent DB read error
ReadTrieJournal
ignores any read error and simply returns nil. If a failure occurs, it is treated the same as absence of data. Consider differentiating these cases to aid debugging.
183-190
: Confirm 0 as a fallback value
ReadPersistentStateID
returns 0 if the data is missing or invalid. Ensure that 0 is never confused with a valid persistent state ID, or consider returning an error to distinguish the scenarios.
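One way to make that distinction explicit, sketched against a plain map instead of the real key-value store; the `(value, found)` shape is the suggestion, not the current signature:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// readPersistentStateID returns an explicit found flag instead of collapsing
// "missing" and "zero" into the same value. The map stands in for the database.
func readPersistentStateID(db map[string][]byte, key string) (uint64, bool) {
	blob, ok := db[key]
	if !ok || len(blob) != 8 {
		return 0, false // absent or malformed: distinguishable from a stored 0
	}
	return binary.BigEndian.Uint64(blob), true
}

func main() {
	db := map[string][]byte{}
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, 0) // a legitimately stored id of 0
	db["persistentStateID"] = buf

	id, ok := readPersistentStateID(db, "persistentStateID")
	fmt.Println(id, ok) // 0 true  (stored zero)
	id, ok = readPersistentStateID(db, "missing")
	fmt.Println(id, ok) // 0 false (never written)
}
```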
199-207
: Check for silent read errors in ReadStateID
A DB error will return a nil pointer, the same as if the key does not exist. If this is desired behavior, please confirm; otherwise, separate the two scenarios.
225-231
: Review returned false on DB error
ExistsStateID
collapses DB errors and non-existent states into one case. Ensure this meets your needs for diagnosing any underlying I/O or storage issues.triedb/pathdb/database.go (13)
65-70
: JournalType scoping
EnsureJournalType
is used consistently (KV vs File) throughout. Mixed usage or accidental fallback can cause confusion when reading/writing journals.
114-123
: Double-check buffer size sanitization logic
sanitize()
setsDirtyCacheSize
toMaxDirtyBufferSize
if it’s too large. Also confirm whetherCleanCacheSize
is subject to similar constraints or if that’s intentional.
135-158
: Consider concurrency control for Database struct
While async.RWMutex
is present, ensure that code paths reading/writingdirties
andtree
are covered adequately to avoid data races, particularly if the node runs in multi-threaded contexts.
181-188
: Explain the Reader error paths
Reader
returns an error if the layer doesn’t exist. If the DB is read-only or not yet persisted, consider clarifying how that interacts with ephemeral layers.
190-207
: Be mindful of panics in CommitGenesis vs. normal commit
CommitGenesis
andCommitState
both handledb.dirties
but in different ways. Confirm that differences in usage of the batch, error handling, andWriteStateID
calls are correct and consistent.
242-255
: Ensure correct transitions to read-only
Close()
setsreadOnly = true
. Validate whether any in-flight commits or flushes are possible after closure, as these would then fail (modifyAllowed
check).
271-282
: Distinguish uninitialized from read-only states
Initialized
method just checks any layer’s root hash not beingEmptyRootHash
. Confirm it’s the correct measure for “initialized,” especially if read-only or partial layers exist.
284-295
: Capped buffer size changes at runtime
SetBufferSize
updatesdb.bufferSize
and readjusts the bottom layer. Confirm no disruptions occur if layers above are already using old buffer sizes.
302-307
: Document concurrency usage of Head
Head
just returns the front root viadb.tree.front()
. If reorgs occur concurrently, confirm whether locking around thetree
operations is robust.
309-316
: Read-only check might need more context
modifyAllowed
returnserrDatabaseReadOnly
ifdb.readOnly
is set, but it’s not immediately clear whereerrDatabaseReadOnly
is defined in this snippet. Ensure it’s exposed or documented.
339-346
: Validate config vs. actual journal usage
DetermineJournalTypeForWriter
referencesconfig.JournalFile
. Confirm it’s always aligned with the chosen journaling approach, especially if the config toggles after initialization.
361-375
: DeleteTrieJournal safety
Removing the local file plus the KV-based journal in one go ensures cleanliness, but confirm that partially successful deletes don’t leave the system in a confusing state.
377-385
: Consider basic validation before Put
Put
compresses the key withCompactStorageTrieNodeKey
and directly stores it. Ifk
is invalid, or if concurrency conflicts arise, the code might silently fail or overwrite unexpected data.
func (db *odrDatabase) OpenStorageTrie(addrHash, root, origin common.Hash) (state.Trie, error) {
	return &odrTrie{db: db, id: StorageTrieID(db.id, addrHash, root)}, nil
}
💡 Verification agent
🧩 Analysis chain
Updated OpenStorageTrie signature with unused origin parameter.
The method signature has been updated to include a new origin
parameter to match interface changes elsewhere, but the implementation doesn't actually use this parameter yet. Consider updating the implementation to utilize this parameter if needed.
🏁 Script executed:
#!/bin/bash
# Check for other implementations of OpenStorageTrie that use the origin parameter
# Find all implementations of OpenStorageTrie with the new parameter
rg -B 2 -A 5 "func.*OpenStorageTrie.*origin common.Hash" --type go
# Find the interface that defines this method
rg -B 5 -A 15 "interface.*OpenStorageTrie" --type go
Length of output: 1036
Attention: Inconsistent Handling of the origin
Parameter in OpenStorageTrie
- In
core/state/database.go
, theorigin
parameter is actively used in creating the trie (viatrie.NewPathZkTrie
), whereas inlight/trie.go
the parameter is present only in the signature and not utilized. - Please update the
odrDatabase
implementation inlight/trie.go
to either make use of theorigin
parameter (if the downstream logic requires it) or add a clarifying comment explaining why it is intentionally unused.
@@ -190,7 +191,7 @@ This command dumps out the state for a given block (or latest, if none provided)
 // the zero'd block (i.e. genesis) or will fail hard if it can't succeed.
 func initGenesis(ctx *cli.Context) error {
 	// Make sure we have a valid genesis JSON
-	genesisPath := ctx.Args().First()
+	genesisPath := ctx.GlobalString(utils.DataDirFlag.Name)
💡 Verification agent
🧩 Analysis chain
Changed source of genesis path - possible regression
The change from using ctx.Args().First()
to ctx.GlobalString(utils.DataDirFlag.Name)
for retrieving the genesis path is significant. This changes the expected input from a direct argument to a global flag value, which might cause confusion or backward compatibility issues.
This change appears to be incorrect. The DataDirFlag
typically points to the Ethereum data directory, not the genesis JSON file path. Check whether this change was intentional or if it should continue to use the first argument as the genesis file path:
🏁 Script executed:
#!/bin/bash
# Check if the usage of DataDirFlag instead of the first argument is intentional
# by examining the help text and other uses of DataDirFlag in the codebase
# First, look at the help text of the initCommand
grep -A 10 "ArgsUsage" cmd/geth/chaincmd.go | head -n 5
# Then, check how DataDirFlag is described in flags.go
grep -A 5 "DataDirFlag" cmd/utils/flags.go | grep "Usage"
# Look for similar patterns in other command implementations
grep -A 2 "ctx.Args().First()" cmd/geth/chaincmd.go
Length of output: 872
Action Required: Revert Genesis Path Retrieval to Use Command Argument
The current change uses the data directory flag (ctx.GlobalString(utils.DataDirFlag.Name)
) to obtain the genesis file path, which contradicts the command’s documented usage of <genesisPath>
. Other parts of the codebase consistently use ctx.Args().First()
for file path retrieval, confirming that the intended behavior is to use the first argument.
- File:
cmd/geth/chaincmd.go
at line 194 - Issue: Using
DataDirFlag
(intended for specifying the data directory) as the source for the genesis JSON file path is misleading and may break backward compatibility. - Snippet Recommendation:
-	genesisPath := ctx.GlobalString(utils.DataDirFlag.Name)
+	genesisPath := ctx.Args().First()
Please review whether this change was intentional. If not, revert to using ctx.Args().First()
to align with both the command’s usage documentation and similar implementations elsewhere in the code.
func (t *jsTracer) CaptureTxStart(gasLimit uint64) {}

func (t *jsTracer) CaptureTxEnd(restGas uint64) {}
🛠️ Refactor suggestion
Transaction lifecycle hooks implementation is incomplete.
The CaptureTxStart
and CaptureTxEnd
methods are added but left empty. This means the tracer now conforms to the interface but doesn't actually implement the transaction lifecycle tracing functionality.
These empty methods should be implemented to properly capture transaction start and end events. Consider implementing them to collect relevant information such as:
func (t *jsTracer) CaptureTxStart(gasLimit uint64) {
// Store the gas limit
t.ctx["txGasLimit"] = gasLimit
// Call the JavaScript tracer's txStart method if it exists
if t.vm.GetPropString(t.tracerObject, "txStart") {
t.vm.Pop()
t.vm.PushUint(gasLimit)
if _, err := t.call(true, "txStart"); err != nil {
t.err = wrapError("txStart", err)
}
}
}
func (t *jsTracer) CaptureTxEnd(restGas uint64) {
// Store the remaining gas
t.ctx["txRestGas"] = restGas
// Call the JavaScript tracer's txEnd method if it exists
if t.vm.GetPropString(t.tracerObject, "txEnd") {
t.vm.Pop()
t.vm.PushUint(restGas)
if _, err := t.call(true, "txEnd"); err != nil {
t.err = wrapError("txEnd", err)
}
}
}
 	if stateObject == nil {
 		// still return a empty trie
 		addrHash := crypto.Keccak256Hash(addr[:])
-		dummy_trie, _ := s.db.OpenStorageTrie(addrHash, common.Hash{})
+		dummy_trie, _ := s.db.OpenStorageTrie(addrHash, common.Hash{}, common.Hash{})
🛠️ Refactor suggestion
Avoid discarding the error from OpenStorageTrie.
Currently, the error is ignored. If the operation fails, the code will proceed without any indication of the failure, which may complicate debugging.
- dummy_trie, _ := s.db.OpenStorageTrie(addrHash, common.Hash{}, common.Hash{})
+ dummy_trie, err := s.db.OpenStorageTrie(addrHash, common.Hash{}, common.Hash{})
+ if err != nil {
+ return nil, fmt.Errorf("failed to open dummy storage trie for %s: %v", addrHex, err)
+ }
 func makeTestZkTrie() (*Database, *ZkTrie, map[string][]byte) {
 	// Create an empty trie
-	triedb := NewZktrieDatabase(memorydb.New())
+	triedb := NewZkDatabase(memorydb.New())
🛠️ Refactor suggestion
Consider returning errors from NewZkTrie creation.
Line 50 discards the second return value from NewZkTrie
, and potential errors are ignored. Capture the error to avoid silent failures during test initialization.
-func makeTestZkTrie() (*Database, *ZkTrie, map[string][]byte) {
- triedb := NewZkDatabase(memorydb.New())
- trie, _ := NewZkTrie(common.Hash{}, triedb)
+func makeTestZkTrie() (*Database, *ZkTrie, map[string][]byte, error) {
+ triedb := NewZkDatabase(memorydb.New())
+ trie, err := NewZkTrie(common.Hash{}, triedb)
+ if err != nil {
+ return nil, nil, nil, fmt.Errorf("failed to create ZkTrie: %v", err)
+ }
trie/database.go
Outdated
func (db *Database) Commit(node common.Hash, report bool, callback func(common.Hash)) error {
	if db.backend != nil {
		zdb, ok := db.backend.(*hashdb.ZktrieDatabase)
		if !ok {
			return errors.New("not supported")
		}
		if db.preimages != nil {
			db.preimages.commit(true)
		}
		return zdb.CommitState(node, common.Hash{}, 0, report)
	}
|
Potential partial flush issue.
Commits from ephemeral states might skip some references, leading to partial data. Confirm that child references are fully resolved.
// Example approach:
- // silent partial flush
+ // log or handle partial references explicitly
Committable suggestion skipped: line range outside the PR's diff.
func (db *Database) Put(k, v []byte) error {
	if db.backend != nil {
		return db.backend.Put(k, v)
	}
	return nil
}
🛠️ Refactor suggestion
Return an error on nil backend.
Currently it silently returns nil. This can mask misconfiguration.
if db.backend == nil {
- return nil
+ return errors.New("Put called with no backend configured")
}
Committable suggestion skipped: line range outside the PR's diff.
// Close flushes the dangling preimages to disk and closes the trie database.
// It is meant to be called when closing the blockchain object, so that all
// resources held can be released correctly.
func (db *Database) Close() error {
	if db.preimages != nil {
		db.preimages.commit(true)
	}

	if db.backend != nil {
		return db.backend.Close()
	}
	return nil
}
Close operation reliability.
Ensure the database is in a clean state even if db.backend.Close()
fails. Possibly log or handle partial closure.
if db.backend != nil {
- return db.backend.Close()
+ err := db.backend.Close()
+ if err != nil {
+ log.Error("Failed to close backend", "err", err)
+ }
}
// Get returns the value for key stored in the trie.
// The value bytes must not be modified by the caller.
func (t *PathZkTrie) Get(key []byte) []byte {
	sanityCheckByte32Key(key)
	res, err := t.TryGet(key)
	if err != nil {
		log.Error(fmt.Sprintf("Unhandled trie error: %v", err))
	}
	return res
}
Silent error logging.
We only log errors in Get but do not return them. If an error is critical, consider returning it instead.
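A minimal sketch of the alternative raised above: a variant that propagates the lookup error instead of only logging it. The GetWithError name is a placeholder, not part of the PR; it reuses sanityCheckByte32Key and TryGet from the snippet above and needs the fmt import.
// Sketch: let callers decide how to react to a failed lookup.
func (t *PathZkTrie) GetWithError(key []byte) ([]byte, error) {
	sanityCheckByte32Key(key)
	res, err := t.TryGet(key)
	if err != nil {
		return nil, fmt.Errorf("path zktrie get: %w", err)
	}
	return res, nil
}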
// New attempts to load an already existing layer from a persistent key-value
// store (with a number of memory layers from a journal). If the journal is not
// matched with the base persistent layer, all the recorded diff layers are discarded.
func New(diskdb ethdb.KeyValueStore, config *Config) *Database {
	if config == nil {
		config = Defaults
	}
	config = config.sanitize()
	db := &Database{
		readOnly:   config.ReadOnly,
		bufferSize: config.DirtyCacheSize,
		config:     config,
		diskdb:     diskdb,
		dirties:    make(dbtypes.KvMap),
	}
	// Construct the layer tree by resolving the in-disk singleton state
	// and in-memory layer journal.
	db.tree = newLayerTree(db.loadLayers())

	return db
}
🛠️ Refactor suggestion
Batch size risk during genesis commit
CommitGenesis lumps all db.dirties into one batch write. If dirties is large, it risks hitting the batch size limit or pausing for too long. A chunked approach or progressive flush might improve stability.
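As a sketch of the chunked alternative, the genesis commit could flush progressively once the pending batch grows past ethdb.IdealBatchSize. Assumptions not confirmed by the excerpt: db.diskdb is an ethdb.KeyValueStore and each entry in db.dirties exposes K and V byte slices.
// Illustrative chunked flush; not the PR's CommitGenesis implementation.
batch := db.diskdb.NewBatch()
for _, kv := range db.dirties {
	if err := batch.Put(kv.K, kv.V); err != nil {
		return err
	}
	// Flush once the batch grows beyond the ideal size, then keep going.
	if batch.ValueSize() > ethdb.IdealBatchSize {
		if err := batch.Write(); err != nil {
			return err
		}
		batch.Reset()
	}
}
return batch.Write() // write whatever remains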
Actionable comments posted: 1
🧹 Nitpick comments (8)
triedb/pathdb/journal.go (8)
40-46: [Error Variables Clarity]
Declaring errors as package-level variables is fine. Consider adding descriptive doc comments for better clarity.

48-48: [Journal Version Initialization]
Using const journalVersion uint64 = 0 is valid, but starting from at least 1 might better convey its intentional usage as a version number.
-const journalVersion uint64 = 0
+const journalVersion uint64 = 1

76-84: [KVWriter/KVReader Structs Overview]
These structs buffer journal entries and store them in an ethdb.KeyValueStore upon close. Ensure large batch writes are acceptable in your environment to avoid potential performance bottlenecks.

90-92: [Close Method Omitting Error Handling]
file.Close() returns an error which is being ignored. If ignoring is in line with your design, this is fine; otherwise, consider logging it.
func (fw *JournalFileWriter) Close() {
- fw.file.Close()
+ if err := fw.file.Close(); err != nil {
+     log.Warn("Failed to close journal file", "err", err)
+ }
}

133-149: [newJournalWriter Error Handling]
When os.OpenFile fails, the function returns nil without logging the error. Consider at least logging the failure to facilitate debugging.

304-363: [loadDiffLayer Recursion]
This method recursively loads subsequent diff layers until EOF. If there are many diff layers, it could grow the call stack. Consider an iterative approach if extremely deep layering is possible (see the sketch after this list).

415-461: [diffLayer.journal Parent-First Journaling]
Recursively journaling the parent ensures that all layers are captured. Like loadDiffLayer, consider a non-recursive approach if extremely large layer chains are possible.

463-520: [Database.Journal Full Commit & Read-Only Mode]
Marking the database as read-only after journaling prevents further mutations, which is a solid approach if that's required post-commit. However, keep in mind that ignoring errors in defer journal.Close() might obscure final write issues.
- defer journal.Close()
+ defer func() {
+     if err := journal.Close(); err != nil {
+         log.Warn("Error closing journal writer", "err", err)
+     }
+ }()
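For the two recursion notes above (loadDiffLayer and diffLayer.journal), here is a small self-contained sketch of the iterative pattern. journalEntry and the decode callback are illustrative stand-ins, not types from the PR; it needs the errors and io imports.
// Decode layer records in a loop until EOF instead of recursing per layer.
type journalEntry struct {
	Root  [32]byte
	Block uint64
}

func loadAllEntries(decode func(*journalEntry) error) ([]journalEntry, error) {
	var entries []journalEntry
	for {
		var e journalEntry
		if err := decode(&e); err != nil {
			if errors.Is(err, io.EOF) {
				return entries, nil // end of journal reached
			}
			return nil, err
		}
		entries = append(entries, e)
	}
}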
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Lite
📒 Files selected for processing (1)
triedb/pathdb/journal.go (1 hunks)
🔇 Additional comments (16)
triedb/pathdb/journal.go (16)
1-16: [License Header Check]
The license header references 2022 and the go-ethereum library. If this project consistently maintains this license structure across all files, then it's acceptable. Otherwise, ensure the year is correct and that the license terms remain consistent with your repository's policies.

17-17: [Package Declaration Check]
Declaring package pathdb is clear. No issues here.

19-38: [Imports Review]
All imported packages appear relevant. No duplicates or unused imports are detected.

50-54: [Struct Definition Check]
journalNode holds path and blob data properly. No immediate concerns.

56-67: [Interfaces Returning No Errors on Close]
The JournalWriter and JournalReader interfaces define Close() methods that do not return errors. This is acceptable as long as the workflow does not require error handling upon close. Otherwise, returning an error could help diagnose issues like failed file closures.

68-75: [FileWriter/FileReader Structs Overview]
JournalFileWriter and JournalFileReader store a pointer to an os.File. Straightforward approach, no issues.

85-88: [Write Implementation for JournalFileWriter]
Simple pass-through to the underlying file. This looks fine, but partial writes should be considered if the underlying filesystem might behave unexpectedly.

105-107: [JournalKVWriter.Write Logic Check]
Writes to the in-memory buffer are straightforward. No immediate issues, but be mindful of potential memory growth for large journals.

109-112: [JournalKVWriter.Close Persistence]
Calling rawdb.WriteTrieJournal to store the in-memory buffer into the database is a clear approach. Monitor large data usage, as storing a massive byte slice in a single key could impact performance.

114-116: [KVWriter.Size Calculation]
uint64(kw.journalBuf.Len()) is correct for size retrieval. No issues.

118-131: [JournalFileReader & JournalKVReader Methods]
Both Read methods pass through to the underlying file or buffer, and Close() in KVReader is a no-op. This is acceptable, but confirm that the no-op for KVReader aligns with the design.

151-174: [newJournalReader Missing File Handling]
Returns errMissJournal if the file does not exist or if the journal is empty (for KV). The approach is coherent. No critical issues noted.

176-222: [loadJournal Workflow Validation]
This method thoroughly checks journal integrity (version, disk root, etc.). The mismatch handling in lines 208-209 helps discard incomplete journals. The approach is sensible.

224-242: [loadLayers Graceful Fallback Logic]
The code attempts loadJournal and falls back if errors occur, discarding mismatched journals. This ensures database integrity at startup. Good defensive approach.

244-302: [loadDiskLayer Implementation Check]
Decodes the disk layer root, ID, and node data from the journal. The check for stored ID > resolved ID in lines 273-275 is correct for data integrity. The method is well-structured.

364-413: [diskLayer.journal Concurrency & Checksum]
Lock usage with RLock() is consistent. The optional SHA-256 checksum for file journaling ensures integrity. Overall looks good.
func (fw *JournalFileWriter) Size() uint64 {
	if fw.file == nil {
		return 0
	}
	fileInfo, err := fw.file.Stat()
	if err != nil {
		log.Crit("Failed to stat journal", "err", err)
	}
	return uint64(fileInfo.Size())
}
🛠️ Refactor suggestion
[Size Calculation and Logging Strategy]
log.Crit at line 100 will terminate the program if Stat() fails. This might be too severe for library code. Consider returning or logging an error instead of crashing the process.
- log.Crit("Failed to stat journal", "err", err)
+ log.Error("Failed to stat journal", "err", err)
+ return 0
📝 Committable suggestion
func (fw *JournalFileWriter) Size() uint64 {
	if fw.file == nil {
		return 0
	}
	fileInfo, err := fw.file.Stat()
	if err != nil {
		log.Error("Failed to stat journal", "err", err)
		return 0
	}
	return uint64(fileInfo.Size())
}
Actionable comments posted: 2
🧹 Nitpick comments (16)
core/blockchain.go (4)
141-143: New fields in CacheConfig.
These fields (PathSyncFlush, JournalFilePath, and StateScheme) expand configuration flexibility for trie flushing and journaling. Ensure that all users of CacheConfig properly handle these new fields, especially if concurrency-sensitive code modifies them at runtime.

271-271: Disabling snapshot limit.
Setting cacheConfig.SnapshotLimit = 0 will force a no-snapshot mode. While this might be intentional for ZkTrie usage, consider confirming whether disabling snapshots for all scenarios is desired. This might slow restarts in certain cases.

842-847: Path-based scheme journaling logic.
The code attempts to journal in-memory trie nodes before shutdown. Errors are logged with log.Info, which might downplay the severity. Consider using log.Warn or log.Error to better reflect a failure to persist.
- log.Info("Failed to journal in-memory trie nodes", "err", err)
+ log.Error("Failed to journal in-memory trie nodes", "err", err)

848-888: Final state commit for non-path scheme.
This block ensures HEAD, HEAD-1, and HEAD-127 states are committed, then cleans up references and snapshot states, and pre-warms the disk with cache content. The approach is consistent with the existing spirit of multiple safeguard commits. One minor concern is that commit errors are only logged instead of returned; if data corruption is suspected, consider returning an error or panicking to avoid silently committing partial data.
Would you like a follow-up patch to propagate commit errors more aggressively and fail the shutdown if persistent writes fail?
trie/database.go (4)
37-41: Introduced triedb references for hashdb and pathdb.
These imports align with the newly added logic around multi-scheme trie backends. Ensure the repository uses pinned, compatible versions of these modules to avoid version conflicts.

299-300: Global variable for GenesisStateInPathZkTrie.
Using a global can be risky if changed by multiple consumers. If not essential, consider scoping it to the relevant code path or including it in a config object.

301-308: Reader interface.
Defines minimal Node retrieval but does not specify concurrency. Confirm that if multiple readers are created, the underlying data is safe for concurrent reads.

309-331: New backend interface.
This abstraction allows different trie schemes to commit data separately. The interface is well-defined, but ensure robust error handling for partial writes to disk (especially in distributed or path-based setups).
triedb/hashdb/zk_trie_database.go (7)
18-32: Config defaults with zero clean-cache.
Explicitly forcing a 0 MB memory cache is a safe default to avoid accidental memory usage. Consider documenting trade-offs if a user sets a nonzero cache.

33-42: ZktrieDatabase struct.
Includes diskdb, a cleans fastcache, a dirties map, and size metrics. The current concurrency approach uses lock sync.RWMutex. Keep in mind any large batch writes will block reads.

64-65: Scheme() returns rawdb.HashScheme.
Returning HashScheme might cause confusion if the underlying code is specifically Zk based. Consider returning a more descriptive scheme ID if needed for clarity.

77-100: Committing state in memory to disk.
The method logs the persisting operation and time, then calls commitAllDirties(). The approach is straightforward but might risk partial commits if the batch fails mid-way. Consider providing a fallback strategy if the flush partially succeeds.

102-124: Single commit for all dirty pairs.
Locks are taken, all kv pairs are inserted into the batch, then the memory map is cleared. This is an all-or-nothing approach. Evaluate if any incremental flushing might be needed for very large sets.

151-180: Get method uses two-tier cache.
Reads from dirties first, then cleans, and falls back to diskdb. The logic looks fine, but watch for possible duplication if the disk fetch is an error (lines 174-179). Currently, we do a partial condition: we only set the clean cache if there's an error?
if err != nil && db.cleans != nil {
-	db.cleans.Set(key[:], v)
+	if v != nil {
+		db.cleans.Set(key[:], v)
+	}
}

200-220: Periodic cache save.
Runs a background tick to call saveCache(dir, 1). This is a straightforward approach but may lock out concurrent operations for a noticeable time if the cache is large. Consider warning on potential lag if the snapshot is big.
triedb/pathdb/database.go (1)

386-394: Validate key/value for nil or unexpected inputs.
While the current implementation directly stores the passed key/value, it may be prudent to log or handle nil inputs explicitly if unexpected usage scenarios arise.
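A hedged sketch of the input check suggested for Put; whether empty keys or nil values should actually be rejected here is an open design question, not something the excerpt settles.
// Sketch: guard obviously invalid inputs before they reach the dirty map.
func (db *Database) Put(key, value []byte) error {
	if len(key) == 0 {
		return errors.New("pathdb: refusing to store empty key")
	}
	if value == nil {
		log.Warn("pathdb: storing nil value", "key", fmt.Sprintf("%x", key))
	}
	// ... existing write path continues here ...
	return nil
}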
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Lite
📒 Files selected for processing (5)
core/blockchain.go (7 hunks)
core/blockchain_l2.go (3 hunks)
trie/database.go (14 hunks)
triedb/hashdb/zk_trie_database.go (1 hunks)
triedb/pathdb/database.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- core/blockchain_l2.go
🔇 Additional comments (16)
core/blockchain.go (5)
48-49: Use of new triedb imports looks consistent.
The newly imported hashdb and pathdb packages are properly referenced later in the code; there are no obvious issues with unused imports or version mismatches.

156-181: Centralized trie database configuration.
This triedbConfig function sets up either a path-based or hash-based Zktrie config. The branching logic is clear:
- PathZkTrie → config.PathDB
- Otherwise, if Zktrie → config.HashDB
Consider handling unexpected combinations more explicitly (e.g., unknown scheme). Currently, no error is thrown if zktrie is true but c.StateScheme is neither hash nor path scheme.

201-205: Added fields for chain's state storage and GC process.
The new fields (db, snaps, triegc, gcproc, flushInterval) support more advanced state management and snapshot usage. Double-check that flushInterval updates are consistently performed under locking or atomics to avoid data races.

277-281: Refactored blockchain instantiation flow.
Utilizing cacheConfig.triedbConfig(chainConfig.Morph.ZktrieEnabled()) ensures that the correct trie database backend is constructed. The usage looks correct, and the code remains clear.

297-297: Flush interval set from config.
Storing cacheConfig.TrieTimeLimit into an atomic helps handle concurrency reliably. Verify that the read side also uses atomic operations if employed across threads.
trie/database.go (4)
23-23: Import of math/big.
The new import is used for ZkTrie-specific big.Int operations. No issues found here.

102-104: Added backend field and interface usage.
The Database struct can now delegate node operations to a backend, which is set if Zktrie or PathZkTrie is enabled. Verify that all relevant methods check if db.backend is non-nil before usage, to avoid nil-pointer panics.

341-342: NewZkDatabase constructor.
This convenience function helps create a Zktrie-based database quickly. Be sure to clarify usage in documentation so developers know it bypasses the default CacheConfig.

371-383: Fallback to pathdb or hashdb.
The logic enforces mutual exclusivity: if both PathDB and HashDB are set, the code logs a critical error. This is good for preventing a conflicting config. No issues found.
triedb/hashdb/zk_trie_database.go (5)
1-16: New package for Zktrie-based database.
This file introduces the ZktrieDatabase with caching and dirty tracking. The approach is consistent with other chain-level caches. Overall structure is clear.

66-75: Size measurement logic.
This method calculates sizes for memory usage by including a small metadata overhead per dirty entry. That's good, but double-check that the overhead is accurate if the code evolves (e.g., if each entry stores more references).

135-149: Put method uses BitReverse key transformation.
Ensure callers are aware that the key stored in diskdb differs from the original. This is consistent with ZkTrie usage, but it might be surprising if an external tool is used to read raw keys.

182-198: Save cache concurrency.
saveCache uses fastcache.SaveToFileConcurrent(dir, threads). If multiple saves overlap, consider blocking or using a single-writer approach. Otherwise, the cache might be overwritten.

222-230: zkReader's Node method simply calls db.Get.
This is consistent with the standard Reader interface usage. Be mindful that BitReverse is performed in db.Get, so higher-level code must supply the reversed path bits or adapt.
triedb/pathdb/database.go (2)
190-207: Reiterate batch size risk for genesis commit.
This large batch write may risk hitting database limits or causing long pauses when handling substantial state data, echoing a prior recommendation to use a chunk-based or progressive flush approach.

222-226: Request verification of condition on identical roots with single dirty entry.
When root == parentRoot and len(db.dirties) == 1, the code returns early. It's unusual to require exactly one dirty entry instead of zero if the state truly has not changed. Please verify the intended logic.
type Config struct {
	Cache      int    // Memory allowance (MB) to use for caching trie nodes in memory
	Journal    string // Journal of clean cache to survive node restarts
	Preimages  bool   // Flag whether the preimage of trie key is recorded
	Zktrie     bool   // use zktrie
	PathZkTrie bool   // use path zktrie

	HashDB *hashdb.Config // Configs for hash-based scheme
	PathDB *pathdb.Config // Configs for experimental path-based scheme
}
🛠️ Refactor suggestion
Extended Config struct with ZkTrie options.
New fields (Zktrie, PathZkTrie, HashDB, PathDB) expand database configuration. Make sure that partial or conflicting settings (e.g., both Zktrie and HashDB unset) do not cause ambiguous states.
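One hedged way to rule out the ambiguous combinations mentioned above is an explicit sanity check on Config. The rules below and the validate name are assumptions for illustration, not behavior taken from the PR.
// Sketch: reject conflicting or incomplete scheme settings early.
func (c *Config) validate() error {
	if c.HashDB != nil && c.PathDB != nil {
		return errors.New("trie config: both HashDB and PathDB are set; choose one scheme")
	}
	if c.PathZkTrie && !c.Zktrie {
		return errors.New("trie config: PathZkTrie is set without Zktrie; enable both or neither")
	}
	return nil
}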
// Close closes the trie database and the held freezer.
func (db *Database) Close() error {
	db.lock.Lock()
	defer db.lock.Unlock()

	// Set the database to read-only mode to prevent all
	// following mutations.
	db.readOnly = true

	// Release the memory held by clean cache.
	db.tree.bottom().resetCache()

	return nil
}
🛠️ Refactor suggestion
Consider closing the underlying disk database upon Close.
Currently, db.Close() sets the database to read-only but never closes db.diskdb, which can lead to resource leaks. Calling db.diskdb.Close() ensures the file handles and resources are properly released.
func (db *Database) Close() error {
db.lock.Lock()
defer db.lock.Unlock()
db.readOnly = true
db.tree.bottom().resetCache()
+ if closer, ok := db.diskdb.(io.Closer); ok {
+ if err := closer.Close(); err != nil {
+ return err
+ }
+ }
return nil
}
📝 Committable suggestion
// Close closes the trie database and the held freezer.
func (db *Database) Close() error {
	db.lock.Lock()
	defer db.lock.Unlock()
	// Set the database to read-only mode to prevent all
	// following mutations.
	db.readOnly = true
	// Release the memory held by clean cache.
	db.tree.bottom().resetCache()
	if closer, ok := db.diskdb.(io.Closer); ok {
		if err := closer.Close(); err != nil {
			return err
		}
	}
	return nil
}
1. Purpose or design rationale of this PR
...
2. PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
3. Deployment tag versioning
Has the version in params/version.go been updated?
4. Breaking change label
Does this PR have the breaking-change label?
Summary by CodeRabbit
New Features
- Hbss2Pbss struct for converting hierarchical blockchain state databases.
- MuxTracer to capture transaction start and end events.
Refactor
Tests
Chores