Conversation
…lysis
- 00-codebase-mapping.md: maps existing PoRBFT v2 to Petri concepts
- 01-implementation-plan.md: 8-phase implementation plan
- 02-risks-and-considerations.md: risks, mitigations, open questions
- petri.md: pitch document in markdown
- Epic #9 created in mycelium with 37 atomic tasks
- Team Mode activated and persisted to AGENTS.md
- Update codebase mapping (v2) with post-stabilisation file paths
- Rewrite implementation plan (v2) with corrected paths and finalized design decisions
- Replace open questions in risks doc with finalized decisions table
- Add VADEMECUM operational guide (team mode, testing, guardrails)
- Update mycelium tasks with test phases and dependency links
New phase for exposing soft finality to SDK consumers. Marked as requiring user coordination before touching ../sdks/.
- Create src/libs/consensus/petri/ directory structure
- Define TransactionClassification enum (PRE_APPROVED, TO_APPROVE, PROBLEMATIC)
- Define StateDelta, PeerDelta interfaces
- Define ContinuousForgeRound, ForgeConfig, ForgeState interfaces
- Define PetriConfig with defaults (2s forge, 10s block, 7/10 threshold, 5-round TTL)
- Define DeltaComparison, RoundDeltaResult interfaces
- Add petriConsensus feature flag (default: false) to SharedState
- Create index.ts entry point stub with type re-exports
- Update VADEMECUM with diagram agent requirement and Phase 8 tests
- Create TransactionClassifier: classify txs as PRE_APPROVED (read-only) or TO_APPROVE (state-changing) based on GCR edit generation
- Create SpeculativeExecutor: simulate GCR edits without mutating state, produce deterministic delta hash via canonical JSON + SHA-256
- Create canonicalJson utility: deterministic serialization with sorted keys, BigInt handling, Map/Set support
- Extend MempoolTx entity with classification and delta_hash columns
- Add classification queries to Mempool: getByClassification(), getPreApproved(), updateClassification()
- Wire classifier into endpointValidation.ts (gated by petriConsensus flag)
- Add 28 unit tests: canonicalJson determinism, classifier logic, delta hash determinism — all passing
- Add Phase 0 architecture diagram
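The delta hash determinism above hinges on canonical serialization. A minimal sketch of the idea (function names and the BigInt/Map/Set conventions are assumptions for illustration, not the node's actual canonicalJson):

```typescript
import { createHash } from "crypto";

// Recursively normalize a value: sort object keys, tag BigInts, and
// convert Map/Set to plain, ordered structures so the same logical
// value always serializes to the same bytes.
function canonicalize(value: unknown): unknown {
    if (typeof value === "bigint") return `bigint:${value.toString()}`;
    if (value instanceof Map) return canonicalize(Object.fromEntries(value));
    if (value instanceof Set) {
        // Simple sort for the sketch; a production version needs a
        // total order that is stable for every element type.
        return Array.from(value, v => canonicalize(v)).sort();
    }
    if (Array.isArray(value)) return value.map(canonicalize);
    if (value && typeof value === "object") {
        const out: Record<string, unknown> = {};
        for (const key of Object.keys(value).sort()) {
            out[key] = canonicalize((value as Record<string, unknown>)[key]);
        }
        return out;
    }
    return value;
}

function canonicalJson(value: unknown): string {
    return JSON.stringify(canonicalize(value));
}

// Deterministic delta hash: canonical JSON piped through SHA-256.
function deltaHash(delta: unknown): string {
    return createHash("sha256").update(canonicalJson(delta)).digest("hex");
}
```

Two structurally different inputs with the same logical content (an object and a Map with the same entries, in any insertion order) yield the same canonical string, which is what makes cross-node delta comparison meaningful.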
Add Phase 1 data flow, new module boxes (classifier, speculativeExecutor, canonicalJson), feature flag gate visualization, and modified module entries (Mempool entity, mempool_v2). Phase 0 content preserved.
… validation
Moved Petri classification from endpointValidation (where tx doesn't exist in mempool yet) to Mempool.addTransaction() (where the row is created). Classification and delta_hash are now saved atomically with the mempool entry. Also removed unused IsNull/Not imports from mempool_v2.
- Create DeltaAgreementTracker: tracks per-tx delta agreement across forge rounds, promotes at 7/10 threshold, flags after 5-round TTL
- Create ContinuousForge: 2s forge loop with mempool sync, speculative execution, all-to-all delta exchange, agreement evaluation, and classification updates (TO_APPROVE → PRE_APPROVED or PROBLEMATIC)
- Create forgeInstance singleton for RPC handler access
- Add petri_exchangeDeltas RPC handler to manageConsensusRoutines (gated by petriConsensus flag)
- Wire petriConsensusRoutine() in index.ts to create and start forge
- Add 22 new tests: DeltaAgreementTracker (15 tests covering promotion, flagging, TTL, mixed scenarios, boundary cases) + ContinuousForge state lifecycle (7 tests)
- Total: 50 tests passing across 5 files
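The promotion/flagging rule above can be sketched as follows (class and method names are illustrative, not the real API; the 7/10 threshold and 5-round TTL are the documented defaults, scaled down in the example):

```typescript
type Verdict = "PENDING" | "PRE_APPROVED" | "PROBLEMATIC";

// Per-tx agreement: promote once `threshold` members report the same
// delta hash; flag PROBLEMATIC once a tx survives `ttlRounds` forge
// rounds without ever reaching agreement.
class AgreementTracker {
    private rounds = new Map<string, number>(); // txHash -> rounds tracked

    constructor(private threshold = 7, private ttlRounds = 5) {}

    evaluate(txHash: string, memberHashes: Map<string, string>): Verdict {
        // Count how many members agree on each delta hash.
        const counts = new Map<string, number>();
        for (const hash of memberHashes.values()) {
            counts.set(hash, (counts.get(hash) ?? 0) + 1);
        }
        const best = Math.max(0, ...counts.values());
        if (best >= this.threshold) {
            this.rounds.delete(txHash);
            return "PRE_APPROVED";
        }
        const seen = (this.rounds.get(txHash) ?? 0) + 1;
        this.rounds.set(txHash, seen);
        if (seen >= this.ttlRounds) {
            this.rounds.delete(txHash);
            return "PROBLEMATIC";
        }
        return "PENDING";
    }
}
```

The TTL bound is what keeps the forge loop live: a tx that never converges is handed to BFT arbitration instead of blocking the round.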
Add Continuous Forge cyclic flow diagram, DeltaAgreementTracker detail, forgeInstance/RPC bridge, and complete round data flow. All P0/P1/P2 components shown with phase annotations.
- PetriBlockCompiler: compiles PRE_APPROVED txs into blocks at 10s boundary using existing orderTransactions() + createBlock() infrastructure
- BFTArbitrator: one BFT round per PROBLEMATIC tx — resolve or reject (chain never stalls, rejection is fail-safe)
- PetriBlockFinalizer: broadcastBlockHash() + isBlockValid() threshold + insertBlock() + BroadcastManager propagation
- petriConsensusRoutine() full lifecycle: forge → pause → arbitrate → compile → finalize → reset → resume
- Consensus dispatch switching in mainLoop.ts and manageConsensusRoutines.ts (petriConsensus flag gates which routine is called)
- 21 new tests (BFT threshold, result structures, lifecycle, dispatch)
- All 71 Petri tests passing
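The BFT threshold arithmetic assumed throughout is the standard 2/3 + 1 of n members, which tolerates f Byzantine nodes as long as f < n/3; note that it reproduces the 7-of-10 default quoted earlier:

```typescript
// Minimum number of agreeing members for a BFT decision among n members.
// Tolerates f Byzantine members whenever f < n/3.
function bftThreshold(n: number): number {
    return Math.floor((2 * n) / 3) + 1;
}
```

For the default shard size of 10 this gives 7, matching the 7/10 agreement threshold; for a 4-node devnet it gives 3.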
- ShardMapper: getShardForAddress() — single-shard testnet (always 'default'), interface ready for multi-shard expansion
- PetriRouter: selectMembers() uses Alea PRNG seeded with tx hash for deterministic routing to exactly 2 shard members
- PetriRouter.relay(): sends ValidityData to selected members via existing nodeCall/RELAY_TX RPC pattern (same as DTR but targeted)
- endpointExecution.ts: when petriConsensus flag is on, routes via PetriRouter.relay() instead of DTR (early return before validator check)
- 16 new tests (ShardMapper, selectMembers determinism/count/uniqueness, routing flag gating, response shapes)
- All 87 Petri tests passing
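Deterministic routing works because every node derives the same pseudo-random choice from the tx hash. A sketch under stated assumptions (the node uses an Alea PRNG; a mulberry32-style generator stands in here, and the seeding/shuffle details are illustrative, not the real selectMembers):

```typescript
// Tiny deterministic PRNG (mulberry32-style) standing in for Alea.
function seededRng(seed: number): () => number {
    let s = seed >>> 0;
    return () => {
        s = (s + 0x6d2b79f5) >>> 0;
        let t = Math.imul(s ^ (s >>> 15), 1 | s);
        t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
        return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
    };
}

// Pick `count` members deterministically from the tx hash: every node
// that starts from the same hash and the same member set (sorted, so
// discovery order doesn't matter) selects the same peers.
function selectMembers(txHash: string, members: string[], count = 2): string[] {
    let seed = 0;
    for (const ch of txHash) seed = (seed * 31 + ch.charCodeAt(0)) >>> 0;
    const rng = seededRng(seed);
    const pool = [...members].sort();
    // Fisher-Yates shuffle driven by the seeded PRNG.
    for (let i = pool.length - 1; i > 0; i--) {
        const j = Math.floor(rng() * (i + 1));
        const tmp = pool[i];
        pool[i] = pool[j];
        pool[j] = tmp;
    }
    return pool.slice(0, count);
}
```

Sorting the pool before shuffling is the key design point: it removes any dependence on the order in which peers were discovered locally.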
- BFTArbitrator: document roundNumber: -1 sentinel reuse of petri_exchangeDeltas
- endpointExecution: flag Petri routing early-return mempool flow for P6 verification
- Add soft_finality_at column to MempoolTx and Transactions entities
- Set soft_finality_at in updateClassification() on PRE_APPROVED
- Add getTransactionFinality() function (chain-first, then mempool)
- Add getTransactionFinality RPC endpoint in rpcDispatch
- Re-export Phase 5 components from petri/index.ts
- Add finality.test.ts (15 tests covering structure, transitions, RPC shape)
Only set soft_finality_at on first PRE_APPROVED classification, not on subsequent updates. Prevents timestamp overwrite if a tx is re-classified.
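The first-write guard can be as small as this (a hypothetical shape, not the actual updateClassification signature, which persists through the entity layer):

```typescript
interface MempoolRow {
    classification: string;
    soft_finality_at: number | null;
}

// Record soft finality only on the first promotion to PRE_APPROVED, so
// later re-classifications never overwrite the original timestamp.
function updateClassification(row: MempoolRow, next: string, now: number): MempoolRow {
    if (next === "PRE_APPROVED" && row.soft_finality_at === null) {
        row.soft_finality_at = now;
    }
    row.classification = next;
    return row;
}
```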
- happyPath.test.ts: full lifecycle (classify → agree → compile → finalize)
- conflictPath.test.ts: double-spend → PROBLEMATIC → BFT resolution/rejection
- byzantineFault.test.ts: Byzantine minority tolerance (f < n/3)
- liveness.test.ts: chain never stalls (empty blocks, bounded PROBLEMATIC TTL)
- featureFlagRollback.test.ts: clean ON/OFF/ON toggle with state isolation
- benchmark.test.ts: DeltaTracker throughput (5K txs), routing (10K calls), BFT O(1)
- Add getPetriForgeInstance() getter to forgeInstance.ts

84 new tests (186 total), 0 failures
- Add @deprecated JSDoc to SecretaryManager class (1018 lines)
- Add @deprecated to Secretary RPC handlers (setValidatorPhase, greenlight, getValidatorPhase, getBlockTimestamp) in manageConsensusRoutines.ts
- Add @deprecated to OmniProtocol consensus handlers (opcodes 0x35-0x38)
- Add @deprecated to ValidationPhase types in validationStatusTypes.ts
- No deletions — kept for PoRBFT v2 fallback until testnet validation
- Task #119 (flag removal) deferred to post-testnet validation
Reflexion finding: the getBlockTimestamp case block lacked its own deprecation comment — only covered indirectly by the getValidatorPhase group comment. Added direct marker for clarity.
- Add test:petri script to package.json (bun run test:petri)
- Register Petri consensus in TESTING_MAP.md coverage map
TR1: Configurable Petri params
- Add PETRI_CONSENSUS, PETRI_FORGE_INTERVAL_MS, PETRI_BLOCK_INTERVAL_MS, PETRI_AGREEMENT_THRESHOLD, PETRI_PROBLEMATIC_TTL_ROUNDS, PETRI_SHARD_SIZE to envKeys.ts
- Add PetriConsensusConfig interface to config/types.ts
- Add petri defaults to config/defaults.ts
- Wire through config/loader.ts with envBool/envInt
- Load into getSharedState.petriConsensus and petriConfig in index.ts

TR2: Docker devnet wiring
- Add PETRI_CONSENSUS env var to all 4 nodes in docker-compose.yml
- Add PETRI_CONSENSUS to devnet/.env.example (default: false)
- Set PETRI_CONSENSUS=true in .env to enable Petri on devnet
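A sketch of how the env plumbing might look (envBool/envInt here are stand-ins mirroring the loader helpers named above, and loadPetriConfig is a hypothetical aggregate; defaults match the documented 2s/10s/7/5/10 values):

```typescript
// Stand-in helpers; the real ones live in config/loader.ts.
function envBool(key: string, fallback: boolean): boolean {
    const raw = process.env[key];
    return raw === undefined ? fallback : raw === "true";
}

function envInt(key: string, fallback: number): number {
    const raw = process.env[key];
    const parsed = raw === undefined ? NaN : parseInt(raw, 10);
    return Number.isNaN(parsed) ? fallback : parsed;
}

// Hypothetical aggregate; the node's PetriConsensusConfig may differ.
function loadPetriConfig() {
    return {
        enabled: envBool("PETRI_CONSENSUS", false),
        forgeIntervalMs: envInt("PETRI_FORGE_INTERVAL_MS", 2000),
        blockIntervalMs: envInt("PETRI_BLOCK_INTERVAL_MS", 10000),
        agreementThreshold: envInt("PETRI_AGREEMENT_THRESHOLD", 7),
        problematicTtlRounds: envInt("PETRI_PROBLEMATIC_TTL_ROUNDS", 5),
        shardSize: envInt("PETRI_SHARD_SIZE", 10),
    };
}
```

Unset or malformed variables fall back to the defaults, so a node with no PETRI_* env at all boots with Petri disabled and the documented intervals.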
Register petri_block_production and petri_tx_inclusion scenarios in loadgen main.ts so they are discoverable via run-scenario.sh and the petri suite in run-suite.ts.
Submits TX to node-1 and verifies propagation to all other nodes via getTx and getTransactionFinality, ensuring Petri consensus relay layer works across the full cluster.
Registers and collects petri_enabled, petri_forge_running, petri_forge_paused, petri_forge_round, petri_pending_tx_count, and petri_tracker_tx_count gauges for Prometheus observability.
Sends sustained load over configurable rounds, measuring TX throughput, soft/hard finality latency (p50/p95/p99), block production rate, and error rate. Outputs baseline JSON summary.
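The p50/p95/p99 figures can be computed with a nearest-rank percentile over the recorded per-tx latencies; a small sketch (not the scenario's actual code):

```typescript
// Nearest-rank percentile: for p in (0, 100], take the value at
// rank ceil(p/100 * n) in the sorted sample list.
function percentile(samples: number[], p: number): number {
    if (samples.length === 0) return 0;
    const sorted = [...samples].sort((a, b) => a - b);
    const rank = Math.ceil((p / 100) * sorted.length);
    return sorted[Math.max(0, rank - 1)];
}
```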
When Petri consensus is active, receiving nodes no longer compile their own candidate block to compare hashes. Instead, they sign the proposer's block hash directly and return the signature. This fixes BFT vote failures caused by independent block compilation producing different timestamps/hashes across nodes. In Petri, only the proposer compiles the block — peers validate and vote.
… update docs
Petri consensus is now the default consensus engine (PETRI_CONSENSUS=true). Soak tested on 4-node devnet: 10/10 TXs, 0% error, blocks advancing continuously.

Bugs fixed:
- chainBlocks: savepoint-based error isolation prevents DB transaction poisoning when individual TX inserts fail (e.g., duplicate hashes)
- petriBlockCompiler: TX cutoff now uses milliseconds (was comparing ms timestamps against second-granularity cutoff, filtering out all TXs)
- petri/index: startingConsensus flag reset in finally block
- petriSecretary: election includes self in sorted identity list
- broadcastBlockHash: Promise.allSettled + sequential sig verification
- manageProposeBlockHash: accept-and-sign model for Petri consensus
- orderTransactions: hash tiebreaker for deterministic ordering
- broadcastManager: removed signer filter so members receive finalized block

Config changes:
- defaults.ts: petri.enabled=true, omni.mode=OMNI_PREFERRED
- .env.example: OMNI_ENABLED=true, OMNI_MODE=OMNI_PREFERRED, PETRI_CONSENSUS=true
- devnet: PETRI_CONSENSUS default true, OMNI_MODE=OMNI_PREFERRED on all nodes

Docs updated:
- petri/01-implementation-plan.md: status table, P7 completed, fixes documented
- petri/VADEMECUM.md: feature flag default, modified files list, coordination dir
- petri/03-secretary-coordinated-signing.md: P9 design doc (future work)

Mycelium: P7 closed, P8/P9 remaining tasks added (#133-#135)
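The savepoint-based isolation fix can be illustrated behaviorally: each insert gets its own rollback point, so one duplicate-hash failure no longer poisons the surrounding transaction. This is a pure in-memory simulation of the pattern (a real implementation issues SAVEPOINT / ROLLBACK TO SAVEPOINT SQL inside the outer transaction; names are illustrative):

```typescript
// Simulate per-insert savepoints: a failed insert rolls back only its
// own changes and the loop continues with the next tx.
function insertBlockTxs(
    existing: Set<string>,
    txs: { hash: string }[],
): { inserted: string[]; skipped: string[] } {
    const inserted: string[] = [];
    const skipped: string[] = [];
    for (const tx of txs) {
        const snapshot = new Set(existing); // SAVEPOINT
        try {
            if (existing.has(tx.hash)) throw new Error("duplicate hash");
            existing.add(tx.hash);
            inserted.push(tx.hash);
        } catch {
            // ROLLBACK TO SAVEPOINT: restore state, keep the outer
            // transaction healthy, move on to the next tx.
            existing.clear();
            for (const h of snapshot) existing.add(h);
            skipped.push(tx.hash);
        }
    }
    return { inserted, skipped };
}
```

Without the savepoint, the first duplicate would abort the whole outer transaction and every subsequent insert in the block would fail.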
…anager
- petri/consensus.md: source-level explanation of how Petri consensus actually works, with file paths, line numbers, flow diagram, invariants, and soak test results
- PoRBFT.ts: mark consensusRoutine() as @deprecated
- l2ps_mempool.ts: guard SecretaryManager access behind petriConsensus flag — when Petri is active, use Chain.getLastBlockNumber() directly instead of stale SecretaryManager state

Soak test re-run: ok=true, 10/10 TXs, 25 blocks, zero PoRBFT activity.
# Conflicts:
#   src/libs/blockchain/chainBlocks.ts
#   src/libs/consensus/v2/routines/broadcastBlockHash.ts
#   testing/loadgen/src/features/consensus/petri_block_production.ts
#   testing/loadgen/src/features/consensus/petri_relay_flow.ts
#   testing/loadgen/src/features/consensus/petri_soak.ts
#   testing/loadgen/src/features/consensus/petri_tx_inclusion.ts
Walkthrough

Adds a new Petri Consensus implementation and feature-flag plumbing across config, runtime, mempool, consensus, RPCs, DB schema, tests, devnet tooling, and docs; also flips Omni defaults and introduces Team Mode/AGENTS.md changes. Petri is feature-flagged and wired into the main loop.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client as Client/SDK
    participant RPC as RPC Endpoint
    participant Validator as Validation Layer
    participant Router as Petri Router
    participant Members as Shard Members
    participant Forge as ContinuousForge
    participant Arb as BFT Arbitrator
    participant Compiler as Block Compiler
    participant Secretary as Secretary
    participant Chain as Blockchain
    Client->>RPC: submit transaction
    RPC->>Validator: validate & compute GCR edits
    Validator->>Router: classify & relay (if petri)
    Router->>Members: relay to selected members
    Members->>Forge: add to local mempool
    Forge->>Forge: periodic rounds (speculative exec, exchange deltas)
    Forge->>Arb: on block boundary -> arbitrate problematic txs
    Arb->>Compiler: return resolved/rejected lists
    Compiler->>Secretary: compile candidate block
    Secretary->>Members: collect signatures (petri_submitBlockHash)
    Members->>Secretary: return signatures
    Secretary->>Chain: insert finalized block (on threshold)
```
```mermaid
sequenceDiagram
    autonumber
    participant MainLoop as mainLoop
    participant Petri as PetriConsensus
    participant PoRBFT as PoRBFTv2
    MainLoop->>MainLoop: check getSharedState.petriConsensus
    alt petri enabled
        MainLoop->>Petri: petriConsensusRoutine(shard)
    else
        MainLoop->>PoRBFT: consensusRoutine()
    end
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Review Summary by Qodo

Petri Consensus Protocol: Complete Implementation with Byzantine Fault Tolerance and Dual Finality
Description

- **Petri Consensus Protocol Implementation**: Complete implementation of a leaderless, Byzantine fault-tolerant consensus protocol with 9 phases spanning transaction classification, continuous forge loop, delta agreement tracking, BFT arbitration, block compilation, and secretary-coordinated finalization
- **Core Consensus Components**:
  - ContinuousForge (Phase 2): 2-second forge loop with speculative execution and delta exchange
  - DeltaAgreementTracker: Per-transaction agreement tracking with threshold-based promotion and TTL-based flagging
  - BFTArbitrator: Resolves delta disagreements via 2/3+1 member consensus
  - PetriBlockCompiler: Deterministic block compilation at 10-second boundaries
  - PetriBlockFinalizer: Secretary-coordinated block signing with verify-then-sign model
- **Transaction Classification**: Dual-path routing based on GCR edit analysis (PRE_APPROVED for read-only, TO_APPROVE for state-changing)
- **Routing & Coordination**: Deterministic shard member selection via Alea PRNG, secretary election as first peer in shard
- **Mempool Integration**: Classification at insertion time, soft/hard finality tracking, speculative execution support
- **RPC Endpoints**: getTransactionFinality for dual finality queries, petri_exchangeDeltas for Phase 2, petri_submitBlockHash for Phase 9
- **Comprehensive Test Suite**: 186 unit and integration tests covering happy path, conflict resolution, Byzantine fault tolerance, liveness guarantees, performance benchmarks, and feature flag rollback
- **Devnet Scenarios**: 4 E2E test scenarios (block production, TX inclusion, relay flow, soak test) for performance validation
- **Configuration System**: Environment-based Petri config with sensible defaults (forge interval 2s, block interval 10s, agreement threshold 7, shard size 10)
- **Database Schema**: Mempool and Transaction entities extended with classification, delta_hash, and soft_finality_at columns
- **Documentation**: Implementation plan, codebase mapping, operational guide, and consensus flow reference
- **Backward Compatibility**: Feature flag gating (petriConsensus) with PoRBFT v2 fallback, deprecation notices for replaced components

Diagram

```mermaid
flowchart LR
    TX["Transaction Submitted"]
    CLASSIFY["Transaction Classifier"]
    ROUTE["Petri Router"]
    FORGE["Continuous Forge<br/>2s Loop"]
    DELTA["Delta Agreement<br/>Tracker"]
    BFT["BFT Arbitrator<br/>2/3+1 Consensus"]
    COMPILE["Block Compiler<br/>10s Boundary"]
    FINALIZE["Block Finalizer<br/>Secretary Coordination"]
    CHAIN["Chain Insertion"]
    TX --> CLASSIFY
    CLASSIFY -->|PRE_APPROVED| ROUTE
    CLASSIFY -->|TO_APPROVE| ROUTE
    ROUTE --> FORGE
    FORGE --> DELTA
    DELTA -->|Agreement| COMPILE
    DELTA -->|Disagreement| BFT
    BFT --> COMPILE
    COMPILE --> FINALIZE
    FINALIZE --> CHAIN
```

File Changes

1. better_testing/petri/secretaryCoordination.test.ts
Code Review by Qodo
1. Unverified hash signing
Move 15 Petri test files from better_testing/petri/ to testing/petri/ to complete the directory consolidation started in c549a6f. Update all references in package.json, CLAUDE.md, and petri/*.md docs.
Actionable comments posted: 19
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
🟡 Minor comments (13)
TEAM.md-24-29 (1)
24-29: ⚠️ Potential issue | 🟡 Minor
Add language identifiers to fenced code blocks
These fenced blocks are missing language tags and are flagged by markdownlint (MD040).
Proposed doc fix
````diff
-```
+```text
 [TEAM MODE: ON]
 Operating as Tech Lead. I'll decompose, delegate, and verify.
 Persisted to AGENTS.md — compaction-safe.
 Standing by for a task or objective.
 ```

-```
+```text
 [TEAM MODE: OFF]
 Removed from AGENTS.md. Back to direct execution mode.
 ```

-```
+```text
 `@junior` TASK: [imperative steps] FILES: [exact paths] PATTERN: [code to mimic] CONSTRAINT: [explicit boundaries] DONE_WHEN: [observable criteria]
 ```

-```
+```text
 `@senior` OBJECTIVE: [what and why]
 SCOPE: [files/modules in play]
 CONTEXT: [architectural decisions, constraints]
 APPROACH: [suggested direction, not prescriptive]
 ACCEPTANCE: [what done looks like]
 ```

-```
+```text
 [PLAN] 3 subtasks: 1 Junior (scaffolding), 1 Senior (implementation), 1 Lead (integration)
 [DISPATCHED] `@junior` — scaffold endpoint files
 [DISPATCHED] `@senior` — implement auth middleware
 [VERIFIED] junior output ✓
 [VERIFIED] senior output — requested revision on error handling
 [INTEGRATING] assembling and running tests
 [DONE] feature complete, tests passing
 ```
````

Also applies to: 34-37, 74-80, 83-89, 112-120

AGENTS.md-3-3 (1)
3-3: ⚠️ Potential issue | 🟡 Minor
Use a proper file reference for TEAM.md
`@TEAM.md` is ambiguous in Markdown rendering and may not resolve as a file link. Prefer `TEAM.md` or `[TEAM.md](TEAM.md)` for reliable navigation.

src/libs/consensus/petri/utils/canonicalJson.ts-32-34 (1)
32-34: ⚠️ Potential issue | 🟡 Minor
Set sorting may be non-deterministic for numeric values.
`Array.from(value).sort()` uses lexicographic string comparison by default. For numeric Sets, this produces `[1, 10, 2]` instead of `[1, 2, 10]`. If Sets containing numbers are ever serialized, different insertion orders could yield different canonical strings, breaking the determinism guarantee.

🛡️ Suggested fix for numeric stability
```diff
 if (value instanceof Set) {
-    return Array.from(value).sort()
+    return Array.from(value).sort((a, b) => {
+        const strA = String(a)
+        const strB = String(b)
+        return strA.localeCompare(strB)
+    })
 }
```

Using `localeCompare` ensures consistent string-based ordering. Alternatively, if numeric ordering is preferred:

```typescript
return Array.from(value).sort((a, b) => {
    if (typeof a === "number" && typeof b === "number") return a - b
    return String(a).localeCompare(String(b))
})
```

src/libs/omniprotocol/protocol/handlers/consensus.ts-2-4 (1)
2-4: ⚠️ Potential issue | 🟡 Minor
Swap the `0x36`/`0x37` labels in the deprecation header.

The handlers below still document `getValidatorPhase` as `0x36` and `greenlight` as `0x37`, so the new header is reversed.

Suggested fix
```diff
-// `@deprecated` — Secretary-related handlers (0x35 setValidatorPhase, 0x36 greenlight,
-// 0x37 getValidatorPhase, 0x38 getBlockTimestamp) replaced by Petri Consensus.
+// `@deprecated` — Secretary-related handlers (0x35 setValidatorPhase, 0x36 getValidatorPhase,
+// 0x37 greenlight, 0x38 getBlockTimestamp) replaced by Petri Consensus.
```

petri/02-risks-and-considerations.md-142-142 (1)
142-142: ⚠️ Potential issue | 🟡 Minor
Fix sentence fragment in the rationale.
Line [142] reads as a fragment; add a subject for clarity.
📝 Suggested wording
```diff
-| 1 | **Forge interval** | 2 seconds | Conservative start. Gives ample time for delta exchange even on high-latency networks. Can be optimized to 1s later once benchmarked. |
+| 1 | **Forge interval** | 2 seconds | Conservative start. Gives ample time for delta exchange even on high-latency networks. It can be optimized to 1s later once benchmarked. |
```

petri/02-risks-and-considerations.md-79-79 (1)
79-79: ⚠️ Potential issue | 🟡 Minor
The L2PSMempool "no change needed" claim is now outdated.
Line [79] conflicts with this PR's actual changes in `src/libs/blockchain/l2ps_mempool.ts`. Please soften to reflect that core execution logic is unchanged, but integration/timing/block-ref plumbing did change.

📝 Suggested wording
```diff
-**No change needed** to `L2PSMempool`, `L2PSTransactionExecutor`, or `L2PSConsensus` core logic. Only the timing of `applyPendingProofs()` changes slightly.
+**No core execution-model change needed** to `L2PSTransactionExecutor` or `L2PSConsensus`. `L2PSMempool` integration paths were adjusted for Petri-aware block reference handling.
```

petri/petri.md-1-2 (1)
1-2: ⚠️ Potential issue | 🟡 Minor
Remove leftover conversion artifact.
Line 1 appears to be a remnant from converting this document from another format. It should be removed as it's not part of the specification content.
🧹 Proposed fix
```diff
-I'll convert this pitch deck into a clean text-only document for you.
-
----
```

petri/petri.md-89-95 (1)
89-95: ⚠️ Potential issue | 🟡 Minor
Add language identifier to fenced code block.
The ASCII diagram should have a language identifier for proper rendering and to satisfy markdown linting (MD040).
🧹 Proposed fix
````diff
-```
+```text
 CLIENT → RPC → SHARD (Verify) → MEMPOOL SYNC → [STATE DELTA]
````

src/libs/consensus/petri/classifier/transactionClassifier.ts-11-12 (1)
11-12: ⚠️ Potential issue | 🟡 Minor
GCRGeneration is an undocumented export from the SDK.
The `@kynesyslabs/demosdk/websdk` subpath is correctly configured and exported. However, `GCRGeneration` does not appear in the SDK's public API documentation and is likely an internal or undocumented export. Verify with the SDK maintainers that this API is stable and intended for external use, or consider using an alternative documented export if available.

petri/00-codebase-mapping.md-99-99 (1)
99-99: ⚠️ Potential issue | 🟡 Minor
Table formatting issue — pipe characters in inline code break column structure.
The `|` characters inside the backtick code block are interpreted as table column separators, causing the table to render incorrectly.

🔧 Proposed fix — escape or restructure
```diff
-| Classification result type | NEW | `{ status: 'PRE-APPROVED' | 'TO-APPROVE' | 'PROBLEMATIC', delta?: StateDelta }` |
+| Classification result type | NEW | `{ status: 'PRE-APPROVED' \| 'TO-APPROVE' \| 'PROBLEMATIC', delta?: StateDelta }` |
```

Or move the type definition outside the table:

````diff
-| Classification result type | NEW | `{ status: 'PRE-APPROVED' | 'TO-APPROVE' | 'PROBLEMATIC', delta?: StateDelta }` |
+| Classification result type | NEW | See type definition below |
+
+```typescript
+{ status: 'PRE-APPROVED' | 'TO-APPROVE' | 'PROBLEMATIC', delta?: StateDelta }
+```
````

src/libs/consensus/petri/finality/transactionFinality.ts-54-54 (1)
54-54: ⚠️ Potential issue | 🟡 Minor
Consider returning `null` instead of `0` when timestamp is missing.

Using `0` for a missing timestamp can be misleading since `0` represents a valid epoch timestamp (1970-01-01). Returning `null` would be more accurate for "timestamp not available".

🔧 Proposed fix
```diff
- result.hardFinalityAt = Number(confirmedTx.content?.timestamp ?? 0)
+ result.hardFinalityAt = confirmedTx.content?.timestamp
+     ? Number(confirmedTx.content.timestamp)
+     : null
```

testing/loadgen/src/features/consensus/petri_relay_flow.ts-137-144 (1)
137-144: ⚠️ Potential issue | 🟡 Minor
Remove unnecessary `as any` casts for `confirm()` and `broadcast()` methods.

The SDK properly types these methods; other files in the codebase (e.g., `transfer_loadgen.ts`) call them directly without casts. The `as any` casts only serve to suppress type checking and should be removed:

```typescript
const validity = await demos.confirm(signedTx)
const broadcast = await demos.broadcast(validity)
```
src/libs/consensus/petri/forge/deltaAgreementTracker.ts-103-110 (1)
103-110: ⚠️ Potential issue | 🟡 Minor
Edge case: `Math.max(...hashCounts.values())` can return `-Infinity` for empty maps.

If `hashCounts` is empty (theoretically possible if a state exists but `memberHashes` is empty), `Math.max()` with no arguments returns `-Infinity`, producing a confusing log message.

🛡️ Suggested defensive fix
```diff
 log.warn(
     `[DeltaTracker] TX ${txHash} FLAGGED: no agreement after ${state.roundsTracked} rounds ` +
-        `(best: ${Math.max(...hashCounts.values())}/${this.threshold} needed)`,
+        `(best: ${hashCounts.size > 0 ? Math.max(...hashCounts.values()) : 0}/${this.threshold} needed)`,
 )
```
🧹 Nitpick comments (27)
.gitignore (1)
293-293: Remove duplicate ignore entry for `/testing/runs`. This pattern already exists at line 289, so this addition is redundant.
Suggested cleanup
```diff
-/testing/runs
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @.gitignore at line 293, remove the duplicate ignore entry "/testing/runs" from .gitignore (it already appears earlier in the file); locate the redundant line containing the pattern "/testing/runs" and delete it so the file contains only the single existing ignore entry.

src/libs/consensus/petri/execution/speculativeExecutor.ts (1)
78-82: Consider logging or validating unknown edit types.

The `default` case silently passes through for unrecognized edit types (`storage`, `tls`, etc.), which is documented. However, if a malformed or typo'd edit type arrives (e.g., `"balancee"`), it will succeed with `"passthrough"` rather than being caught. Adding a debug log or maintaining an explicit allowlist would help surface unexpected edit types during development.

♻️ Suggested improvement

```diff
 default:
-    // For other GCR edit types (storage, tls, etc.), we still produce a delta
-    // but skip simulation — the edit presence itself is the state change signal
+    // Known passthrough types: storage, tls, etc.
+    log.debug(
+        `[PetriSpecExec] Passthrough for edit type=${edit.type} on TX ${tx.hash}`,
+    )
     result = { success: true, message: "passthrough" }
     break
```
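A minimal self-contained sketch of the allowlist variant (the type names and `console` logger here are stand-ins for the module's real edit types and logger):

```typescript
// Hypothetical allowlist of GCR edit types that may pass through without
// simulation; any type outside it is surfaced instead of silently succeeding.
const PASSTHROUGH_EDIT_TYPES = new Set(["storage", "tls"])

interface EditResult {
    success: boolean
    message: string
}

function handleUnmatchedEdit(editType: string): EditResult {
    if (PASSTHROUGH_EDIT_TYPES.has(editType)) {
        return { success: true, message: "passthrough" }
    }
    // Unknown (possibly typo'd) edit type: log and flag it
    console.warn(`unrecognized GCR edit type: ${editType}`)
    return { success: false, message: `unknown edit type: ${editType}` }
}

console.log(handleUnmatchedEdit("storage").success) // true
console.log(handleUnmatchedEdit("balancee").success) // false
```

The allowlist turns "anything unmatched succeeds" into "only known types succeed", which is the safer default for consensus-critical classification.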
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/libs/consensus/petri/execution/speculativeExecutor.ts` around lines 78 - 82, the default branch in speculativeExecutor.ts currently returns a silent passthrough ({ success: true, message: "passthrough" }) for any unrecognized GCR edit type; change this to validate the edit type against an explicit allowlist of supported GCR edit types (e.g., storage, tls, etc.) and, if the incoming edit type is not in the allowlist, emit a debug/warn log (using the module's logger) and return a failure or a clearly flagged result instead of silently succeeding; update the code around the switch/default handling in the speculative execution function (the block assigning result = { success: true, message: "passthrough" }) to perform the allowlist check and logging so malformed or typo'd edit types are surfaced during development.

src/libs/consensus/petri/utils/canonicalJson.ts (1)
16-18: Consider handling circular references.

`JSON.stringify` throws on circular structures. If any state delta inadvertently contains a cycle, the entire consensus path will crash. Depending on trust in upstream data, you may want to catch this or use a cycle-safe serializer.
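As an illustration of the WeakSet strategy (a throwaway sketch, not the project's `canonicalJson`), a replacer can substitute a marker for already-visited objects so serialization never throws:

```typescript
// Cycle-safe stringify: a WeakSet records objects already serialized; a
// revisited object becomes a "[Circular]" marker instead of throwing.
// Caveat: this also flags shared (non-circular) references, so a production
// canonical serializer would track only the current traversal path instead.
function cycleSafeStringify(value: unknown): string {
    const seen = new WeakSet<object>()
    return JSON.stringify(value, (_key, val) => {
        if (typeof val === "object" && val !== null) {
            if (seen.has(val)) return "[Circular]"
            seen.add(val)
        }
        return val
    })
}

const delta: Record<string, unknown> = { account: "0xabc" }
delta.self = delta // introduce a cycle

console.log(cycleSafeStringify(delta)) // {"account":"0xabc","self":"[Circular]"}
```

Whether to emit a marker or reject the delta outright is a policy choice; for consensus hashing, rejecting may be preferable since a cycle likely indicates corrupted upstream data.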
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/libs/consensus/petri/utils/canonicalJson.ts` around lines 16 - 18, the canonicalJson function currently calls JSON.stringify(value, replacer, 0) which will throw on circular structures; update canonicalJson to handle cycles by using a cycle-safe serialization strategy (e.g., track seen objects with a WeakSet inside the replacer or wrap the JSON.stringify call in try/catch and fall back to a safe serializer) so circular references don't crash the consensus path; reference the canonicalJson function and the replacer to add the WeakSet tracking (or a fallback serializer) and ensure the output remains deterministic for the same structural content.

src/libs/consensus/v2/routines/orderTransactions.ts (1)
31-33: Remove the identity map; it is a no-op.

`.map(transaction => transaction)` returns the same array elements unchanged. This appears to be dead code, possibly a remnant from a previous implementation that performed transformation or stringification.

♻️ Suggested simplification

```diff
-    // Stringify the transactions
-    const orderedTransactions = orderedTransactionsObjects.map(
-        transaction => transaction,
-    )
-    return orderedTransactions
+    return orderedTransactionsObjects
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/libs/consensus/v2/routines/orderTransactions.ts` around lines 31 - 33, the map call that creates orderedTransactions is a no-op; replace the identity mapping of orderedTransactionsObjects with either a direct reference to orderedTransactionsObjects or, if an independent copy is required, use a shallow copy (e.g., spread) instead; update the code where orderedTransactions is used so it consumes orderedTransactionsObjects (or the shallow copy) and remove the redundant .map(transaction => transaction) to simplify the logic.

src/features/metrics/MetricsCollector.ts (1)
739-742: Improve readability of the `petriEnabled` check.

The variable `petriEnabled` is assigned `1` or `0` for the gauge, then used in a boolean check (`!petriEnabled`). While this works due to JavaScript's truthiness rules, it's clearer to use an explicit boolean check.

♻️ Clearer boolean handling

```diff
-    const petriEnabled = getSharedState.petriConsensus ? 1 : 0
-    this.metricsService.setGauge("petri_enabled", petriEnabled)
-
-    if (!petriEnabled) return
+    const isPetriEnabled = !!getSharedState.petriConsensus
+    this.metricsService.setGauge("petri_enabled", isPetriEnabled ? 1 : 0)
+
+    if (!isPetriEnabled) return
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/features/metrics/MetricsCollector.ts` around lines 739 - 742, the current code mixes a numeric flag and boolean check for petriEnabled; change it to an explicit boolean then derive the gauge value from it: create a boolean (e.g., petriEnabled = !!getSharedState.petriConsensus) and pass petriEnabled ? 1 : 0 to this.metricsService.setGauge("petri_enabled", ...), then continue using the boolean in the if (!petriEnabled) return check so the intent is clear; update references around petriEnabled in MetricsCollector to match.

src/model/entities/Mempool.ts (1)
49-51: Consider using the `bigint` type for the `soft_finality_at` property.

The column is defined as `"bigint"` in the database, but the TypeScript type is `number | null`. If `soft_finality_at` stores Unix timestamps in milliseconds or other large values, JavaScript `number` loses precision beyond `Number.MAX_SAFE_INTEGER` (9,007,199,254,740,991). Consider using TypeScript's `bigint` type for type safety.

♻️ Proposed type alignment

```diff
 // REVIEW: Petri Consensus soft finality timestamp (Phase 5)
 @Column("bigint", { name: "soft_finality_at", nullable: true })
-soft_finality_at: number | null
+soft_finality_at: bigint | null
```
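To keep full precision end to end, a value transformer can bridge the driver's string representation and JS `BigInt`. This sketch follows TypeORM's `ValueTransformer` shape; the column wiring shown in the comment is an assumption about this entity:

```typescript
// Converts between the driver's string representation of a bigint column
// and a JS BigInt, so values past Number.MAX_SAFE_INTEGER survive intact.
const bigintTransformer = {
    // entity -> database
    to: (value: bigint | null): string | null =>
        value === null ? null : value.toString(),
    // database -> entity
    from: (value: string | null): bigint | null =>
        value === null ? null : BigInt(value),
}

// Hypothetical wiring on the entity column:
// @Column("bigint", { name: "soft_finality_at", nullable: true, transformer: bigintTransformer })
// soft_finality_at: bigint | null

// A value one past MAX_SAFE_INTEGER round-trips exactly as BigInt:
console.log(bigintTransformer.from("9007199254740993")) // 9007199254740993n
console.log(bigintTransformer.to(9007199254740993n)) // "9007199254740993"
```

Without the transformer, most drivers return bigint columns as strings anyway, so the `number | null` annotation is doubly misleading.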
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/entities/Mempool.ts` around lines 49 - 51, the Mempool entity defines the DB column as "bigint" but types the property as number | null; change the TypeScript type to bigint | null for soft_finality_at and add a value transformer on the `@Column` to serialize/deserialize between DB string and JS BigInt (or store as string if your ORM expects it) so reads return BigInt and writes convert BigInt to the DB format; update the property declaration soft_finality_at and the `@Column`(...) decorator in src/model/entities/Mempool.ts accordingly to ensure safe handling of large integer timestamps.

testing/scripts/run-suite.ts (1)
272-279: Consider adding the `petri` suite to `resolveLocalTargets` for `--local` mode.

The `resolveLocalTargets` function handles special cases for several suites (`startup-cold-boot`, `gcr-routine`, `cluster-health`, etc.) but doesn't include `petri`. If the Petri suite requires specific target handling in local mode, add it here.

🔧 Potential addition if needed

```diff
 async function resolveLocalTargets(suite: string, explicitTargets: string | null): Promise<string | null> {
     if (explicitTargets) return explicitTargets
     if (suite === "startup-cold-boot") return defaultLocalTargets
     if (suite === "gcr-routine") return defaultLocalTargets
+    if (suite === "petri") return defaultLocalTargets
     if (suite === "cluster-health" || suite === "gcr-focus" || suite === "prod-gate") {
         return process.env.TARGETS ?? defaultLocalTargets
     }
     return null
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@testing/scripts/run-suite.ts` around lines 272 - 279, the resolveLocalTargets function omits the "petri" suite, so update resolveLocalTargets to handle petri in local mode by adding "petri" to the appropriate conditional branch (either the branch that returns defaultLocalTargets alongside "startup-cold-boot" and "gcr-routine" or the branch that returns process.env.TARGETS alongside "cluster-health"/"gcr-focus"/"prod-gate" depending on which behavior petri needs); locate the resolveLocalTargets function and modify the conditional checks to include "petri" and ensure defaultLocalTargets and process.env.TARGETS behavior remains consistent.

better_testing/petri/speculativeExecutor.test.ts (1)
11-32: Avoid duplicating hashing logic inside the test helper.

Keeping a local replica of the production algorithm can give false confidence when logic changes. Prefer extracting a shared `computeDeltaHash` utility from runtime code and importing it here.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@better_testing/petri/speculativeExecutor.test.ts` around lines 11 - 32, the test helper duplicates the production hashing logic (computeDeltaHash) which risks divergence; instead extract the canonical hashing routine from the runtime/speculativeExecutor module into a shared utility and import it in the test. Update the test to remove the local computeDeltaHash and import the exported function (e.g., computeDeltaHash or computeDeltaHashForTests) from the runtime code (where speculativeExecutor performs canonicalJson + Hashing.sha256) so the test uses the same implementation; ensure the shared utility handles bigint/string normalization the same way as speculativeExecutor.

src/libs/consensus/petri/forge/forgeInstance.ts (1)
15-23: Avoid exporting mutable singleton state directly.

Expose only `getPetriForgeInstance`/`setPetriForgeInstance` and keep the backing variable module-private to prevent accidental external mutation.

🔧 Suggested refactor

```diff
-export let petriForgeInstance: ContinuousForge | null = null
+let petriForgeInstance: ContinuousForge | null = null
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/libs/consensus/petri/forge/forgeInstance.ts` around lines 15 - 23, the exported mutable singleton petriForgeInstance should be module-private: remove the export from the petriForgeInstance binding and keep only setPetriForgeInstance and getPetriForgeInstance exported so external code can only access it via the setter/getter; update the declaration of petriForgeInstance (type ContinuousForge | null) to be non-exported while leaving exported functions setPetriForgeInstance and getPetriForgeInstance unchanged so callers use those accessors rather than mutating the variable directly.

better_testing/petri/happyPath.test.ts (1)
18-29: Minor: Mock peer structure has an unusual property name.

Line 22 uses `connection: { string: ... }`, which appears to be a typo or placeholder. While harmless in tests, consider using a more realistic property name for clarity:

```diff
-    connection: { string: `localhost:${3000 + i}` },
+    connectionString: `localhost:${3000 + i}`,
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@better_testing/petri/happyPath.test.ts` around lines 18 - 29, the mockPeers helper creates peer objects in mockPeers with an unrealistic property connection.string; change that to a clearer property name (e.g., connection.address or connection.url) inside the mockPeers function so the peer shape reads connection: { address: `localhost:${3000 + i}` }, and update any tests that consume connection.string to use the new property (references: mockPeers function and any assertions using connection.string).

src/libs/consensus/v2/routines/broadcastBlockHash.ts (1)
93-102: Clarify return value semantics.

The return value `[signatureCount, shard.length - signatureCount]` may not accurately represent pro/con votes:

- `signatureCount` includes all signatures in `block.validation_data.signatures`, potentially including pre-existing ones before this broadcast
- The second value assumes each shard member maps to exactly one signature, but signatures are keyed by `identity` (public key), not peer

The `pro` counter (line 79) tracks RPC successes, but isn't used in the return. If the caller expects `[pro, con]`, consider:

🔧 Suggested clarification

```diff
-    return [signatureCount, shard.length - signatureCount]
+    // Return actual signature count for BFT threshold validation
+    // Note: signatureCount may exceed shard.length if signatures accumulate across rounds
+    return [signatureCount, Math.max(0, shard.length - signatureCount)]
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/libs/consensus/v2/routines/broadcastBlockHash.ts` around lines 93 - 102, the return currently uses signatureCount and shard.length - signatureCount which misrepresents pro/con; update broadcastBlockHash to return accurate pro and con: use the existing pro counter for the first element and compute con as shard.length - pro; alternatively, if you must derive pro from signatures, map block.validation_data.signatures keys (identities) to shard peers to count unique shard signers and set pro to that count and con to shard.length - pro; ensure you reference the pro variable, shard array, and block.validation_data.signatures when making the change.

better_testing/petri/blockCompiler.test.ts (1)
173-195: Tests are trivial; consider removing them or documenting intent.

These tests verify JavaScript ternary operator behavior rather than actual consensus dispatch logic. If the intent is to document expected behavior for future integration tests, consider adding a comment explaining this is a placeholder. Otherwise, these provide little value.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@better_testing/petri/blockCompiler.test.ts` around lines 173 - 195, these tests in the "Consensus dispatch switching" describe block (tests "petriConsensus flag gates dispatch" and "dispatch function selection is deterministic") only assert basic ternary behavior rather than exercising the actual dispatch logic; either delete these trivial tests or replace them with meaningful checks that import and invoke the real dispatch/select function from mainLoop.ts (or add a clear comment that they are placeholders documenting expected behavior until integration tests exist). Specifically, remove or update the tests named "petriConsensus flag gates dispatch" and "dispatch function selection is deterministic" in blockCompiler.test.ts, and if keeping them as placeholders add a comment referencing the real selector (e.g., the dispatch/chooseConsensus function in mainLoop.ts) and why the tests are present.

petri/consensus.md (1)
212-222: Consider moving soak test results to a separate file or CI artifact.

Embedding specific test results (dated 2026-03-22) in documentation may become stale. Consider referencing CI artifacts or a separate test results file that can be updated independently.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@petri/consensus.md` around lines 212 - 222, the "Soak Test Results (2026-03-22)" section in petri/consensus.md (the heading and the results table) should be removed from the static docs and instead referenced from a separate, updatable artifact; extract the table and heading into a new file (e.g., a soak-results markdown or CI artifact) and replace the block under the heading "Soak Test Results (2026-03-22)" with a short pointer like "See latest soak test results: [link to artifact]" or a CI badge; update any references in petri/consensus.md that mention the soak results to point to the new file/CI artifact and ensure the unique heading "Soak Test Results (2026-03-22)" and the table rows (Devnet size, TXs submitted, Blocks produced, Block rate, Hard finality observed, PoRBFT activity, Test verdict) are moved to the new artifact so they can be updated independently.

better_testing/petri/benchmark.test.ts (2)
206-236: These tests assert design constants, not actual measurements.

The "finality latency design targets" tests verify arithmetic on hardcoded values rather than measuring actual latency. This documents design expectations but doesn't validate runtime behavior. Consider renaming to "Finality Design Constants" to clarify intent.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@better_testing/petri/benchmark.test.ts` around lines 206 - 236, the tests under describe("Benchmark — Finality Latency Design Targets") are asserting hardcoded design constants rather than measuring runtime behavior; rename the suite and/or individual tests to make intent explicit (e.g., describe("Finality Design Constants") and test names like "soft finality design constant", "hard finality design constant", "finality gap design constant") so readers know these are static expectations, not live measurements; update the strings in the describe and the three test(...) calls to the new names (referencing the existing describe and the three test blocks) and keep the assertions unchanged.
268-286: Use dynamic `import()` instead of `require()` for consistency.

The test uses CommonJS `require()` while the rest of the file uses ESM imports. For consistency and better TypeScript support, use dynamic `import()`:

♻️ Proposed fix

```diff
 test("forge getCurrentDeltas returns copy (no reference leak)", async () => {
     const config = {
         forgeIntervalMs: 60000,
         blockIntervalMs: 10000,
         agreementThreshold: 7,
         problematicTTLRounds: 5,
     }
-    const forge = new (require("@/libs/consensus/petri/forge/continuousForge").ContinuousForge)(config)
+    const { ContinuousForge } = await import("@/libs/consensus/petri/forge/continuousForge")
+    const forge = new ContinuousForge(config)
     forge.start(mockPeers(3))
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@better_testing/petri/benchmark.test.ts` around lines 268 - 286, replace the CommonJS require in the test with a dynamic ESM import: make the test async, await import("@/libs/consensus/petri/forge/continuousForge"), pull ContinuousForge from the imported module, then instantiate it (new ContinuousForge(config)), call forge.start(...), call getCurrentDeltas twice and assert as before, and finally call forge.stop(); ensure the test function signature is updated to async so the dynamic import can be awaited and teardown still runs.

petri/VADEMECUM.md (2)
32-34: Add language specifier to fenced code block.

The code block at line 32 is missing a language specifier. Since this shows a directory path, use an empty specifier or `text`:

📝 Proposed fix

````diff
-```
+```text
 better_testing/petri/
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@petri/VADEMECUM.md` around lines 32 - 34, the fenced code block containing the directory path better_testing/petri/ lacks a language specifier; update the opening fence from ``` to include a neutral language such as text so the block renders consistently in the VADEMECUM.md markdown.
149-197: Add language specifiers to file path code blocks.

Multiple code blocks showing file paths are missing language specifiers (lines 149, 167, 179, 189). Add `text` to silence linter warnings and improve rendering consistency.

📝 Proposed fix for all path blocks

````diff
-```
+```text
 src/libs/consensus/petri/
     index.ts             # petriConsensusRoutine()
````

Apply the same change to code blocks at lines 167, 179, and 189.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@petri/VADEMECUM.md` around lines 149 - 197, the fenced code blocks that list repository files (the blocks containing entries like index.ts, types/*.ts, canonicalJson.ts, petriSecretary.ts and the other filenames shown) are missing a language specifier; update each such triple-backtick block to use ```text so the path listings render consistently and linter warnings are silenced, ensuring you change all similar blocks in the document that show file/path listings.

better_testing/petri/routing.test.ts (1)
151-162: Flag gating tests are trivial and don't verify actual module behavior.

These tests only verify JavaScript ternary operator behavior, not the actual feature flag routing implementation. Consider importing and testing the actual routing logic from the implementation modules.

💡 Example of testing actual module behavior

```typescript
// Instead of testing local ternary logic:
// const path = petriConsensus ? "petri" : "dtr"
// Consider importing and testing the actual routing function:
// import { getRoutingPath } from "@/libs/consensus/petri/routing/..."
// expect(getRoutingPath({ petriConsensus: true })).toBe("petri")
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@better_testing/petri/routing.test.ts` around lines 151 - 162, replace the trivial ternary check with tests that call the real routing logic: import the routing function (e.g., getRoutingPath or whichever function implements flag-based routing in the consensus/petri routing module) and assert its return for both { petriConsensus: true } and { petriConsensus: false } scenarios; if the routing logic reads flags from a config or feature-flag service, either inject a test config/mock or call the function with a param object to control the flag, and update the test block in routing.test.ts to use those imports and assertions instead of the local const path = petriConsensus ? "petri" : "dtr".

better_testing/petri/byzantineFault.test.ts (1)
24-48: Remove redundant type annotations from default parameter values.

ESLint correctly identifies that the type annotations on `ttlRounds` and `currentRound` are redundant since TypeScript can infer them from the default values.

🔧 Proposed fix

```diff
 function simulateRound(
     shardSize: number,
     byzantineCount: number,
     txHash: string,
     correctDelta: string,
-    ttlRounds: number = 5,
-    currentRound: number = 1,
+    ttlRounds = 5,
+    currentRound = 1,
 ): { promoted: string[]; flagged: string[] } {
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@better_testing/petri/byzantineFault.test.ts` around lines 24 - 48, the function simulateRound has redundant TypeScript type annotations on parameters with default values (ttlRounds: number = 5, currentRound: number = 1); remove the explicit ": number" annotations so the defaults alone provide the inferred types. Update the simulateRound signature to use ttlRounds = 5 and currentRound = 1 (leave other params and body unchanged) so linting passes while preserving behavior in the DeltaAgreementTracker/recordDelta/evaluate workflow.

better_testing/petri/finality.test.ts (1)
112-125: Test does not actually verify idempotency behavior.

The test claims to verify that "soft finality is only set once" but it only assigns the value once and then checks it wasn't changed. To test idempotency, you would need to simulate multiple promotion attempts or call the actual `updateClassification` function.

💡 Suggested improvement

```typescript
test("soft finality is only set once (first PRE_APPROVED)", () => {
    const firstTime = 1700000000000
    const laterTime = 1700000002000

    // Simulate updateClassification behavior: only set if null
    let softFinalityAt: number | null = null

    // First promotion sets the timestamp
    if (softFinalityAt === null) {
        softFinalityAt = firstTime
    }

    // Second promotion attempt should NOT overwrite
    if (softFinalityAt === null) {
        softFinalityAt = laterTime
    }

    expect(softFinalityAt).toBe(firstTime)
})
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@better_testing/petri/finality.test.ts` around lines 112 - 125, the test "soft finality is only set once (first PRE_APPROVED)" is not exercising idempotent behavior; replace the manual single assignment with a simulation of multiple promotion attempts (or call into updateClassification if available) so the first assignment sets softFinalityAt and subsequent attempts do not overwrite it; specifically, in the test body use the softFinalityAt variable and perform two promotion steps (e.g., conditional assignments that only set softFinalityAt when it is null) or invoke the updateClassification path that implements that logic, then assert softFinalityAt equals firstTime and not laterTime.

testing/loadgen/src/features/consensus/petri_relay_flow.ts (1)
186-193: Consider making the finality propagation delay configurable.

The hardcoded `2000` ms sleep may not be appropriate for all environments. Consider using an environment variable consistent with other timing configurations in this file.

🔧 Proposed fix

```diff
 // Check finality consistency across all nodes
 const finalityChecks: NodeFinalityCheck[] = []
 if (txHash) {
     // Give finality a moment to propagate
-    await sleep(2000)
+    const finalityPropagationMs = envInt("CONSENSUS_FINALITY_PROPAGATION_MS", 2000)
+    await sleep(finalityPropagationMs)
     for (const url of rpcUrls) {
         finalityChecks.push(await checkFinalityOnNode(url, txHash))
     }
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@testing/loadgen/src/features/consensus/petri_relay_flow.ts` around lines 186 - 193, the hardcoded 2000ms delay before checking finality should be configurable; replace the literal sleep(2000) used in the finality propagation block (where txHash is present and you call checkFinalityOnNode for each rpcUrls) with a configurable value read from an environment variable (e.g., FINALITY_PROPAGATION_MS) with a sensible default fallback (2000). Update the code around the finalityChecks array and the await sleep(...) call to parse the env var to a number and use that value so different environments can adjust the delay without changing code.

better_testing/petri/liveness.test.ts (1)
214-227: Add explicit type annotation to avoid `never[]` inference.

The `mempool` array is initialized empty and then elements are pushed to it. Without a type annotation, TypeScript may infer `never[]`, which can cause type issues.

🔧 Proposed fix

```diff
 test("high volume: many PRE_APPROVED with some PROBLEMATIC", () => {
-    const mempool = []
+    const mempool: Array<{ hash: string; classification: string }> = []
     for (let i = 0; i < 50; i++) {
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@better_testing/petri/liveness.test.ts` around lines 214 - 227, the mempool variable is declared as an empty array which can infer never[]; update the declaration in the "high volume: many PRE_APPROVED with some PROBLEMATIC" test by giving mempool an explicit type such as { hash: string; classification: TransactionClassification }[] (or Array<{ hash: string; classification: TransactionClassification }>), then keep pushing the same objects so TypeScript correctly types mempool and TransactionClassification usages.

better_testing/petri/featureFlagRollback.test.ts (1)
34-62: Dispatch logic tests are trivial, following the same pattern as the routing tests.

These tests only verify JavaScript ternary operator behavior rather than the actual feature flag dispatch implementation. The value would be in testing the actual routing code path.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@better_testing/petri/featureFlagRollback.test.ts` around lines 34 - 62, the tests under the "Feature Flag — Dispatch Logic" describe block are only verifying a ternary expression; replace them to call the real dispatch/routing function (e.g., import and call your consensus dispatcher such as chooseConsensusPath or dispatchConsensus) instead of in-test ternaries; update each test (the "flag ON → petri path selected", "flag OFF → porbft path selected", "flag toggle: ON → OFF → ON produces correct sequence", and "rapid flag changes always resolve to current value") to set the feature flag, invoke the actual dispatch function, and assert that the correct handler/path was returned or the correct handler was invoked (use spies/mocks for the petri and porbft handlers and verify calls) so the tests exercise real routing logic rather than plain JS boolean expressions.

src/libs/blockchain/mempool_v2.ts (1)
318-335: Consider adding an existence check or return value for `updateClassification`.

The method silently no-ops if the transaction hash doesn't exist. Consider returning a boolean indicating success, or logging when the update affects 0 rows to aid debugging.

Additionally, there's a minor TOCTOU window between `findOne` (line 329) and `update` (line 334) where another process could set `soft_finality_at`. In practice this is low-risk since the worst outcome is a slightly different timestamp, but if precision matters, consider using a conditional SQL update instead:

```sql
UPDATE mempool SET soft_finality_at = ? WHERE hash = ? AND soft_finality_at IS NULL
```
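The set-only-if-NULL semantics of that conditional UPDATE can be sketched in memory (a toy stand-in for the real repository call; the table shape and names are illustrative, and unlike the single SQL statement this model does not demonstrate atomicity):

```typescript
// In-memory model of "UPDATE ... WHERE hash = ? AND soft_finality_at IS NULL":
// returns true when a row was affected, false otherwise, so callers can log
// missing hashes and repeated promotions never overwrite the first timestamp.
const mempool = new Map<string, { soft_finality_at: number | null }>()
mempool.set("0xabc", { soft_finality_at: null })

function setSoftFinalityIfUnset(hash: string, ts: number): boolean {
    const row = mempool.get(hash)
    if (!row) return false // 0 rows affected: unknown hash
    if (row.soft_finality_at !== null) return false // already set, no overwrite
    row.soft_finality_at = ts
    return true
}

console.log(setSoftFinalityIfUnset("0xabc", 1700000000000)) // true
console.log(setSoftFinalityIfUnset("0xabc", 1700000002000)) // false (idempotent)
console.log(mempool.get("0xabc")!.soft_finality_at) // 1700000000000
```

In the database the same guard collapses the read-then-write into one statement, closing the TOCTOU window because the `IS NULL` check and the write happen atomically.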
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/libs/blockchain/mempool_v2.ts` around lines 318 - 335, the updateClassification method currently silently no-ops when the txHash doesn't exist and has a TOCTOU between repo.findOne and repo.update; update it to return a boolean (or throw) indicating whether the update affected any rows and log when 0 rows are affected, and eliminate the TOCTOU by performing a conditional update that sets soft_finality_at only when NULL (e.g., use repo.update with a where clause that includes soft_finality_at IS NULL or execute a single conditional SQL UPDATE) rather than relying on a separate findOne; reference updateClassification, repo.findOne and repo.update when making the change.

src/libs/consensus/petri/arbitration/bftArbitrator.ts (1)
82-112: Consider a dedicated RPC method for BFT arbitration.

Using `roundNumber: -1` as a sentinel to distinguish arbitration from regular forge exchange (as noted in the REVIEW comment) is pragmatic but couples arbitration behavior to the forge exchange handler's implementation details.

If the arbitration protocol diverges from forge exchange (e.g., different timeout, retry logic, or response format), a dedicated `petri_arbitrateDeltas` RPC method would make the intent clearer and avoid accidental breakage.

This is not blocking since the current approach works correctly, but worth tracking as technical debt.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/libs/consensus/petri/arbitration/bftArbitrator.ts` around lines 82 - 112, the code reuses the petri_exchangeDeltas RPC with roundNumber: -1 as a sentinel inside the consensus_routine longCall (see peers.map / longCall usage) which couples BFT arbitration behavior to the forge exchange handler; add a dedicated RPC (e.g., petri_arbitrateDeltas) and update the caller in bftArbitrator.ts to invoke consensus_routine with params.method set to "petri_arbitrateDeltas" (and define its contract/response shape), implement a corresponding handler that returns deltas regardless of roundNumber and handles arbitration-specific timeouts/retries/response format, and then replace the sentinel-based checks around roundNumber and data.deltas[tx.hash] so arbitration logic is isolated from petri_exchangeDeltas.

testing/loadgen/src/features/consensus/petri_soak.ts (1)
154-160: Type casts suggest SDK type definitions may be incomplete.

The casts to `(demos as any).confirm` and `(demos as any).broadcast` suggest these methods might not be properly typed in the SDK. While acceptable for test code, consider updating the SDK type definitions if these are valid public methods.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@testing/loadgen/src/features/consensus/petri_soak.ts` around lines 154 - 160, the casts (demos as any).confirm and (demos as any).broadcast indicate missing typings for the SDK methods; fix this by updating the demos client type instead of using any — add confirm(...) and broadcast(...) signatures to the SDK's client interface (or the local type used in petri_soak.ts) so the code can call demos.confirm(signedTx) and demos.broadcast(result) with proper types, then remove the casts in petri_soak.ts (referencing the demos variable and the confirm/broadcast methods).

petri/03-secretary-coordinated-signing.md (1)
13-28: Add language specifier to fenced code blocks for lint compliance.
The markdownlint warnings flag code blocks without language specifiers. For ASCII diagrams and pseudocode, use `text` or `plaintext`:
📝 Suggested fixes
- Line 13: change the opening fence to `` ```text `` (ASCII diagram "All 10 shard members compile block independently (deterministic)")
- Line 123: change the opening fence to `` ```text `` (pseudocode "if (isWeSecretary(shard)):")
- Line 180: change the opening fence to `` ```text `` ("Task 1 (petriSecretary.ts)")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@petri/03-secretary-coordinated-signing.md` around lines 13 - 28, Add a language specifier (use "text") to the fenced code blocks in petri/03-secretary-coordinated-signing.md to satisfy markdownlint: update the block containing the ASCII diagram starting "All 10 shard members compile block independently (deterministic)" to ```text, the pseudocode block containing "if (isWeSecretary(shard)):" to ```text, and the block around "Task 1 (petriSecretary.ts)" to ```text so all fenced code blocks include a language tag.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 21dc0966-9680-4ca0-a3e2-928029a2bf1b
⛔ Files ignored due to path filters (2)
.mycelium/mycelium.db is excluded by !**/*.db
petri/pitch.pdf is excluded by !**/*.pdf
📒 Files selected for processing (84)
.env.example, .gitignore, AGENTS.md, TEAM.md, better_testing/petri/benchmark.test.ts, better_testing/petri/blockCompiler.test.ts, better_testing/petri/byzantineFault.test.ts, better_testing/petri/canonicalJson.test.ts, better_testing/petri/classifier.test.ts, better_testing/petri/conflictPath.test.ts, better_testing/petri/continuousForge.test.ts, better_testing/petri/deltaTracker.test.ts, better_testing/petri/featureFlagRollback.test.ts, better_testing/petri/finality.test.ts, better_testing/petri/happyPath.test.ts, better_testing/petri/liveness.test.ts, better_testing/petri/routing.test.ts, better_testing/petri/secretaryCoordination.test.ts, better_testing/petri/speculativeExecutor.test.ts, data/genesis.json, package.json, petri/00-codebase-mapping.md, petri/01-implementation-plan.md, petri/02-risks-and-considerations.md, petri/03-secretary-coordinated-signing.md, petri/VADEMECUM.md, petri/architecture-diagram.md, petri/consensus.md, petri/petri.md, src/config/defaults.ts, src/config/envKeys.ts, src/config/index.ts, src/config/loader.ts, src/config/types.ts, src/features/metrics/MetricsCollector.ts, src/index.ts, src/libs/blockchain/chainBlocks.ts, src/libs/blockchain/l2ps_mempool.ts, src/libs/blockchain/mempool_v2.ts, src/libs/communications/broadcastManager.ts, src/libs/consensus/petri/arbitration/bftArbitrator.ts, src/libs/consensus/petri/block/petriBlockCompiler.ts, src/libs/consensus/petri/block/petriBlockFinalizer.ts, src/libs/consensus/petri/classifier/transactionClassifier.ts, src/libs/consensus/petri/coordination/petriSecretary.ts, src/libs/consensus/petri/execution/speculativeExecutor.ts, src/libs/consensus/petri/finality/transactionFinality.ts, src/libs/consensus/petri/forge/continuousForge.ts, src/libs/consensus/petri/forge/deltaAgreementTracker.ts, src/libs/consensus/petri/forge/forgeInstance.ts, src/libs/consensus/petri/index.ts, src/libs/consensus/petri/routing/petriRouter.ts, src/libs/consensus/petri/routing/shardMapper.ts, src/libs/consensus/petri/types/classificationTypes.ts, src/libs/consensus/petri/types/continuousForgeTypes.ts, src/libs/consensus/petri/types/deltaComparison.ts, src/libs/consensus/petri/types/petriConfig.ts, src/libs/consensus/petri/types/stateDelta.ts, src/libs/consensus/petri/utils/canonicalJson.ts, src/libs/consensus/v2/PoRBFT.ts, src/libs/consensus/v2/routines/broadcastBlockHash.ts, src/libs/consensus/v2/routines/ensureCandidateBlockFormed.ts, src/libs/consensus/v2/routines/manageProposeBlockHash.ts, src/libs/consensus/v2/routines/orderTransactions.ts, src/libs/consensus/v2/types/secretaryManager.ts, src/libs/consensus/v2/types/validationStatusTypes.ts, src/libs/network/endpointExecution.ts, src/libs/network/endpointValidation.ts, src/libs/network/manageConsensusRoutines.ts, src/libs/network/rpcDispatch.ts, src/libs/omniprotocol/protocol/handlers/consensus.ts, src/model/entities/Mempool.ts, src/model/entities/Transactions.ts, src/utilities/mainLoop.ts, src/utilities/sharedState.ts, testing/TESTING_MAP.md, testing/devnet/.env.example, testing/devnet/docker-compose.yml, testing/loadgen/src/features/consensus/petri_block_production.ts, testing/loadgen/src/features/consensus/petri_relay_flow.ts, testing/loadgen/src/features/consensus/petri_soak.ts, testing/loadgen/src/features/consensus/petri_tx_inclusion.ts, testing/loadgen/src/main.ts, testing/scripts/run-suite.ts
```ts
function electSecretary<T>(shard: T[]): T {
  return shard[0]
}
```
Use the real election logic in this suite.
This helper just returns shard[0], but the production coordinator elects from a sorted identity set that also includes the local pubkey. The tests can pass while the shipped code elects a different secretary, so they aren't validating the behavior this PR actually introduces.
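The described production behavior can be sketched standalone (names and shapes here are illustrative, not the actual petriSecretary.ts API): include the local pubkey in the candidate set, sort deterministically, and take the first identity.

```typescript
// Sketch of an election helper mirroring the described production logic:
// the local pubkey joins the identity set, the set is sorted with an explicit
// comparator, and the first identity wins. Names are hypothetical.
function electSecretaryIdentity(shardIdentities: string[], localPubkey: string): string {
  const all = [...new Set([...shardIdentities, localPubkey])]
  all.sort((a, b) => (a < b ? -1 : a > b ? 1 : 0)) // explicit, locale-independent
  return all[0]
}

function isWeSecretary(shardIdentities: string[], localPubkey: string): boolean {
  return electSecretaryIdentity(shardIdentities, localPubkey) === localPubkey
}

// The local pubkey can win even though it is absent from the shard peer list —
// exactly the case a shard[0] stub never exercises.
console.log(electSecretaryIdentity(["peer_b", "peer_c"], "peer_a")) // peer_a
console.log(isWeSecretary(["peer_b", "peer_c"], "peer_a")) // true
```

A test that imports (or faithfully replicates) this logic would fail whenever the stub and the coordinator disagree on the elected secretary.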
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@better_testing/petri/secretaryCoordination.test.ts` around lines 17 - 19, The
test helper electSecretary currently returns shard[0]; replace it with the real
election logic used in production so tests reflect actual behavior: ensure
electSecretary<T> constructs the full identity set (including the local pubkey),
sorts the identities deterministically the same way the coordinator does, and
selects the secretary from that sorted set (or call the production election
function if available) so the test suite elects the same secretary as the
running coordinator.
…ring Address S2871 lint rule — Array.prototype.sort() without a compare function converts elements to strings and sorts by UTF-16 code units, which mis-orders numbers and non-ASCII identities.
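The pitfall behind S2871 is easy to demonstrate in isolation:

```typescript
// Default sort stringifies elements and compares UTF-16 code units.
const nums = [10, 2, 1]
console.log(JSON.stringify([...nums].sort())) // [1,10,2] — lexicographic
console.log(JSON.stringify([...nums].sort((a, b) => a - b))) // [1,2,10]

// For identity sets, an explicit code-unit comparator keeps ordering
// deterministic across environments (unlike localeCompare, which is
// locale-aware and therefore unsuitable for consensus-critical sorting).
const ids = ["peer_b", "peer_A", "peer_a"]
console.log(JSON.stringify([...ids].sort((a, b) => (a < b ? -1 : a > b ? 1 : 0))))
```

For secretary election, every node must produce byte-identical orderings, so an explicit comparator is the safe choice.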
Actionable comments posted: 6
♻️ Duplicate comments (4)
testing/petri/secretaryCoordination.test.ts (1)
17-19: ⚠️ Potential issue | 🟠 Major
Test helper doesn't match production election logic.
This `electSecretary` returns `shard[0]`, but production (in `petriSecretary.ts`) sorts all identities including the local pubkey before selecting. Tests pass while the shipped code may elect a different secretary. Import and use the real `electSecretary` and `isWeSecretary` from the production module, or replicate the exact logic including local pubkey handling.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/petri/secretaryCoordination.test.ts` around lines 17 - 19, the test helper electSecretary in testing/petri/secretaryCoordination.test.ts is inconsistent with production; replace the stubbed logic by importing and using the real electSecretary (and isWeSecretary if needed) from the production module (petriSecretary.ts), or update the helper to precisely replicate production: include the local pubkey in the identity list, sort the full set deterministically, then select the first element as production does; update tests to call the imported electSecretary/isWeSecretary so test election outcomes match production behavior.
src/libs/consensus/petri/coordination/petriSecretary.ts (3)
142-148: ⚠️ Potential issue | 🔴 Critical
`resetCollection()` at start wipes early-arriving submissions.
`receiveBlockHashSubmission()` buffers responses as they arrive via RPC. Calling `resetCollection()` at line 147 clears that buffer, dropping any submissions received before `collectBlockHashes()` starts. This creates a race condition where early responses become false timeouts.
🔧 Proposed fix — initialize per-collection state without clearing inbox
```diff
 export async function collectBlockHashes(
   shard: Peer[],
   block: Block,
   timeoutMs = 5000,
 ): Promise<CollectionResult> {
-  resetCollection()
+  // Don't clear pendingSubmissions — keep early-arriving responses
+  // Only reset the resolve callback for this collection round
+  collectionResolve = null
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libs/consensus/petri/coordination/petriSecretary.ts` around lines 142 - 148, Calling resetCollection() at the start of collectBlockHashes wipes the global inbox and drops early-arriving submissions from receiveBlockHashSubmission; instead, stop clearing the shared buffer and scope collection state per-invocation: remove the resetCollection() call from collectBlockHashes and either (a) create a new per-call collection map/identifier that collectBlockHashes uses to gather responses while receiveBlockHashSubmission appends to the shared inbox, or (b) if using a shared inbox data structure, filter/consume only entries matching the current block id within collectBlockHashes rather than clearing everything; update collectBlockHashes to initialize its own local result-tracking (e.g., a local Map or collectionId) and leave resetCollection() for teardown or explicit global resets.
112-118: ⚠️ Potential issue | 🟠 Major
`electSecretary()` fallback to `shard[0]` is problematic. When the elected secretary is the local node, `found` is `undefined` (since the local pubkey isn't in the shard peer list), and the function returns `shard[0]` — a different peer. This causes `handleSecretaryOffline()` to probe the wrong member. For a single-node shard, this returns `undefined`. The comment says callers should use `isWeSecretary`, but the function signature promises to return a `Peer`, which is misleading.
🔧 Proposed fix
```diff
-export function electSecretary(shard: Peer[]): Peer {
+export function electSecretary(shard: Peer[]): Peer | null {
   const secretaryId = getSecretaryIdentity(shard)
   const found = shard.find(p => p.identity === secretaryId)
-  // If we are the secretary, return shard[0] as a fallback peer reference
-  // (the caller should use isWeSecretary to decide the code path)
-  return found ?? shard[0]
+  // Returns null if the secretary is the local node (not in shard list)
+  return found ?? null
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libs/consensus/petri/coordination/petriSecretary.ts` around lines 112 - 118, The electSecretary function currently returns shard[0] as a fallback when getSecretaryIdentity(...) resolves to the local node (found === undefined), which misleads callers and causes probing the wrong peer (see handleSecretaryOffline); change electSecretary to stop returning shard[0] and instead return an optional Peer (e.g., Peer | undefined or null) when the secretary identity is not present in shard, update the function signature and callers (including places using isWeSecretary and handleSecretaryOffline) to handle the undefined case explicitly, and ensure getSecretaryIdentity, electSecretary, and any code assuming a guaranteed Peer are adjusted to check for absence before proceeding.
175-218: ⚠️ Potential issue | 🔴 Critical
Submissions with wrong block number or invalid signatures are re-counted each iteration.
The loop at lines 177-218 only records matching valid submissions in `signatures`. Submissions with:
- Wrong block number (lines 180-186): skipped with `continue`, re-processed next iteration
- Mismatched hash (lines 210-216): increments `mismatchCount` but not marked processed
- Invalid signature (lines 204-208): increments `mismatchCount` but not marked processed
This causes `mismatchCount` to inflate on each poll iteration, potentially exceeding `totalMembers` and making the threshold calculation incorrect.
🔧 Proposed fix — track all processed senders
```diff
 const signatures: Record<string, string> = {
   [ourPubkey]: uint8ArrayToHex(ourSignature.signature),
 }
+const processed = new Set<string>([ourPubkey])
 let matchCount = 1 // counting ourselves
 let mismatchCount = 0
 // ...
 while (Date.now() < deadline && matchCount < totalMembers) {
   for (const [pubkey, submission] of pendingSubmissions) {
-    if (signatures[pubkey]) continue // Already processed
+    if (processed.has(pubkey)) continue // Already processed
     if (submission.blockNumber !== block.number) {
       log.warn(/* ... */)
+      processed.add(pubkey)
+      mismatchCount++
       continue
     }
     if (submission.blockHash === expectedHash) {
       // Verify signature...
       if (isValid) {
         signatures[pubkey] = submission.signature
+        processed.add(pubkey)
         matchCount++
       } else {
+        processed.add(pubkey)
         mismatchCount++
       }
     } else {
+      processed.add(pubkey)
       mismatchCount++
     }
   }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libs/consensus/petri/coordination/petriSecretary.ts` around lines 175 - 218, the loop in petriSecretary.ts re-processes the same pubkeys across iterations because only valid matches are recorded in signatures, causing mismatchCount to inflate; fix by marking a submission as processed regardless of outcome: introduce a processedSenders Set (or remove entries from pendingSubmissions) and, inside the pendingSubmissions for-loop, add pubkey to processedSenders (or delete from pendingSubmissions) when you skip for wrong blockNumber, on hash mismatch, or on invalid signature so each sender increments mismatchCount at most once; keep signatures mapping only for valid signatures and use processedSenders when deciding whether to skip already-handled senders.
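The dedup pattern the fix calls for can be sketched standalone (simplified shapes, not the real petriSecretary.ts types): every sender is marked processed on first sight, so repeated poll iterations cannot re-count the same mismatch.

```typescript
// Standalone sketch: tally submissions idempotently across poll iterations.
interface Submission { blockNumber: number; blockHash: string }

function tally(
  pending: Map<string, Submission>,
  expected: { blockNumber: number; blockHash: string },
  processed: Set<string>,
  counts: { match: number; mismatch: number },
): void {
  for (const [pubkey, sub] of pending) {
    if (processed.has(pubkey)) continue // already handled in a prior poll
    processed.add(pubkey) // mark regardless of outcome
    if (sub.blockNumber !== expected.blockNumber) { counts.mismatch++; continue }
    if (sub.blockHash === expected.blockHash) counts.match++
    else counts.mismatch++
  }
}

const pending = new Map<string, Submission>([
  ["p1", { blockNumber: 7, blockHash: "h" }],
  ["p2", { blockNumber: 6, blockHash: "h" }], // wrong block number
  ["p3", { blockNumber: 7, blockHash: "x" }], // mismatched hash
])
const processed = new Set<string>()
const counts = { match: 0, mismatch: 0 }
const expected = { blockNumber: 7, blockHash: "h" }

tally(pending, expected, processed, counts) // first poll
tally(pending, expected, processed, counts) // second poll: no double counting
console.log(counts.match, counts.mismatch) // 1 2
```

Without the `processed` set, the second poll would push `mismatch` to 4, which is how the real loop can exceed `totalMembers`.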
🧹 Nitpick comments (15)
testing/petri/classifier.test.ts (1)
16-40: Unused `mockTx` helper function.
The `mockTx` function is defined but never used in any of the tests. All tests call `classifyFromEdits` directly with edit arrays, bypassing this helper. Consider removing it to reduce dead code, or add tests that use the full transaction mock.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/petri/classifier.test.ts` around lines 16 - 40, the helper function mockTx is dead code and not used by any tests that call classifyFromEdits; either delete the mockTx function declaration to remove the unused helper, or update tests that currently construct edit arrays directly to instead create a full transaction via mockTx(...) and pass its content.gcr_edits (or equivalent) into classifyFromEdits so tests exercise the full transaction shape; locate the mockTx function and adjust or remove it and update any tests referencing edit arrays to use mockTx when opting to keep the helper.
src/libs/consensus/petri/coordination/petriSecretary.ts (1)
55-56: Module-level mutable state creates test isolation challenges.
`pendingSubmissions` and `collectionResolve` are module-level variables. This works for runtime but makes unit testing difficult — tests must call `resetCollection()` in setup to avoid cross-test pollution. Consider exposing a test-only reset or using a class-based approach.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libs/consensus/petri/coordination/petriSecretary.ts` around lines 55 - 56, the module-level mutable variables pendingSubmissions and collectionResolve cause test pollution; encapsulate that state so tests don't need global cleanup. Either (A) move pendingSubmissions and collectionResolve into an instance of a PetriSecretary class (make existing exported functions become methods on that class and update callers to use a singleton or new instance), or (B) keep the module functions but add and export a test-only resetCollection() that clears pendingSubmissions (pendingSubmissions.clear()) and sets collectionResolve = null; reference the symbols pendingSubmissions, collectionResolve, and resetCollection in your change so tests can call the reset or callers can use the new class instance.
testing/petri/featureFlagRollback.test.ts (1)
Verify each finding against the current code and only fix it if needed. In `@src/libs/consensus/petri/coordination/petriSecretary.ts` around lines 55 - 56, The module-level mutable variables pendingSubmissions and collectionResolve cause test pollution; encapsulate that state so tests don't need global cleanup. Either (A) move pendingSubmissions and collectionResolve into an instance of a PetriSecretary class (make existing exported functions become methods on that class and update callers to use a singleton or new instance), or (B) keep the module functions but add and export a test-only resetCollection() that clears pendingSubmissions (pendingSubmissions.clear()) and sets collectionResolve = null; reference the symbols pendingSubmissions, collectionResolve, and resetCollection in your change so tests can call the reset or callers can use the new class instance.testing/petri/featureFlagRollback.test.ts (1)
34-62: Dispatch tests validate trivial logic rather than actual routing.
These tests verify that `petriConsensus ? "petri" : "porbft"` works correctly, but don't test the actual `getSharedState.petriConsensus` integration or the real dispatch paths in the codebase. Consider adding integration tests that mock `getSharedState` and verify actual routing behavior.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/petri/featureFlagRollback.test.ts` around lines 34 - 62, current tests only assert the ternary expression; update them to mock the actual shared state and exercise the real dispatch logic: import and spy/mock getSharedState to return petriConsensus true/false (and sequences), then call the real routing/dispatch function (the module that consumes getSharedState — e.g., the dispatcher or router entrypoint used in production) and assert it chose the "petri" or "porbft" path; also add an integration-style test that toggles the mocked getSharedState values over time and verifies the dispatcher resolves to the current value. Ensure you reference and spy on getSharedState and invoke the actual dispatch function used in your codebase rather than asserting the ternary expression directly.
petri/03-secretary-coordinated-signing.md (1)
13-28: Add language identifiers to fenced code blocks.
Per markdownlint, fenced code blocks should specify a language for proper syntax highlighting:
- Line 13: add `text` or `plaintext`
- Line 123: add `typescript` or `text`
- Line 180: add `text` or `plaintext`
Also applies to: 123-140, 180-192
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@petri/03-secretary-coordinated-signing.md` around lines 13 - 28, The fenced code blocks in the document (the block starting with "All 10 shard members compile block independently (deterministic)" and the other fenced blocks around lines 123-140 and 180-192) lack language identifiers; update each triple-backtick fence to include an appropriate language tag such as text or plaintext (or typescript for the code-like block at ~123 if relevant) so markdownlint passes and syntax highlighting is applied consistently.testing/petri/happyPath.test.ts (2)
18-37: Extract shared test helpers to a common module.
`mockPeers`, `bftThreshold`, and `isBlockValid` are duplicated across multiple test files (happyPath.test.ts, blockCompiler.test.ts, routing.test.ts, conflictPath.test.ts, liveness.test.ts, byzantineFault.test.ts).
Consider creating a shared test utilities file:
```ts
// testing/petri/helpers.ts
export function mockPeers(count: number): any[] { ... }
export function bftThreshold(totalMembers: number): number { ... }
export function isBlockValid(pro: number, totalVotes: number): boolean { ... }
```
This would reduce duplication and ensure consistent behavior across all test suites.
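For illustration, a complete version of such a helpers module might look like this. The implementations are inferred from how the suites appear to use the helpers (and from the 7/10 threshold mentioned elsewhere in this PR); the real shapes may differ.

```typescript
// Hypothetical full body for the proposed testing/petri/helpers.ts.
export interface MockPeer {
  identity: string
  connection: { string: string }
}

export function mockPeers(count: number): MockPeer[] {
  return Array.from({ length: count }, (_, i) => ({
    identity: `peer_${String(i).padStart(3, "0")}`,
    connection: { string: `localhost:${3000 + i}` },
  }))
}

// Classic BFT supermajority: strictly more than 2/3 of the membership.
export function bftThreshold(totalMembers: number): number {
  return Math.floor((totalMembers * 2) / 3) + 1
}

export function isBlockValid(pro: number, totalVotes: number): boolean {
  return pro >= bftThreshold(totalVotes)
}

console.log(bftThreshold(10), isBlockValid(7, 10), isBlockValid(6, 10))
```

Centralizing these also means a future change to the threshold formula only needs updating in one place.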
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/petri/happyPath.test.ts` around lines 18 - 37, Extract the duplicated helpers into a single module and update tests to import them: create testing/petri/helpers.ts that exports mockPeers, bftThreshold, and isBlockValid (copy the existing implementations from happyPath.test.ts), then replace the local declarations in happyPath.test.ts, blockCompiler.test.ts, routing.test.ts, conflictPath.test.ts, liveness.test.ts, and byzantineFault.test.ts with imports from that module (import { mockPeers, bftThreshold, isBlockValid } from "testing/petri/helpers"), ensuring the exported signatures and behavior remain identical to the originals (including mockPeers return shape and bftThreshold logic).
196-256: Duplicate coverage with continuousForge.test.ts.
The "Forge State Machine" tests here overlap significantly with the dedicated `continuousForge.test.ts` suite. Consider either:
- Removing these tests and relying on `continuousForge.test.ts` for forge lifecycle coverage
- Keeping only the happy-path-specific integration aspects here
This would reduce test maintenance burden and clarify test ownership.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/petri/happyPath.test.ts` around lines 196 - 256, These tests duplicate lifecycle coverage from continuousForge.test.ts; either delete the overlapping "Happy Path — Forge State Machine" test block or trim it to only keep higher-level integration assertions (e.g., only the end-to-end happy path flows that exercise ContinuousForge.start/shard interaction and getState transitions) while removing unit-like cases already covered (reset, double-start, and lifecycle sub-assertions). Locate the tests referencing ContinuousForge, mockPeers, and methods start/pause/resume/reset/stop in testing/petri/happyPath.test.ts and remove or reduce them to a single integration test that asserts state transitions for a real shard scenario, leaving detailed lifecycle specs in continuousForge.test.ts.testing/petri/blockCompiler.test.ts (3)
200-223: Lifecycle state sequence test has limited value.
This test constructs an array of strings and then asserts it equals the same hardcoded array. It validates the expected sequence as documentation but doesn't exercise any real code.
Consider either:
- Importing and testing the actual `runBlockPeriod()` implementation with mocks
- Moving this to inline documentation in the production code
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/petri/blockCompiler.test.ts` around lines 200 - 223, The test currently only asserts a hardcoded array and doesn't exercise behavior; either replace it with an integration/unit test that invokes runBlockPeriod() and verifies the actual pause/compile/finalize/reset/resume lifecycle using mocks/stubs for pause(), compileBlock(), finalizeBlock(), reset(), and resume() (asserting the sequence and side-effects), or remove the test and move the sequence example into inline documentation near runBlockPeriod() as a commented example; update the test named "forge pause/resume pattern for block compilation" in testing/petri/blockCompiler.test.ts accordingly and ensure mocks verify call order on the real functions rather than comparing two identical arrays.
173-195: Consider removing trivial dispatch tests.
These tests only verify that a JavaScript ternary operator works correctly (`flag ? "petri" : "porbft"`). They don't actually test any production code paths or integration with the real dispatch logic.
If the intent is to document the expected dispatch behavior, consider importing and testing the actual dispatch function, or convert these to documentation comments instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/petri/blockCompiler.test.ts` around lines 173 - 195, The tests under "Consensus dispatch switching" are trivially exercising a JavaScript ternary rather than production logic; remove or replace them with real integration/unit tests that call the actual dispatch code (e.g., import and invoke the dispatch function used in mainLoop.ts) instead of local ternary checks in the tests "petriConsensus flag gates dispatch" and "dispatch function selection is deterministic"; if keeping a lightweight check for documentation, convert the two tests into a short comment explaining expected behavior and add one real test that imports the dispatch implementation and asserts that passing petriConsensus=true selects the "petri" handler and false selects the "porbft" handler.
83-169: Result structure tests use inline objects instead of actual types.
These tests construct plain objects matching expected shapes but don't import `CompilationResult`, `ArbitrationResult`, or `FinalizationResult` types from the production code. This means type changes in production won't cause test failures.
Consider importing and using the actual types:
```ts
import type {
  CompilationResult,
  ArbitrationResult,
  FinalizationResult
} from "@/libs/consensus/petri/block/types"
```
Verify each finding against the current code and only fix it if needed. In `@testing/petri/blockCompiler.test.ts` around lines 83 - 169, Import the actual types (CompilationResult, ArbitrationResult, FinalizationResult) and annotate the test objects with those types instead of using plain inline objects: add "import type { CompilationResult, ArbitrationResult, FinalizationResult } from '@/libs/consensus/petri/block/types'" at the top, then change each test's result declaration to use explicit typing (e.g., "const result: CompilationResult = { ... }", "const result: ArbitrationResult = { ... }", "const result: FinalizationResult = { ... }") so the tests will fail if the production types change; update the three describe blocks named "CompilationResult structure", "ArbitrationResult structure", and "FinalizationResult structure" accordingly.testing/petri/conflictPath.test.ts (1)
273-275: Move helper function definition to top of file.
`isBlockValid` is defined at the end of the file but used earlier (line 242). While function hoisting makes this work, it's cleaner and more readable to define helpers near the top with `bftThreshold`:
♻️ Suggested reordering
```diff
 // ---- Helpers ----
 function bftThreshold(totalMembers: number): number {
   return Math.floor((totalMembers * 2) / 3) + 1
 }
+
+function isBlockValid(pro: number, total: number): boolean {
+  return pro >= bftThreshold(total)
+}

 // ---- Conflict Path: Delta Disagreement → PROBLEMATIC ----
 ...
-
-function isBlockValid(pro: number, total: number): boolean {
-  return pro >= bftThreshold(total)
-}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/petri/conflictPath.test.ts` around lines 273 - 275, Move the helper function isBlockValid so it's declared near the top of the file alongside bftThreshold before any tests that call it; update the file by cutting the isBlockValid(pro: number, total: number): boolean { return pro >= bftThreshold(total) } definition from the end and pasting it above the tests (near bftThreshold) so readers see the helper before it's used.testing/petri/finality.test.ts (1)
200-215: Simplify classification timestamp logic test.
The loop with nested conditionals is harder to read than necessary. Consider a more direct approach:
♻️ Suggested simplification
```diff
 test("only PRE_APPROVED classification triggers soft_finality_at", () => {
-  const classifications = [
-    "TO_APPROVE",
-    "PROBLEMATIC",
-    "PRE_APPROVED",
-  ]
-
-  for (const cls of classifications) {
-    const shouldSetTimestamp = cls === "PRE_APPROVED"
-    if (cls === "PRE_APPROVED") {
-      expect(shouldSetTimestamp).toBe(true)
-    } else {
-      expect(shouldSetTimestamp).toBe(false)
-    }
-  }
+  const shouldSet = (cls: string) => cls === "PRE_APPROVED"
+
+  expect(shouldSet("TO_APPROVE")).toBe(false)
+  expect(shouldSet("PROBLEMATIC")).toBe(false)
+  expect(shouldSet("PRE_APPROVED")).toBe(true)
 })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/petri/finality.test.ts` around lines 200 - 215, The test "only PRE_APPROVED classification triggers soft_finality_at" currently uses an unnecessary inner if; simplify by asserting the boolean expression directly for each classification. Replace the loop body so that for each cls in classifications you call expect(cls === "PRE_APPROVED").toBe(expectedValue) (or build an expected array like [false, false, true] and assert classifications.map(c => c === "PRE_APPROVED") equals that array). Update the test function (the test case with that name) to use this direct assertion pattern rather than the nested conditional.testing/petri/routing.test.ts (1)
61-78: Probabilistic test assertion is reasonable but could be more robust.
The test checks that "at least some should differ" with `diffCount > 5` out of 20 pairs. This is reasonable for a PRNG-based selection, but the threshold is somewhat arbitrary.
Consider documenting the expected collision probability or using a higher iteration count for more statistical confidence:
```ts
// With 10 peers and 2 selections, the probability of two random selections
// being identical is roughly 1/45 (C(10,2)). Over 20 pairs, we expect ~19-20
// to differ. Threshold of 5 gives us very high confidence.
```
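The numbers in the suggested comment check out, assuming selections are uniform and independent over 2-member subsets:

```typescript
// Back-of-envelope check: two independent uniform picks of a 2-member subset
// from 10 peers coincide with probability 1/C(10,2) = 1/45, so out of 20
// pairs we expect about 19.6 to differ.
function comb(n: number, k: number): number {
  let r = 1
  for (let i = 0; i < k; i++) r = (r * (n - i)) / (i + 1)
  return r
}

const subsets = comb(10, 2) // 45 possible 2-member subsets
const pSame = 1 / subsets
const expectedDiffOutOf20 = 20 * (1 - pSame)
console.log(subsets, expectedDiffOutOf20.toFixed(1)) // 45 19.6
```

Against that expectation, a threshold of 5 differing pairs is extremely conservative, which is exactly why the test is stable but uninformative about the selection distribution.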
Verify each finding against the current code and only fix it if needed. In `@testing/petri/routing.test.ts` around lines 61 - 78, The probabilistic assertion in the test using selectMembers and mockPeers is weak; increase confidence by running more trials and tightening the threshold or documenting the expected collision probability—e.g., run 100 iterations by looping i from 0..99, compute diffCount as you do now, and assert a higher minimum (such as expect(diffCount).toBeGreaterThan(80)); additionally add a brief comment near the test explaining the expected collision probability given 10 peers and 2 selections so the threshold is understandable.testing/petri/benchmark.test.ts (1)
24-29: Consider adding proper type for mock peers.
The `any[]` return type loses type safety. Consider defining an interface that matches what the actual peer structure expects to catch type mismatches early.
```diff
+interface MockPeer {
+  identity: string
+  connection: { string: string }
+}
+
-function mockPeers(count: number): any[] {
+function mockPeers(count: number): MockPeer[] {
   return Array.from({ length: count }, (_, i) => ({
     identity: `peer_${String(i).padStart(3, "0")}`,
     connection: { string: `localhost:${3000 + i}` },
   }))
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/petri/benchmark.test.ts` around lines 24 - 29, The mockPeers function currently returns any[] losing type safety; define a proper interface (e.g., Peer or PeerInfo) matching the shape used by tests — at least identity: string and connection: { string: string } — or import the existing peer type from the codebase, then change function mockPeers(count: number): any[] to mockPeers(count: number): Peer[] (or the imported type) and update any references so the tests use the strongly typed peer objects.petri/01-implementation-plan.md (1)
60-60: Minor: Add period after "etc".
In American English, abbreviations like "etc." require a period.
```diff
-| Chain (blocks) | `src/libs/blockchain/chainBlocks.ts` | `insertBlock()`, `getBlockByNumber()` etc |
+| Chain (blocks) | `src/libs/blockchain/chainBlocks.ts` | `insertBlock()`, `getBlockByNumber()`, etc. |
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@petri/01-implementation-plan.md` at line 60, minor punctuation fix: update the table entry for "Chain (blocks)" in petri/01-implementation-plan.md so the abbreviation "etc" is written as "etc." (add the missing period) — specifically edit the cell showing `src/libs/blockchain/chainBlocks.ts` and the function list (`insertBlock()`, `getBlockByNumber()` etc) to read `insertBlock()`, `getBlockByNumber()`, etc., ensuring the period follows "etc.".
petri/VADEMECUM.md (1)
32-34: Add language specifiers to fenced code blocks.
Several fenced code blocks lack language specifiers. Adding them improves syntax highlighting and satisfies markdown linting rules.
✏️ Suggested fixes
- For file path listings (lines 32-34, 149-164, etc.), use `text` or leave empty but consistent: change the opening fence to `` ```text `` before the listing that begins "testing/petri/".
- For command examples (lines 65-73, 82-94), the blocks already correctly use `bash`.
- For status format templates (lines 97-101), use `text`: change the opening fence to `` ```text `` before "[PHASE X] Starting: <phase name>".
Also applies to: 97-101, 149-164
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@petri/VADEMECUM.md` around lines 32 - 34, Several fenced code blocks (e.g., the file-path listing "testing/petri/" and the status-template "[PHASE X] Starting: <phase name>") are missing language specifiers; update those triple-backtick blocks to include appropriate languages (use "text" for file listings and status templates) while keeping existing "bash" specifiers for command examples, so all fenced blocks have explicit language markers and satisfy markdown linting.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@petri/03-secretary-coordinated-signing.md`:
- Line 54: The doc declares petri_submitBlockHash returns { status: "collected"
| "mismatch" | "error" } but receiveBlockHashSubmission in petriSecretary.ts
currently always returns { status: "collected" }; either implement the missing
branches or align the doc: update receiveBlockHashSubmission to detect and
return "mismatch" when the submitted hash disagrees with the expected/previously
collected hash and return "error" on exceptional conditions (e.g., validation
failure or internal exception), ensuring these branches emit the correct status
string, or instead update petri/03-secretary-coordinated-signing.md to only list
{ status: "collected" } if you choose not to implement mismatch/error; reference
petri_submitBlockHash and receiveBlockHashSubmission when making the change.
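If the missing branches are implemented, the control flow could look like the following minimal sketch. The `expectedHash` parameter and the try/catch placement are assumptions for illustration, not the real `receiveBlockHashSubmission` signature in petriSecretary.ts:

```typescript
type SubmissionStatus = { status: "collected" | "mismatch" | "error" }

// Hypothetical sketch only: the real receiveBlockHashSubmission keeps
// internal collection state and validates signatures before recording.
function receiveBlockHashSubmission(
  submittedHash: string,
  expectedHash: string | null, // assumed: hash agreed from prior submissions
): SubmissionStatus {
  try {
    // signature/shape validation would run here and may throw
    if (expectedHash !== null && submittedHash !== expectedHash) {
      return { status: "mismatch" } // submitted hash disagrees with collected hash
    }
    // ... record the submission here ...
    return { status: "collected" }
  } catch {
    return { status: "error" } // validation failure or internal exception
  }
}
```

This keeps the documented three-way response contract while leaving "collected" as the happy path.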
In `@testing/petri/benchmark.test.ts`:
- Around line 268-286: The test "forge getCurrentDeltas returns copy (no
reference leak)" currently calls forge.start(...) and then forge.stop() but may
leak timers if an assertion throws; wrap the test body so that the
ContinuousForge instance created from ContinuousForge
(require("@/libs/consensus/petri/forge/continuousForge").ContinuousForge) is
always cleaned up by using a try/finally where forge.start(mockPeers(3)) is
before the try and forge.stop() is invoked in the finally block to ensure
forge.stop() runs even when expect(...) assertions fail.
In `@testing/petri/byzantineFault.test.ts`:
- Around line 29-30: Remove the redundant explicit type annotations on the two
test variables: change the declarations of ttlRounds: number = 5 and
currentRound: number = 1 to rely on type inference (ttlRounds = 5 and
currentRound = 1) so ESLint no longer flags trivially inferred types; update the
declarations where ttlRounds and currentRound are defined in
byzantineFault.test.ts accordingly.
In `@testing/petri/finality.test.ts`:
- Around line 112-125: The test currently only assigns softFinalityAt locally
and doesn't verify idempotency; change it to call the real updateClassification
twice and assert the stored timestamp stays the same: make the test async,
create or use a txHash, call updateClassification(txHash, "PRE_APPROVED"), read
the stored value via getTransactionFinality(txHash) (or the equivalent getter),
call updateClassification(txHash, "PRE_APPROVED") again, read the value a second
time, and assert that first.softFinalityAt === second.softFinalityAt to ensure
the timestamp is not overwritten.
In `@testing/petri/happyPath.test.ts`:
- Line 11: Remove the unused import "beforeEach" from the import statement that
currently imports describe, expect, test, mock, beforeEach (e.g., the import
line referencing bun:test) so the file only imports the symbols actually used
(describe, expect, test, mock); update the import to omit beforeEach to avoid
the unused-import lint/error.
In `@testing/petri/speculativeExecutor.test.ts`:
- Around line 12-32: The test helper computeDeltaHash should mirror production
fallback behavior: when mapping gcrEdits to editsForHashing use e.operation ??
"" and e.account ?? "" for operation and account (keep the existing
bigint/string handling for amount), then pass that through canonicalJson and
Hashing.sha256 as before so the test helper matches speculativeExecutor.ts
behavior exactly.
---
Duplicate comments:
In `@src/libs/consensus/petri/coordination/petriSecretary.ts`:
- Around line 142-148: Calling resetCollection() at the start of
collectBlockHashes wipes the global inbox and drops early-arriving submissions
from receiveBlockHashSubmission; instead, stop clearing the shared buffer and
scope collection state per-invocation: remove the resetCollection() call from
collectBlockHashes and either (a) create a new per-call collection
map/identifier that collectBlockHashes uses to gather responses while
receiveBlockHashSubmission appends to the shared inbox, or (b) if using a shared
inbox data structure, filter/consume only entries matching the current block id
within collectBlockHashes rather than clearing everything; update
collectBlockHashes to initialize its own local result-tracking (e.g., a local
Map or collectionId) and leave resetCollection() for teardown or explicit global
resets.
- Around line 112-118: The electSecretary function currently returns shard[0] as
a fallback when getSecretaryIdentity(...) resolves to the local node (found ===
undefined), which misleads callers and causes probing the wrong peer (see
handleSecretaryOffline); change electSecretary to stop returning shard[0] and
instead return an optional Peer (e.g., Peer | undefined or null) when the
secretary identity is not present in shard, update the function signature and
callers (including places using isWeSecretary and handleSecretaryOffline) to
handle the undefined case explicitly, and ensure getSecretaryIdentity,
electSecretary, and any code assuming a guaranteed Peer are adjusted to check
for absence before proceeding.
- Around line 175-218: The loop in petriSecretary.ts re-processes the same
pubkeys across iterations because only valid matches are recorded in signatures,
causing mismatchCount to inflate; fix by marking a submission as processed
regardless of outcome: introduce a processedSenders Set (or remove entries from
pendingSubmissions) and, inside the pendingSubmissions for-loop, add pubkey to
processedSenders (or delete from pendingSubmissions) when you skip for wrong
blockNumber, on hash mismatch, or on invalid signature so each sender increments
mismatchCount at most once; keep signatures mapping only for valid signatures
and use processedSenders when deciding whether to skip already-handled senders.
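The first comment above (scoping collection per invocation instead of wiping the shared inbox) can be sketched as follows; the `Submission` shape and array-based inbox are simplified assumptions, not the production types:

```typescript
type Submission = { pubkey: string; blockNumber: number; hash: string }

// Shared inbox: receiveBlockHashSubmission only appends; nothing clears it wholesale.
const inbox: Submission[] = []

function receiveBlockHashSubmission(s: Submission): void {
  inbox.push(s) // early-arriving submissions survive until consumed
}

// Each collection consumes only the entries for its own block number,
// leaving submissions for other blocks intact.
function collectBlockHashes(blockNumber: number): Submission[] {
  const matching = inbox.filter(s => s.blockNumber === blockNumber)
  for (const s of matching) inbox.splice(inbox.indexOf(s), 1)
  return matching
}
```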
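The second comment (dropping the misleading `shard[0]` fallback) might look like this sketch, with a simplified `Peer` type; `secretaryToProbe` is a hypothetical caller showing explicit handling of the undefined case:

```typescript
type Peer = { identity: string }

// Returns undefined when the secretary identity is not present in the shard
// (e.g., it resolves to the local node) instead of falling back to shard[0].
function electSecretary(shard: Peer[], secretaryIdentity: string): Peer | undefined {
  return shard.find(p => p.identity === secretaryIdentity)
}

// Hypothetical caller: absence means "we are the secretary; nothing to probe".
function secretaryToProbe(shard: Peer[], secretaryIdentity: string): Peer | null {
  const secretary = electSecretary(shard, secretaryIdentity)
  if (secretary === undefined) {
    return null
  }
  return secretary
}
```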
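The third comment (counting each sender at most once across polling passes) can be illustrated with this hypothetical bookkeeping class; names and shapes are stand-ins for the real loop in petriSecretary.ts:

```typescript
type Submission = { blockNumber: number; hash: string; signatureValid: boolean }

// The collector is re-run as submissions trickle in, so every sender must be
// marked processed on first sight — valid or not — or mismatchCount inflates
// on each pass.
class SubmissionTally {
  private processed = new Set<string>()
  readonly signatures = new Map<string, string>() // valid signatures only
  mismatchCount = 0
  private expectedBlock: number
  private expectedHash: string

  constructor(expectedBlock: number, expectedHash: string) {
    this.expectedBlock = expectedBlock
    this.expectedHash = expectedHash
  }

  ingest(pending: Map<string, Submission>): void {
    for (const [pubkey, sub] of pending) {
      if (this.processed.has(pubkey)) continue // handled in a prior pass
      this.processed.add(pubkey)
      if (sub.blockNumber !== this.expectedBlock) continue // wrong block: skip, but processed
      if (sub.hash !== this.expectedHash || !sub.signatureValid) {
        this.mismatchCount++ // counted at most once per sender
        continue
      }
      this.signatures.set(pubkey, sub.hash)
    }
  }
}
```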
In `@testing/petri/secretaryCoordination.test.ts`:
- Around line 17-19: The test helper electSecretary in
testing/petri/secretaryCoordination.test.ts is inconsistent with production;
replace the stubbed logic by importing and using the real electSecretary (and
isWeSecretary if needed) from the production module (petriSecretary.ts), or
update the helper to precisely replicate production: include the local pubkey in
the identity list, sort deterministically the full set, then select the first
element as production does; update tests to call the imported
electSecretary/isWeSecretary so test election outcomes match production
behavior.
---
Nitpick comments:
In `@petri/01-implementation-plan.md`:
- Line 60: Minor punctuation fix: update the table entry for "Chain (blocks)" in
petri/01-implementation-plan.md so the abbreviation "etc" is written as "etc."
(add the missing period) — specifically edit the cell showing
`src/libs/blockchain/chainBlocks.ts` and the function list (`insertBlock()`,
`getBlockByNumber()` etc) so the period follows the abbreviation.
In `@petri/03-secretary-coordinated-signing.md`:
- Around line 13-28: The fenced code blocks in the document (the block starting
with "All 10 shard members compile block independently (deterministic)" and the
other fenced blocks around lines 123-140 and 180-192) lack language identifiers;
update each triple-backtick fence to include an appropriate language tag such as
text or plaintext (or typescript for the code-like block at ~123 if relevant) so
markdownlint passes and syntax highlighting is applied consistently.
In `@petri/VADEMECUM.md`:
- Around line 32-34: Several fenced code blocks (e.g., the file-path listing
"testing/petri/" and the status-template "[PHASE X] Starting: <phase name>") are
missing language specifiers; update those triple-backtick blocks to include
appropriate languages (use "text" for file listings and status templates) while
keeping existing "bash" specifiers for command examples, so all fenced blocks
have explicit language markers and satisfy markdown linting.
In `@src/libs/consensus/petri/coordination/petriSecretary.ts`:
- Around line 55-56: The module-level mutable variables pendingSubmissions and
collectionResolve cause test pollution; encapsulate that state so tests don't
need global cleanup. Either (A) move pendingSubmissions and collectionResolve
into an instance of a PetriSecretary class (make existing exported functions
become methods on that class and update callers to use a singleton or new
instance), or (B) keep the module functions but add and export a test-only
resetCollection() that clears pendingSubmissions (pendingSubmissions.clear())
and sets collectionResolve = null; reference the symbols pendingSubmissions,
collectionResolve, and resetCollection in your change so tests can call the
reset or callers can use the new class instance.
In `@testing/petri/benchmark.test.ts`:
- Around line 24-29: The mockPeers function currently returns any[] losing type
safety; define a proper interface (e.g., Peer or PeerInfo) matching the shape
used by tests — at least identity: string and connection: { string: string } —
or import the existing peer type from the codebase, then change function
mockPeers(count: number): any[] to mockPeers(count: number): Peer[] (or the
imported type) and update any references so the tests use the strongly typed
peer objects.
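A possible typed version, assuming the minimal shape the comment describes (the real codebase may already export a richer peer type to import instead):

```typescript
// Minimal Peer interface matching only the fields the tests rely on.
interface Peer {
  identity: string
  connection: { string: string }
}

// Hypothetical identities/addresses, for illustration only.
function mockPeers(count: number): Peer[] {
  return Array.from({ length: count }, (_, i) => ({
    identity: `peer-${i}`,
    connection: { string: `ws://127.0.0.1:${9000 + i}` },
  }))
}
```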
In `@testing/petri/blockCompiler.test.ts`:
- Around line 200-223: The test currently only asserts a hardcoded array and
doesn't exercise behavior; either replace it with an integration/unit test that
invokes runBlockPeriod() and verifies the actual
pause/compile/finalize/reset/resume lifecycle using mocks/stubs for pause(),
compileBlock(), finalizeBlock(), reset(), and resume() (asserting the sequence
and side-effects), or remove the test and move the sequence example into inline
documentation near runBlockPeriod() as a commented example; update the test
named "forge pause/resume pattern for block compilation" in
testing/petri/blockCompiler.test.ts accordingly and ensure mocks verify call
order on the real functions rather than comparing two identical arrays.
- Around line 173-195: The tests under "Consensus dispatch switching" are
trivially exercising a JavaScript ternary rather than production logic; remove
or replace them with real integration/unit tests that call the actual dispatch
code (e.g., import and invoke the dispatch function used in mainLoop.ts) instead
of local ternary checks in the tests "petriConsensus flag gates dispatch" and
"dispatch function selection is deterministic"; if keeping a lightweight check
for documentation, convert the two tests into a short comment explaining
expected behavior and add one real test that imports the dispatch implementation
and asserts that passing petriConsensus=true selects the "petri" handler and
false selects the "porbft" handler.
- Around line 83-169: Import the actual types (CompilationResult,
ArbitrationResult, FinalizationResult) and annotate the test objects with those
types instead of using plain inline objects: add "import type {
CompilationResult, ArbitrationResult, FinalizationResult } from
'@/libs/consensus/petri/block/types'" at the top, then change each test's result
declaration to use explicit typing (e.g., "const result: CompilationResult = {
... }", "const result: ArbitrationResult = { ... }", "const result:
FinalizationResult = { ... }") so the tests will fail if the production types
change; update the three describe blocks named "CompilationResult structure",
"ArbitrationResult structure", and "FinalizationResult structure" accordingly.
In `@testing/petri/classifier.test.ts`:
- Around line 16-40: The helper function mockTx is dead code and not used by any
tests that call classifyFromEdits; either delete the mockTx function declaration
to remove the unused helper, or update tests that currently construct edit
arrays directly to instead create a full transaction via mockTx(...) and pass
its content.gcr_edits (or equivalent) into classifyFromEdits so tests exercise
the full transaction shape; locate the mockTx function and adjust or remove it
and update any tests referencing edit arrays to use mockTx when opting to keep
the helper.
In `@testing/petri/conflictPath.test.ts`:
- Around line 273-275: Move the helper function isBlockValid so it's declared
near the top of the file alongside bftThreshold before any tests that call it;
update the file by cutting the isBlockValid(pro: number, total: number): boolean
{ return pro >= bftThreshold(total) } definition from the end and pasting it
above the tests (near bftThreshold) so readers see the helper before it's used.
In `@testing/petri/featureFlagRollback.test.ts`:
- Around line 34-62: Current tests only assert the ternary expression; update
them to mock the actual shared state and exercise the real dispatch logic:
import and spy/mock getSharedState to return petriConsensus true/false (and
sequences), then call the real routing/dispatch function (the module that
consumes getSharedState—e.g., the dispatcher or router entrypoint used in
production) and assert it chose the "petri" or "porbft" path; also add an
integration-style test that toggles the mocked getSharedState values over time
and verifies the dispatcher resolves to the current value. Ensure you reference
and spy on getSharedState and invoke the actual dispatch function used in your
codebase rather than asserting the ternary expression directly.
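The intended test shape can be sketched as follows. `SharedState`, the mock, and `dispatchConsensus` are simplified stand-ins for the production `getSharedState` and dispatcher; the point is that the dispatcher reads the flag at call time, so toggling the mocked value changes the outcome:

```typescript
type SharedState = { petriConsensus: boolean }

// Mocked shared state under the test's control; the real test would spy on
// the production getSharedState instead.
let mockState: SharedState = { petriConsensus: false }
const getSharedState = (): SharedState => mockState

// Stand-in for the production dispatch entrypoint: it must consult the flag
// on every call, not capture it once at startup.
function dispatchConsensus(): "petri" | "porbft" {
  return getSharedState().petriConsensus ? "petri" : "porbft"
}
```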
In `@testing/petri/finality.test.ts`:
- Around line 200-215: The test "only PRE_APPROVED classification triggers
soft_finality_at" currently uses an unnecessary inner if; simplify by asserting
the boolean expression directly for each classification. Replace the loop body
so that for each cls in classifications you call expect(cls ===
"PRE_APPROVED").toBe(expectedValue) (or build an expected array like [false,
false, true] and assert classifications.map(c => c === "PRE_APPROVED") equals
that array). Update the test function (the test case with that name) to use this
direct assertion pattern rather than the nested conditional.
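The flattened assertion pattern, as a sketch:

```typescript
// The nested-if loop collapsed into a direct boolean mapping.
const classifications = ["TO_APPROVE", "PROBLEMATIC", "PRE_APPROVED"]
const triggersSoftFinality = classifications.map(c => c === "PRE_APPROVED")
// Expected: only PRE_APPROVED triggers soft_finality_at.
const expected = [false, false, true]
```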
In `@testing/petri/happyPath.test.ts`:
- Around line 18-37: Extract the duplicated helpers into a single module and
update tests to import them: create testing/petri/helpers.ts that exports
mockPeers, bftThreshold, and isBlockValid (copy the existing implementations
from happyPath.test.ts), then replace the local declarations in
happyPath.test.ts, blockCompiler.test.ts, routing.test.ts, conflictPath.test.ts,
liveness.test.ts, and byzantineFault.test.ts with imports from that module
(import { mockPeers, bftThreshold, isBlockValid } from "testing/petri/helpers"),
ensuring the exported signatures and behavior remain identical to the originals
(including mockPeers return shape and bftThreshold logic).
- Around line 196-256: These tests duplicate lifecycle coverage from
continuousForge.test.ts; either delete the overlapping "Happy Path — Forge State
Machine" test block or trim it to only keep higher-level integration assertions
(e.g., only the end-to-end happy path flows that exercise
ContinuousForge.start/shard interaction and getState transitions) while removing
unit-like cases already covered (reset, double-start, and lifecycle
sub-assertions). Locate the tests referencing ContinuousForge, mockPeers, and
methods start/pause/resume/reset/stop in testing/petri/happyPath.test.ts and
remove or reduce them to a single integration test that asserts state
transitions for a real shard scenario, leaving detailed lifecycle specs in
continuousForge.test.ts.
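The consolidated helpers module could look like the sketch below. The `bftThreshold` formula is an assumption — the standard floor(2n/3)+1 quorum, which matches the documented 7-of-10 default — and should be copied verbatim from the existing tests rather than taken from here:

```typescript
// Hypothetical testing/petri/helpers.ts — these would be exported in the real module.
interface Peer {
  identity: string
  connection: { string: string }
}

function mockPeers(count: number): Peer[] {
  return Array.from({ length: count }, (_, i) => ({
    identity: `peer-${i}`,
    connection: { string: `ws://127.0.0.1:${9000 + i}` },
  }))
}

// Assumed BFT quorum: floor(2n/3) + 1 (7 of 10 matches the Petri default).
function bftThreshold(total: number): number {
  return Math.floor((2 * total) / 3) + 1
}

function isBlockValid(pro: number, total: number): boolean {
  return pro >= bftThreshold(total)
}
```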
In `@testing/petri/routing.test.ts`:
- Around line 61-78: The probabilistic assertion in the test using selectMembers
and mockPeers is weak; increase confidence by running more trials and tightening
the threshold or documenting the expected collision probability—e.g., run 100
iterations by looping i from 0..99, compute diffCount as you do now, and assert
a higher minimum (such as expect(diffCount).toBeGreaterThan(80)); additionally
add a brief comment near the test explaining the expected collision probability
given 10 peers and 2 selections so the threshold is understandable.
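A sketch of the strengthened test; `selectMembers` here is a stand-in Fisher-Yates selection, not the production implementation:

```typescript
// With 10 peers and a uniform unordered 2-member selection, two independent
// draws collide with probability 1/C(10,2) = 1/45 ≈ 2.2%, so requiring more
// than 80 differing pairs out of 100 trials leaves enormous headroom.
function selectMembers(peers: string[], k: number): string[] {
  const a = [...peers]
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1))
    ;[a[i], a[j]] = [a[j], a[i]]
  }
  return a.slice(0, k).sort()
}

const peers = Array.from({ length: 10 }, (_, i) => `p${i}`)
let diffCount = 0
for (let i = 0; i < 100; i++) {
  if (selectMembers(peers, 2).join() !== selectMembers(peers, 2).join()) diffCount++
}
```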
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 01c9238a-e2ec-4f83-9b93-f64e3d8b3e00
📒 Files selected for processing (22)
- package.json
- petri/01-implementation-plan.md
- petri/03-secretary-coordinated-signing.md
- petri/VADEMECUM.md
- petri/architecture-diagram.md
- src/libs/consensus/petri/coordination/petriSecretary.ts
- src/libs/consensus/petri/utils/canonicalJson.ts
- testing/petri/benchmark.test.ts
- testing/petri/blockCompiler.test.ts
- testing/petri/byzantineFault.test.ts
- testing/petri/canonicalJson.test.ts
- testing/petri/classifier.test.ts
- testing/petri/conflictPath.test.ts
- testing/petri/continuousForge.test.ts
- testing/petri/deltaTracker.test.ts
- testing/petri/featureFlagRollback.test.ts
- testing/petri/finality.test.ts
- testing/petri/happyPath.test.ts
- testing/petri/liveness.test.ts
- testing/petri/routing.test.ts
- testing/petri/secretaryCoordination.test.ts
- testing/petri/speculativeExecutor.test.ts
✅ Files skipped from review due to trivial changes (2)
- src/libs/consensus/petri/utils/canonicalJson.ts
- package.json
New consensus submethod: `petri_submitBlockHash`
- **Direction**: member -> secretary
- **Request params**: `[blockHash, signature, blockNumber]`
- **Response**: `{ status: "collected" | "mismatch" | "error" }`
RPC response status "mismatch" documented but not implemented.
The doc specifies petri_submitBlockHash returns { status: "collected" | "mismatch" | "error" }, but receiveBlockHashSubmission in petriSecretary.ts (line 80) always returns { status: "collected" }. Either update the implementation to return appropriate status codes, or update the doc to reflect actual behavior.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@petri/03-secretary-coordinated-signing.md` at line 54, The doc declares
petri_submitBlockHash returns { status: "collected" | "mismatch" | "error" } but
receiveBlockHashSubmission in petriSecretary.ts currently always returns {
status: "collected" }; either implement the missing branches or align the doc:
update receiveBlockHashSubmission to detect and return "mismatch" when the
submitted hash disagrees with the expected/previously collected hash and return
"error" on exceptional conditions (e.g., validation failure or internal
exception), ensuring these branches emit the correct status string, or instead
update petri/03-secretary-coordinated-signing.md to only list { status:
"collected" } if you choose not to implement mismatch/error; reference
petri_submitBlockHash and receiveBlockHashSubmission when making the change.
test("forge getCurrentDeltas returns copy (no reference leak)", () => {
  const config = {
    forgeIntervalMs: 60000,
    blockIntervalMs: 10000,
    agreementThreshold: 7,
    problematicTTLRounds: 5,
  }
  const forge = new (require("@/libs/consensus/petri/forge/continuousForge").ContinuousForge)(config)
  forge.start(mockPeers(3))

  const deltas1 = forge.getCurrentDeltas()
  const deltas2 = forge.getCurrentDeltas()

  // Should be different object references (spread copy)
  expect(deltas1).not.toBe(deltas2)
  expect(deltas1).toEqual(deltas2)

  forge.stop()
})
Ensure forge cleanup on test failure.
If assertions fail before forge.stop() is called, the forge's internal timers may continue running, potentially affecting subsequent tests. Consider wrapping in try/finally:
🛡️ Proposed fix
test("forge getCurrentDeltas returns copy (no reference leak)", () => {
const config = {
forgeIntervalMs: 60000,
blockIntervalMs: 10000,
agreementThreshold: 7,
problematicTTLRounds: 5,
}
const forge = new (require("@/libs/consensus/petri/forge/continuousForge").ContinuousForge)(config)
forge.start(mockPeers(3))
- const deltas1 = forge.getCurrentDeltas()
- const deltas2 = forge.getCurrentDeltas()
-
- // Should be different object references (spread copy)
- expect(deltas1).not.toBe(deltas2)
- expect(deltas1).toEqual(deltas2)
-
- forge.stop()
+ try {
+ const deltas1 = forge.getCurrentDeltas()
+ const deltas2 = forge.getCurrentDeltas()
+
+ // Should be different object references (spread copy)
+ expect(deltas1).not.toBe(deltas2)
+ expect(deltas1).toEqual(deltas2)
+ } finally {
+ forge.stop()
+ }
})
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
test("forge getCurrentDeltas returns copy (no reference leak)", () => {
  const config = {
    forgeIntervalMs: 60000,
    blockIntervalMs: 10000,
    agreementThreshold: 7,
    problematicTTLRounds: 5,
  }
  const forge = new (require("@/libs/consensus/petri/forge/continuousForge").ContinuousForge)(config)
  forge.start(mockPeers(3))
  try {
    const deltas1 = forge.getCurrentDeltas()
    const deltas2 = forge.getCurrentDeltas()

    // Should be different object references (spread copy)
    expect(deltas1).not.toBe(deltas2)
    expect(deltas1).toEqual(deltas2)
  } finally {
    forge.stop()
  }
})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/petri/benchmark.test.ts` around lines 268 - 286, The test "forge
getCurrentDeltas returns copy (no reference leak)" currently calls
forge.start(...) and then forge.stop() but may leak timers if an assertion
throws; wrap the test body so that the ContinuousForge instance created from
ContinuousForge
(require("@/libs/consensus/petri/forge/continuousForge").ContinuousForge) is
always cleaned up by using a try/finally where forge.start(mockPeers(3)) is
before the try and forge.stop() is invoked in the finally block to ensure
forge.stop() runs even when expect(...) assertions fail.
ttlRounds: number = 5,
currentRound: number = 1,
Remove redundant type annotations.
ESLint correctly flags that these types are trivially inferred from the default values:
🔧 Fix type annotations
function simulateRound(
shardSize: number,
byzantineCount: number,
txHash: string,
correctDelta: string,
- ttlRounds: number = 5,
- currentRound: number = 1,
+ ttlRounds = 5,
+ currentRound = 1,
): { promoted: string[]; flagged: string[] } {
🧰 Tools
🪛 ESLint
[error] 29-29: Type number trivially inferred from a number literal, remove type annotation.
(@typescript-eslint/no-inferrable-types)
[error] 30-30: Type number trivially inferred from a number literal, remove type annotation.
(@typescript-eslint/no-inferrable-types)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/petri/byzantineFault.test.ts` around lines 29 - 30, Remove the
redundant explicit type annotations on the two test variables: change the
declarations of ttlRounds: number = 5 and currentRound: number = 1 to rely on
type inference (ttlRounds = 5 and currentRound = 1) so ESLint no longer flags
trivially inferred types; update the declarations where ttlRounds and
currentRound are defined in byzantineFault.test.ts accordingly.
test("soft finality is only set once (first PRE_APPROVED)", () => {
  const firstTime = 1700000000000
  const laterTime = 1700000002000

  // First promotion sets the timestamp
  let softFinalityAt: number | null = null
  softFinalityAt = firstTime

  // Second call should not overwrite (simulating idempotency)
  // In practice, updateClassification always sets it,
  // but the first call is what matters
  expect(softFinalityAt).toBe(firstTime)
  expect(softFinalityAt).not.toBe(laterTime)
})
Test doesn't actually verify idempotency behavior.
The test comment states it's "simulating idempotency" but the code only assigns a value once (softFinalityAt = firstTime) and never attempts a second assignment. The assertion at line 124 (expect(softFinalityAt).not.toBe(laterTime)) is trivially true since laterTime was never assigned.
To actually test idempotency, you would need to call the production updateClassification function twice and verify the timestamp doesn't change:
test("soft finality is only set once (first PRE_APPROVED)", async () => {
// Create a tx
await updateClassification(txHash, "PRE_APPROVED")
const first = await getTransactionFinality(txHash)
// Attempt to update again
await updateClassification(txHash, "PRE_APPROVED")
const second = await getTransactionFinality(txHash)
expect(second.softFinalityAt).toBe(first.softFinalityAt)
})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/petri/finality.test.ts` around lines 112 - 125, The test currently
only assigns softFinalityAt locally and doesn't verify idempotency; change it to
call the real updateClassification twice and assert the stored timestamp stays
the same: make the test async, create or use a txHash, call
updateClassification(txHash, "PRE_APPROVED"), read the stored value via
getTransactionFinality(txHash) (or the equivalent getter), call
updateClassification(txHash, "PRE_APPROVED") again, read the value a second
time, and assert that first.softFinalityAt === second.softFinalityAt to ensure
the timestamp is not overwritten.
 * Mocks: Mempool, Chain, broadcastBlockHash, insertBlock, BroadcastManager, peer RPCs
 * Real: DeltaAgreementTracker, ContinuousForge state machine, isBlockValid logic
 */
import { describe, expect, test, mock, beforeEach } from "bun:test"
Remove unused import.
beforeEach is imported but never used in this file.
-import { describe, expect, test, mock, beforeEach } from "bun:test"
+import { describe, expect, test, mock } from "bun:test"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/petri/happyPath.test.ts` at line 11, Remove the unused import
"beforeEach" from the import statement that currently imports describe, expect,
test, mock, beforeEach (e.g., the import line referencing bun:test) so the file
only imports the symbols actually used (describe, expect, test, mock); update
the import to omit beforeEach to avoid the unused-import lint/error.
function computeDeltaHash(
  gcrEdits: Array<{
    type: string
    operation: string
    account: string
    amount?: number | bigint | string
  }>,
): string {
  const editsForHashing = gcrEdits.map(edit => ({
    type: edit.type,
    operation: edit.operation,
    account: edit.account,
    amount:
      typeof edit.amount === "bigint"
        ? edit.amount.toString()
        : String(edit.amount ?? ""),
  }))

  const canonicalEdits = canonicalJson(editsForHashing)
  return Hashing.sha256(canonicalEdits)
}
Test helper doesn't match production fallback handling.
The computeDeltaHash helper has required operation and account fields, but production code (speculativeExecutor.ts lines 104-109) applies ?? "" fallbacks for potentially undefined values:
// Production:
operation: e.operation ?? "",
account: e.account ?? "",

Consider aligning the test helper to match production exactly, or add test cases with undefined operation/account to validate fallback behavior.
🔧 Proposed fix to match production
function computeDeltaHash(
gcrEdits: Array<{
type: string
- operation: string
- account: string
+ operation?: string
+ account?: string
amount?: number | bigint | string
}>,
): string {
const editsForHashing = gcrEdits.map(edit => ({
type: edit.type,
- operation: edit.operation,
- account: edit.account,
+ operation: edit.operation ?? "",
+ account: edit.account ?? "",
amount:
typeof edit.amount === "bigint"
? edit.amount.toString()
: String(edit.amount ?? ""),
}))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/petri/speculativeExecutor.test.ts` around lines 12 - 32, The test
helper computeDeltaHash should mirror production fallback behavior: when mapping
gcrEdits to editsForHashing use e.operation ?? "" and e.account ?? "" for
operation and account (keep the existing bigint/string handling for amount),
then pass that through canonicalJson and Hashing.sha256 as before so the test
helper matches speculativeExecutor.ts behavior exactly.
Add PETRI_AGREEMENT_THRESHOLD, PETRI_PROBLEMATIC_TTL_ROUNDS, and PETRI_SHARD_SIZE with their defaults so operators can discover all tuning knobs. Closes mycelium #148 (Q-6).
The hash is computed over normalized edits ({type, operation, account,
amount}), not the full GCREdit. Closes mycelium #146 (CR-9).
The ok boolean required hardFinalityObserved but not softFinalityObserved, despite the scenario claiming to verify softFinalityAt. Closes mycelium #145 (CR-19).
Reject invalid Petri config values (non-positive intervals, shardSize, agreementThreshold out of range, negative TTL) at startup instead of silently breaking consensus. Closes mycelium #147 (CR-2).
blockNumber=0 (genesis) is a valid reference but was filtered out by truthy check. Change to !== undefined in both getAll() and getByClassification(). Closes mycelium #142 (Q-5).
trackedCount is a getter property, not a function — the typeof check for "function" always failed and petri_tracker_tx_count was never set. Add getTrackerCount() on ContinuousForge and use it in MetricsCollector instead of the unsafe (forge as any).tracker cast. Closes mycelium #143 (Q-4/CR-3).
A resolved TX from arbitration could still be in the mempool, causing duplicates in the block. Use a hash-keyed Map to deduplicate before ordering. Closes mycelium #144 (CR-4).
If getCommonValidatorSeed() or getShard() threw before entering petriConsensusRoutine, startingConsensus stayed true permanently, blocking all future consensus rounds. Wrap in try/finally. Closes mycelium #136 (CR-17).
Mempool cleanup used transactionEntities (all TXs) instead of committedTxHashes (only successfully inserted). Skipped TXs were silently removed from mempool despite not being persisted. Closes mycelium #138 (Q-3).
Unhandled exceptions from getTransactionFinality() propagated without a structured error response, unlike neighboring RPC cases. Closes mycelium #139 (CR-16).
If a round had no TO_APPROVE txs and returned early, stale deltas from the previous round were served via getCurrentDeltas(). Closes mycelium #141 (CR-10).
Actionable comments posted: 8
♻️ Duplicate comments (1)
src/libs/consensus/petri/forge/continuousForge.ts (1)
154-173: ⚠️ Potential issue | 🟠 Major
Reset currentRoundDeltas before any early return. If a round finds no TO_APPROVE txs or aborts before Line 203, getCurrentDeltas() keeps serving the previous round's hashes. The petri_exchangeDeltas handler currently reads that map without a round guard, so peers can record stale deltas against the new round.
🔧 Suggested fix
  async runForgeRound(): Promise<void> {
      this.state.currentRound++
      this.state.lastRoundStartedAt = Date.now()
      const round = this.state.currentRound
+     this.currentRoundDeltas = {}
      log.debug(`[ContinuousForge] Round ${round} starting`)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libs/consensus/petri/forge/continuousForge.ts` around lines 154 - 173, The issue is stale per-round deltas persisting when runForgeRound returns early; in runForgeRound (continuousForge.ts) reset/clear the currentRoundDeltas map at the start of the round (or immediately before any early return) so getCurrentDeltas() and the petri_exchangeDeltas handler never serve previous-round hashes; update the method that maintains currentRoundDeltas (referenced as currentRoundDeltas and getCurrentDeltas()) inside runForgeRound to clear it (e.g., new Map() or clear()) before checking TO_APPROVE txs or aborting.
🧹 Nitpick comments (4)
testing/loadgen/src/features/consensus/petri_tx_inclusion.ts (3)
201-206: Latency calculation may be affected by clock skew.
softFinalityAt and hardFinalityAt are server-side timestamps from the blockchain, while txSubmittedAt is a client-side Date.now(). Clock drift between client and server could produce misleading or negative latency values. For devnet testing this is likely acceptable, but worth noting if these metrics are used for performance analysis.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/loadgen/src/features/consensus/petri_tx_inclusion.ts` around lines 201 - 206, Latency fields softFinalityLatencyMs and hardFinalityLatencyMs use client-side txSubmittedAt which can be skewed vs server finality timestamps (finalityResult.finality.softFinalityAt/hardFinalityAt), causing misleading/negative values; update the calculation in the code that builds these fields (refer to softFinalityLatencyMs, hardFinalityLatencyMs, finalityResult, txSubmittedAt) to either (a) compute latency using a server-side submission timestamp if available (e.g., use finalityResult.submittedAt or a serverTimestamp included in finalityResult) or (b) derive a clock offset by comparing a server time value from the same response and adjust txSubmittedAt before subtraction, and as a safeguard clamp negative results to null (or 0) to avoid reporting negative latencies.
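One way to implement the suggested safeguard, sketched with a hypothetical helper name (the clamp-to-null option from the prompt above):

```typescript
// Compute a finality latency, returning null when the server timestamp is
// missing or when clock skew would produce a negative (misleading) value.
function latencyMs(
    serverFinalityAt: number | undefined,
    txSubmittedAt: number,
): number | null {
    if (serverFinalityAt === undefined) return null
    const delta = serverFinalityAt - txSubmittedAt
    return delta >= 0 ? delta : null // clamp skewed clocks to null
}
```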
94-94: Consider extracting helper functions to reduce cognitive complexity.
SonarCloud reports cognitive complexity of 23, exceeding the allowed threshold of 15. The function handles setup, transaction submission, and multiple verification phases sequentially.
Consider extracting logical sections into helper functions:
- submitTransaction(demos, recipientAddress, nonce, amount)
- verifyInclusion(rpcUrls, txHash, senderAddress, expectedNonce, timeoutSec, pollMs)
This would improve readability and testability while reducing complexity.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/loadgen/src/features/consensus/petri_tx_inclusion.ts` at line 94, The runPetriTxInclusion function is too cognitively complex; split its setup, submission, and multi-phase verification logic into clear helpers: implement submitTransaction(demos, recipientAddress, nonce, amount) to encapsulate creating and sending the transaction and returning the txHash and any submission metadata, and implement verifyInclusion(rpcUrls, txHash, senderAddress, expectedNonce, timeoutSec, pollMs) to encapsulate the polling/validation across RPC endpoints until the transaction and nonce are observed; refactor runPetriTxInclusion to call these helpers (and additional small helpers for initial setup or teardown if needed) so each helper has a single responsibility and the main function just orchestrates the steps.
133-137: The as any casts are unnecessary; the confirm and broadcast methods are properly typed in the SDK.
The same Demos import from @kynesyslabs/demosdk/websdk is used in testing/loadgen/src/transfer_loadgen.ts without any type casting (lines 280 and 284), which proves these methods are available in the SDK's public type definitions. Remove the casts to improve type safety.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/loadgen/src/features/consensus/petri_tx_inclusion.ts` around lines 133 - 137, Remove the unnecessary any casts by calling the strongly-typed SDK methods directly: replace (demos as any).confirm(signedTx) with demos.confirm(signedTx) and (demos as any).broadcast(validity) with demos.broadcast(validity); keep the existing variables (signedTx, validity) and existing error check but use demos.confirm and demos.broadcast to restore type safety and rely on the SDK's public definitions.
src/libs/blockchain/mempool_v2.ts (1)
297-300: Preserve the existing mempool tie-breakers in classification queries.
getMempool() already orders by timestamp, reference_block, then hash, but getByClassification() only sorts by timestamp. Reusing the same tie-breakers avoids nondeterministic iteration when two txs share a timestamp, especially for callers like getPreApproved() that can feed consensus flows.
♻️ Suggested change
  return await this.repo.find({
      where,
-     order: { timestamp: "ASC" },
+     order: {
+         timestamp: "ASC",
+         reference_block: "ASC",
+         hash: "ASC",
+     },
  })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libs/blockchain/mempool_v2.ts` around lines 297 - 300, getByClassification() currently orders only by timestamp which can produce nondeterministic ordering when timestamps tie; update the repository query in getByClassification() to reuse the same tie-breaker ordering used by getMempool() (i.e., order by timestamp, then reference_block, then hash, all ASC) so callers like getPreApproved() see deterministic iteration. Locate getByClassification() in mempool_v2.ts and change its this.repo.find(...) options to include the additional order fields timestamp -> reference_block -> hash matching getMempool().
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3167fdf1-44b0-44ff-907e-e01c2c6bb126
📒 Files selected for processing (9)
- .env.example
- src/config/loader.ts
- src/features/metrics/MetricsCollector.ts
- src/libs/blockchain/mempool_v2.ts
- src/libs/consensus/petri/block/petriBlockCompiler.ts
- src/libs/consensus/petri/forge/continuousForge.ts
- src/libs/consensus/petri/types/stateDelta.ts
- testing/devnet/.env.example
- testing/loadgen/src/features/consensus/petri_tx_inclusion.ts
✅ Files skipped from review due to trivial changes (1)
- src/libs/consensus/petri/types/stateDelta.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/libs/consensus/petri/block/petriBlockCompiler.ts
- src/config/loader.ts
 OMNI_ENABLED=true
 OMNI_PORT=3001
-OMNI_MODE=OMNI_ONLY
+OMNI_MODE=OMNI_PREFERRED
Reorder Omni keys to satisfy dotenv-linter.
OMNI_MODE is currently placed after OMNI_PORT, which triggers the configured UnorderedKey warning.
Proposed fix
OMNI_ENABLED=true
-OMNI_PORT=3001
OMNI_MODE=OMNI_PREFERRED
+OMNI_PORT=3001
 OMNI_FATAL=false
🧰 Tools
🪛 dotenv-linter (4.0.0)
[warning] 38-38: [UnorderedKey] The OMNI_MODE key should go before the OMNI_PORT key
(UnorderedKey)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.env.example around lines 36 - 38, Reorder the OMNI variables in
.env.example so they follow the dotenv-linter expected alphabetical order: move
OMNI_MODE to come immediately after OMNI_ENABLED and before OMNI_PORT;
specifically adjust the block containing OMNI_ENABLED, OMNI_MODE, and OMNI_PORT
to read OMNI_ENABLED, OMNI_MODE, OMNI_PORT to resolve the UnorderedKey warning.
PETRI_CONSENSUS=true
PETRI_FORGE_INTERVAL_MS=2000
PETRI_BLOCK_INTERVAL_MS=10000
PETRI_AGREEMENT_THRESHOLD=7
PETRI_PROBLEMATIC_TTL_ROUNDS=5
PETRI_SHARD_SIZE=10
Petri env keys are valid, but ordering currently fails dotenv-linter.
Please reorder this block to resolve the UnorderedKey warnings.
Proposed fix
-PETRI_CONSENSUS=true
-PETRI_FORGE_INTERVAL_MS=2000
-PETRI_BLOCK_INTERVAL_MS=10000
-PETRI_AGREEMENT_THRESHOLD=7
+PETRI_AGREEMENT_THRESHOLD=7
+PETRI_BLOCK_INTERVAL_MS=10000
+PETRI_CONSENSUS=true
+PETRI_FORGE_INTERVAL_MS=2000
PETRI_PROBLEMATIC_TTL_ROUNDS=5
 PETRI_SHARD_SIZE=10
🧰 Tools
🪛 dotenv-linter (4.0.0)
[warning] 90-90: [UnorderedKey] The PETRI_BLOCK_INTERVAL_MS key should go before the PETRI_CONSENSUS key
(UnorderedKey)
[warning] 91-91: [UnorderedKey] The PETRI_AGREEMENT_THRESHOLD key should go before the PETRI_BLOCK_INTERVAL_MS key
(UnorderedKey)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.env.example around lines 88 - 93, The PETRI_... keys are triggering
dotenv-linter UnorderedKey warnings because they are not in alphabetical order;
reorder the block so the env keys are sorted lexicographically (e.g.,
PETRI_AGREEMENT_THRESHOLD, PETRI_BLOCK_INTERVAL_MS, PETRI_CONSENSUS,
PETRI_FORGE_INTERVAL_MS, PETRI_PROBLEMATIC_TTL_ROUNDS, PETRI_SHARD_SIZE) and
update the .env.example block containing PETRI_CONSENSUS,
PETRI_FORGE_INTERVAL_MS, PETRI_BLOCK_INTERVAL_MS, PETRI_AGREEMENT_THRESHOLD,
PETRI_PROBLEMATIC_TTL_ROUNDS, PETRI_SHARD_SIZE accordingly to resolve the linter
errors.
const state = forge.getState()
this.metricsService.setGauge("petri_forge_running", state.isRunning ? 1 : 0)
this.metricsService.setGauge("petri_forge_paused", state.isPaused ? 1 : 0)
this.metricsService.setGauge("petri_forge_round", state.currentRound)
this.metricsService.setGauge("petri_pending_tx_count", state.pendingTransactions.size)
petri_pending_tx_count is wired to dead forge state.
collectPetriMetrics() reads state.pendingTransactions.size, but ContinuousForge never populates that map during runForgeRound(). This gauge will stay at 0 even while Petri has work queued unless the forge maintains that state or the metric is derived from a live source.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/metrics/MetricsCollector.ts` around lines 757 - 761,
collectPetriMetrics() is reading state.pendingTransactions.size which
ContinuousForge never populates so petri_pending_tx_count stays 0; replace this
dead source by querying a live pending-tx count (either add/expose a method like
ContinuousForge.getPendingTransactionCount() that returns the mempool count or
call the existing mempool/mempoolManager API directly) and use that value for
the "petri_pending_tx_count" gauge, or alternatively update
runForgeRound()/ContinuousForge to maintain state.pendingTransactions correctly
if you prefer keeping the gauge tied to the forge state.
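The "live source" option can be sketched as follows. The interface and method names are hypothetical, not the actual ContinuousForge/Mempool API; the point is that the gauge delegates to a source that is actually populated:

```typescript
// Hypothetical mempool surface exposing a live count of pending txs.
interface MempoolLike {
    countPending(): number
}

class ContinuousForgeSketch {
    constructor(private readonly mempool: MempoolLike) {}

    // Delegate to the mempool so petri_pending_tx_count reflects reality
    // instead of reading a state map that runForgeRound() never populates.
    getPendingTransactionCount(): number {
        return this.mempool.countPending()
    }
}
```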
if (result.classification === TransactionClassification.TO_APPROVE) {
    const specResult = await executeSpeculatively(
        transaction,
        result.gcrEdits,
    )
    if (specResult.success && specResult.delta) {
        deltaHash = specResult.delta.hash
    }
}
Handle speculative-execution failures instead of persisting a stuck TO_APPROVE tx.
If executeSpeculatively() returns success: false, this still saves the tx as TO_APPROVE with no delta_hash. ContinuousForge.runForgeRound() has the same success && delta gate before it records a hash, so these txs never enter delta agreement and can sit in mempool indefinitely. Reject them here, or persist a terminal classification on simulation failure.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libs/blockchain/mempool_v2.ts` around lines 117 - 125, The current logic
leaves transactions as TransactionClassification.TO_APPROVE when
executeSpeculatively(...) returns success: false, causing them to never reach
delta agreement; update the handling in the block that calls
executeSpeculatively (the code that reads result.classification ===
TransactionClassification.TO_APPROVE and assigns deltaHash from
specResult.delta) so that when specResult.success is false you do not persist a
stuck TO_APPROVE—either reject the transaction or mark it with a terminal
failure classification (e.g., TransactionClassification.TO_REJECT or a
FAILED_SIMULATION status), clear any delta hash, and persist that updated
classification immediately; ensure this mirrors how
ContinuousForge.runForgeRound() gates recording on success && delta so the
mempool won’t retain non-progressing entries.
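The terminal-classification handling the prompt suggests can be sketched as follows. FAILED_SIMULATION and the helper are hypothetical, mirroring the suggestion rather than the actual enum:

```typescript
// Hypothetical classification values; the review proposes adding a
// terminal failure state alongside TO_APPROVE.
const Classification = {
    TO_APPROVE: "TO_APPROVE",
    FAILED_SIMULATION: "FAILED_SIMULATION",
} as const
type Classification = (typeof Classification)[keyof typeof Classification]

interface SpecResult {
    success: boolean
    delta?: { hash: string }
}

// Decide what to persist after speculative execution: keep TO_APPROVE with
// a delta hash on success; otherwise mark the tx terminally failed with no
// hash so it never lingers in the mempool awaiting delta agreement.
function classifyAfterSimulation(spec: SpecResult): {
    classification: Classification
    deltaHash: string | null
} {
    if (spec.success && spec.delta) {
        return { classification: Classification.TO_APPROVE, deltaHash: spec.delta.hash }
    }
    return { classification: Classification.FAILED_SIMULATION, deltaHash: null }
}
```

This mirrors the success && delta gate in ContinuousForge.runForgeRound(), so both code paths agree on when a tx may progress.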
pause(): void {
    this.state.isPaused = true
    log.debug("[ContinuousForge] Paused")
pause() doesn't drain an in-flight round.
runBlockPeriod() pauses the forge and immediately starts arbitration/block compilation, but pause() only flips a flag for the next timer tick. A round already inside runForgeRound() can still reach Lines 227-239 and rewrite mempool classifications while the block is being compiled. Please add a drain/await path for the active round before the block boundary proceeds.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libs/consensus/petri/forge/continuousForge.ts` around lines 93 - 95,
pause() currently only sets state.isPaused and doesn't wait for an active round;
ensure in ContinuousForge you track the currently running round promise (e.g.,
add a member like currentRoundPromise or activeRoundPromise) and assign it at
the start of runForgeRound() and clear it on completion. Update pause() to set
state.isPaused then await the tracked currentRoundPromise (if present) so it
drains the in-flight round before returning. Also update runBlockPeriod() to
check/await the same tracked promise before proceeding to arbitration/block
compilation to prevent mempool rewrites during an active round. Ensure
runForgeRound() uses finally to clear the promise so pause() and
runBlockPeriod() never await a dangling promise.
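The drain pattern proposed above can be sketched as follows. Class and member names (currentRoundPromise) follow the prompt's suggestion and are illustrative, not the actual ContinuousForge implementation:

```typescript
class ForgeSketch {
    private isPaused = false
    private currentRoundPromise: Promise<void> | null = null

    async runForgeRound(body: () => Promise<void>): Promise<void> {
        if (this.isPaused) return
        // Track the in-flight round so pause() can drain it; finally
        // guarantees the tracked promise is never left dangling.
        this.currentRoundPromise = (async () => {
            try {
                await body()
            } finally {
                this.currentRoundPromise = null
            }
        })()
        await this.currentRoundPromise
    }

    // pause() now drains the active round before returning, so block
    // compilation never races a round that is rewriting classifications.
    async pause(): Promise<void> {
        this.isPaused = true
        if (this.currentRoundPromise) await this.currentRoundPromise
    }
}
```

runBlockPeriod() would await pause() (or the same tracked promise) before starting arbitration and block compilation.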
// Step 2: Get TO_APPROVE transactions
const toApproveTxs = await Mempool.getByClassification(
    TransactionClassification.TO_APPROVE,
)

if (toApproveTxs.length === 0) {
    log.debug(`[ContinuousForge] Round ${round}: no TO_APPROVE txs`)
    return
}

// Step 3: Speculatively execute each and build local delta map
const localDeltas: Record<string, string> = {}

for (const mempoolTx of toApproveTxs) {
    // Use existing delta_hash if already computed at insertion
    if (mempoolTx.delta_hash) {
        localDeltas[mempoolTx.hash] = mempoolTx.delta_hash
        continue
    }

    // Otherwise compute now (for txs received via merge without classification)
    const tx = mempoolTx as unknown as import("@kynesyslabs/demosdk/types").Transaction
    const classResult = await classifyTransaction(tx)
    if (classResult.classification === TransactionClassification.TO_APPROVE) {
        const specResult = await executeSpeculatively(tx, classResult.gcrEdits)
        if (specResult.success && specResult.delta) {
            localDeltas[mempoolTx.hash] = specResult.delta.hash
            // Update mempool with computed delta
            await Mempool.updateClassification(
                mempoolTx.hash,
                TransactionClassification.TO_APPROVE,
                specResult.delta.hash,
            )
        }
    }
Handle unclassified mempool rows before filtering to TO_APPROVE.
The fallback at Line 186 only helps rows that already have classification = TO_APPROVE. Transactions merged into the mempool without a persisted classification—for example via the direct repo.save() path in src/libs/blockchain/mempool_v2.ts, Lines 222-224—are excluded by Line 167 and will never be classified, promoted, or flagged. Either classify on ingest or include unclassified rows here and persist the derived classification first.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libs/consensus/petri/forge/continuousForge.ts` around lines 166 - 200,
The code only fetches txs already marked TransactionClassification.TO_APPROVE
via Mempool.getByClassification so transactions inserted without a persisted
classification are never processed; update the flow to also handle unclassified
mempool rows by fetching TransactionClassification.UNCLASSIFIED (or a combined
query) alongside TO_APPROVE, run classifyTransaction for those entries, persist
the derived classification and any computed delta via
Mempool.updateClassification, and then continue with the existing speculative
execution path (symbols to edit: Mempool.getByClassification,
TransactionClassification.TO_APPROVE/UNCLASSIFIED, classifyTransaction,
executeSpeculatively, and Mempool.updateClassification).
PETRI_CONSENSUS=true
PETRI_FORGE_INTERVAL_MS=2000
PETRI_BLOCK_INTERVAL_MS=10000
PETRI_AGREEMENT_THRESHOLD=7
PETRI_PROBLEMATIC_TTL_ROUNDS=5
PETRI_SHARD_SIZE=10
Devnet Petri block should be reordered to pass dotenv-linter.
This section has the same UnorderedKey issues; reordering avoids lint noise and keeps env templates consistent.
Proposed fix
-PETRI_CONSENSUS=true
-PETRI_FORGE_INTERVAL_MS=2000
-PETRI_BLOCK_INTERVAL_MS=10000
-PETRI_AGREEMENT_THRESHOLD=7
+PETRI_AGREEMENT_THRESHOLD=7
+PETRI_BLOCK_INTERVAL_MS=10000
+PETRI_CONSENSUS=true
+PETRI_FORGE_INTERVAL_MS=2000
PETRI_PROBLEMATIC_TTL_ROUNDS=5
 PETRI_SHARD_SIZE=10
🧰 Tools
🪛 dotenv-linter (4.0.0)
[warning] 26-26: [UnorderedKey] The PETRI_BLOCK_INTERVAL_MS key should go before the PETRI_CONSENSUS key
(UnorderedKey)
[warning] 27-27: [UnorderedKey] The PETRI_AGREEMENT_THRESHOLD key should go before the PETRI_BLOCK_INTERVAL_MS key
(UnorderedKey)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/devnet/.env.example` around lines 24 - 29, Reorder the PETRI_*
entries in the devnet .env example to satisfy dotenv-linter's UnorderedKey rule
by sorting them consistently (e.g., alphabetically or matching the project's
canonical env order); specifically locate the block containing PETRI_CONSENSUS,
PETRI_FORGE_INTERVAL_MS, PETRI_BLOCK_INTERVAL_MS, PETRI_AGREEMENT_THRESHOLD,
PETRI_PROBLEMATIC_TTL_ROUNDS, and PETRI_SHARD_SIZE and rearrange those keys into
the agreed ordering (for example: PETRI_AGREEMENT_THRESHOLD,
PETRI_BLOCK_INTERVAL_MS, PETRI_CONSENSUS, PETRI_FORGE_INTERVAL_MS,
PETRI_PROBLEMATIC_TTL_ROUNDS, PETRI_SHARD_SIZE) so dotenv-linter stops reporting
UnorderedKey.
const reasons: string[] = []
if (!nonceWait.ok) reasons.push("nonce did not advance")
if (!blockAdvance.ok) reasons.push("block height did not advance")
if (txHash && !txByHash?.ok) reasons.push("tx not found by hash")
if (!finalityResult?.hardFinalityObserved) reasons.push("hard finality not observed via getTransactionFinality RPC")
throw new Error(`petri_tx_inclusion failed: ${reasons.join("; ")}`)
Missing failure reason when soft finality is not observed.
The ok check at line 181 requires softFinalityObserved, but the error reasons don't include a message for this case. If soft finality fails while other checks pass, the thrown error won't explain why the scenario failed.
Proposed fix
if (!nonceWait.ok) reasons.push("nonce did not advance")
if (!blockAdvance.ok) reasons.push("block height did not advance")
if (txHash && !txByHash?.ok) reasons.push("tx not found by hash")
+ if (!finalityResult?.softFinalityObserved) reasons.push("soft finality not observed via getTransactionFinality RPC")
  if (!finalityResult?.hardFinalityObserved) reasons.push("hard finality not observed via getTransactionFinality RPC")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/loadgen/src/features/consensus/petri_tx_inclusion.ts` around lines
217 - 222, The thrown error in petri_tx_inclusion omits a reason when soft
finality isn't observed; update the error-reasons collection logic to push a
message when finalityResult?.softFinalityObserved is false (e.g., add "soft
finality not observed" alongside the existing checks for nonceWait,
blockAdvance, txByHash and finalityResult?.hardFinalityObserved) so the Error
thrown from petri_tx_inclusion includes that failure cause.
…shes
Track processed pubkeys in a dedicated Set instead of checking only the
signatures map. Previously, mismatched or invalid-signature submissions
were never added to signatures{} so they got re-evaluated on every poll
iteration, inflating mismatchCount and potentially preventing the
threshold from being reached even when enough valid signatures existed.
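A minimal sketch of the fix (shapes and names are illustrative): a dedicated Set records every evaluated pubkey, so an invalid submission is counted once rather than on every poll iteration.

```typescript
interface Submission {
    pubkey: string
    valid: boolean
}

function tallySignatures(submissions: Submission[]) {
    const signatures: Record<string, true> = {}
    const processed = new Set<string>() // every pubkey we have evaluated
    let mismatchCount = 0
    for (const sub of submissions) {
        if (processed.has(sub.pubkey)) continue // already evaluated: skip
        processed.add(sub.pubkey)
        if (sub.valid) signatures[sub.pubkey] = true
        else mismatchCount++ // counted exactly once per pubkey
    }
    return { validCount: Object.keys(signatures).length, mismatchCount }
}
```

With the old approach (checking only the signatures map), the two invalid polls for the same pubkey below would have inflated mismatchCount to 2.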
When petriRelay() returns success=false, the TX was silently dropped — the originating node neither relayed it nor added it to its own mempool. Now falls back to Mempool.addTransaction() so the TX is preserved for inclusion in the next block compiled locally.
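The fallback path can be sketched as follows. The signatures are illustrative; the real petriRelay and Mempool.addTransaction APIs differ:

```typescript
interface TxLike {
    hash: string
}

// Relay the tx to the target shard; if the relay fails, keep the tx in the
// local mempool so it can be included in the next locally compiled block
// instead of being silently dropped.
async function relayOrKeep(
    tx: TxLike,
    petriRelay: (tx: TxLike) => Promise<{ success: boolean }>,
    addToMempool: (tx: TxLike) => Promise<void>,
): Promise<"relayed" | "kept-locally"> {
    const result = await petriRelay(tx)
    if (result.success) return "relayed"
    await addToMempool(tx) // fallback: preserve the tx locally
    return "kept-locally"
}
```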
Actionable comments posted: 6
🧹 Nitpick comments (3)
src/utilities/mainLoop.ts (1)
4-6: Consider aligning import path style. Line 4 uses the `@/` alias while lines 5-6 (and most other imports in this file) use the `src/` prefix. For consistency, consider using the same path style throughout. Suggested diff:

```diff
-import { petriConsensusRoutine } from "@/libs/consensus/petri"
+import { petriConsensusRoutine } from "src/libs/consensus/petri"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utilities/mainLoop.ts` around lines 4 - 6, The imports in mainLoop.ts mix alias styles — change the import for petriConsensusRoutine to match the prevailing "src/" prefix (or conversely change other imports if you prefer "@/"), so update the import line referencing petriConsensusRoutine to use the same path style as getCommonValidatorSeed and getShard; locate the import for petriConsensusRoutine and replace "@/libs/consensus/petri" with the consistent "src/libs/consensus/petri" (or update the other two to "@/libs/consensus/v2/routines/...") ensuring all imports in this file use the same alias form.

src/libs/network/endpointExecution.ts (2)
33-33: Inconsistent import path alias. This import uses `@/libs/...` while all other local imports in this file use `src/libs/...` (e.g., lines 1, 2, 4, 5, 16, 18, 31). Consider using a consistent path style for maintainability. 💡 Suggested fix for consistency:

```diff
-import { relay as petriRelay } from "@/libs/consensus/petri/routing/petriRouter"
+import { relay as petriRelay } from "src/libs/consensus/petri/routing/petriRouter"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libs/network/endpointExecution.ts` at line 33, The import for petriRelay currently uses the alias "@/libs/consensus/petri/routing/petriRouter" which is inconsistent with the other local imports in this file; update the import to use the same path style as the rest (e.g., change to "src/libs/consensus/petri/routing/petriRouter") so the symbol petriRelay is imported via the consistent path convention.
335-337: Response message is misleading after fallback. When `relaySuccess` is `false`, the message says "relay pending", but the transaction has actually been added to the local mempool (not pending relay). Consider updating the message for accuracy. 📝 Suggested message clarification:

```diff
 message: relaySuccess ? "Transaction routed to shard members"
-    : "Transaction accepted locally (relay pending)",
+    : "Transaction accepted locally (added to mempool)",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libs/network/endpointExecution.ts` around lines 335 - 337, The response message uses relaySuccess to choose between "Transaction routed to shard members" and "Transaction accepted locally (relay pending)" which is inaccurate when relay fails; update the conditional that sets the message (the message property assigned based on relaySuccess in endpointExecution.ts) so the false branch accurately reflects local mempool acceptance — e.g., change to "Transaction accepted locally (added to local mempool)" or similar wording to replace "relay pending".
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 2066d833-ec81-4566-9067-912f2120ee9f
📒 Files selected for processing (6)
- src/libs/blockchain/chainBlocks.ts
- src/libs/consensus/petri/coordination/petriSecretary.ts
- src/libs/consensus/petri/forge/continuousForge.ts
- src/libs/network/endpointExecution.ts
- src/libs/network/rpcDispatch.ts
- src/utilities/mainLoop.ts
✅ Files skipped from review due to trivial changes (1)
- src/libs/consensus/petri/coordination/petriSecretary.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/libs/network/rpcDispatch.ts
- src/libs/blockchain/chainBlocks.ts
```typescript
getState(): Readonly<ForgeState> {
    return { ...this.state }
}
```
Shallow copy leaks mutable pendingTransactions Map reference.
The spread operator creates a shallow copy, but pendingTransactions is a Map. Callers receive a reference to the internal Map and can mutate it.
🔧 Proposed fix:

```diff
 getState(): Readonly<ForgeState> {
-    return { ...this.state }
+    return {
+        ...this.state,
+        pendingTransactions: new Map(this.state.pendingTransactions),
+    }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libs/consensus/petri/forge/continuousForge.ts` around lines 127 - 129,
getState currently returns a shallow copy of this.state which leaks the internal
pendingTransactions Map; callers can mutate the Map via the returned object. Fix
by returning a new ForgeState where pendingTransactions is cloned (e.g.,
construct a new Map from this.state.pendingTransactions) and keep other fields
copied (use the existing spread of this.state but replace pendingTransactions),
so the returned object contains a fresh Map instance and does not expose
internal mutable state; update getState to return that cloned Map version.
```typescript
// Otherwise compute now (for txs received via merge without classification)
const tx = mempoolTx as unknown as import("@kynesyslabs/demosdk/types").Transaction
const classResult = await classifyTransaction(tx)
if (classResult.classification === TransactionClassification.TO_APPROVE) {
    const specResult = await executeSpeculatively(tx, classResult.gcrEdits)
    if (specResult.success && specResult.delta) {
        localDeltas[mempoolTx.hash] = specResult.delta.hash
        // Update mempool with computed delta
        await Mempool.updateClassification(
            mempoolTx.hash,
            TransactionClassification.TO_APPROVE,
            specResult.delta.hash,
        )
    }
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check the Transaction type definition and mempool entity to compare shapes
echo "=== Transaction type from demosdk ==="
rg -n "export (interface|type) Transaction" --type=ts -A 20 | head -50
echo "=== Mempool entity definition ==="
ast-grep --pattern $'class MempoolEntity {
  $$$
}'
```

Repository: kynesyslabs/node
Length of output: 3805

🏁 Script executed:

```shell
# Find mempool-related definitions
echo "=== Searching for mempool-related types/classes ==="
fd -e ts -e tsx | xargs rg "mempoolTx|MempoolTx|Mempool" --type=ts -B2 -A2 | grep -E "(class|interface|type|const|let|var).*[Mm]empool" | head -30
echo -e "\n=== Searching for classifyTransaction function ==="
rg -n "function classifyTransaction|const classifyTransaction|async.*classifyTransaction" --type=ts
echo -e "\n=== Searching for executeSpeculatively function ==="
rg -n "function executeSpeculatively|const executeSpeculatively|async.*executeSpeculatively" --type=ts
```

Repository: kynesyslabs/node
Length of output: 3728

🏁 Script executed:

```shell
# Look for the file containing continuousForge.ts to understand context
echo "=== File structure around continuousForge.ts ==="
ls -la src/libs/consensus/petri/forge/
echo -e "\n=== Search for MempoolTx type or class ==="
rg "type MempoolTx|interface MempoolTx|class MempoolTx" --type=ts -A 10
echo -e "\n=== Look at imports in continuousForge.ts ==="
head -50 src/libs/consensus/petri/forge/continuousForge.ts | grep -E "^import|^export|from"
```

Repository: kynesyslabs/node
Length of output: 1982

🏁 Script executed:

```shell
echo "=== classifyTransaction function signature ==="
sed -n '28,50p' src/libs/consensus/petri/classifier/transactionClassifier.ts
echo -e "\n=== executeSpeculatively function signature ==="
sed -n '42,60p' src/libs/consensus/petri/execution/speculativeExecutor.ts
echo -e "\n=== Check if Transaction type is used elsewhere ==="
rg "Transaction\b" tests/mocks/demosdk-types.ts -A 1
```

Repository: kynesyslabs/node
Length of output: 1912

🏁 Script executed:

```shell
echo "=== Full MempoolTx definition ==="
sed -n '1,50p' src/model/entities/Mempool.ts
echo -e "\n=== Check where Transaction interface is defined ==="
rg "interface Transaction|type Transaction" --type=ts -B 2 -A 5
```

Repository: kynesyslabs/node
Length of output: 6435

🏁 Script executed:

```shell
echo "=== Context around lines 187-201 in continuousForge.ts ==="
sed -n '180,210p' src/libs/consensus/petri/forge/continuousForge.ts
```

Repository: kynesyslabs/node
Length of output: 1685

🏁 Script executed:

```shell
echo "=== Check loop or context that assigns mempoolTx ==="
sed -n '170,205p' src/libs/consensus/petri/forge/continuousForge.ts
```

Repository: kynesyslabs/node
Length of output: 1717

🏁 Script executed:

```shell
echo "=== Find toApproveTxs definition ==="
sed -n '160,185p' src/libs/consensus/petri/forge/continuousForge.ts
```

Repository: kynesyslabs/node
Length of output: 1115

🏁 Script executed:

```shell
echo "=== Search for toApproveTxs in the file ==="
grep -n "toApproveTxs" src/libs/consensus/petri/forge/continuousForge.ts
```

Repository: kynesyslabs/node
Length of output: 280

🏁 Script executed:

```shell
echo "=== Find Mempool.getByClassification method ==="
rg -n "getByClassification" --type=ts -B 2 -A 5
echo -e "\n=== Check Mempool imports and class definition ==="
head -100 src/model/entities/Mempool.ts | tail -50
```

Repository: kynesyslabs/node
Length of output: 4369
Remove unnecessary type cast.
mempoolTx is already typed as MempoolTx (from Mempool.getByClassification), which implements the Transaction interface. The double cast as unknown as Transaction is redundant. Simply pass mempoolTx directly to classifyTransaction and executeSpeculatively.
```diff
-const tx = mempoolTx as unknown as import("@kynesyslabs/demosdk/types").Transaction
-const classResult = await classifyTransaction(tx)
+const classResult = await classifyTransaction(mempoolTx)
```

Also pass mempoolTx directly to executeSpeculatively on line 193.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libs/consensus/petri/forge/continuousForge.ts` around lines 187 - 201,
Remove the unnecessary double-cast of mempoolTx to Transaction: pass the
existing mempoolTx (already typed as MempoolTx which implements Transaction)
directly into classifyTransaction and executeSpeculatively instead of using "as
unknown as Transaction"; keep the existing logic that checks
classResult.classification === TransactionClassification.TO_APPROVE, stores
specResult.delta.hash into localDeltas[mempoolTx.hash], and calls
Mempool.updateClassification(mempoolTx.hash,
TransactionClassification.TO_APPROVE, specResult.delta.hash).
```typescript
for (const txHash of promoted) {
    await Mempool.updateClassification(
        txHash,
        TransactionClassification.PRE_APPROVED,
    )
}

for (const txHash of flagged) {
    await Mempool.updateClassification(
        txHash,
        TransactionClassification.PROBLEMATIC,
    )
}
```
Sequential mempool updates may race with concurrent forge rounds or block compilation.
Per the Mempool.updateClassification implementation (relevant snippet 3), updates call repo.update() without transaction isolation. If a round overlaps with block compilation (due to the pause() drain issue) or another round on a different node, classification states could become inconsistent.
Consider batching updates within a single transaction or adding optimistic locking.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libs/consensus/petri/forge/continuousForge.ts` around lines 228 - 240,
The loop in continuousForge.ts calling Mempool.updateClassification for each
txHash (setting TransactionClassification.PRE_APPROVED and PROBLEMATIC) can race
with concurrent forge rounds or block compilation because
Mempool.updateClassification uses repo.update() without transaction isolation;
change the implementation to perform these classification changes in a single
atomic operation—either add a batched repository method (e.g.,
Mempool.updateClassifications or repo.updateMany) that wraps all txHash updates
in a DB transaction, or implement optimistic locking/version checks in
Mempool.updateClassification (compare-and-swap on a version/timestamp) so
concurrent rounds cannot overwrite each other; update continuousForge.ts to call
the new batched/transactional API instead of per-tx await calls.
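The all-or-nothing property the prompt asks for can be illustrated with a minimal in-memory sketch. `MempoolSketch` and its methods are hypothetical stand-ins (a real implementation would wrap a DB transaction, e.g. TypeORM's `dataSource.transaction`); the point is only that a failed batch leaves no partial writes:

```typescript
// Sketch: atomic batched classification updates. A plain Map stands in
// for the mempool table; validation happens before any write so a bad
// entry leaves every row untouched, mimicking a rolled-back transaction.
type Classification = "PRE_APPROVED" | "TO_APPROVE" | "PROBLEMATIC"

class MempoolSketch {
    private rows = new Map<string, Classification>()

    add(hash: string, c: Classification): void {
        this.rows.set(hash, c)
    }

    get(hash: string): Classification | undefined {
        return this.rows.get(hash)
    }

    // Validate every update first, then apply all of them together.
    updateClassifications(
        updates: Array<{ hash: string; classification: Classification }>,
    ): void {
        for (const u of updates) {
            if (!this.rows.has(u.hash)) {
                throw new Error(`unknown tx hash: ${u.hash}`)
            }
        }
        for (const u of updates) {
            this.rows.set(u.hash, u.classification)
        }
    }
}
```

In the real repository the same shape would become a single `Mempool.updateClassifications()` call that opens one DB transaction instead of awaiting `repo.update()` per hash.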
```typescript
const promises = peers.map(async peer => {
    try {
        const response = await peer.longCall(
            {
                method: "consensus_routine",
                params: [{
                    method: "petri_exchangeDeltas",
                    params: [{ roundNumber, deltas: localDeltas }],
                }],
            },
            true,
            { sleepTime: 250, retries: 2 },
        )

        if (response.result === 200 && response.response) {
            const data = response.response as { deltas?: Record<string, string> }
            if (data.deltas) {
                peerDeltas[peer.identity] = data.deltas
            }
        }
    } catch (error) {
        log.warn(
            `[ContinuousForge] Delta exchange failed with ${peer.identity.substring(0, 16)}...: ${error}`,
        )
    }
})

await Promise.all(promises)
return peerDeltas
```
No overall timeout on delta exchange; round can stall indefinitely.
Promise.all waits for all peer calls. If longCall lacks an absolute timeout (retries only help transient failures), a hanging peer blocks the entire round. This could cascade into missed forge cycles.
🔧 Proposed fix: wrap with Promise.race timeout

```diff
-await Promise.all(promises)
+const EXCHANGE_TIMEOUT_MS = this.config.forgeIntervalMs * 0.8
+await Promise.race([
+    Promise.all(promises),
+    new Promise<void>((_, reject) =>
+        setTimeout(() => reject(new Error("Delta exchange timeout")), EXCHANGE_TIMEOUT_MS)
+    ),
+]).catch(err => {
+    log.warn(`[ContinuousForge] Delta exchange incomplete: ${err}`)
+})
 return peerDeltas
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libs/consensus/petri/forge/continuousForge.ts` around lines 266 - 294,
The delta exchange in continuousForge.ts currently builds promises from peers
(using peer.longCall) and awaits Promise.all, which can hang a round if any peer
call never resolves; wrap each peer.longCall (or the entire Promise.all) in a
Promise.race that rejects/resolves after an absolute timeout (e.g., 2-3s) so
dangling calls won't block the round, ensure you only write into peerDeltas from
the timed/settled responses (the promises array where peer.longCall is invoked),
and keep existing error handling (log.warn) for timeouts so late responses are
ignored and the function returns promptly with whatever peerDeltas were
gathered.
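The deadline pattern this prompt describes can be sketched independently of the forge code. `withDeadline` is a hypothetical helper, not part of continuousForge; it shows how a hanging peer call resolves to a fallback value instead of blocking the round:

```typescript
// Sketch: race a unit of work against an absolute deadline. If the
// work never settles, the deadline resolves with a fallback value so
// the caller returns promptly with whatever it has gathered.
async function withDeadline<T>(work: Promise<T>, ms: number, onTimeout: T): Promise<T> {
    let timer: ReturnType<typeof setTimeout> | undefined
    const deadline = new Promise<T>(resolve => {
        timer = setTimeout(() => resolve(onTimeout), ms)
    })
    try {
        return await Promise.race([work, deadline])
    } finally {
        // Clear the timer so a fast result doesn't leave a pending handle.
        if (timer !== undefined) clearTimeout(timer)
    }
}
```

In the forge context, the `Promise.all(promises)` from the excerpt above would be the `work` argument, with `peerDeltas` populated only by calls that settled before the deadline.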
```typescript
if (response.result === 200 && response.response) {
    const data = response.response as { deltas?: Record<string, string> }
    if (data.deltas) {
        peerDeltas[peer.identity] = data.deltas
    }
```
Peer response structure is not validated.
The code trusts that response.response has the expected shape { deltas?: Record<string, string> }. A malicious or buggy peer could send arbitrary data, potentially causing unexpected behavior when iterating over deltas in recordDelta.
🔧 Proposed fix

```diff
 if (response.result === 200 && response.response) {
     const data = response.response as { deltas?: Record<string, string> }
-    if (data.deltas) {
+    if (data.deltas && typeof data.deltas === "object") {
         peerDeltas[peer.identity] = data.deltas
     }
 }
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
if (response.result === 200 && response.response) {
    const data = response.response as { deltas?: Record<string, string> }
    if (data.deltas && typeof data.deltas === "object") {
        peerDeltas[peer.identity] = data.deltas
    }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libs/consensus/petri/forge/continuousForge.ts` around lines 280 - 284,
Validate and sanitize peer responses before trusting the shape used by
recordDelta: when handling response.response in continuousForge (the block
assigning to peerDeltas[peer.identity]), check that response.response is an
object, that it has a deltas property which is an object, and that each key and
value in deltas is a string (or otherwise matches the expected type) before
adding it to peerDeltas; if validation fails, log/warn and skip that peer's
deltas to avoid passing malformed data into recordDelta.
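A strict validator along the lines of this prompt might look like the following sketch; `parsePeerDeltas` is a hypothetical helper name, and rejecting the whole payload on any malformed entry is one possible policy (skipping just the bad entries is another):

```typescript
// Sketch: validate an untrusted peer payload before recording deltas.
// Accepts only a plain object whose `deltas` property is a plain object
// with string values; returns null for anything else.
function parsePeerDeltas(raw: unknown): Record<string, string> | null {
    if (typeof raw !== "object" || raw === null || Array.isArray(raw)) return null
    const deltas = (raw as { deltas?: unknown }).deltas
    if (typeof deltas !== "object" || deltas === null || Array.isArray(deltas)) return null
    const out: Record<string, string> = {}
    for (const [txHash, deltaHash] of Object.entries(deltas)) {
        if (typeof deltaHash !== "string") return null // reject malformed payloads outright
        out[txHash] = deltaHash
    }
    return out
}
```

The caller would then write `peerDeltas[peer.identity]` only when `parsePeerDeltas(response.response)` returns non-null, logging a warning otherwise.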
```typescript
await Mempool.addTransaction({
    ...queriedTx,
    reference_block: validatedData.data.reference_block,
})
```
Missing error handling on fallback mempool insertion.
If Mempool.addTransaction throws or fails during the fallback path, the transaction is silently lost and the function still returns success: true. The default mempool insertion path (lines 408-442) handles errors and sets result.success = false on failure.
🛡️ Suggested fix to handle mempool errors

```diff
 if (!relaySuccess) {
     // Fallback: add to local mempool so the TX is not lost
     log.warn(
         `[handleExecuteTransaction] Petri relay failed for ${queriedTx.hash}, adding to local mempool`,
     )
-    await Mempool.addTransaction({
-        ...queriedTx,
-        reference_block: validatedData.data.reference_block,
-    })
+    try {
+        const { error } = await Mempool.addTransaction({
+            ...queriedTx,
+            reference_block: validatedData.data.reference_block,
+        })
+        if (error) {
+            log.error(`[handleExecuteTransaction] Fallback mempool add failed: ${error}`)
+            return {
+                success: false,
+                response: { message: "Failed to add transaction to mempool" },
+                extra: { error, routing: "petri" },
+                require_reply: false,
+            }
+        }
+    } catch (e) {
+        log.error(`[handleExecuteTransaction] Fallback mempool add threw: ${e}`)
+        return {
+            success: false,
+            response: { message: "Failed to add transaction to mempool" },
+            extra: { error: String(e), routing: "petri" },
+            require_reply: false,
+        }
+    }
 }
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
if (!relaySuccess) {
    // Fallback: add to local mempool so the TX is not lost
    log.warn(
        `[handleExecuteTransaction] Petri relay failed for ${queriedTx.hash}, adding to local mempool`,
    )
    try {
        const { error } = await Mempool.addTransaction({
            ...queriedTx,
            reference_block: validatedData.data.reference_block,
        })
        if (error) {
            log.error(`[handleExecuteTransaction] Fallback mempool add failed: ${error}`)
            return {
                success: false,
                response: { message: "Failed to add transaction to mempool" },
                extra: { error, routing: "petri" },
                require_reply: false,
            }
        }
    } catch (e) {
        log.error(`[handleExecuteTransaction] Fallback mempool add threw: ${e}`)
        return {
            success: false,
            response: { message: "Failed to add transaction to mempool" },
            extra: { error: String(e), routing: "petri" },
            require_reply: false,
        }
    }
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libs/network/endpointExecution.ts` around lines 326 - 329, The fallback
call to Mempool.addTransaction (using queriedTx and
validatedData.data.reference_block) lacks error handling and can cause silent
loss while the function still returns success: wrap that await in a try/catch,
log the error (including context like the transaction id or queriedTx), and set
the response/result flag to failure (e.g., result.success = false or equivalent)
- mirror the error handling approach used in the default mempool insertion path
(lines that handle errors and set result.success = false) so failures are
propagated correctly.
Summary by CodeRabbit

- New Features
- Documentation
- Tests
- Chores