
Added Petri Consensus #692

Open
tcsenpai wants to merge 54 commits into stabilisation from petri

Conversation

@tcsenpai
Contributor

@tcsenpai tcsenpai commented Mar 23, 2026

Summary by CodeRabbit

  • New Features

    • Petri Consensus enabled by default with configurable intervals, thresholds, and shard sizing; OmniProtocol TCP enabled and set as preferred; new transaction finality API (soft/hard finality + confirmed).
  • Documentation

    • Comprehensive Petri design, plans, risks, operational guides and quick references added.
  • Tests

    • Large Petri-focused unit and devnet test suites (block production, relay, soak, inclusion, fault and performance scenarios).
  • Chores

    • Devnet env/docker and test scripts updated to expose Petri/Omni settings and new test commands.

tcsenpai added 30 commits March 16, 2026 17:18
…lysis

- 00-codebase-mapping.md: maps existing PoRBFT v2 to Petri concepts
- 01-implementation-plan.md: 8-phase implementation plan
- 02-risks-and-considerations.md: risks, mitigations, open questions
- petri.md: pitch document in markdown
- Epic #9 created in mycelium with 37 atomic tasks
- Team Mode activated and persisted to AGENTS.md
- Update codebase mapping (v2) with post-stabilisation file paths
- Rewrite implementation plan (v2) with corrected paths and finalized design decisions
- Replace open questions in risks doc with finalized decisions table
- Add VADEMECUM operational guide (team mode, testing, guardrails)
- Update mycelium tasks with test phases and dependency links
New phase for exposing soft finality to SDK consumers.
Marked as requiring user coordination before touching ../sdks/.
- Create src/libs/consensus/petri/ directory structure
- Define TransactionClassification enum (PRE_APPROVED, TO_APPROVE, PROBLEMATIC)
- Define StateDelta, PeerDelta interfaces
- Define ContinuousForgeRound, ForgeConfig, ForgeState interfaces
- Define PetriConfig with defaults (2s forge, 10s block, 7/10 threshold, 5-round TTL)
- Define DeltaComparison, RoundDeltaResult interfaces
- Add petriConsensus feature flag (default: false) to SharedState
- Create index.ts entry point stub with type re-exports
- Update VADEMECUM with diagram agent requirement and Phase 8 tests
- Create TransactionClassifier: classify txs as PRE_APPROVED (read-only)
  or TO_APPROVE (state-changing) based on GCR edit generation
- Create SpeculativeExecutor: simulate GCR edits without mutating state,
  produce deterministic delta hash via canonical JSON + SHA-256
- Create canonicalJson utility: deterministic serialization with sorted
  keys, BigInt handling, Map/Set support
- Extend MempoolTx entity with classification and delta_hash columns
- Add classification queries to Mempool: getByClassification(),
  getPreApproved(), updateClassification()
- Wire classifier into endpointValidation.ts (gated by petriConsensus flag)
- Add 28 unit tests: canonicalJson determinism, classifier logic,
  delta hash determinism — all passing
- Add Phase 0 architecture diagram
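The canonicalJson utility described above can be sketched as follows. This is an illustrative reconstruction, not the PR's actual implementation: sorted keys and tagged BigInt rendering keep delta hashes deterministic across nodes (the real utility also handles Map/Set, omitted here).

```typescript
// Hedged sketch of a canonical JSON serializer: recursively sort object
// keys and render BigInt as a tagged string so the same logical value
// always hashes identically, regardless of insertion order.
function canonicalJson(value: unknown): string {
    if (typeof value === "bigint") return JSON.stringify(value.toString() + "n")
    if (Array.isArray(value)) return "[" + value.map((v) => canonicalJson(v)).join(",") + "]"
    if (value !== null && typeof value === "object") {
        const obj = value as Record<string, unknown>
        const keys = Object.keys(obj).sort() // deterministic key order
        return "{" + keys.map((k) => JSON.stringify(k) + ":" + canonicalJson(obj[k])).join(",") + "}"
    }
    return JSON.stringify(value) // string, number, boolean, null
}
```

Hashing this string with SHA-256 then yields the delta hash exchanged between shard members.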
Add Phase 1 data flow, new module boxes (classifier, speculativeExecutor,
canonicalJson), feature flag gate visualization, and modified module entries
(Mempool entity, mempool_v2). Phase 0 content preserved.
… validation

Moved Petri classification from endpointValidation (where tx doesn't exist
in mempool yet) to Mempool.addTransaction() (where the row is created).
Classification and delta_hash are now saved atomically with the mempool entry.

Also removed unused IsNull/Not imports from mempool_v2.
- Create DeltaAgreementTracker: tracks per-tx delta agreement across
  forge rounds, promotes at 7/10 threshold, flags after 5-round TTL
- Create ContinuousForge: 2s forge loop with mempool sync, speculative
  execution, all-to-all delta exchange, agreement evaluation, and
  classification updates (TO_APPROVE → PRE_APPROVED or PROBLEMATIC)
- Create forgeInstance singleton for RPC handler access
- Add petri_exchangeDeltas RPC handler to manageConsensusRoutines
  (gated by petriConsensus flag)
- Wire petriConsensusRoutine() in index.ts to create and start forge
- Add 22 new tests: DeltaAgreementTracker (15 tests covering promotion,
  flagging, TTL, mixed scenarios, boundary cases) + ContinuousForge
  state lifecycle (7 tests)
- Total: 50 tests passing across 5 files
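The promote-or-flag rule described above can be sketched as a single evaluation step. Names and signatures here are illustrative, not the actual DeltaAgreementTracker API: 7 of 10 matching delta hashes promotes a tx, and 5 rounds without agreement flags it.

```typescript
// Hedged sketch of the per-tx agreement decision: count matching delta
// hashes across shard members; promote at threshold, flag after TTL.
type Verdict = "PENDING" | "PRE_APPROVED" | "PROBLEMATIC"

function evaluate(
    deltas: Map<string, string>, // member id -> delta hash for one tx
    roundsSeen: number,
    threshold = 7, // default agreement threshold (7/10)
    ttlRounds = 5, // default PROBLEMATIC TTL
): Verdict {
    const counts = new Map<string, number>()
    for (const hash of deltas.values()) counts.set(hash, (counts.get(hash) ?? 0) + 1)
    const best = Math.max(0, ...Array.from(counts.values()))
    if (best >= threshold) return "PRE_APPROVED" // enough members agree
    if (roundsSeen >= ttlRounds) return "PROBLEMATIC" // TTL expired without agreement
    return "PENDING" // keep waiting for more rounds
}
```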
Add Continuous Forge cyclic flow diagram, DeltaAgreementTracker detail,
forgeInstance/RPC bridge, and complete round data flow. All P0/P1/P2
components shown with phase annotations.
- PetriBlockCompiler: compiles PRE_APPROVED txs into blocks at 10s boundary
  using existing orderTransactions() + createBlock() infrastructure
- BFTArbitrator: one BFT round per PROBLEMATIC tx — resolve or reject
  (chain never stalls, rejection is fail-safe)
- PetriBlockFinalizer: broadcastBlockHash() + isBlockValid() threshold +
  insertBlock() + BroadcastManager propagation
- petriConsensusRoutine() full lifecycle: forge → pause → arbitrate →
  compile → finalize → reset → resume
- Consensus dispatch switching in mainLoop.ts and manageConsensusRoutines.ts
  (petriConsensus flag gates which routine is called)
- 21 new tests (BFT threshold, result structures, lifecycle, dispatch)
- All 71 Petri tests passing
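The BFT threshold gating isBlockValid() and arbitration can be sketched with the usual 2/3+1 formula. The exact rounding used by the PR's code is an assumption here, but for the 10-member shard described elsewhere in this PR it yields 7, matching the 7/10 agreement threshold.

```typescript
// Illustrative 2/3+1 BFT threshold (assumed formula): the minimum
// number of agreeing members needed before a block or arbitration
// result is accepted.
function bftThreshold(n: number): number {
    return Math.floor((2 * n) / 3) + 1
}

function isResolved(agreeingVotes: number, shardSize: number): boolean {
    return agreeingVotes >= bftThreshold(shardSize)
}
```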
- ShardMapper: getShardForAddress() — single-shard testnet (always 'default'),
  interface ready for multi-shard expansion
- PetriRouter: selectMembers() uses Alea PRNG seeded with tx hash for
  deterministic routing to exactly 2 shard members
- PetriRouter.relay(): sends ValidityData to selected members via existing
  nodeCall/RELAY_TX RPC pattern (same as DTR but targeted)
- endpointExecution.ts: when petriConsensus flag is on, routes via
  PetriRouter.relay() instead of DTR (early return before validator check)
- 16 new tests (ShardMapper, selectMembers determinism/count/uniqueness,
  routing flag gating, response shapes)
- All 87 Petri tests passing
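The deterministic routing above can be illustrated with a simplified stand-in. The real PetriRouter seeds an Alea PRNG with the tx hash; the mulberry32 generator and seed derivation below are substitutions for illustration only. The point is that every node derives the same 2-member set from the same tx hash.

```typescript
// Simplified stand-in for PetriRouter.selectMembers(): a seeded PRNG
// (mulberry32 here; the real code uses Alea) picks exactly `count`
// distinct members deterministically from the tx hash.
function mulberry32(seed: number): () => number {
    return () => {
        seed = (seed + 0x6d2b79f5) | 0
        let t = Math.imul(seed ^ (seed >>> 15), 1 | seed)
        t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t
        return ((t ^ (t >>> 14)) >>> 0) / 4294967296
    }
}

function selectMembers(txHash: string, members: string[], count = 2): string[] {
    // Derive a numeric seed from the tx hash (illustrative, not Alea)
    let seed = 0
    for (const ch of txHash) seed = (seed * 31 + ch.charCodeAt(0)) | 0
    const rng = mulberry32(seed)
    const pool = [...members].sort() // stable starting order on every node
    const picked: string[] = []
    while (picked.length < count && pool.length > 0) {
        picked.push(pool.splice(Math.floor(rng() * pool.length), 1)[0])
    }
    return picked
}
```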
- BFTArbitrator: document roundNumber: -1 sentinel reuse of petri_exchangeDeltas
- endpointExecution: flag Petri routing early-return mempool flow for P6 verification
- Add soft_finality_at column to MempoolTx and Transactions entities
- Set soft_finality_at in updateClassification() on PRE_APPROVED
- Add getTransactionFinality() function (chain-first, then mempool)
- Add getTransactionFinality RPC endpoint in rpcDispatch
- Re-export Phase 5 components from petri/index.ts
- Add finality.test.ts (15 tests covering structure, transitions, RPC shape)
Only set soft_finality_at on first PRE_APPROVED classification,
not on subsequent updates. Prevents timestamp overwrite if a tx
is re-classified.
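The first-write-only rule can be sketched as a guard in the classification update path. Field and function names below are assumptions for illustration; the behavior (stamp once, never overwrite) is what the commit describes.

```typescript
// Hedged sketch of the fix: soft_finality_at is set only on the first
// transition to PRE_APPROVED, so re-classification keeps the original
// timestamp intact.
interface MempoolRow {
    classification: string
    soft_finality_at: number | null
}

function applyClassification(row: MempoolRow, next: string, now: number): void {
    row.classification = next
    if (next === "PRE_APPROVED" && row.soft_finality_at === null) {
        row.soft_finality_at = now // first promotion only
    }
}
```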
- happyPath.test.ts: full lifecycle (classify → agree → compile → finalize)
- conflictPath.test.ts: double-spend → PROBLEMATIC → BFT resolution/rejection
- byzantineFault.test.ts: Byzantine minority tolerance (f < n/3)
- liveness.test.ts: chain never stalls (empty blocks, bounded PROBLEMATIC TTL)
- featureFlagRollback.test.ts: clean ON/OFF/ON toggle with state isolation
- benchmark.test.ts: DeltaTracker throughput (5K txs), routing (10K calls), BFT O(1)
- Add getPetriForgeInstance() getter to forgeInstance.ts

84 new tests (186 total), 0 failures
- Add @deprecated JSDoc to SecretaryManager class (1018 lines)
- Add @deprecated to Secretary RPC handlers (setValidatorPhase, greenlight,
  getValidatorPhase, getBlockTimestamp) in manageConsensusRoutines.ts
- Add @deprecated to OmniProtocol consensus handlers (opcodes 0x35-0x38)
- Add @deprecated to ValidationPhase types in validationStatusTypes.ts
- No deletions — kept for PoRBFT v2 fallback until testnet validation
- Task #119 (flag removal) deferred to post-testnet validation
Reflexion finding: the getBlockTimestamp case block lacked its own
deprecation comment — only covered indirectly by the getValidatorPhase
group comment. Added direct marker for clarity.
- Add test:petri script to package.json (bun run test:petri)
- Register Petri consensus in TESTING_MAP.md coverage map
TR1: Configurable Petri params
- Add PETRI_CONSENSUS, PETRI_FORGE_INTERVAL_MS, PETRI_BLOCK_INTERVAL_MS,
  PETRI_AGREEMENT_THRESHOLD, PETRI_PROBLEMATIC_TTL_ROUNDS, PETRI_SHARD_SIZE
  to envKeys.ts
- Add PetriConsensusConfig interface to config/types.ts
- Add petri defaults to config/defaults.ts
- Wire through config/loader.ts with envBool/envInt
- Load into getSharedState.petriConsensus and petriConfig in index.ts
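The envBool/envInt wiring above can be sketched as follows. The helper names appear in the PR; their bodies and the exact config shape here are assumptions.

```typescript
// Hedged sketch of env-driven Petri config loading with fallbacks to
// the documented defaults (2s forge, 10s block, 7 threshold, 5-round
// TTL, shard size 10).
function envBool(key: string, fallback: boolean, env: Record<string, string | undefined> = process.env): boolean {
    const raw = env[key]
    if (raw === undefined) return fallback
    return ["1", "true", "yes"].includes(raw.toLowerCase())
}

function envInt(key: string, fallback: number, env: Record<string, string | undefined> = process.env): number {
    const raw = env[key]
    const parsed = raw === undefined ? NaN : Number.parseInt(raw, 10)
    return Number.isNaN(parsed) ? fallback : parsed // malformed values fall back
}

const petriConfig = {
    enabled: envBool("PETRI_CONSENSUS", false),
    forgeIntervalMs: envInt("PETRI_FORGE_INTERVAL_MS", 2000),
    blockIntervalMs: envInt("PETRI_BLOCK_INTERVAL_MS", 10000),
    agreementThreshold: envInt("PETRI_AGREEMENT_THRESHOLD", 7),
    problematicTtlRounds: envInt("PETRI_PROBLEMATIC_TTL_ROUNDS", 5),
    shardSize: envInt("PETRI_SHARD_SIZE", 10),
}
```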

TR2: Docker devnet wiring
- Add PETRI_CONSENSUS env var to all 4 nodes in docker-compose.yml
- Add PETRI_CONSENSUS to devnet/.env.example (default: false)
- Set PETRI_CONSENSUS=true in .env to enable Petri on devnet
Register petri_block_production and petri_tx_inclusion scenarios in
loadgen main.ts so they are discoverable via run-scenario.sh and the
petri suite in run-suite.ts.
Submits TX to node-1 and verifies propagation to all other nodes
via getTx and getTransactionFinality, ensuring Petri consensus
relay layer works across the full cluster.
Registers and collects petri_enabled, petri_forge_running,
petri_forge_paused, petri_forge_round, petri_pending_tx_count,
and petri_tracker_tx_count gauges for Prometheus observability.
Sends sustained load over configurable rounds, measuring TX
throughput, soft/hard finality latency (p50/p95/p99), block
production rate, and error rate. Outputs baseline JSON summary.
tcsenpai added 5 commits March 21, 2026 15:44
When Petri consensus is active, receiving nodes no longer compile
their own candidate block to compare hashes. Instead, they sign
the proposer's block hash directly and return the signature.

This fixes BFT vote failures caused by independent block compilation
producing different timestamps/hashes across nodes. In Petri, only
the proposer compiles the block — peers validate and vote.
… update docs

Petri consensus is now the default consensus engine (PETRI_CONSENSUS=true).
Soak tested on 4-node devnet: 10/10 TXs, 0% error, blocks advancing continuously.

Bugs fixed:
- chainBlocks: savepoint-based error isolation prevents DB transaction poisoning
  when individual TX inserts fail (e.g., duplicate hashes)
- petriBlockCompiler: TX cutoff now uses milliseconds (was comparing ms timestamps
  against second-granularity cutoff, filtering out all TXs)
- petri/index: startingConsensus flag reset in finally block
- petriSecretary: election includes self in sorted identity list
- broadcastBlockHash: Promise.allSettled + sequential sig verification
- manageProposeBlockHash: accept-and-sign model for Petri consensus
- orderTransactions: hash tiebreaker for deterministic ordering
- broadcastManager: removed signer filter so members receive finalized block
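The orderTransactions tiebreaker fix in the list above can be sketched as a two-level comparator. The tx shape below is assumed; the fix itself (fall back to the hash when timestamps tie) is what the commit describes.

```typescript
// Hedged sketch of deterministic ordering: sort by timestamp first,
// then break ties on the tx hash so every node orders equal-timestamp
// transactions identically.
interface OrderableTx {
    timestamp: number
    hash: string
}

function orderTransactions(txs: OrderableTx[]): OrderableTx[] {
    return [...txs].sort((a, b) =>
        a.timestamp !== b.timestamp
            ? a.timestamp - b.timestamp
            : a.hash.localeCompare(b.hash), // deterministic tiebreaker
    )
}
```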

Config changes:
- defaults.ts: petri.enabled=true, omni.mode=OMNI_PREFERRED
- .env.example: OMNI_ENABLED=true, OMNI_MODE=OMNI_PREFERRED, PETRI_CONSENSUS=true
- devnet: PETRI_CONSENSUS default true, OMNI_MODE=OMNI_PREFERRED on all nodes

Docs updated:
- petri/01-implementation-plan.md: status table, P7 completed, fixes documented
- petri/VADEMECUM.md: feature flag default, modified files list, coordination dir
- petri/03-secretary-coordinated-signing.md: P9 design doc (future work)

Mycelium: P7 closed, P8/P9 remaining tasks added (#133-#135)
…anager

- petri/consensus.md: source-level explanation of how Petri consensus
  actually works, with file paths, line numbers, flow diagram, invariants,
  and soak test results
- PoRBFT.ts: mark consensusRoutine() as @deprecated
- l2ps_mempool.ts: guard SecretaryManager access behind petriConsensus
  flag — when Petri is active, use Chain.getLastBlockNumber() directly
  instead of stale SecretaryManager state

Soak test re-run: ok=true, 10/10 TXs, 25 blocks, zero PoRBFT activity.
# Conflicts:
#	src/libs/blockchain/chainBlocks.ts
#	src/libs/consensus/v2/routines/broadcastBlockHash.ts
#	testing/loadgen/src/features/consensus/petri_block_production.ts
#	testing/loadgen/src/features/consensus/petri_relay_flow.ts
#	testing/loadgen/src/features/consensus/petri_soak.ts
#	testing/loadgen/src/features/consensus/petri_tx_inclusion.ts
@tcsenpai tcsenpai requested a review from cwilvx March 23, 2026 12:32
@coderabbitai
Contributor

coderabbitai bot commented Mar 23, 2026

Walkthrough

Adds a new Petri Consensus implementation and feature-flag plumbing across config, runtime, mempool, consensus, RPCs, DB schema, tests, devnet tooling, and docs; also flips Omni defaults and introduces Team Mode/AGENTS.md changes. Petri is feature-flagged and wired into the main loop.

Changes

Cohort / File(s) Summary
Env & Devnet config
.env.example, testing/devnet/.env.example, testing/devnet/docker-compose.yml
Enable/prefer OmniProtocol by default; add Petri env vars and defaults; adjust docker OMNI container ports and map host ports.
App config types & loader
src/config/defaults.ts, src/config/envKeys.ts, src/config/types.ts, src/config/index.ts, src/config/loader.ts
Add petri config defaults, env keys, new PetriConsensusConfig type, Config.petri getter, loader reading/validation for Petri params.
Shared state & startup wiring
src/utilities/sharedState.ts, src/index.ts, src/utilities/mainLoop.ts, src/libs/network/manageConsensusRoutines.ts
Add sharedState.petri flags/config, wire warmup to populate getSharedState.petriConfig, and dispatch petriConsensusRoutine when enabled; add Petri RPC methods to consensus dispatcher.
Core Petri modules
src/libs/consensus/petri/... (many files, e.g. index.ts, forge/continuousForge.ts, forge/deltaAgreementTracker.ts, execution/speculativeExecutor.ts, arbitration/bftArbitrator.ts, block/petriBlockCompiler.ts, block/petriBlockFinalizer.ts, coordination/petriSecretary.ts, routing/petriRouter.ts, routing/shardMapper.ts, classifier/transactionClassifier.ts, finality/transactionFinality.ts, utils/canonicalJson.ts, types/*)
New Petri implementation: continuous forge loop, delta tracker, classifier, speculative executor, arbitration, block compile/finalize, secretary coordination, routing, finality API, canonical JSON utility, types, and orchestration (petriConsensusRoutine).
Mempool & blockchain integration
src/libs/blockchain/mempool_v2.ts, src/libs/blockchain/l2ps_mempool.ts, src/libs/blockchain/chainBlocks.ts, src/model/entities/Mempool.ts, src/model/entities/Transactions.ts
Mempool stores classification/delta_hash/soft_finality_at and adds query/update APIs; l2ps block-number path gated by Petri; insertBlock now uses QueryRunner with per-transaction savepoints.
RPC / network / endpoint changes
src/libs/network/rpcDispatch.ts, src/libs/network/endpointExecution.ts, src/libs/communications/broadcastManager.ts, src/libs/omniprotocol/protocol/handlers/consensus.ts
Add getTransactionFinality RPC, Petri relay path in execute flow that may bypass PoRBFT mempool logic, change block sync peer selection to broadcast to all, mark legacy secretary opcodes deprecated.
PoRBFT v2 adjustments
src/libs/consensus/v2/... (e.g. PoRBFT.ts, routines/*, types/*)
Add deprecation JSDoc, feature-flag gating to coexist with Petri, deterministic tie-breaker in ordering, sequential signature verification in broadcast flow.
Metrics
src/features/metrics/MetricsCollector.ts
Register Petri gauges and add conditional collection with lazy imports (safe on missing forge instance).
Tests & tooling
testing/petri/*, testing/loadgen/src/features/consensus/*, testing/loadgen/src/main.ts, testing/scripts/run-suite.ts, testing/TESTING_MAP.md, package.json
Add extensive Petri unit/integration tests, devnet scenarios and loadgen features, register petri suite and test:petri script; update testing map.
Docs & team files
petri/*, AGENTS.md, TEAM.md, .gitignore
Add Petri design/plan/risk/operations docs; replace AGENTS.md block with TEAM.md directive and Team Mode; add /testing/runs to .gitignore.
Genesis
data/genesis.json
Append four genesis balance entries.

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant Client as Client/SDK
    participant RPC as RPC Endpoint
    participant Validator as Validation Layer
    participant Router as Petri Router
    participant Members as Shard Members
    participant Forge as ContinuousForge
    participant Arb as BFT Arbitrator
    participant Compiler as Block Compiler
    participant Secretary as Secretary
    participant Chain as Blockchain

    Client->>RPC: submit transaction
    RPC->>Validator: validate & compute GCR edits
    Validator->>Router: classify & relay (if petri)
    Router->>Members: relay to selected members
    Members->>Forge: add to local mempool
    Forge->>Forge: periodic rounds (speculative exec, exchange deltas)
    Forge->>Arb: on block boundary -> arbitrate problematic txs
    Arb->>Compiler: return resolved/rejected lists
    Compiler->>Secretary: compile candidate block
    Secretary->>Members: collect signatures (petri_submitBlockHash)
    Members->>Secretary: return signatures
    Secretary->>Chain: insert finalized block (on threshold)
sequenceDiagram
    autonumber
    participant MainLoop as mainLoop
    participant Petri as PetriConsensus
    participant PoRBFT as PoRBFTv2

    MainLoop->>MainLoop: check getSharedState.petriConsensus
    alt petri enabled
        MainLoop->>Petri: petriConsensusRoutine(shard)
    else
        MainLoop->>PoRBFT: consensusRoutine()
    end

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes


Suggested labels

Review effort 5/5

Suggested reviewers

  • cwilvx

Poem

🐰
I hopped into code with a curious nose,
Petri seeds sprouted where consensus grows.
Deltas whisper, forges hum two-by-two,
Signatures gather, blocks fall through.
Carrot-toast to nodes — a fresh ledger for you!


@qodo-code-review
Contributor


Review Summary by Qodo

Petri Consensus Protocol: Complete Implementation with Byzantine Fault Tolerance and Dual Finality

✨ Enhancement 🧪 Tests 📝 Documentation


Description
• **Petri Consensus Protocol Implementation**: Complete implementation of a leaderless, Byzantine
  fault-tolerant consensus protocol with 9 phases spanning transaction classification, continuous
  forge loop, delta agreement tracking, BFT arbitration, block compilation, and secretary-coordinated
  finalization
• **Core Consensus Components**:
  - ContinuousForge (Phase 2): 2-second forge loop with speculative execution and delta exchange
  - DeltaAgreementTracker: Per-transaction agreement tracking with threshold-based promotion and
  TTL-based flagging
  - BFTArbitrator: Resolves delta disagreements via 2/3+1 member consensus
  - PetriBlockCompiler: Deterministic block compilation at 10-second boundaries
  - PetriBlockFinalizer: Secretary-coordinated block signing with verify-then-sign model
• **Transaction Classification**: Dual-path routing based on GCR edit analysis (PRE_APPROVED for
  read-only, TO_APPROVE for state-changing)
• **Routing & Coordination**: Deterministic shard member selection via Alea PRNG, secretary election
  as first peer in shard
• **Mempool Integration**: Classification at insertion time, soft/hard finality tracking,
  speculative execution support
• **RPC Endpoints**: getTransactionFinality for dual finality queries, petri_exchangeDeltas for
  Phase 2, petri_submitBlockHash for Phase 9
• **Comprehensive Test Suite**: 186 unit and integration tests covering happy path, conflict
  resolution, Byzantine fault tolerance, liveness guarantees, performance benchmarks, and feature flag
  rollback
• **Devnet Scenarios**: 4 E2E test scenarios (block production, TX inclusion, relay flow, soak test)
  for performance validation
• **Configuration System**: Environment-based Petri config with sensible defaults (forge interval
  2s, block interval 10s, agreement threshold 7, shard size 10)
• **Database Schema**: Mempool and Transaction entities extended with classification, delta_hash,
  and soft_finality_at columns
• **Documentation**: Implementation plan, codebase mapping, operational guide, and consensus flow
  reference
• **Backward Compatibility**: Feature flag gating (petriConsensus) with PoRBFT v2 fallback,
  deprecation notices for replaced components
Diagram
flowchart LR
  TX["Transaction Submitted"]
  CLASSIFY["Transaction Classifier"]
  ROUTE["Petri Router"]
  FORGE["Continuous Forge<br/>2s Loop"]
  DELTA["Delta Agreement<br/>Tracker"]
  BFT["BFT Arbitrator<br/>2/3+1 Consensus"]
  COMPILE["Block Compiler<br/>10s Boundary"]
  FINALIZE["Block Finalizer<br/>Secretary Coordination"]
  CHAIN["Chain Insertion"]
  
  TX --> CLASSIFY
  CLASSIFY -->|PRE_APPROVED| ROUTE
  CLASSIFY -->|TO_APPROVE| ROUTE
  ROUTE --> FORGE
  FORGE --> DELTA
  DELTA -->|Agreement| COMPILE
  DELTA -->|Disagreement| BFT
  BFT --> COMPILE
  COMPILE --> FINALIZE
  FINALIZE --> CHAIN


File Changes

1. better_testing/petri/secretaryCoordination.test.ts 🧪 Tests +451/-0

Secretary-Coordinated Block Signing Test Suite

• Comprehensive test suite for Petri Consensus Phase 9 secretary-coordinated block signing
• Tests secretary election logic (deterministic, first peer in shard)
• Validates BFT threshold calculations and collection agreement logic
• Tests hash matching/mismatching, submission receipt, and secretary failover scenarios
• Verifies verify-then-sign model and block finalization flows

better_testing/petri/secretaryCoordination.test.ts


2. src/libs/consensus/petri/coordination/petriSecretary.ts ✨ Enhancement +363/-0

Secretary-Coordinated Block Signing Implementation

• Implements secretary-coordinated block signing for Petri Consensus Phase 9
• Provides collectBlockHashes() for secretary to gather and verify member signatures
• Implements submitBlockHash() for non-secretaries to submit their block hashes
• Includes secretary election via electSecretary() and isWeSecretary() functions
• Handles secretary failover and mempool re-sync on hash mismatches

src/libs/consensus/petri/coordination/petriSecretary.ts
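The deterministic election described for this file can be sketched in a few lines. Function names match those listed above; the bodies are assumptions. Note the PR's bug-fix list records that the election must include the local node in the sorted identity list.

```typescript
// Hedged sketch of secretary election: the lexicographically first
// identity in the shard wins, with the local node included in the
// candidate list.
function electSecretary(peerIds: string[], selfId: string): string {
    return [...peerIds, selfId].sort()[0] // deterministic on every node
}

function isWeSecretary(peerIds: string[], selfId: string): boolean {
    return electSecretary(peerIds, selfId) === selfId
}
```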


3. testing/loadgen/src/features/consensus/petri_soak.ts 🧪 Tests +291/-0

Petri Consensus Soak Test with Performance Baseline

• Devnet soak test scenario for Petri Consensus measuring sustained load performance
• Tracks TX submission throughput, soft/hard finality latencies, and block production rates
• Polls transaction finality status and computes percentile latencies (p50, p95, p99)
• Generates JSON baseline summary for performance comparison across runs

testing/loadgen/src/features/consensus/petri_soak.ts
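The percentile latencies this scenario reports can be computed with a nearest-rank sketch like the one below; the soak test's actual percentile math may differ.

```typescript
// Nearest-rank percentile (assumed method): sort the latency samples
// and index the ceil(p% * n)-th element.
function percentile(samples: number[], p: number): number {
    if (samples.length === 0) return NaN
    const sorted = [...samples].sort((a, b) => a - b)
    const rank = Math.ceil((p / 100) * sorted.length)
    return sorted[Math.max(0, rank - 1)]
}
```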


View more (80)
4. better_testing/petri/happyPath.test.ts 🧪 Tests +305/-0

Happy-Path Integration Test for Petri Consensus

• Integration test for Petri Consensus Phase 6 happy-path lifecycle
• Simulates TX flow from TO_APPROVE → PRE_APPROVED (soft finality) → confirmed (hard finality)
• Tests delta agreement tracker, BFT voting, and finality result shapes
• Validates ContinuousForge state machine lifecycle (start, pause, resume, stop)

better_testing/petri/happyPath.test.ts


5. better_testing/petri/conflictPath.test.ts 🧪 Tests +275/-0

Conflict Path and BFT Arbitration Test Suite

• Tests Petri Consensus conflict resolution for double-spend and delta disagreement scenarios
• Validates PROBLEMATIC TX flagging after TTL expiration and BFT arbitration
• Verifies mempool cleanup and chain liveness even with conflicting transactions
• Ensures block production continues regardless of conflicts

better_testing/petri/conflictPath.test.ts


6. better_testing/petri/byzantineFault.test.ts 🧪 Tests +266/-0

Byzantine Fault Tolerance Test Suite

• Tests Byzantine fault tolerance with up to f < n/3 malicious nodes
• Validates that honest majority (7/10) reaches agreement despite Byzantine interference
• Tests coordinated attacks, eclipse attempts, and omission faults
• Verifies BFT arbitration resolves PROBLEMATIC TXs when honest nodes agree

better_testing/petri/byzantineFault.test.ts


7. better_testing/petri/benchmark.test.ts 🧪 Tests +287/-0

Petri Consensus Performance Benchmarking Suite

• Performance benchmarks for Petri Consensus components (DeltaAgreementTracker, routing, BFT)
• Validates throughput targets: 1000+ TXs per round, <1ms routing decisions
• Tests finality latency design targets (soft <2s, hard <12s)
• Measures memory efficiency and cleanup behavior

better_testing/petri/benchmark.test.ts


8. src/libs/consensus/petri/forge/continuousForge.ts ✨ Enhancement +288/-0

Continuous Forge Loop Implementation for Petri

• Implements ContinuousForge: the 2-second forge loop for Petri Consensus Phase 2
• Syncs mempools, executes transactions speculatively, and exchanges delta hashes with shard
• Records deltas in DeltaAgreementTracker and promotes/flags TXs based on agreement
• Provides state management (start, stop, pause, resume, reset) and RPC delta exposure

src/libs/consensus/petri/forge/continuousForge.ts


9. testing/loadgen/src/features/consensus/petri_relay_flow.ts 🧪 Tests +248/-0

Petri Consensus Relay Flow E2E Test

• E2E devnet scenario validating Petri consensus relay propagation across all nodes
• Submits TX to bootstrap node and verifies visibility on all other nodes
• Checks nonce advancement and transaction finality consistency across cluster
• Generates JSON summary of relay propagation results

testing/loadgen/src/features/consensus/petri_relay_flow.ts


10. better_testing/petri/liveness.test.ts 🧪 Tests +240/-0

Liveness Guarantee Test Suite for Petri

• Tests Petri Consensus liveness guarantee: chain never stalls
• Validates empty block production when no TXs exist
• Verifies PROBLEMATIC TXs have bounded lifetime (TTL-based cleanup)
• Tests block production continues on schedule regardless of conflicts

better_testing/petri/liveness.test.ts


11. src/libs/blockchain/mempool_v2.ts ✨ Enhancement +81/-0

Mempool Integration with Petri Classification

• Adds Petri Consensus classification at TX insertion time (gated by feature flag)
• Implements classifyTransaction() and speculative execution for TO_APPROVE TXs
• Adds getByClassification(), getPreApproved(), and updateClassification() methods
• Records soft finality timestamp on first PRE_APPROVED classification

src/libs/blockchain/mempool_v2.ts


12. better_testing/petri/blockCompiler.test.ts 🧪 Tests +244/-0

Petri Consensus Phase 3 block compilation and finalization tests

• Comprehensive test suite for BFT threshold calculation with various shard sizes
• Tests for CompilationResult, ArbitrationResult, and FinalizationResult data structures
• Validates consensus dispatch switching logic based on petriConsensus flag
• Tests block period lifecycle including forge pause/resume pattern and empty block handling

better_testing/petri/blockCompiler.test.ts


13. better_testing/petri/deltaTracker.test.ts 🧪 Tests +224/-0

DeltaAgreementTracker agreement and flagging mechanism tests

• Unit tests for DeltaAgreementTracker promotion and flagging logic
• Tests threshold-based promotion when members agree on delta hashes
• Tests TTL expiry flagging when no agreement is reached
• Tests edge cases including late-arriving deltas, majority wins, and state cleanup

better_testing/petri/deltaTracker.test.ts


14. testing/loadgen/src/features/consensus/petri_tx_inclusion.ts 🧪 Tests +227/-0

Petri Consensus TX inclusion and finality devnet scenario

• Devnet scenario testing TX inclusion and finality in Petri Consensus
• Verifies soft finality (softFinalityAt) and hard finality (hardFinalityAt) via RPC
• Tests transaction nonce advancement and block production
• Measures finality latency and validates getTransactionFinality RPC response format

testing/loadgen/src/features/consensus/petri_tx_inclusion.ts


15. better_testing/petri/featureFlagRollback.test.ts 🧪 Tests +227/-0

Feature flag rollback and state isolation for Petri toggle

• Tests clean switching between PoRBFT v2 and Petri Consensus via feature flag
• Validates forge instance lifecycle (creation, deregistration, state isolation)
• Tests concurrent safety (double stop, reset after stop, null instance handling)
• Verifies mempool compatibility and classification enum consistency across toggles

better_testing/petri/featureFlagRollback.test.ts


16. better_testing/petri/finality.test.ts 🧪 Tests +216/-0

Transaction finality result structure and RPC response tests

• Tests TransactionFinalityResult structure and field types for various classification states
• Validates finality state transitions (TO_APPROVE → PRE_APPROVED with soft finality timestamp)
• Tests RPC response shape for getTransactionFinality endpoint
• Verifies soft_finality_at timestamp behavior and validation logic

better_testing/petri/finality.test.ts


17. better_testing/petri/routing.test.ts 🧪 Tests +194/-0

Petri routing and shard member selection tests

• Tests ShardMapper returning 'default' shard in single-shard testnet mode
• Tests selectMembers determinism, count validation, and uniqueness guarantees
• Tests routing flag gating logic for Petri vs DTR paths
• Validates response shapes for both Petri and DTR routing modes

better_testing/petri/routing.test.ts


18. src/libs/consensus/petri/block/petriBlockFinalizer.ts ✨ Enhancement +197/-0

Petri block finalization with secretary coordination

• Implements secretary-driven block finalization using broadcast model
• Secretary broadcasts block hash to shard peers for verify-then-sign verification
• Collects signatures and checks BFT threshold before block insertion
• Non-secretary members wait for finalized block via broadcast/sync

src/libs/consensus/petri/block/petriBlockFinalizer.ts


19. src/libs/consensus/petri/index.ts ✨ Enhancement +180/-0

Petri Consensus protocol entry point and orchestration

• Entry point for Petri Consensus protocol implementation
• Orchestrates continuous forge loop with periodic block finalization at 10s boundaries
• Implements block period lifecycle: forge → arbitrate → compile → finalize → reset
• Re-exports all Petri phases (2-5) and coordinates consensus routine dispatch

src/libs/consensus/petri/index.ts


20. src/libs/network/manageConsensusRoutines.ts ✨ Enhancement +118/-4

Consensus routine dispatch and Petri RPC handlers

• Adds Petri Consensus dispatch logic gated by petriConsensus feature flag
• Implements petri_exchangeDeltas RPC handler for Phase 2 delta exchange
• Implements petri_submitBlockHash RPC handler for Phase 9 secretary-coordinated block signing
• Marks deprecated Secretary RPCs as no-ops when Petri is active

src/libs/network/manageConsensusRoutines.ts


21. src/libs/consensus/v2/routines/broadcastBlockHash.ts ✨ Enhancement +63/-89

Block hash broadcast refactoring with improved error handling

• Refactors to use Promise.allSettled for parallel RPC calls with better error handling
• Improves signature verification with inline error handling instead of nested promises
• Simplifies logging and response tracking for block hash broadcast
• Returns signature count and vote tallies for BFT threshold checking

src/libs/consensus/v2/routines/broadcastBlockHash.ts
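The Promise.allSettled pattern described for this file can be sketched as follows. The function and parameter names are illustrative, not the refactored file's API: RPC calls fan out in parallel, then signatures are verified one at a time so a single failed or malicious peer cannot reject the whole batch.

```typescript
// Hedged sketch: parallel peer RPCs via Promise.allSettled, then
// sequential signature verification with a vote tally.
async function collectSignatures(
    peers: string[],
    call: (peer: string) => Promise<string>, // RPC returning a signature
    verify: (peer: string, sig: string) => boolean,
): Promise<{ valid: string[]; failed: number }> {
    const results = await Promise.allSettled(peers.map((p) => call(p)))
    const valid: string[] = []
    let failed = 0
    for (let i = 0; i < results.length; i++) {
        const r = results[i]
        // A rejected call or a bad signature counts against one peer only
        if (r.status === "fulfilled" && verify(peers[i], r.value)) valid.push(r.value)
        else failed++
    }
    return { valid, failed }
}
```

The caller would then compare `valid.length` against the BFT threshold before inserting the block.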


22. src/libs/consensus/petri/block/petriBlockCompiler.ts ✨ Enhancement +151/-0

Petri block compilation with deterministic TX selection

• Compiles mempool transactions into candidate blocks at 10s boundaries
• Applies deterministic timestamp cutoff to ensure consistent TX set across nodes
• Reuses existing orderTransactions and createBlock infrastructure
• Includes cleanup function to remove rejected PROBLEMATIC transactions from mempool

src/libs/consensus/petri/block/petriBlockCompiler.ts


23. better_testing/petri/classifier.test.ts 🧪 Tests +192/-0

Transaction classification logic unit tests

• Tests transaction classification logic (PRE_APPROVED vs TO_APPROVE)
• Validates that read-only transactions with only fee/nonce edits are PRE_APPROVED
• Tests that transactions with state-changing edits are classified as TO_APPROVE
• Covers edge cases including identity edits, storage program edits, and different accounts

better_testing/petri/classifier.test.ts


24. src/libs/consensus/petri/arbitration/bftArbitrator.ts ✨ Enhancement +149/-0

BFT arbitration for delta disagreement resolution

• Implements BFT arbitration for PROBLEMATIC transactions with delta disagreement
• Re-executes transactions speculatively and exchanges deltas with shard members
• Promotes transactions to PRE_APPROVED if 2/3+1 members agree on delta hash
• Rejects transactions if agreement not reached, ensuring chain never stalls

src/libs/consensus/petri/arbitration/bftArbitrator.ts
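The 2/3+1 agreement rule reduces to a small amount of arithmetic, sketched below under assumed names (`bftThreshold`, `hasDeltaAgreement` are illustrative, not the arbitrator's real API). Note that for the default shard size of 10, the threshold works out to 7, matching the default agreement threshold in this PR.

```typescript
// Illustrative 2/3+1 BFT agreement check for a shard of n members.
function bftThreshold(shardSize: number): number {
    return Math.floor((2 * shardSize) / 3) + 1
}

// deltaHashes holds one reported delta hash per responding shard member.
// Agreement is reached when any single hash clears the threshold.
function hasDeltaAgreement(deltaHashes: string[], shardSize: number): boolean {
    const counts = new Map<string, number>()
    for (const h of deltaHashes) counts.set(h, (counts.get(h) ?? 0) + 1)
    const best = Math.max(0, ...counts.values())
    return best >= bftThreshold(shardSize)
}
```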


25. src/libs/consensus/petri/forge/deltaAgreementTracker.ts ✨ Enhancement +171/-0

Delta agreement tracking and promotion/flagging mechanism

• Tracks per-transaction delta agreement across forge rounds within a shard
• Promotes transactions to PRE_APPROVED when threshold agreement is reached
• Flags transactions as PROBLEMATIC after TTL rounds without agreement
• Provides detailed comparison data for diagnostics and round results

src/libs/consensus/petri/forge/deltaAgreementTracker.ts
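A minimal sketch of the promote/flag lifecycle, assuming a per-TX agreement count and a TTL measured in forge rounds; the class and field names here are hypothetical stand-ins for the real tracker.

```typescript
// Sketch: promote a TX once threshold agreement is reached, or flag it
// PROBLEMATIC after ttlRounds without agreement. Names are illustrative.
type Verdict = "PENDING" | "PRE_APPROVED" | "PROBLEMATIC"

class AgreementSketch {
    private firstRound = new Map<string, number>()
    private agreeing = new Map<string, number>()

    constructor(private threshold: number, private ttlRounds: number) {}

    record(txHash: string, round: number, agrees: boolean): Verdict {
        if (!this.firstRound.has(txHash)) this.firstRound.set(txHash, round)
        if (agrees) {
            const n = (this.agreeing.get(txHash) ?? 0) + 1
            this.agreeing.set(txHash, n)
            if (n >= this.threshold) return "PRE_APPROVED"
        }
        // TTL expiry: the TX has been tracked too long without agreement
        if (round - this.firstRound.get(txHash)! >= this.ttlRounds) {
            return "PROBLEMATIC"
        }
        return "PENDING"
    }
}
```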


26. better_testing/petri/speculativeExecutor.test.ts 🧪 Tests +130/-0

Speculative executor delta determinism tests

• Tests deterministic delta hashing for speculative transaction execution
• Validates that same GCR edits produce same canonical hash regardless of key order
• Tests that different amounts, accounts, and edit orders produce different hashes
• Verifies BigInt and number amount handling consistency

better_testing/petri/speculativeExecutor.test.ts


27. src/libs/blockchain/chainBlocks.ts ✨ Enhancement +89/-70

Block insertion with savepoint-based transaction isolation

• Refactors insertBlock to use QueryRunner with savepoint support
• Prevents single transaction insert failure from poisoning entire PostgreSQL transaction
• Improves error handling with per-transaction savepoints and rollback capability
• Maintains committed transaction tracking for identity commitment updates

src/libs/blockchain/chainBlocks.ts


28. src/libs/consensus/petri/execution/speculativeExecutor.ts ✨ Enhancement +130/-0

Speculative transaction execution for delta generation

• Executes transaction GCR edits in simulation mode without mutating state
• Produces deterministic StateDelta with canonical hash for cross-node agreement
• Handles balance, nonce, identity, and other GCR edit types
• Ensures two honest nodes produce identical delta hashes for same transaction

src/libs/consensus/petri/execution/speculativeExecutor.ts


29. src/libs/consensus/petri/routing/petriRouter.ts ✨ Enhancement +129/-0

Petri transaction routing to shard members

• Routes validated transactions to exactly 2 shard members deterministically
• Uses Alea PRNG seeded with transaction hash for consistent member selection
• Provides getCurrentShard helper and relay function for transaction distribution
• Ensures all nodes agree on which members handle a given transaction

src/libs/consensus/petri/routing/petriRouter.ts
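The key property is that every node, given the same TX hash and member list, derives the same two targets. The router uses the Alea PRNG; the sketch below substitutes a trivial FNV-1a-style seed purely to show the shape of the idea, so both the hash function and the function names are assumptions.

```typescript
// Stand-in seed derivation (the real router seeds Alea with the TX hash).
function seedFromHash(txHash: string): number {
    let h = 2166136261
    for (let i = 0; i < txHash.length; i++) {
        h ^= txHash.charCodeAt(i)
        h = Math.imul(h, 16777619) >>> 0
    }
    return h
}

// Pick exactly 2 distinct members. Sorting the member list first removes
// any dependence on peer-discovery order, so all nodes agree on the pair.
function pickTwoMembers(txHash: string, members: string[]): [string, string] {
    const sorted = [...members].sort()
    const seed = seedFromHash(txHash)
    const first = seed % sorted.length
    // Offset in [1, length-1] guarantees the second pick differs from the first
    const second = (first + 1 + (seed % (sorted.length - 1))) % sorted.length
    return [sorted[first], sorted[second]]
}
```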


30. better_testing/petri/canonicalJson.test.ts 🧪 Tests +107/-0

Canonical JSON serialization determinism test suite

• Comprehensive unit tests for deterministic JSON serialization with sorted keys
• Validates BigInt handling (string with 'n' suffix), Map/Set sorting, and nested object key
 ordering
• Tests determinism guarantee: identical logical data produces identical strings regardless of key
 insertion order
• Covers edge cases: null/undefined, empty collections, special characters, deeply nested structures

better_testing/petri/canonicalJson.test.ts


31. src/features/metrics/MetricsCollector.ts ✨ Enhancement +53/-0

Petri consensus observability metrics integration

• Added 6 new Petri consensus metrics gauges (enabled, forge running/paused, round number,
 pending/tracked TX counts)
• Implemented collectPetriMetrics() method to gather forge state and transaction tracking data
• Integrated Petri metrics collection into main metrics cycle with error handling
• Metrics gated by petriConsensus feature flag with graceful degradation

src/features/metrics/MetricsCollector.ts


32. testing/loadgen/src/features/consensus/petri_block_production.ts 🧪 Tests +84/-0

Petri block production devnet scenario test

• New devnet scenario validating block production when PETRI_CONSENSUS=true
• Checks Petri RPC availability via getTransactionFinality endpoint probe
• Verifies block height advancement with configurable thresholds and timeouts
• Outputs summary JSON with Petri-specific metrics (enabled node count, block deltas)

testing/loadgen/src/features/consensus/petri_block_production.ts


33. src/libs/blockchain/l2ps_mempool.ts ✨ Enhancement +19/-8

L2PS mempool Petri consensus routing integration

• Added conditional logic to skip SecretaryManager when Petri consensus is active
• Block number determination now branches: Petri uses chain directly, PoRBFT v2 uses secretary shard
 reference
• Minor formatting fixes (trailing commas in function signatures)
• Maintains backward compatibility with PoRBFT v2 fallback via feature flag

src/libs/blockchain/l2ps_mempool.ts


34. src/libs/consensus/petri/finality/transactionFinality.ts ✨ Enhancement +86/-0

Transaction finality query RPC endpoint implementation

• New RPC endpoint implementation for dual finality model (soft/hard timestamps)
• Queries both mempool (pending) and chain (confirmed) for transaction status
• Returns classification, soft finality (PRE_APPROVED timestamp), hard finality (block inclusion
 timestamp)
• Includes error handling and logging for transaction lookup failures

src/libs/consensus/petri/finality/transactionFinality.ts
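The dual-finality result can be illustrated as below. The interface fields and row shapes are hypothetical, chosen only to mirror the description (soft finality from the mempool's PRE_APPROVED timestamp, hard finality from block inclusion).

```typescript
// Illustrative shape of the dual-finality response; field names are assumed.
interface FinalityResult {
    txHash: string
    classification: string | null
    softFinalityAt: number | null // first PRE_APPROVED timestamp
    hardFinalityAt: number | null // block inclusion timestamp
    confirmed: boolean
}

// Combine a (possibly missing) mempool row and chain row into one result.
function resolveFinality(
    txHash: string,
    mempoolRow: { classification: string; soft_finality_at: number | null } | null,
    chainRow: { block_timestamp: number } | null,
): FinalityResult {
    return {
        txHash,
        classification: mempoolRow?.classification ?? null,
        softFinalityAt: mempoolRow?.soft_finality_at ?? null,
        hardFinalityAt: chainRow?.block_timestamp ?? null,
        confirmed: chainRow !== null,
    }
}
```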


35. better_testing/petri/continuousForge.test.ts 🧪 Tests +85/-0

Continuous forge state machine lifecycle tests

• State lifecycle tests for ContinuousForge state machine (start, stop, pause, resume, reset)
• Validates initial state, running/paused flags, round counter, and delta tracking
• Tests idempotency (double start ignored) and object reference isolation (getCurrentDeltas returns
 copies)
• Does not test actual forge rounds (requires DB/network) — integration tests cover that

better_testing/petri/continuousForge.test.ts


36. src/libs/consensus/petri/classifier/transactionClassifier.ts ✨ Enhancement +79/-0

Transaction classifier for Petri consensus phases

• Transaction classification logic based on GCR edit generation (empty edits = PRE_APPROVED,
 non-empty = TO_APPROVE)
• Filters out fee-only and nonce-only edits to detect true state changes
• Supports pre-computed edits to avoid redundant GCR generation
• Returns classification result with full edits array for downstream use

src/libs/consensus/petri/classifier/transactionClassifier.ts
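The classification rule described above (edits that are only fee or nonce adjustments count as "no state change") can be sketched as follows. The edit type names `"fee"` and `"nonce"` are assumptions, not the real GCR edit schema.

```typescript
// Hedged sketch of the classifier rule; GcrEdit and the type strings are
// stand-ins for the actual GCR edit representation.
type GcrEdit = { type: string }

function classify(edits: GcrEdit[]): "PRE_APPROVED" | "TO_APPROVE" {
    // Fee and nonce adjustments happen for every TX, so they are filtered
    // out before deciding whether the TX truly changes state.
    const stateChanging = edits.filter(
        e => e.type !== "fee" && e.type !== "nonce",
    )
    return stateChanging.length === 0 ? "PRE_APPROVED" : "TO_APPROVE"
}
```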


37. src/libs/consensus/v2/routines/manageProposeBlockHash.ts ✨ Enhancement +30/-1

Accept-and-sign block hash signing for Petri consensus

• Added Petri consensus accept-and-sign model: members sign secretary's block hash directly without
 verification
• Dual-mode implementation: Petri trusts deterministic secretary, PoRBFT v2 verifies-then-signs
• Imports uint8ArrayToHex for signature encoding
• Gated by petriConsensus feature flag with early return before PoRBFT v2 path

src/libs/consensus/v2/routines/manageProposeBlockHash.ts


38. testing/loadgen/src/main.ts 🧪 Tests +8/-0

Petri consensus test scenario registration

• Registered 4 new Petri consensus test scenarios: petri_block_production, petri_tx_inclusion,
 petri_relay_flow, petri_soak
• Added imports for Petri scenario functions
• Scenarios available for test suite execution and devnet validation

testing/loadgen/src/main.ts


39. src/libs/consensus/petri/types/continuousForgeTypes.ts ✨ Enhancement +43/-0

Continuous forge type definitions and interfaces

• Defines ContinuousForgeRound interface: round metadata, transactions, deltas,
 promoted/problematic TX hashes
• Defines ForgeConfig interface: forge interval, agreement threshold, problematic TTL
• Defines ForgeState interface: running/paused flags, round counter, pending transaction map
• Provides type contracts for forge loop coordination

src/libs/consensus/petri/types/continuousForgeTypes.ts


40. src/libs/network/endpointExecution.ts ✨ Enhancement +23/-0

Petri consensus transaction routing integration

• Added Petri consensus routing via petriRelay() before DTR/mempool addition
• Originating node does NOT add TX to its own mempool — shard members receive via RELAY_TX
• Early return with routing metadata when Petri flag is active
• Maintains backward compatibility with DTR flow when Petri is disabled

src/libs/network/endpointExecution.ts


41. src/libs/consensus/petri/utils/canonicalJson.ts ✨ Enhancement +45/-0

Deterministic canonical JSON serialization utility

• Deterministic JSON serialization with sorted object keys and custom replacer function
• Handles BigInt (converts to string with 'n' suffix), Maps (sorted entries), Sets (sorted values)
• Critical for delta hashing: identical objects produce identical strings regardless of key order
• Used by state delta comparison across shard members

src/libs/consensus/petri/utils/canonicalJson.ts
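A minimal sketch of the serialization behavior described above (sorted keys, BigInt as a string with an `'n'` suffix). Map/Set handling is omitted for brevity; the real utility covers those too.

```typescript
// Canonical-JSON sketch: key order can never affect the output, and BigInt
// values serialize as "<digits>n" strings instead of throwing.
function canonicalJson(value: unknown): string {
    return JSON.stringify(value, (_key, v) => {
        if (typeof v === "bigint") return v.toString() + "n"
        if (v && typeof v === "object" && !Array.isArray(v)) {
            // Rebuild the object with keys in sorted order; JSON.stringify
            // then emits properties in that order.
            return Object.keys(v)
                .sort()
                .reduce((acc: Record<string, unknown>, k) => {
                    acc[k] = (v as Record<string, unknown>)[k]
                    return acc
                }, {})
        }
        return v
    })
}
```

Because the replacer runs before `JSON.stringify` would otherwise reject a BigInt, the conversion is safe, and recursion through the replacer sorts nested objects as well.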


42. src/libs/consensus/v2/routines/ensureCandidateBlockFormed.ts ✨ Enhancement +20/-6

Petri block compilation wait logic in candidate block formation

• Added Petri consensus path: waits for Petri block compilation instead of falling back to PoRBFT v2
 consensus routine
• Polls for candidate block with 100ms intervals up to 5 seconds when Petri is active
• Preserves PoRBFT v2 path when Petri flag is disabled
• Prevents unnecessary consensus routine invocation during Petri block compilation

src/libs/consensus/v2/routines/ensureCandidateBlockFormed.ts


43. src/libs/network/rpcDispatch.ts ✨ Enhancement +21/-0

Transaction finality RPC endpoint dispatch handler

• Added getTransactionFinality RPC method handler for Petri consensus finality queries
• Validates transaction hash parameter and returns finality result with classification and
 timestamps
• Integrated into main RPC dispatch switch statement
• Provides public API for transaction status queries

src/libs/network/rpcDispatch.ts


44. src/config/loader.ts ⚙️ Configuration changes +9/-0

Petri consensus configuration loading

• Added Petri consensus configuration loading from environment variables
• Loads 6 Petri config parameters: enabled flag, forge interval, block interval, agreement
 threshold, problematic TTL, and shard size
• Integrates with existing config loader pattern using envBool() and envInt() helpers
• Defaults applied from defaults.ts

src/config/loader.ts


45. src/utilities/mainLoop.ts ✨ Enhancement +11/-1

Petri consensus routine dispatch in main loop

• Added Petri consensus routine dispatch: computes shard and calls petriConsensusRoutine() when
 flag is active
• Maintains PoRBFT v2 fallback via consensusRoutine() when Petri is disabled
• Imports shard/seed computation functions for Petri path
• Gated by getSharedState.petriConsensus feature flag

src/utilities/mainLoop.ts


46. src/config/envKeys.ts ⚙️ Configuration changes +8/-0

Petri consensus environment variable keys

• Added 6 new environment variable keys for Petri consensus configuration
• Keys: PETRI_CONSENSUS, PETRI_FORGE_INTERVAL_MS, PETRI_BLOCK_INTERVAL_MS,
 PETRI_AGREEMENT_THRESHOLD, PETRI_PROBLEMATIC_TTL_ROUNDS, PETRI_SHARD_SIZE
• Enables environment-based Petri configuration without code changes

src/config/envKeys.ts


47. src/config/index.ts ✨ Enhancement +6/-0

Petri consensus config accessor and type export

• Added petri config getter to Config class
• Exports PetriConsensusConfig type for type safety
• Maintains read-only access pattern consistent with other config sections

src/config/index.ts


48. src/libs/consensus/petri/types/classificationTypes.ts ✨ Enhancement +27/-0

Transaction classification type definitions

• Defines TransactionClassification enum: PRE_APPROVED, TO_APPROVE, PROBLEMATIC
• Defines ClassifiedTransaction interface with hash, classification, timestamps, forge round,
 delta hash, promotion/rejection tracking
• Provides type contracts for transaction state tracking through forge cycles

src/libs/consensus/petri/types/classificationTypes.ts


49. src/model/entities/Mempool.ts ✨ Enhancement +12/-0

Mempool entity Petri consensus columns

• Added classification column (text, nullable) with index for Petri transaction classification
• Added delta_hash column (text, nullable) for state delta tracking
• Added soft_finality_at column (bigint, nullable) for Petri finality timestamps
• Enables mempool to track Petri-specific transaction metadata

src/model/entities/Mempool.ts


50. testing/scripts/run-suite.ts 🧪 Tests +7/-1

Petri consensus test suite registration

• Added petri test suite type with 4 scenarios: block production, TX inclusion, relay flow, soak
 test
• Integrated into suite registry for test orchestration
• Enables coordinated Petri consensus testing across multiple scenarios

testing/scripts/run-suite.ts


51. src/libs/consensus/petri/types/stateDelta.ts ✨ Enhancement +27/-0

State delta type definitions for Petri consensus

• Defines StateDelta interface: transaction hash, GCR edits, canonical JSON hash, execution
 timestamp, block reference
• Defines PeerDelta interface: peer public key, transaction hash, delta hash, receipt timestamp
• Provides type contracts for state delta exchange and agreement tracking

src/libs/consensus/petri/types/stateDelta.ts


52. src/index.ts ✨ Enhancement +11/-0

Petri consensus initialization in application warmup

• Initialized Petri consensus configuration in warmup phase
• Sets getSharedState.petriConsensus flag and petriConfig from loaded configuration
• Integrates Petri config into shared state for runtime access

src/index.ts


53. src/libs/consensus/petri/types/deltaComparison.ts ✨ Enhancement +25/-0

Delta comparison and agreement type definitions

• Defines DeltaComparison interface: per-transaction delta agreement tracking with peer hash map,
 agreement counts
• Defines RoundDeltaResult interface: aggregated round results with promoted/problematic
 transaction lists
• Provides type contracts for delta agreement evaluation

src/libs/consensus/petri/types/deltaComparison.ts


54. src/config/defaults.ts ⚙️ Configuration changes +10/-1

Petri consensus default configuration

• Added Petri consensus default configuration: enabled=true, forge interval=2000ms, block
 interval=10000ms, agreement threshold=7, TTL=5 rounds, shard size=10
• Changed OMNI server mode default from empty string to OMNI_PREFERRED
• Provides sensible defaults for Petri testnet deployment

src/config/defaults.ts


55. src/config/types.ts ✨ Enhancement +12/-0

Petri consensus configuration type definitions

• Added PetriConsensusConfig interface with 6 configuration fields
• Added petri field to AppConfig interface
• Provides type safety for Petri configuration throughout application

src/config/types.ts


56. src/libs/communications/broadcastManager.ts ✨ Enhancement +5/-5

Block broadcast to all peers for Petri consensus

• Removed signer filter from block broadcast: now sends to ALL peers instead of filtering by
 non-signers
• Petri shard members need finalized block with all signatures, not just unsigned block
• Deduplication handled by receiving side via Chain.getBlockByHash()
• Maintains backward compatibility with PoRBFT v2

src/libs/communications/broadcastManager.ts


57. src/libs/consensus/petri/types/petriConfig.ts ✨ Enhancement +23/-0

Petri consensus configuration interface and defaults

• Defines PetriConfig interface extending ForgeConfig with enabled flag, block interval, shard
 size
• Provides DEFAULT_PETRI_CONFIG constant with conservative testnet values
• Centralizes Petri configuration contract

src/libs/consensus/petri/types/petriConfig.ts


58. src/utilities/sharedState.ts ✨ Enhancement +5/-0

Petri consensus feature flag and config in shared state

• Added petriConsensus boolean flag (default false) for feature flag gating
• Added petriConfig object with default Petri configuration
• Imports DEFAULT_PETRI_CONFIG from Petri types
• Enables runtime Petri consensus control via shared state

src/utilities/sharedState.ts


59. src/libs/consensus/petri/forge/forgeInstance.ts ✨ Enhancement +23/-0

Petri continuous forge singleton instance manager

• Singleton instance management for ContinuousForge
• Exports getPetriForgeInstance() and setPetriForgeInstance() for forge lifecycle management
• Shared between forge loop and RPC handlers for delta exchange queries

src/libs/consensus/petri/forge/forgeInstance.ts


60. src/libs/consensus/petri/routing/shardMapper.ts ✨ Enhancement +20/-0

Shard mapper for transaction routing

• Maps account addresses to shard IDs for transaction routing
• Single-shard testnet implementation: always returns 'default'
• Interface designed for future multi-shard expansion

src/libs/consensus/petri/routing/shardMapper.ts


61. src/libs/consensus/v2/routines/orderTransactions.ts ✨ Enhancement +4/-1

Deterministic transaction ordering with hash tiebreaker

• Added hash-based tiebreaker for deterministic transaction ordering when timestamps are equal
• Sorts by timestamp ascending, then by hash lexicographic comparison
• Ensures all nodes produce identical transaction order from same mempool state

src/libs/consensus/v2/routines/orderTransactions.ts
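The tiebreaker above amounts to a two-level comparator, sketched here as a standalone function (the real change lives inside `orderTransactions`):

```typescript
// Comparator sketch: timestamp ascending, then lexicographic hash. Sorting
// with this yields the same order on every node for the same TX set.
interface OrderableTx {
    hash: string
    timestamp: number
}

function compareTxs(a: OrderableTx, b: OrderableTx): number {
    if (a.timestamp !== b.timestamp) return a.timestamp - b.timestamp
    return a.hash < b.hash ? -1 : a.hash > b.hash ? 1 : 0
}
```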


62. src/libs/consensus/v2/types/secretaryManager.ts 📝 Documentation +5/-0

Secretary manager deprecation notice

• Added deprecation notice: SecretaryManager replaced by Petri Consensus leaderless coordination
• Kept for PoRBFT v2 fallback via feature flag
• Will be removed after Petri validation on testnet

src/libs/consensus/v2/types/secretaryManager.ts


63. src/libs/omniprotocol/protocol/handlers/consensus.ts 📝 Documentation +3/-0

OmniProtocol consensus handlers deprecation notice

• Added deprecation notice for secretary-related handlers (opcodes 0x35-0x38)
• Handlers replaced by Petri Consensus, kept for PoRBFT v2 fallback
• Will be removed after Petri validation

src/libs/omniprotocol/protocol/handlers/consensus.ts


64. src/libs/consensus/v2/PoRBFT.ts 📝 Documentation +2/-0

PoRBFT v2 consensus deprecation notice

• Added deprecation notice: PoRBFT v2 superseded by Petri consensus
• Retained as fallback via feature flag, will be removed after testnet validation

src/libs/consensus/v2/PoRBFT.ts


65. src/model/entities/Transactions.ts ✨ Enhancement +4/-0

Transaction entity Petri soft finality column

• Added soft_finality_at column (bigint, nullable) for Petri finality timestamp tracking
• Enables transaction entities to record soft finality timestamps

src/model/entities/Transactions.ts


66. src/libs/consensus/v2/types/validationStatusTypes.ts 📝 Documentation +3/-0

Validation status types deprecation notice

• Added deprecation notice: replaced by Petri Consensus TransactionClassification types
• Kept for PoRBFT v2 fallback via feature flag

src/libs/consensus/v2/types/validationStatusTypes.ts


67. src/libs/network/endpointValidation.ts Formatting +1/-0

Endpoint validation formatting cleanup

• Minor formatting fix: added blank line before catch block

src/libs/network/endpointValidation.ts


68. petri/01-implementation-plan.md 📝 Documentation +494/-0

Petri consensus implementation plan and phase documentation

• Comprehensive 494-line implementation plan for Petri Consensus phases 0-9
• Documents status, design decisions, current codebase structure, guiding principles
• Details all 9 phases with tasks, dependencies, acceptance criteria, and risk assessments
• Includes file structure, dependency graph, and phase execution guidance

petri/01-implementation-plan.md


69. petri/00-codebase-mapping.md 📝 Documentation +240/-0

Petri consensus codebase mapping and architecture reference

• 240-line codebase mapping document correlating PoRBFT v2 code to Petri Consensus concepts
• Maps shard formation, RPC layer, transaction classification, continuous forge, block finalization
• Documents GCR state management, secretary transition, P2P communication, L2PS integration
• Includes impact assessment and file-by-file status (KEEP/REFACTOR/NEW/REPLACE)

petri/00-codebase-mapping.md


70. petri/consensus.md 📝 Documentation +222/-0

Petri consensus implementation reference and flow documentation

• 222-line source-level reference for Petri consensus implementation
• Step-by-step flow from main loop dispatch through block finalization
• Documents secretary election, block compilation, deterministic ordering, accept-and-sign model
• Includes key invariants, configuration, and soak test results

petri/consensus.md


71. petri/VADEMECUM.md 📝 Documentation +273/-0

Petri consensus operational guide and team mode handbook

• 273-line operational guide for building Petri Consensus
• Covers team mode operation, testing approach, reporting via Mycelium, safety guardrails
• Documents design decisions, file path references, autonomy/transparency expectations
• Includes phase execution checklist and architecture diagram update protocol

petri/VADEMECUM.md


72. TEAM.md 📝 Documentation +154/-0

Team mode behavioral overlay for multi-agent collaboration

• 154-line team mode behavioral overlay for multi-agent engineering
• Defines Tech Lead role, agent tiers (You/Senior/Junior), dispatch protocols, verification workflow
• Documents anti-patterns, user interaction guidelines, edge cases
• Includes file-based persistence mechanism for compaction safety

TEAM.md


73. testing/TESTING_MAP.md 🧪 Tests +1/-0

Petri consensus test suite registration in testing map

• Added Petri consensus to active/implemented/counted test categories
• References 186 unit tests with bun run test:petri command

testing/TESTING_MAP.md


74. .env.example Additional files +10/-2

...

.env.example


75. AGENTS.md Additional files +8/-135

...

AGENTS.md


76. data/genesis.json Additional files +16/-0

...

data/genesis.json


77. package.json Additional files +1/-0

...

package.json


78. petri/02-risks-and-considerations.md Additional files +146/-0

...

petri/02-risks-and-considerations.md


79. petri/03-secretary-coordinated-signing.md Additional files +209/-0

...

petri/03-secretary-coordinated-signing.md


80. petri/architecture-diagram.md Additional files +1172/-0

...

petri/architecture-diagram.md


81. petri/petri.md Additional files +171/-0

...

petri/petri.md


82. testing/devnet/.env.example Additional files +3/-0

...

testing/devnet/.env.example


83. testing/devnet/docker-compose.yml Additional files +22/-8

...

testing/devnet/docker-compose.yml




qodo-code-review bot commented Mar 23, 2026

Code Review by Qodo

🐞 Bugs (2) 📘 Rule violations (0) 📎 Requirement gaps (0) 📐 Spec deviations (0)



Action required

1. Unverified hash signing 🐞 Bug ⛨ Security
Description
manageProposeBlockHash() signs any received blockHash when Petri is enabled, without
forming/verifying a local candidate block or comparing hashes. This allows the finalizer to count
signatures for a hash that signers did not verify against block contents.
Code

src/libs/consensus/v2/routines/manageProposeBlockHash.ts[R49-70]

+    if (getSharedState.petriConsensus) {
+        log.info(
+            "[manageProposeBlockHash] Petri active — accept-and-sign (secretary model)",
+        )
+
+        const blockSignature = await ucrypto.sign(
+            getSharedState.signingAlgorithm,
+            new TextEncoder().encode(blockHash),
+        )
+
+        log.info(
+            "[manageProposeBlockHash] Petri: signing secretary's block hash",
+        )
+        response.result = 200
+        response.response = getSharedState.publicKeyHex
+        response.extra = {
+            signatures: {
+                [getSharedState.publicKeyHex]: uint8ArrayToHex(blockSignature.signature),
+            },
+        }
+        return response
+    }
Evidence
The Petri branch returns after signing blockHash, skipping the PoRBFT v2 verification path
(ensureCandidateBlockFormed() and ourCandidateHash === blockHash). Petri finalization relies on
signature count on the block hash, and broadcastBlockHash only verifies signatures cryptographically
over the hash, not content agreement.

src/libs/consensus/v2/routines/manageProposeBlockHash.ts[45-90]
src/libs/consensus/petri/block/petriBlockFinalizer.ts[88-116]
src/libs/consensus/v2/routines/broadcastBlockHash.ts[49-66]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
When `getSharedState.petriConsensus` is true, `manageProposeBlockHash()` signs any incoming `blockHash` immediately (accept-and-sign). This bypasses the existing logic that forms a local candidate block and compares hashes before voting.
### Issue Context
Petri finalization relies on signatures collected via `broadcastBlockHash()`. If validators sign without verifying the proposed hash matches their locally compiled block, the signature threshold can be met without content agreement.
### Fix Focus Areas
- src/libs/consensus/v2/routines/manageProposeBlockHash.ts[45-70]
- src/libs/consensus/v2/routines/ensureCandidateBlockFormed.ts[6-47]
- src/libs/consensus/petri/block/petriBlockFinalizer.ts[80-115]
### Implementation notes
- In Petri mode, compile/form (or otherwise deterministically derive) the local candidate block and only sign if the locally-derived hash matches the proposed `blockHash`.
- If Petri intends secretary-coordinated signing instead, route to that mechanism rather than reusing `proposeBlockHash` for accept-and-sign.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


2. Petri relay drops transactions 🐞 Bug ⛯ Reliability
Description
handleExecuteTransaction() returns success in Petri mode before adding the tx to local
mempool/cache; if relay fails, the tx can be acknowledged but lost. There is no retry/durability
mechanism analogous to the DTR path.
Code

src/libs/network/endpointExecution.ts[R314-333]

+        // REVIEW: Petri Consensus routing — relay to 2 shard members instead of DTR.
+        // Note: This early-returns before mempool addition. The originating node does NOT
+        // add the tx to its own mempool — shard members receive it via RELAY_TX and add it
+        // to theirs. Verify this flow works end-to-end in Phase 6 integration testing.
+        if (getSharedState.petriConsensus) {
+            const { success: relaySuccess } = await petriRelay(validatedData)
+            return {
+                success: true,
+                response: {
+                    message: relaySuccess
+                        ? "Transaction routed to shard members"
+                        : "Transaction accepted locally (relay pending)",
+                },
+                extra: {
+                    confirmationBlock: getSharedState.lastBlockNumber + 1,
+                    routing: "petri",
+                },
+                require_reply: false,
+            }
+        }
Evidence
The Petri branch returns immediately after calling petriRelay(validatedData) and returns
`success: true` even when `relaySuccess` is false. The non-Petri flow below uses DTR
relay/caching patterns; Petri routing only attempts two peers and does not persist locally.

src/libs/network/endpointExecution.ts[293-345]
src/libs/consensus/petri/routing/petriRouter.ts[74-129]
src/libs/network/dtr/dtrmanager.ts[580-623]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
In Petri mode, `handleExecuteTransaction()` returns `success: true` even if routing/relay fails, and it does so before storing the transaction locally (mempool or retry cache). This can lead to silent transaction loss.
### Fix Focus Areas
- src/libs/network/endpointExecution.ts[314-333]
- src/libs/consensus/petri/routing/petriRouter.ts[74-129]
### Implementation notes
- Persist the transaction locally before returning success (e.g., add to mempool_v2 and/or store validityData in a retry queue).
- If relay fails to all targets, return an error (or return an explicit “pending relay” status and ensure background retry is implemented).



3. Skipped txs removed 🐞 Bug ✓ Correctness
Description
insertBlock() skips failed transaction inserts via savepoints but still removes all block tx hashes
from the mempool, including those not inserted. Skipped transactions are dropped even though they
were not persisted as confirmed.
Code

src/libs/blockchain/chainBlocks.ts[R251-279]

+                        await queryRunner.query(`RELEASE SAVEPOINT ${savepointName}`)
+                        committedTxHashes.push(tx.hash)
+                    } catch (error) {
+                        // Roll back only this savepoint — outer transaction stays valid
+                        await queryRunner.query(`ROLLBACK TO SAVEPOINT ${savepointName}`)
+
+                        if (error instanceof QueryFailedError) {
                          log.error(
-                                "Unexpected error while inserting tx: " +
-                                    tx.hash,
+                                `[ChainDB] [ ERROR ]: Failed to insert transaction ${tx.hash}. Skipping it ...`,
                          )
-                            handleError(error, "CHAIN", { source: "transaction insertion" })
-                            throw error
+                            log.error("Message: " + error.message)
+                            continue
                      }
-                    }
-                    if (cleanMempool) {
-                        await Mempool.removeTransactionsByHashes(
-                            transactionEntities.map(tx => tx.hash),
-                            transactionalEntityManager,
+                        log.error(
+                            "Unexpected error while inserting tx: " +
+                                tx.hash,
                      )
+                        handleError(error, "CHAIN", { source: "transaction insertion" })
+                        throw error
                  }
+                }
-                    const committedTxHashes = transactionEntities.map(
-                        tx => tx.hash,
+                if (cleanMempool) {
+                    await Mempool.removeTransactionsByHashes(
+                        transactionEntities.map(tx => tx.hash),
+                        queryRunner.manager,
                  )
-                    if (committedTxHashes.length > 0) {
-                        await transactionalEntityManager
-                            .createQueryBuilder()
-                            .update(IdentityCommitment)
-                            .set({ blockNumber: block.number })
-                            .where("transaction_hash IN (:...hashes)", {
-                                hashes: committedTxHashes,
-                            })
-                            .andWhere("leaf_index = :leafIndex", {
-                                leafIndex: -1,
-                            })
-                            .execute()
-                    }
-
-                    const commitmentsAdded =
-                        await updateMerkleTreeAfterBlock(
-                            dataSource,
-                            block.number,
-                            transactionalEntityManager,
-                        )
-                    if (commitmentsAdded > 0) {
-                        log.info(
-                            `[ZK] Added ${commitmentsAdded} commitment(s) to Merkle tree for block ${block.number}`,
-                        )
-                    }
+                }
Evidence
committedTxHashes is only appended to after a successful savepoint release, but mempool cleanup
uses `transactionEntities.map(tx => tx.hash)` (all txs) rather than `committedTxHashes`. This
removes skipped transactions from the mempool even though they were never inserted into the
transactions table.

src/libs/blockchain/chainBlocks.ts[229-279]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`insertBlock()` uses savepoints to skip individual tx insert failures, but still removes all block txs from the mempool. This drops txs that were not persisted as confirmed.
### Fix Focus Areas
- src/libs/blockchain/chainBlocks.ts[229-279]
### Implementation notes
- Change mempool cleanup to remove only `committedTxHashes`.
- Consider logging committed vs skipped counts for debugging.
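The fix amounts to intersecting the block's transactions with the hashes that actually committed. A minimal standalone sketch — `selectHashesForCleanup` is a hypothetical name, not the real `chainBlocks.ts` identifier:

```typescript
// Only hashes that survived their savepoint should reach mempool cleanup;
// everything else stays in the mempool for a later block.
function selectHashesForCleanup(
    allTxHashes: string[],
    committedTxHashes: string[],
): { toRemove: string[]; skipped: string[] } {
    const committed = new Set(committedTxHashes)
    return {
        toRemove: allTxHashes.filter(hash => committed.has(hash)),
        skipped: allTxHashes.filter(hash => !committed.has(hash)),
    }
}
```

The `skipped` array also gives the committed-vs-skipped counts the note suggests logging.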

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools



Remediation recommended

4. Petri tracker metric broken 🐞 Bug ✓ Correctness
Description
MetricsCollector checks DeltaAgreementTracker.trackedCount as a function; since it's a getter
property, the gauge is never set and Petri tracker metrics are wrong.
Code

src/features/metrics/MetricsCollector.ts[R764-767]

+            const tracker = (forge as any).tracker
+            if (tracker && typeof tracker.trackedCount === "function") {
+                this.metricsService.setGauge("petri_tracker_tx_count", tracker.trackedCount())
+            }
Evidence
DeltaAgreementTracker defines trackedCount as a getter property (get trackedCount(): number).
MetricsCollector’s typeof tracker.trackedCount === "function" will be false, so the gauge is never
set.

src/features/metrics/MetricsCollector.ts[732-767]
src/libs/consensus/petri/forge/deltaAgreementTracker.ts[164-170]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`collectPetriMetrics()` never sets `petri_tracker_tx_count` because it checks `trackedCount` as if it were a function.
### Fix Focus Areas
- src/features/metrics/MetricsCollector.ts[763-767]
### Implementation notes
- Set gauge using `tracker.trackedCount` (number).
- Optionally guard with `typeof tracker?.trackedCount === 'number'`.
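The mismatch is easy to reproduce in isolation: under `typeof`, a getter reports its value's type, never `"function"`. A standalone sketch (the class is illustrative, not the real `DeltaAgreementTracker`):

```typescript
// A getter property reads as its value, so a "function" guard never fires.
class TrackerSketch {
    private states = new Map<string, unknown>()
    get trackedCount(): number {
        return this.states.size
    }
}

const sketch = new TrackerSketch()
const guardAsFunction = typeof sketch.trackedCount === "function" // always false
const guardAsNumber = typeof sketch.trackedCount === "number" // the correct guard
```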



5. BlockNumber filter falsy 🐞 Bug ✓ Correctness
Description
Mempool.getByClassification() ignores blockNumber=0 due to a truthy check, causing unbounded queries
when 0 is a valid reference. This can return incorrect results and increase DB load.
Code

src/libs/blockchain/mempool_v2.ts[R293-296]

+        const where: Record<string, unknown> = { classification }
+        if (blockNumber) {
+            where.blockNumber = LessThanOrEqual(blockNumber)
+        }
Evidence
The new method uses if (blockNumber) to apply LessThanOrEqual(blockNumber). If a caller passes
0, the filter is skipped even though other code treats 0 as a valid block reference.

src/libs/blockchain/mempool_v2.ts[289-301]
src/libs/blockchain/l2ps_mempool.ts[159-176]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`getByClassification()` uses a truthy check for `blockNumber`, so `0` is treated as unset.
### Fix Focus Areas
- src/libs/blockchain/mempool_v2.ts[289-301]
### Implementation notes
- Replace `if (blockNumber)` with `if (blockNumber !== undefined && blockNumber !== null)` (or `if (typeof blockNumber === 'number')`).
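The truthiness pitfall can be shown with a self-contained version of the guard; `buildWhere` is a hypothetical stand-in for the mempool query builder:

```typescript
// blockNumber = 0 is dropped by a truthy check but kept by a type check.
function buildWhere(classification: string, blockNumber?: number) {
    const truthy: Record<string, unknown> = { classification }
    if (blockNumber) truthy.blockNumber = blockNumber // silently skips 0

    const explicit: Record<string, unknown> = { classification }
    if (typeof blockNumber === "number") explicit.blockNumber = blockNumber

    return { truthy, explicit }
}
```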




Advisory comments

6. Env example missing Petri keys 🐞 Bug ⚙ Maintainability
Description
.env.example documents only three PETRI_* keys but the loader supports additional Petri controls;
operators won’t discover/tune them from the example config and may unintentionally run defaults.
Code

.env.example[R84-90]

+# ===========================================
+# Petri Consensus Configuration
+# ===========================================
+# Petri is enabled by default. Set to false to fall back to PoRBFT v2.
+PETRI_CONSENSUS=true
+PETRI_FORGE_INTERVAL_MS=2000
+PETRI_BLOCK_INTERVAL_MS=10000
Evidence
Config loader reads PETRI_AGREEMENT_THRESHOLD, PETRI_PROBLEMATIC_TTL_ROUNDS, and PETRI_SHARD_SIZE,
but .env.example includes only PETRI_CONSENSUS and the two interval keys.

.env.example[84-90]
src/config/loader.ts[213-220]
src/config/envKeys.ts[77-84]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`.env.example` omits supported Petri env keys, making it harder to configure Petri consensus intentionally.
### Fix Focus Areas
- .env.example[84-90]
- src/config/loader.ts[213-220]
### Implementation notes
- Add example values and brief comments for:
- `PETRI_AGREEMENT_THRESHOLD`
- `PETRI_PROBLEMATIC_TTL_ROUNDS`
- `PETRI_SHARD_SIZE`
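A hedged sketch of the missing entries — the key names come from the loader, but the values below are illustrative placeholders, not confirmed defaults:

```shell
# Petri tuning (placeholder values — check src/config/loader.ts for the
# real defaults before copying into a deployment)
PETRI_AGREEMENT_THRESHOLD=0.66   # fraction of peers that must agree on a delta
PETRI_PROBLEMATIC_TTL_ROUNDS=10  # rounds before an unagreed tx is flagged
PETRI_SHARD_SIZE=16              # peers per verification shard
```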




Move 15 Petri test files from better_testing/petri/ to testing/petri/
to complete the directory consolidation started in c549a6f. Update all
references in package.json, CLAUDE.md, and petri/*.md docs.
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 19

Note

Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.

🟡 Minor comments (13)
TEAM.md-24-29 (1)

24-29: ⚠️ Potential issue | 🟡 Minor

Add language identifiers to fenced code blocks

These fenced blocks are missing language tags and are flagged by markdownlint (MD040).

Proposed doc fix
-```
+```text
 [TEAM MODE: ON]
 Operating as Tech Lead. I'll decompose, delegate, and verify.
 Persisted to AGENTS.md — compaction-safe.
 Standing by for a task or objective.

-```
+```text
 [TEAM MODE: OFF]
 Removed from AGENTS.md. Back to direct execution mode.

-```
+```text
 `@junior` TASK: [imperative steps]
 FILES: [exact paths]
 PATTERN: [code to mimic]
 CONSTRAINT: [explicit boundaries]
 DONE_WHEN: [observable criteria]

-```
+```text
 @senior OBJECTIVE: [what and why]
 SCOPE: [files/modules in play]
 CONTEXT: [architectural decisions, constraints]
 APPROACH: [suggested direction, not prescriptive]
 ACCEPTANCE: [what done looks like]

-```
+```text
 [PLAN] 3 subtasks: 1 Junior (scaffolding), 1 Senior (implementation), 1 Lead (integration)
 [DISPATCHED] `@junior` — scaffold endpoint files
 [DISPATCHED] `@senior` — implement auth middleware
 [VERIFIED] junior output ✓
 [VERIFIED] senior output — requested revision on error handling
 [INTEGRATING] assembling and running tests
 [DONE] feature complete, tests passing


Also applies to: 34-37, 74-80, 83-89, 112-120

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.

In @TEAM.md around lines 24 - 29, The fenced code blocks in TEAM.md lack
language identifiers and trigger markdownlint MD040; update each triple-backtick
block (the ones containing the TEAM MODE ON/OFF snippets, the `@junior`/`@senior`
templates, and the PLAN/dispatch/verification block, plus the other ranges
noted) to include a language tag (e.g., change ``` to ```text) so every fenced
block begins with a language identifier and markdownlint passes.
AGENTS.md-3-3 (1)

3-3: ⚠️ Potential issue | 🟡 Minor

Use a proper file reference for TEAM.md

@TEAM.md is ambiguous in Markdown rendering and may not resolve as a file link. Prefer TEAM.md or [TEAM.md](TEAM.md) for reliable navigation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@AGENTS.md` at line 3, In AGENTS.md replace the ambiguous reference string
"@TEAM.md" with a proper file reference such as "TEAM.md" or a Markdown link
"[TEAM.md](TEAM.md)" so the link renders and resolves correctly; update the
occurrence of the literal "@TEAM.md" in that file to the chosen form to ensure
reliable navigation.
src/libs/consensus/petri/utils/canonicalJson.ts-32-34 (1)

32-34: ⚠️ Potential issue | 🟡 Minor

Set sorting may be non-deterministic for numeric values.

Array.from(value).sort() uses lexicographic string comparison by default. For numeric Sets, this produces [1, 10, 2] instead of [1, 2, 10]. If Sets containing numbers are ever serialized, different insertion orders could yield different canonical strings—breaking the determinism guarantee.

🛡️ Suggested fix for numeric stability
     if (value instanceof Set) {
-        return Array.from(value).sort()
+        return Array.from(value).sort((a, b) => {
+            const strA = String(a)
+            const strB = String(b)
+            return strA.localeCompare(strB)
+        })
     }

Using localeCompare ensures consistent string-based ordering. Alternatively, if numeric ordering is preferred:

return Array.from(value).sort((a, b) => {
    if (typeof a === "number" && typeof b === "number") return a - b
    return String(a).localeCompare(String(b))
})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/consensus/petri/utils/canonicalJson.ts` around lines 32 - 34, The
current Set handling in canonicalJson (the branch with "if (value instanceof
Set) return Array.from(value).sort()") uses default lexicographic sort which is
non-deterministic for numeric values; update this to use a comparator that
numerically compares when both elements are numbers (a - b) and otherwise falls
back to string comparison (String(a).localeCompare(String(b))) so Sets of mixed
types serialize deterministically.
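The default-sort behavior is reproducible in isolation; the comparator below mirrors the numeric-aware fallback suggested above:

```typescript
// Default Array.prototype.sort compares stringified values, so a numeric
// Set canonicalizes in lexicographic order, not numeric order.
const lexicographic = Array.from(new Set([10, 2, 1])).sort()
// → [1, 10, 2]

const deterministic = Array.from(new Set([10, 2, 1])).sort((a, b) => {
    if (typeof a === "number" && typeof b === "number") return a - b
    return String(a).localeCompare(String(b))
})
// → [1, 2, 10]
```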
src/libs/omniprotocol/protocol/handlers/consensus.ts-2-4 (1)

2-4: ⚠️ Potential issue | 🟡 Minor

Swap the 0x36/0x37 labels in the deprecation header.

The handlers below still document getValidatorPhase as 0x36 and greenlight as 0x37, so the new header is reversed.

Suggested fix
-// `@deprecated` — Secretary-related handlers (0x35 setValidatorPhase, 0x36 greenlight,
-// 0x37 getValidatorPhase, 0x38 getBlockTimestamp) replaced by Petri Consensus.
+// `@deprecated` — Secretary-related handlers (0x35 setValidatorPhase, 0x36 getValidatorPhase,
+// 0x37 greenlight, 0x38 getBlockTimestamp) replaced by Petri Consensus.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/omniprotocol/protocol/handlers/consensus.ts` around lines 2 - 4, The
deprecation header comment labels for the Secretary-related handlers are
reversed; update the header so 0x36 is labeled "greenlight" and 0x37 is labeled
"getValidatorPhase" to match the actual handler documentation below (referencing
the getValidatorPhase and greenlight handler names in this file). Edit the
header comment at the top of consensus.ts to swap the `0x36` and `0x37` labels
accordingly so they align with the implemented handlers.
petri/02-risks-and-considerations.md-142-142 (1)

142-142: ⚠️ Potential issue | 🟡 Minor

Fix sentence fragment in the rationale.

Line [142] reads as a fragment; add a subject for clarity.

📝 Suggested wording
-| 1 | **Forge interval** | 2 seconds | Conservative start. Gives ample time for delta exchange even on high-latency networks. Can be optimized to 1s later once benchmarked. |
+| 1 | **Forge interval** | 2 seconds | Conservative start. Gives ample time for delta exchange even on high-latency networks. It can be optimized to 1s later once benchmarked. |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@petri/02-risks-and-considerations.md` at line 142, The rationale for the
"Forge interval" row is a sentence fragment; edit the table cell for "Forge
interval" (the row with subject "Forge interval" and value "2 seconds") to add a
clear subject to the rationale—e.g., change "Can be optimized to 1s later once
benchmarked." to "We can optimize this to 1s later once benchmarked."—so the
rationale reads as a complete sentence.
petri/02-risks-and-considerations.md-79-79 (1)

79-79: ⚠️ Potential issue | 🟡 Minor

The L2PSMempool “no change needed” claim is now outdated.

Line [79] conflicts with this PR’s actual changes in src/libs/blockchain/l2ps_mempool.ts. Please soften to reflect that core execution logic is unchanged, but integration/timing/block-ref plumbing did change.

📝 Suggested wording
-**No change needed** to `L2PSMempool`, `L2PSTransactionExecutor`, or `L2PSConsensus` core logic. Only the timing of `applyPendingProofs()` changes slightly.
+**No core execution-model change needed** to `L2PSTransactionExecutor` or `L2PSConsensus`. `L2PSMempool` integration paths were adjusted for Petri-aware block reference handling.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@petri/02-risks-and-considerations.md` at line 79, Update the doc line
claiming "No change needed" for L2PSMempool to reflect that while core execution
logic in L2PSMempool, L2PSTransactionExecutor, and L2PSConsensus remains
unchanged, integration/timing and block-reference plumbing did change (notably
the timing of applyPendingProofs() in L2PSMempool). Replace the absolute "No
change needed" phrasing with a softened statement specifying that core logic is
unchanged but integration/timing/block-ref plumbing (see
L2PSMempool.applyPendingProofs()) was modified to accommodate this PR.
petri/petri.md-1-2 (1)

1-2: ⚠️ Potential issue | 🟡 Minor

Remove leftover conversion artifact.

Line 1 appears to be a remnant from converting this document from another format. It should be removed as it's not part of the specification content.

🧹 Proposed fix
-I'll convert this pitch deck into a clean text-only document for you.
-
 ---
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@petri/petri.md` around lines 1 - 2, Remove the stray conversion artifact
"I'll convert this pitch deck into a clean text-only document for you." from the
top of petri.md; delete that lone line so the document starts with the actual
specification content and contains no leftover conversion notes.
petri/petri.md-89-95 (1)

89-95: ⚠️ Potential issue | 🟡 Minor

Add language identifier to fenced code block.

The ASCII diagram should have a language identifier for proper rendering and to satisfy markdown linting (MD040).

🧹 Proposed fix
-```
+```text
 CLIENT → RPC → SHARD (Verify) → MEMPOOL SYNC → [STATE DELTA]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@petri/petri.md` around lines 89 - 95, The fenced ASCII diagram in petri.md is
missing a language identifier which triggers MD040; update the opening fence for
the block containing "CLIENT → RPC → SHARD (Verify) → MEMPOOL SYNC → [STATE
DELTA]" to include a language label (e.g., use ```text) so the diagram renders
correctly and satisfies the markdown linter; ensure the closing fence remains
``` and do not alter the diagram content itself.
src/libs/consensus/petri/classifier/transactionClassifier.ts-11-12 (1)

11-12: ⚠️ Potential issue | 🟡 Minor

GCRGeneration is an undocumented export from the SDK.

The @kynesyslabs/demosdk/websdk subpath is correctly configured and exported. However, GCRGeneration does not appear in the SDK's public API documentation and is likely an internal or undocumented export. Verify with the SDK maintainers that this API is stable and intended for external use, or consider using an alternative documented export if available.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/consensus/petri/classifier/transactionClassifier.ts` around lines 11
- 12, The import of GCRGeneration from "@kynesyslabs/demosdk/websdk" in
transactionClassifier.ts references an undocumented SDK export; confirm with the
SDK maintainers whether GCRGeneration is a stable public API or replace its
usage with a documented alternative (or a supported factory/function) from the
SDK, and if unavailable remove the GCRGeneration import and refactor any code in
transactionClassifier.ts that calls GCRGeneration to use the supported API (or
add feature-flag/guarding and a TODO noting maintainer confirmation) so the file
no longer depends on an undocumented symbol.
petri/00-codebase-mapping.md-99-99 (1)

99-99: ⚠️ Potential issue | 🟡 Minor

Table formatting issue — pipe characters in inline code break column structure.

The | characters inside the backtick code block are interpreted as table column separators, causing the table to render incorrectly.

🔧 Proposed fix — escape or restructure
-| Classification result type | NEW | `{ status: 'PRE-APPROVED' | 'TO-APPROVE' | 'PROBLEMATIC', delta?: StateDelta }` |
+| Classification result type | NEW | `{ status: 'PRE-APPROVED' \| 'TO-APPROVE' \| 'PROBLEMATIC', delta?: StateDelta }` |

Or move the type definition outside the table:

-| Classification result type | NEW | `{ status: 'PRE-APPROVED' | 'TO-APPROVE' | 'PROBLEMATIC', delta?: StateDelta }` |
+| Classification result type | NEW | See type definition below |
+
+```typescript
+{ status: 'PRE-APPROVED' | 'TO-APPROVE' | 'PROBLEMATIC', delta?: StateDelta }
+```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@petri/00-codebase-mapping.md` at line 99, The table cell containing the type
for "Classification result type" breaks the Markdown table because the inline
code contains pipe characters; fix it by either escaping the pipes (replace '|'
with '\|' inside the backticks) or move the entire type definition out of the
table into a fenced code block beneath the table (use the type { status:
'PRE-APPROVED' | 'TO-APPROVE' | 'PROBLEMATIC', delta?: StateDelta } and
reference it in the table cell), updating the table row for the "Classification
result type" header to point to the external/fenced code instead of embedding
pipes inline.
src/libs/consensus/petri/finality/transactionFinality.ts-54-54 (1)

54-54: ⚠️ Potential issue | 🟡 Minor

Consider returning null instead of 0 when timestamp is missing.

Using 0 for a missing timestamp can be misleading since 0 represents a valid epoch timestamp (1970-01-01). Returning null would be more accurate for "timestamp not available".

🔧 Proposed fix
-            result.hardFinalityAt = Number(confirmedTx.content?.timestamp ?? 0)
+            result.hardFinalityAt = confirmedTx.content?.timestamp 
+                ? Number(confirmedTx.content.timestamp) 
+                : null
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/consensus/petri/finality/transactionFinality.ts` at line 54, The
assignment to result.hardFinalityAt currently coerces a missing
confirmedTx.content?.timestamp to 0; change this so that when timestamp is
absent you assign null instead (i.e., set result.hardFinalityAt =
confirmedTx.content?.timestamp ?? null) and update the type/signature of
result.hardFinalityAt (and any related interfaces or return types in
transactionFinality.ts) from number to number | null so callers using
result.hardFinalityAt handle the null case appropriately.
testing/loadgen/src/features/consensus/petri_relay_flow.ts-137-144 (1)

137-144: ⚠️ Potential issue | 🟡 Minor

Remove unnecessary as any casts for confirm() and broadcast() methods.

The SDK properly types these methods—other files in the codebase (e.g., transfer_loadgen.ts) call them directly without casts. The as any casts only serve to suppress type checking and should be removed:

const validity = await demos.confirm(signedTx)
const broadcast = await demos.broadcast(validity)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/loadgen/src/features/consensus/petri_relay_flow.ts` around lines 137
- 144, The code is using unnecessary "as any" casts when calling demos.confirm
and demos.broadcast in petri_relay_flow; remove the casts so the calls use the
SDK types directly (replace "(demos as any).confirm" with "demos.confirm" and
"(demos as any).broadcast" with "demos.broadcast"), keeping the existing await,
validity/result checks and error messages intact so TypeScript properly
type-checks confirm and broadcast.
src/libs/consensus/petri/forge/deltaAgreementTracker.ts-103-110 (1)

103-110: ⚠️ Potential issue | 🟡 Minor

Edge case: Math.max(...hashCounts.values()) can return -Infinity for empty maps.

If hashCounts is empty (theoretically possible if a state exists but memberHashes is empty), Math.max() with no arguments returns -Infinity, producing a confusing log message.

🛡️ Suggested defensive fix
             log.warn(
                 `[DeltaTracker] TX ${txHash} FLAGGED: no agreement after ${state.roundsTracked} rounds ` +
-                `(best: ${Math.max(...hashCounts.values())}/${this.threshold} needed)`,
+                `(best: ${hashCounts.size > 0 ? Math.max(...hashCounts.values()) : 0}/${this.threshold} needed)`,
             )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/consensus/petri/forge/deltaAgreementTracker.ts` around lines 103 -
110, When checking TTL expiry in deltaAgreementTracker (variables:
state.roundsTracked, this.ttlRounds), defensively handle empty hashCounts so
Math.max(...hashCounts.values()) doesn't yield -Infinity; compute a safe
maxCount from hashCounts (e.g., default to 0 when hashCounts is empty or use
Math.max(0, ...values)) and use that maxCount in the log message instead of
calling Math.max directly on possibly-empty hashCounts.values(); keep the rest
of the logic (flagged.push(txHash)) unchanged.
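The edge case is a one-liner to confirm: `Math.max` over an empty spread returns `-Infinity`, which is what an empty `hashCounts` map would put in the log line. A standalone repro of the guard:

```typescript
// Spreading an empty Map's values gives Math.max() no arguments,
// which returns -Infinity; the guard falls back to 0 instead.
const emptyCounts = new Map<string, number>()
const unsafeBest = Math.max(...emptyCounts.values()) // -Infinity
const safeBest = emptyCounts.size > 0 ? Math.max(...emptyCounts.values()) : 0
```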
🧹 Nitpick comments (27)
.gitignore (1)

293-293: Remove duplicate ignore entry for /testing/runs.

This pattern already exists at Line 289, so this addition is redundant.

Suggested cleanup
-/testing/runs
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.gitignore at line 293, Remove the duplicate ignore entry "/testing/runs"
from .gitignore (it already appears earlier in the file); locate the redundant
line containing the pattern "/testing/runs" and delete it so the file contains
only the single existing ignore entry.
src/libs/consensus/petri/execution/speculativeExecutor.ts (1)

78-82: Consider logging or validating unknown edit types.

The default case silently passes through for unrecognized edit types (storage, tls, etc.), which is documented. However, if a malformed or typo'd edit type arrives (e.g., "balancee"), it will succeed with "passthrough" rather than being caught.

Adding a debug log or maintaining an explicit allowlist would help surface unexpected edit types during development.

♻️ Suggested improvement
             default:
-                // For other GCR edit types (storage, tls, etc.), we still produce a delta
-                // but skip simulation — the edit presence itself is the state change signal
+                // Known passthrough types: storage, tls, etc.
+                log.debug(
+                    `[PetriSpecExec] Passthrough for edit type=${edit.type} on TX ${tx.hash}`,
+                )
                 result = { success: true, message: "passthrough" }
                 break
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/consensus/petri/execution/speculativeExecutor.ts` around lines 78 -
82, The default branch in speculativeExecutor.ts currently returns a silent
passthrough ({ success: true, message: "passthrough" }) for any unrecognized GCR
edit type; change this to validate the edit type against an explicit allowlist
of supported GCR edit types (e.g., storage, tls, etc.) and, if the incoming edit
type is not in the allowlist, emit a debug/warn log (using the module's logger)
and return a failure or a clearly flagged result instead of silently succeeding;
update the code around the switch/default handling in the speculative execution
function (the block assigning result = { success: true, message: "passthrough"
}) to perform the allowlist check and logging so malformed or typo'd edit types
are surfaced during development.
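An explicit allowlist makes the intent checkable. A sketch with assumed names — the real edit types live in the GCR layer, and `classifyEditType` is hypothetical:

```typescript
// Unknown edit types fail loudly instead of passing through silently.
const PASSTHROUGH_EDIT_TYPES = new Set(["storage", "tls"])

function classifyEditType(editType: string): { success: boolean; message: string } {
    if (PASSTHROUGH_EDIT_TYPES.has(editType)) {
        return { success: true, message: "passthrough" }
    }
    return { success: false, message: `unknown edit type: ${editType}` }
}
```

With this shape, a typo like `"balancee"` surfaces as a failure during development instead of a successful passthrough.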
src/libs/consensus/petri/utils/canonicalJson.ts (1)

16-18: Consider handling circular references.

JSON.stringify throws on circular structures. If any state delta inadvertently contains a cycle, the entire consensus path will crash. Depending on trust in upstream data, you may want to catch this or use a cycle-safe serializer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/consensus/petri/utils/canonicalJson.ts` around lines 16 - 18, The
canonicalJson function currently calls JSON.stringify(value, replacer, 0) which
will throw on circular structures; update canonicalJson to handle cycles by
using a cycle-safe serialization strategy (e.g., track seen objects with a
WeakSet inside the replacer or wrap the JSON.stringify call in try/catch and
fall back to a safe serializer) so circular references don't crash the consensus
path; reference the canonicalJson function and the replacer to add the WeakSet
tracking (or a fallback serializer) and ensure the output remains deterministic
for the same structural content.
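A cycle guard can ride on the `JSON.stringify` replacer. A sketch, assuming the real canonicalJson is replacer-based; note this conservative version also collapses shared non-cyclic references, which keeps it deterministic at the cost of fidelity:

```typescript
// Any revisited object (cyclic or merely shared) is replaced with a
// sentinel instead of letting JSON.stringify throw.
function safeCanonicalJson(value: unknown): string {
    const seen = new WeakSet<object>()
    return JSON.stringify(value, (_key, v) => {
        if (typeof v === "object" && v !== null) {
            if (seen.has(v)) return "[Circular]" // or throw a typed error
            seen.add(v)
        }
        return v
    })
}
```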
src/libs/consensus/v2/routines/orderTransactions.ts (1)

31-33: Remove identity map—this is a no-op.

.map(transaction => transaction) returns the same array elements unchanged. This appears to be dead code, possibly a remnant from a previous implementation that performed transformation or stringification.

♻️ Suggested simplification
-    // Stringify the transactions
-    const orderedTransactions = orderedTransactionsObjects.map(
-        transaction => transaction,
-    )
-    return orderedTransactions
+    return orderedTransactionsObjects
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/consensus/v2/routines/orderTransactions.ts` around lines 31 - 33,
The map call that creates orderedTransactions is a no-op—replace the identity
mapping of orderedTransactionsObjects with either a direct reference to
orderedTransactionsObjects or, if an independent copy is required, use a shallow
copy (e.g., spread) instead; update the code where orderedTransactions is used
so it consumes orderedTransactionsObjects (or the shallow copy) and remove the
redundant .map(transaction => transaction) to simplify the logic.
src/features/metrics/MetricsCollector.ts (1)

739-742: Improve readability of petriEnabled check.

The variable petriEnabled is assigned 1 or 0 for the gauge, then used in a boolean check (!petriEnabled). While this works due to JavaScript's truthiness rules, it's clearer to use an explicit boolean check.

♻️ Clearer boolean handling
-            const petriEnabled = getSharedState.petriConsensus ? 1 : 0
-            this.metricsService.setGauge("petri_enabled", petriEnabled)
-
-            if (!petriEnabled) return
+            const isPetriEnabled = !!getSharedState.petriConsensus
+            this.metricsService.setGauge("petri_enabled", isPetriEnabled ? 1 : 0)
+
+            if (!isPetriEnabled) return
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/metrics/MetricsCollector.ts` around lines 739 - 742, The current
code mixes a numeric flag and boolean check for petriEnabled; change it to an
explicit boolean then derive the gauge value from it: create a boolean (e.g.,
petriEnabled = !!getSharedState.petriConsensus) and pass petriEnabled ? 1 : 0 to
this.metricsService.setGauge("petri_enabled", ...), then continue using the
boolean in the if (!petriEnabled) return check so the intent is clear; update
references around petriEnabled in MetricsCollector to match.
src/model/entities/Mempool.ts (1)

49-51: Consider using bigint type for soft_finality_at property.

The column is defined as "bigint" in the database, but the TypeScript type is number | null. If soft_finality_at stores Unix timestamps in milliseconds or other large values, JavaScript number loses precision beyond Number.MAX_SAFE_INTEGER (9,007,199,254,740,991). Consider using TypeScript's bigint type for type safety.

♻️ Proposed type alignment
     // REVIEW: Petri Consensus soft finality timestamp (Phase 5)
     `@Column`("bigint", { name: "soft_finality_at", nullable: true })
-    soft_finality_at: number | null
+    soft_finality_at: bigint | null
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/model/entities/Mempool.ts` around lines 49 - 51, The Mempool entity
defines the DB column as "bigint" but types the property as number | null;
change the TypeScript type to bigint | null for soft_finality_at and add a value
transformer on the `@Column` to serialize/deserialize between DB string and JS
BigInt (or store as string if your ORM expects it) so reads return BigInt and
writes convert BigInt to the DB format; update the property declaration
soft_finality_at and the `@Column`(...) decorator in src/model/entities/Mempool.ts
accordingly to ensure safe handling of large integer timestamps.
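TypeORM's `ValueTransformer` fits here, since the Postgres driver returns `bigint` columns as strings. A sketch of the transformer object on its own, not wired to the entity:

```typescript
// Converts between JS bigint (entity side) and the string the driver
// returns for a bigint column, so precision beyond
// Number.MAX_SAFE_INTEGER survives the round trip.
const bigintTransformer = {
    to: (value: bigint | null): string | null =>
        value === null ? null : value.toString(),
    from: (value: string | null): bigint | null =>
        value === null ? null : BigInt(value),
}
```

In the entity this would be passed as `@Column("bigint", { name: "soft_finality_at", nullable: true, transformer: bigintTransformer })`.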
testing/scripts/run-suite.ts (1)

272-279: Consider adding petri suite to resolveLocalTargets for --local mode.

The resolveLocalTargets function handles special cases for several suites (startup-cold-boot, gcr-routine, cluster-health, etc.) but doesn't include petri. If the Petri suite requires specific target handling in local mode, add it here.

🔧 Potential addition if needed
 async function resolveLocalTargets(suite: string, explicitTargets: string | null): Promise<string | null> {
   if (explicitTargets) return explicitTargets
   if (suite === "startup-cold-boot") return defaultLocalTargets
   if (suite === "gcr-routine") return defaultLocalTargets
+  if (suite === "petri") return defaultLocalTargets
   if (suite === "cluster-health" || suite === "gcr-focus" || suite === "prod-gate") {
     return process.env.TARGETS ?? defaultLocalTargets
   }
   return null
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/scripts/run-suite.ts` around lines 272 - 279, The resolveLocalTargets
function omits the "petri" suite, so update resolveLocalTargets to handle petri
in local mode by adding "petri" to the appropriate conditional branch (either
the branch that returns defaultLocalTargets alongside "startup-cold-boot" and
"gcr-routine" or the branch that returns process.env.TARGETS alongside
"cluster-health"/"gcr-focus"/"prod-gate" depending on which behavior petri
needs); locate the resolveLocalTargets function and modify the conditional
checks to include "petri" and ensure defaultLocalTargets and process.env.TARGETS
behavior remains consistent.
better_testing/petri/speculativeExecutor.test.ts (1)

11-32: Avoid duplicating hashing logic inside the test helper.

Keeping a local replica of the production algorithm can give false confidence when logic changes. Prefer extracting a shared computeDeltaHash utility from runtime code and importing it here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@better_testing/petri/speculativeExecutor.test.ts` around lines 11 - 32, The
test helper duplicates the production hashing logic (computeDeltaHash) which
risks divergence; instead extract the canonical hashing routine from the
runtime/speculativeExecutor module into a shared utility and import it in the
test. Update the test to remove the local computeDeltaHash and import the
exported function (e.g., computeDeltaHash or computeDeltaHashForTests) from the
runtime code (where speculativeExecutor performs canonicalJson + Hashing.sha256)
so the test uses the same implementation; ensure the shared utility handles
bigint/string normalization the same way as speculativeExecutor.
src/libs/consensus/petri/forge/forgeInstance.ts (1)

15-23: Avoid exporting mutable singleton state directly.

Expose only getPetriForgeInstance/setPetriForgeInstance and keep the backing variable module-private to prevent accidental external mutation.

🔧 Suggested refactor
-export let petriForgeInstance: ContinuousForge | null = null
+let petriForgeInstance: ContinuousForge | null = null
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/consensus/petri/forge/forgeInstance.ts` around lines 15 - 23, The
exported mutable singleton petriForgeInstance should be module-private: remove
the export from the petriForgeInstance binding and keep only
setPetriForgeInstance and getPetriForgeInstance exported so external code can
only access it via the setter/getter; update the declaration of
petriForgeInstance (type ContinuousForge | null) to be non-exported while
leaving exported functions setPetriForgeInstance and getPetriForgeInstance
unchanged so callers use those accessors rather than mutating the variable
directly.
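The accessor-only pattern the comment recommends can be sketched in a few lines. `ContinuousForge` below is a minimal stand-in for the real class, not its actual shape:

```typescript
// Stand-in stub for the real ContinuousForge class (illustrative only).
class ContinuousForge {
    constructor(public readonly label: string) {}
}

// Module-private backing variable: without `export`, outside code cannot
// reassign it, so every mutation flows through the setter below.
let petriForgeInstance: ContinuousForge | null = null

export function setPetriForgeInstance(forge: ContinuousForge | null): void {
    petriForgeInstance = forge
}

export function getPetriForgeInstance(): ContinuousForge | null {
    return petriForgeInstance
}
```

Callers that previously imported the variable would switch to `getPetriForgeInstance()`, which keeps the only write path inside this module.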
better_testing/petri/happyPath.test.ts (1)

18-29: Minor: Mock peer structure has unusual property name.

Line 22 uses connection: { string: ... } which appears to be a typo or placeholder. While harmless in tests, consider using a more realistic property name for clarity:

-        connection: { string: `localhost:${3000 + i}` },
+        connectionString: `localhost:${3000 + i}`,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@better_testing/petri/happyPath.test.ts` around lines 18 - 29, The mockPeers
helper creates peer objects in mockPeers with an unrealistic property
connection.string; change that to a clearer property name (e.g.,
connection.address or connection.url) inside the mockPeers function so the peer
shape reads connection: { address: `localhost:${3000 + i}` }, and update any
tests that consume connection.string to use the new property (references:
mockPeers function and any assertions using connection.string).
src/libs/consensus/v2/routines/broadcastBlockHash.ts (1)

93-102: Clarify return value semantics.

The return value [signatureCount, shard.length - signatureCount] may not accurately represent pro/con votes:

  • signatureCount includes all signatures in block.validation_data.signatures, potentially including pre-existing ones before this broadcast
  • The second value assumes each shard member maps to exactly one signature, but signatures are keyed by identity (public key), not peer

The pro counter (line 79) tracks RPC successes, but isn't used in the return. If the caller expects [pro, con], consider:

🔧 Suggested clarification
-    return [signatureCount, shard.length - signatureCount]
+    // Return actual signature count for BFT threshold validation
+    // Note: signatureCount may exceed shard.length if signatures accumulate across rounds
+    return [signatureCount, Math.max(0, shard.length - signatureCount)]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/consensus/v2/routines/broadcastBlockHash.ts` around lines 93 - 102,
The return currently uses signatureCount and shard.length - signatureCount which
misrepresents pro/con; update broadcastBlockHash to return accurate pro and con:
use the existing pro counter for the first element and compute con as
shard.length - pro; alternatively, if you must derive pro from signatures, map
block.validation_data.signatures keys (identities) to shard peers to count
unique shard signers and set pro to that count and con to shard.length - pro;
ensure you reference the pro variable, shard array, and
block.validation_data.signatures when making the change.
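A hedged sketch of the identity-mapping variant the comment describes. The `Peer` shape and the premise that signatures are keyed by identity are assumptions taken from the review text, not the actual module:

```typescript
// Assumed minimal peer shape; the real Peer type lives in the node codebase.
interface Peer {
    identity: string
}

// Count pro as the number of unique shard members whose identity appears
// among the block's signature keys; con is the rest of the shard.
// Signatures from non-shard identities are ignored, so pro never exceeds
// shard.length even when signatures accumulate across rounds.
function countShardVotes(signatureIdentities: string[], shard: Peer[]): [number, number] {
    const shardIds = new Set(shard.map(p => p.identity))
    const signers = new Set(signatureIdentities.filter(id => shardIds.has(id)))
    return [signers.size, shard.length - signers.size]
}
```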
better_testing/petri/blockCompiler.test.ts (1)

173-195: Tests are trivial — consider removing or documenting intent.

These tests verify JavaScript ternary operator behavior rather than actual consensus dispatch logic. If the intent is to document expected behavior for future integration tests, consider adding a comment explaining this is a placeholder. Otherwise, these provide little value.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@better_testing/petri/blockCompiler.test.ts` around lines 173 - 195, These
tests in the "Consensus dispatch switching" describe block (tests
"petriConsensus flag gates dispatch" and "dispatch function selection is
deterministic") only assert basic ternary behavior rather than exercising the
actual dispatch logic; either delete these trivial tests or replace them with
meaningful checks that import and invoke the real dispatch/select function from
mainLoop.ts (or add a clear comment that they are placeholders documenting
expected behavior until integration tests exist). Specifically, remove or update
the tests named "petriConsensus flag gates dispatch" and "dispatch function
selection is deterministic" in blockCompiler.test.ts, and if keeping them as
placeholders add a comment referencing the real selector (e.g., the
dispatch/chooseConsensus function in mainLoop.ts) and why the tests are present.
petri/consensus.md (1)

212-222: Consider moving soak test results to a separate file or CI artifact.

Embedding specific test results (dated 2026-03-22) in documentation may become stale. Consider referencing CI artifacts or a separate test results file that can be updated independently.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@petri/consensus.md` around lines 212 - 222, The "Soak Test Results
(2026-03-22)" section in petri/consensus.md (the heading and the results table)
should be removed from the static docs and instead referenced from a separate,
updatable artifact; extract the table and heading into a new file (e.g., a
soak-results markdown or CI artifact) and replace the block under the heading
"Soak Test Results (2026-03-22)" with a short pointer like "See latest soak test
results: [link to artifact]" or a CI badge; update any references in
petri/consensus.md that mention the soak results to point to the new file/CI
artifact and ensure the unique heading "Soak Test Results (2026-03-22)" and the
table rows (Devnet size, TXs submitted, Blocks produced, Block rate, Hard
finality observed, PoRBFT activity, Test verdict) are moved to the new artifact
so they can be updated independently.
better_testing/petri/benchmark.test.ts (2)

206-236: These tests assert design constants, not actual measurements.

The "finality latency design targets" tests verify arithmetic on hardcoded values rather than measuring actual latency. This documents design expectations but doesn't validate runtime behavior. Consider renaming to "Finality Design Constants" to clarify intent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@better_testing/petri/benchmark.test.ts` around lines 206 - 236, The tests
under describe("Benchmark — Finality Latency Design Targets") are asserting
hardcoded design constants rather than measuring runtime behavior; rename the
suite and/or individual tests to make intent explicit (e.g., describe("Finality
Design Constants") and test names like "soft finality design constant", "hard
finality design constant", "finality gap design constant") so readers know these
are static expectations, not live measurements; update the strings in the
describe and the three test(...) calls to the new names (referencing the
existing describe and the three test blocks) and keep the assertions unchanged.

268-286: Use dynamic import() instead of require() for consistency.

The test uses CommonJS require() while the rest of the file uses ESM imports. For consistency and better TypeScript support, use dynamic import():

♻️ Proposed fix
 test("forge getCurrentDeltas returns copy (no reference leak)", async () => {
     const config = {
         forgeIntervalMs: 60000,
         blockIntervalMs: 10000,
         agreementThreshold: 7,
         problematicTTLRounds: 5,
     }
-    const forge = new (require("@/libs/consensus/petri/forge/continuousForge").ContinuousForge)(config)
+    const { ContinuousForge } = await import("@/libs/consensus/petri/forge/continuousForge")
+    const forge = new ContinuousForge(config)
     forge.start(mockPeers(3))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@better_testing/petri/benchmark.test.ts` around lines 268 - 286, Replace the
CommonJS require in the test with a dynamic ESM import: make the test async,
await import("@/libs/consensus/petri/forge/continuousForge"), pull
ContinuousForge from the imported module, then instantiate it (new
ContinuousForge(config)), call forge.start(...), call getCurrentDeltas twice and
assert as before, and finally call forge.stop(); ensure the test function
signature is updated to async so the dynamic import can be awaited and teardown
still runs.
petri/VADEMECUM.md (2)

32-34: Add language specifier to fenced code block.

The code block at line 32 is missing a language specifier. Since this shows a directory path, use an empty specifier or text:

📝 Proposed fix
-```
+```text
 better_testing/petri/
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.

In `@petri/VADEMECUM.md` around lines 32 - 34, The fenced code block containing
the directory path better_testing/petri/ lacks a language specifier; update the
opening fence from ``` to include a neutral language such as text (e.g.,
```text), and apply the same specifier consistently in the VADEMECUM.md
markdown.

149-197: Add language specifiers to file path code blocks.

Multiple code blocks showing file paths are missing language specifiers (lines 149, 167, 179, 189). Add text to silence linter warnings and improve rendering consistency.

📝 Proposed fix for all path blocks
-```
+```text
 src/libs/consensus/petri/
   index.ts                          # petriConsensusRoutine()

Apply the same change to code blocks at lines 167, 179, and 189.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@petri/VADEMECUM.md` around lines 149 - 197, The fenced code blocks that list
repository files (the blocks containing entries like index.ts, types/*.ts,
canonicalJson.ts, petriSecretary.ts and the other filenames shown) are missing a
language specifier; update each such triple-backtick block to use ```text so the
path listings render consistently and linter warnings are silenced, ensuring you
change all similar blocks in the document that show file/path listings.
better_testing/petri/routing.test.ts (1)

151-162: Flag gating tests are trivial and don't verify actual module behavior.

These tests only verify JavaScript ternary operator behavior, not the actual feature flag routing implementation. Consider importing and testing the actual routing logic from the implementation modules.

💡 Example of testing actual module behavior
// Instead of testing local ternary logic:
// const path = petriConsensus ? "petri" : "dtr"

// Consider importing and testing the actual routing function:
// import { getRoutingPath } from "@/libs/consensus/petri/routing/..."
// expect(getRoutingPath({ petriConsensus: true })).toBe("petri")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@better_testing/petri/routing.test.ts` around lines 151 - 162, Replace the
trivial ternary check with tests that call the real routing logic: import the
routing function (e.g., getRoutingPath or whichever function implements
flag-based routing in the consensus/petri routing module) and assert its return
for both { petriConsensus: true } and { petriConsensus: false } scenarios; if
the routing logic reads flags from a config or feature-flag service, either
inject a test config/mock or call the function with a param object to control
the flag, and update the test block in routing.test.ts to use those imports and
assertions instead of the local const path = petriConsensus ? "petri" : "dtr".
better_testing/petri/byzantineFault.test.ts (1)

24-48: Remove redundant type annotations from default parameter values.

ESLint correctly identifies that the type annotations on ttlRounds and currentRound are redundant since TypeScript can infer them from the default values.

🔧 Proposed fix
 function simulateRound(
     shardSize: number,
     byzantineCount: number,
     txHash: string,
     correctDelta: string,
-    ttlRounds: number = 5,
-    currentRound: number = 1,
+    ttlRounds = 5,
+    currentRound = 1,
 ): { promoted: string[]; flagged: string[] } {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@better_testing/petri/byzantineFault.test.ts` around lines 24 - 48, The
function simulateRound has redundant TypeScript type annotations on parameters
with default values (ttlRounds: number = 5, currentRound: number = 1); remove
the explicit ": number" annotations so the defaults alone provide the inferred
types. Update the simulateRound signature to use ttlRounds = 5 and currentRound
= 1 (leave other params and body unchanged) so linting passes while preserving
behavior in the DeltaAgreementTracker/recordDelta/evaluate workflow.
better_testing/petri/finality.test.ts (1)

112-125: Test does not actually verify idempotency behavior.

The test claims to verify that "soft finality is only set once" but it only assigns the value once and then checks it wasn't changed. To test idempotency, you would need to simulate multiple promotion attempts or call the actual updateClassification function.

💡 Suggested improvement
test("soft finality is only set once (first PRE_APPROVED)", () => {
    const firstTime = 1700000000000
    const laterTime = 1700000002000

    // Simulate updateClassification behavior: only set if null
    let softFinalityAt: number | null = null
    
    // First promotion sets the timestamp
    if (softFinalityAt === null) {
        softFinalityAt = firstTime
    }
    
    // Second promotion attempt should NOT overwrite
    if (softFinalityAt === null) {
        softFinalityAt = laterTime
    }
    
    expect(softFinalityAt).toBe(firstTime)
})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@better_testing/petri/finality.test.ts` around lines 112 - 125, The test "soft
finality is only set once (first PRE_APPROVED)" is not exercising idempotent
behavior; replace the manual single assignment with a simulation of multiple
promotion attempts (or call into updateClassification if available) so the first
assignment sets softFinalityAt and subsequent attempts do not overwrite it;
specifically, in the test body use the softFinalityAt variable and perform two
promotion steps (e.g., conditional assignments that only set softFinalityAt when
it is null) or invoke the updateClassification path that implements that logic,
then assert softFinalityAt equals firstTime and not laterTime.
testing/loadgen/src/features/consensus/petri_relay_flow.ts (1)

186-193: Consider making finality propagation delay configurable.

The hardcoded 2000ms sleep may not be appropriate for all environments. Consider using an environment variable consistent with other timing configurations in this file.

🔧 Proposed fix
     // Check finality consistency across all nodes
     const finalityChecks: NodeFinalityCheck[] = []
     if (txHash) {
         // Give finality a moment to propagate
-        await sleep(2000)
+        const finalityPropagationMs = envInt("CONSENSUS_FINALITY_PROPAGATION_MS", 2000)
+        await sleep(finalityPropagationMs)
         for (const url of rpcUrls) {
             finalityChecks.push(await checkFinalityOnNode(url, txHash))
         }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/loadgen/src/features/consensus/petri_relay_flow.ts` around lines 186
- 193, The hardcoded 2000ms delay before checking finality should be
configurable; replace the literal sleep(2000) used in the finality propagation
block (where txHash is present and you call checkFinalityOnNode for each
rpcUrls) with a configurable value read from an environment variable (e.g.,
FINALITY_PROPAGATION_MS) with a sensible default fallback (2000). Update the
code around the finalityChecks array and the await sleep(...) call to parse the
env var to a number and use that value so different environments can adjust the
delay without changing code.
better_testing/petri/liveness.test.ts (1)

214-227: Add explicit type annotation to avoid never[] inference.

The mempool array is initialized empty and then elements are pushed to it. Without a type annotation, TypeScript may infer never[] which can cause type issues.

🔧 Proposed fix
 test("high volume: many PRE_APPROVED with some PROBLEMATIC", () => {
-    const mempool = []
+    const mempool: Array<{ hash: string; classification: string }> = []
     for (let i = 0; i < 50; i++) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@better_testing/petri/liveness.test.ts` around lines 214 - 227, The mempool
variable is declared as an empty array which can infer never[]; update the
declaration in the "high volume: many PRE_APPROVED with some PROBLEMATIC" test
by giving mempool an explicit type such as { hash: string; classification:
TransactionClassification }[] (or Array<{ hash: string; classification:
TransactionClassification }>), then keep pushing the same objects so TypeScript
correctly types mempool and TransactionClassification usages.
better_testing/petri/featureFlagRollback.test.ts (1)

34-62: Dispatch logic tests are trivial — same pattern as routing tests.

These tests only verify JavaScript ternary operator behavior rather than the actual feature flag dispatch implementation. The value would be in testing the actual routing code path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@better_testing/petri/featureFlagRollback.test.ts` around lines 34 - 62, The
tests under the "Feature Flag — Dispatch Logic" describe block are only
verifying a ternary expression; replace them to call the real dispatch/routing
function (e.g., import and call your consensus dispatcher such as
chooseConsensusPath or dispatchConsensus) instead of in-test ternaries; update
each test (the "flag ON → petri path selected", "flag OFF → porbft path
selected", "flag toggle: ON → OFF → ON produces correct sequence", and "rapid
flag changes always resolve to current value") to set the feature flag, invoke
the actual dispatch function, and assert that the correct handler/path was
returned or the correct handler was invoked (use spies/mocks for the petri and
porbft handlers and verify calls) so the tests exercise real routing logic
rather than plain JS boolean expressions.
src/libs/blockchain/mempool_v2.ts (1)

318-335: Consider adding existence check or return value for updateClassification.

The method silently no-ops if the transaction hash doesn't exist. Consider returning a boolean indicating success, or logging when the update affects 0 rows to aid debugging.

Additionally, there's a minor TOCTOU window between findOne (line 329) and update (line 334) where another process could set soft_finality_at. In practice this is low-risk since the worst outcome is a slightly different timestamp, but if precision matters, consider using a conditional SQL update instead:

UPDATE mempool SET soft_finality_at = ? WHERE hash = ? AND soft_finality_at IS NULL
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/blockchain/mempool_v2.ts` around lines 318 - 335, The
updateClassification method currently silently no-ops when the txHash doesn't
exist and has a TOCTOU between repo.findOne and repo.update; update it to return
a boolean (or throw) indicating whether the update affected any rows and log
when 0 rows are affected, and eliminate the TOCTOU by performing a conditional
update that sets soft_finality_at only when NULL (e.g., use repo.update with a
where clause that includes soft_finality_at IS NULL or execute a single
conditional SQL UPDATE) rather than relying on a separate findOne; reference
updateClassification, repo.findOne and repo.update when making the change.
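The conditional-update idea can be illustrated with an in-memory model. The row and store shapes below are hypothetical stand-ins for the real entity; with the actual repository, the same effect would come from a single update whose criteria include `soft_finality_at IS NULL`:

```typescript
// Illustrative row shape; the real Mempool entity has more columns.
interface MempoolRow {
    hash: string
    soft_finality_at: number | null
}

// Mirrors: UPDATE mempool SET soft_finality_at = ? WHERE hash = ? AND soft_finality_at IS NULL
// Returns true when a row was affected and false otherwise (the return value
// the review asks for); the NULL guard closes the TOCTOU window because the
// check and the write happen as one operation.
function setSoftFinalityOnce(store: Map<string, MempoolRow>, hash: string, ts: number): boolean {
    const row = store.get(hash)
    if (!row || row.soft_finality_at !== null) return false // 0 rows affected
    row.soft_finality_at = ts
    return true // 1 row affected
}
```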
src/libs/consensus/petri/arbitration/bftArbitrator.ts (1)

82-112: Consider a dedicated RPC method for BFT arbitration.

Using roundNumber: -1 as a sentinel to distinguish arbitration from regular forge exchange (as noted in the REVIEW comment) is pragmatic but couples arbitration behavior to the forge exchange handler's implementation details.

If the arbitration protocol diverges from forge exchange (e.g., different timeout, retry logic, or response format), a dedicated petri_arbitrateDeltas RPC method would make the intent clearer and avoid accidental breakage.

This is not blocking since the current approach works correctly, but worth tracking as technical debt.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/consensus/petri/arbitration/bftArbitrator.ts` around lines 82 - 112,
The code reuses the petri_exchangeDeltas RPC with roundNumber: -1 as a sentinel
inside the consensus_routine longCall (see peers.map / longCall usage) which
couples BFT arbitration behavior to the forge exchange handler; add a dedicated
RPC (e.g., petri_arbitrateDeltas) and update the caller in bftArbitrator.ts to
invoke consensus_routine with params.method set to "petri_arbitrateDeltas" (and
define its contract/response shape), implement a corresponding handler that
returns deltas regardless of roundNumber and handles arbitration-specific
timeouts/retries/response format, and then replace the sentinel-based checks
around roundNumber and data.deltas[tx.hash] so arbitration logic is isolated
from petri_exchangeDeltas.
testing/loadgen/src/features/consensus/petri_soak.ts (1)

154-160: Type casts suggest SDK type definitions may be incomplete.

The casts to (demos as any).confirm and (demos as any).broadcast suggest these methods might not be properly typed in the SDK. While acceptable for test code, consider updating the SDK type definitions if these are valid public methods.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/loadgen/src/features/consensus/petri_soak.ts` around lines 154 - 160,
The casts (demos as any).confirm and (demos as any).broadcast indicate missing
typings for the SDK methods; fix this by updating the demos client type instead
of using any — add confirm(...) and broadcast(...) signatures to the SDK's
client interface (or the local type used in petri_soak.ts) so the code can call
demos.confirm(signedTx) and demos.broadcast(result) with proper types, then
remove the casts in petri_soak.ts (referencing the demos variable and the
confirm/broadcast methods).
petri/03-secretary-coordinated-signing.md (1)

13-28: Add language specifier to fenced code blocks for lint compliance.

The markdownlint warnings flag code blocks without language specifiers. For ASCII diagrams and pseudocode, use text or plaintext:

📝 Suggested fixes

Line 13:

-```
+```text
 All 10 shard members compile block independently (deterministic)

Line 123:

-```
+```text
 if (isWeSecretary(shard)):

Line 180:

-```
+```text
 Task 1 (petriSecretary.ts)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@petri/03-secretary-coordinated-signing.md` around lines 13 - 28, Add a
language specifier (use "text") to the fenced code blocks in
petri/03-secretary-coordinated-signing.md to satisfy markdownlint: update the
block containing the ASCII diagram starting "All 10 shard members compile block
independently (deterministic)" to ```text, the pseudocode block containing "if
(isWeSecretary(shard)):" to ```text, and the block around "Task 1
(petriSecretary.ts)" to ```text so all fenced code blocks include a language
tag.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 21dc0966-9680-4ca0-a3e2-928029a2bf1b

📥 Commits

Reviewing files that changed from the base of the PR and between 530111b and 5e9ded4.

⛔ Files ignored due to path filters (2)
  • .mycelium/mycelium.db is excluded by !**/*.db
  • petri/pitch.pdf is excluded by !**/*.pdf
📒 Files selected for processing (84)
  • .env.example
  • .gitignore
  • AGENTS.md
  • TEAM.md
  • better_testing/petri/benchmark.test.ts
  • better_testing/petri/blockCompiler.test.ts
  • better_testing/petri/byzantineFault.test.ts
  • better_testing/petri/canonicalJson.test.ts
  • better_testing/petri/classifier.test.ts
  • better_testing/petri/conflictPath.test.ts
  • better_testing/petri/continuousForge.test.ts
  • better_testing/petri/deltaTracker.test.ts
  • better_testing/petri/featureFlagRollback.test.ts
  • better_testing/petri/finality.test.ts
  • better_testing/petri/happyPath.test.ts
  • better_testing/petri/liveness.test.ts
  • better_testing/petri/routing.test.ts
  • better_testing/petri/secretaryCoordination.test.ts
  • better_testing/petri/speculativeExecutor.test.ts
  • data/genesis.json
  • package.json
  • petri/00-codebase-mapping.md
  • petri/01-implementation-plan.md
  • petri/02-risks-and-considerations.md
  • petri/03-secretary-coordinated-signing.md
  • petri/VADEMECUM.md
  • petri/architecture-diagram.md
  • petri/consensus.md
  • petri/petri.md
  • src/config/defaults.ts
  • src/config/envKeys.ts
  • src/config/index.ts
  • src/config/loader.ts
  • src/config/types.ts
  • src/features/metrics/MetricsCollector.ts
  • src/index.ts
  • src/libs/blockchain/chainBlocks.ts
  • src/libs/blockchain/l2ps_mempool.ts
  • src/libs/blockchain/mempool_v2.ts
  • src/libs/communications/broadcastManager.ts
  • src/libs/consensus/petri/arbitration/bftArbitrator.ts
  • src/libs/consensus/petri/block/petriBlockCompiler.ts
  • src/libs/consensus/petri/block/petriBlockFinalizer.ts
  • src/libs/consensus/petri/classifier/transactionClassifier.ts
  • src/libs/consensus/petri/coordination/petriSecretary.ts
  • src/libs/consensus/petri/execution/speculativeExecutor.ts
  • src/libs/consensus/petri/finality/transactionFinality.ts
  • src/libs/consensus/petri/forge/continuousForge.ts
  • src/libs/consensus/petri/forge/deltaAgreementTracker.ts
  • src/libs/consensus/petri/forge/forgeInstance.ts
  • src/libs/consensus/petri/index.ts
  • src/libs/consensus/petri/routing/petriRouter.ts
  • src/libs/consensus/petri/routing/shardMapper.ts
  • src/libs/consensus/petri/types/classificationTypes.ts
  • src/libs/consensus/petri/types/continuousForgeTypes.ts
  • src/libs/consensus/petri/types/deltaComparison.ts
  • src/libs/consensus/petri/types/petriConfig.ts
  • src/libs/consensus/petri/types/stateDelta.ts
  • src/libs/consensus/petri/utils/canonicalJson.ts
  • src/libs/consensus/v2/PoRBFT.ts
  • src/libs/consensus/v2/routines/broadcastBlockHash.ts
  • src/libs/consensus/v2/routines/ensureCandidateBlockFormed.ts
  • src/libs/consensus/v2/routines/manageProposeBlockHash.ts
  • src/libs/consensus/v2/routines/orderTransactions.ts
  • src/libs/consensus/v2/types/secretaryManager.ts
  • src/libs/consensus/v2/types/validationStatusTypes.ts
  • src/libs/network/endpointExecution.ts
  • src/libs/network/endpointValidation.ts
  • src/libs/network/manageConsensusRoutines.ts
  • src/libs/network/rpcDispatch.ts
  • src/libs/omniprotocol/protocol/handlers/consensus.ts
  • src/model/entities/Mempool.ts
  • src/model/entities/Transactions.ts
  • src/utilities/mainLoop.ts
  • src/utilities/sharedState.ts
  • testing/TESTING_MAP.md
  • testing/devnet/.env.example
  • testing/devnet/docker-compose.yml
  • testing/loadgen/src/features/consensus/petri_block_production.ts
  • testing/loadgen/src/features/consensus/petri_relay_flow.ts
  • testing/loadgen/src/features/consensus/petri_soak.ts
  • testing/loadgen/src/features/consensus/petri_tx_inclusion.ts
  • testing/loadgen/src/main.ts
  • testing/scripts/run-suite.ts

Comment on lines +17 to +19
function electSecretary<T>(shard: T[]): T {
return shard[0]
}
Contributor

⚠️ Potential issue | 🟠 Major

Use the real election logic in this suite.

This helper just returns shard[0], but the production coordinator elects from a sorted identity set that also includes the local pubkey. The tests can pass while the shipped code elects a different secretary, so they aren't validating the behavior this PR actually introduces.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@better_testing/petri/secretaryCoordination.test.ts` around lines 17 - 19, The
test helper electSecretary currently returns shard[0]; replace it with the real
election logic used in production so tests reflect actual behavior: ensure
electSecretary<T> constructs the full identity set (including the local pubkey),
sorts the identities deterministically the same way the coordinator does, and
selects the secretary from that sorted set (or call the production election
function if available) so the test suite elects the same secretary as the
running coordinator.

…ring

Address S2871 lint rule — Array.prototype.sort() without a compare
function sorts elements as strings in UTF-16 code unit order, which can
produce surprising, non-numeric ordering.
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 6

♻️ Duplicate comments (4)
testing/petri/secretaryCoordination.test.ts (1)

17-19: ⚠️ Potential issue | 🟠 Major

Test helper doesn't match production election logic.

This electSecretary returns shard[0], but production (in petriSecretary.ts) sorts all identities including the local pubkey before selecting. Tests pass while the shipped code may elect a different secretary. Import and use the real electSecretary and isWeSecretary from the production module, or replicate the exact logic including local pubkey handling.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/petri/secretaryCoordination.test.ts` around lines 17 - 19, The test
helper electSecretary in testing/petri/secretaryCoordination.test.ts is
inconsistent with production; replace the stubbed logic by importing and using
the real electSecretary (and isWeSecretary if needed) from the production module
(petriSecretary.ts), or update the helper to precisely replicate production:
include the local pubkey in the identity list, sort deterministically the full
set, then select the first element as production does; update tests to call the
imported electSecretary/isWeSecretary so test election outcomes match production
behavior.
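A sketch of the deterministic election both comments describe, under the assumption that identities are compared as plain strings; the names are illustrative, and the real petriSecretary.ts remains authoritative:

```typescript
// Build the full identity set (shard members plus the local pubkey), sort it
// with an explicit comparator (avoiding the bare-sort issue flagged by S2871),
// and take the first entry as secretary.
function electSecretaryId(shardIdentities: string[], localPubkey: string): string {
    const all = [...new Set([...shardIdentities, localPubkey])]
    all.sort((a, b) => (a < b ? -1 : a > b ? 1 : 0))
    return all[0]
}

// The caller decides the code path from this, matching the review's note that
// the local node may not appear in the shard peer list at all.
function isWeSecretary(shardIdentities: string[], localPubkey: string): boolean {
    return electSecretaryId(shardIdentities, localPubkey) === localPubkey
}
```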
src/libs/consensus/petri/coordination/petriSecretary.ts (3)

142-148: ⚠️ Potential issue | 🔴 Critical

resetCollection() at start wipes early-arriving submissions.

receiveBlockHashSubmission() buffers responses as they arrive via RPC. Calling resetCollection() at line 147 clears that buffer, dropping any submissions received before collectBlockHashes() starts. This creates a race condition where early responses become false timeouts.

🔧 Proposed fix — initialize per-collection state without clearing inbox
 export async function collectBlockHashes(
     shard: Peer[],
     block: Block,
     timeoutMs = 5000,
 ): Promise<CollectionResult> {
-    resetCollection()
+    // Don't clear pendingSubmissions — keep early-arriving responses
+    // Only reset the resolve callback for this collection round
+    collectionResolve = null
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/consensus/petri/coordination/petriSecretary.ts` around lines 142 -
148, Calling resetCollection() at the start of collectBlockHashes wipes the
global inbox and drops early-arriving submissions from
receiveBlockHashSubmission; instead, stop clearing the shared buffer and scope
collection state per-invocation: remove the resetCollection() call from
collectBlockHashes and either (a) create a new per-call collection
map/identifier that collectBlockHashes uses to gather responses while
receiveBlockHashSubmission appends to the shared inbox, or (b) if using a shared
inbox data structure, filter/consume only entries matching the current block id
within collectBlockHashes rather than clearing everything; update
collectBlockHashes to initialize its own local result-tracking (e.g., a local
Map or collectionId) and leave resetCollection() for teardown or explicit global
resets.
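A minimal sketch of option (b), with illustrative names standing in for the real Peer/submission types: the shared inbox is never cleared mid-flight, and each collection round consumes only the entries that match its own block number, so early arrivals survive.

```typescript
// Hypothetical shapes standing in for the real submission types.
interface Submission { blockNumber: number; blockHash: string; signature: string }

// Shared inbox: survives across collection rounds, never cleared mid-flight.
const pendingSubmissions = new Map<string, Submission>()

function receiveBlockHashSubmission(pubkey: string, s: Submission): void {
    pendingSubmissions.set(pubkey, s) // buffer even before collection starts
}

// Consume only entries for the current block; submissions for other blocks
// stay buffered for their own rounds.
function takeSubmissionsFor(blockNumber: number): Map<string, Submission> {
    const round = new Map<string, Submission>()
    for (const [pubkey, s] of pendingSubmissions) {
        if (s.blockNumber === blockNumber) {
            round.set(pubkey, s)
            pendingSubmissions.delete(pubkey)
        }
    }
    return round
}
```

With this shape, `collectBlockHashes()` would call `takeSubmissionsFor(block.number)` inside its poll loop instead of resetting the inbox up front.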

112-118: ⚠️ Potential issue | 🟠 Major

electSecretary() fallback to shard[0] is problematic.

When the elected secretary is the local node, found is undefined (since local pubkey isn't in the shard peer list), and the function returns shard[0] — a different peer. This causes handleSecretaryOffline() to probe the wrong member. For a single-node shard, this returns undefined.

The comment says callers should use isWeSecretary, but the function signature promises to return a Peer, which is misleading.

🔧 Proposed fix
-export function electSecretary(shard: Peer[]): Peer {
+export function electSecretary(shard: Peer[]): Peer | null {
     const secretaryId = getSecretaryIdentity(shard)
     const found = shard.find(p => p.identity === secretaryId)
-    // If we are the secretary, return shard[0] as a fallback peer reference
-    // (the caller should use isWeSecretary to decide the code path)
-    return found ?? shard[0]
+    // Returns null if the secretary is the local node (not in shard list)
+    return found ?? null
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/consensus/petri/coordination/petriSecretary.ts` around lines 112 -
118, The electSecretary function currently returns shard[0] as a fallback when
getSecretaryIdentity(...) resolves to the local node (found === undefined),
which misleads callers and causes probing the wrong peer (see
handleSecretaryOffline); change electSecretary to stop returning shard[0] and
instead return an optional Peer (e.g., Peer | undefined or null) when the
secretary identity is not present in shard, update the function signature and
callers (including places using isWeSecretary and handleSecretaryOffline) to
handle the undefined case explicitly, and ensure getSecretaryIdentity,
electSecretary, and any code assuming a guaranteed Peer are adjusted to check
for absence before proceeding.
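A compact sketch of the proposed signature change. The `Peer` shape and `getSecretaryIdentity` here are simplified stand-ins (the real election logic lives in petriSecretary.ts, and production reads the local pubkey from shared state rather than a parameter): `electSecretary` returns `null` when the local node wins, and `isWeSecretary` remains the authoritative check for that path.

```typescript
// Hypothetical Peer shape; the real type lives in the codebase.
interface Peer { identity: string }

// Deterministic election over all identities, including the local pubkey.
function getSecretaryIdentity(shard: Peer[], localPubkey: string): string {
    return [...shard.map(p => p.identity), localPubkey].sort()[0]
}

// Returns null when the elected secretary is the local node
// (local pubkey is not in the shard peer list).
function electSecretary(shard: Peer[], localPubkey: string): Peer | null {
    const secretaryId = getSecretaryIdentity(shard, localPubkey)
    return shard.find(p => p.identity === secretaryId) ?? null
}

function isWeSecretary(shard: Peer[], localPubkey: string): boolean {
    return getSecretaryIdentity(shard, localPubkey) === localPubkey
}
```

Callers such as `handleSecretaryOffline()` then branch on `null` explicitly instead of probing `shard[0]`.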

175-218: ⚠️ Potential issue | 🔴 Critical

Submissions with wrong block number or invalid signatures are re-counted each iteration.

The loop at lines 177-218 only records matching valid submissions in signatures. Submissions with:

  • Wrong block number (line 180-186): skipped with continue, re-processed next iteration
  • Mismatched hash (line 210-216): increments mismatchCount but not marked processed
  • Invalid signature (line 204-208): increments mismatchCount but not marked processed

This causes mismatchCount to inflate on each poll iteration, potentially exceeding totalMembers and making the threshold calculation incorrect.

🔧 Proposed fix — track all processed senders
     const signatures: Record<string, string> = {
         [ourPubkey]: uint8ArrayToHex(ourSignature.signature),
     }
+    const processed = new Set<string>([ourPubkey])
     let matchCount = 1 // counting ourselves
     let mismatchCount = 0

     // ...

     while (Date.now() < deadline && matchCount < totalMembers) {
         for (const [pubkey, submission] of pendingSubmissions) {
-            if (signatures[pubkey]) continue // Already processed
+            if (processed.has(pubkey)) continue // Already processed

             if (submission.blockNumber !== block.number) {
                 log.warn(/* ... */)
+                processed.add(pubkey)
+                mismatchCount++
                 continue
             }

             if (submission.blockHash === expectedHash) {
                 // Verify signature...
                 if (isValid) {
                     signatures[pubkey] = submission.signature
+                    processed.add(pubkey)
                     matchCount++
                 } else {
+                    processed.add(pubkey)
                     mismatchCount++
                 }
             } else {
+                processed.add(pubkey)
                 mismatchCount++
             }
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/consensus/petri/coordination/petriSecretary.ts` around lines 175 -
218, The loop in petriSecretary.ts re-processes the same pubkeys across
iterations because only valid matches are recorded in signatures, causing
mismatchCount to inflate; fix by marking a submission as processed regardless of
outcome: introduce a processedSenders Set (or remove entries from
pendingSubmissions) and, inside the pendingSubmissions for-loop, add pubkey to
processedSenders (or delete from pendingSubmissions) when you skip for wrong
blockNumber, on hash mismatch, or on invalid signature so each sender increments
mismatchCount at most once; keep signatures mapping only for valid signatures
and use processedSenders when deciding whether to skip already-handled senders.
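The counting fix can be isolated into a small pure function for clarity; all names and shapes here are illustrative, not the production signatures. Each sender is marked processed regardless of outcome, so re-polling the same pending map never double-counts.

```typescript
// Simplified submission shape; `valid` stands in for signature verification.
interface Sub { blockNumber: number; blockHash: string; valid: boolean }

// Tally new submissions only; each sender counts at most once across calls.
function tallySubmissions(
    pending: Map<string, Sub>,
    blockNumber: number,
    expectedHash: string,
    processed: Set<string>,
): { matches: number; mismatches: number } {
    let matches = 0
    let mismatches = 0
    for (const [pubkey, s] of pending) {
        if (processed.has(pubkey)) continue
        processed.add(pubkey) // mark regardless of outcome
        if (s.blockNumber !== blockNumber) { mismatches++; continue }
        if (s.blockHash === expectedHash && s.valid) matches++
        else mismatches++
    }
    return { matches, mismatches }
}
```

Calling this on every poll iteration with the same `processed` set keeps `mismatchCount` bounded by the number of distinct senders.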
🧹 Nitpick comments (15)
testing/petri/classifier.test.ts (1)

16-40: Unused mockTx helper function.

The mockTx function is defined but never used in any of the tests. All tests call classifyFromEdits directly with edit arrays, bypassing this helper. Consider removing it to reduce dead code, or add tests that use the full transaction mock.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/petri/classifier.test.ts` around lines 16 - 40, The helper function
mockTx is dead code and not used by any tests that call classifyFromEdits;
either delete the mockTx function declaration to remove the unused helper, or
update tests that currently construct edit arrays directly to instead create a
full transaction via mockTx(...) and pass its content.gcr_edits (or equivalent)
into classifyFromEdits so tests exercise the full transaction shape; locate the
mockTx function and adjust or remove it and update any tests referencing edit
arrays to use mockTx when opting to keep the helper.
src/libs/consensus/petri/coordination/petriSecretary.ts (1)

55-56: Module-level mutable state creates test isolation challenges.

pendingSubmissions and collectionResolve are module-level variables. This works for runtime but makes unit testing difficult — tests must call resetCollection() in setup to avoid cross-test pollution. Consider exposing a test-only reset or using a class-based approach.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/consensus/petri/coordination/petriSecretary.ts` around lines 55 -
56, The module-level mutable variables pendingSubmissions and collectionResolve
cause test pollution; encapsulate that state so tests don't need global cleanup.
Either (A) move pendingSubmissions and collectionResolve into an instance of a
PetriSecretary class (make existing exported functions become methods on that
class and update callers to use a singleton or new instance), or (B) keep the
module functions but add and export a test-only resetCollection() that clears
pendingSubmissions (pendingSubmissions.clear()) and sets collectionResolve =
null; reference the symbols pendingSubmissions, collectionResolve, and
resetCollection in your change so tests can call the reset or callers can use
the new class instance.
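A sketch of option (A) with an illustrative class name; each test constructs its own instance, so no global reset is needed between tests.

```typescript
interface Submission { blockNumber: number; blockHash: string; signature: string }

// Instance-scoped state instead of module-level mutables.
class SecretaryCollector {
    private pendingSubmissions = new Map<string, Submission>()
    private collectionResolve: (() => void) | null = null

    receive(pubkey: string, s: Submission): void {
        this.pendingSubmissions.set(pubkey, s)
        this.collectionResolve?.() // wake a waiting collection round, if any
    }

    pendingCount(): number {
        return this.pendingSubmissions.size
    }

    reset(): void {
        this.pendingSubmissions.clear()
        this.collectionResolve = null
    }
}
```

Production would keep a long-lived singleton; tests get fresh instances and full isolation for free.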
testing/petri/featureFlagRollback.test.ts (1)

34-62: Dispatch tests validate trivial logic rather than actual routing.

These tests verify that petriConsensus ? "petri" : "porbft" works correctly, but don't test the actual getSharedState.petriConsensus integration or the real dispatch paths in the codebase. Consider adding integration tests that mock getSharedState and verify actual routing behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/petri/featureFlagRollback.test.ts` around lines 34 - 62, Current
tests only assert the ternary expression; update them to mock the actual shared
state and exercise the real dispatch logic: import and spy/mock getSharedState
to return petriConsensus true/false (and sequences), then call the real
routing/dispatch function (the module that consumes getSharedState—e.g., the
dispatcher or router entrypoint used in production) and assert it chose the
"petri" or "porbft" path; also add an integration-style test that toggles the
mocked getSharedState values over time and verifies the dispatcher resolves to
the current value. Ensure you reference and spy on getSharedState and invoke the
actual dispatch function used in your codebase rather than asserting the ternary
expression directly.
petri/03-secretary-coordinated-signing.md (1)

13-28: Add language identifiers to fenced code blocks.

Per markdownlint, fenced code blocks should specify a language for proper syntax highlighting:

  • Line 13: Add text or plaintext
  • Line 123: Add typescript or text
  • Line 180: Add text or plaintext

Also applies to: 123-140, 180-192

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@petri/03-secretary-coordinated-signing.md` around lines 13 - 28, The fenced
code blocks in the document (the block starting with "All 10 shard members
compile block independently (deterministic)" and the other fenced blocks around
lines 123-140 and 180-192) lack language identifiers; update each
triple-backtick fence to include an appropriate language tag such as text or
plaintext (or typescript for the code-like block at ~123 if relevant) so
markdownlint passes and syntax highlighting is applied consistently.
testing/petri/happyPath.test.ts (2)

18-37: Extract shared test helpers to a common module.

mockPeers, bftThreshold, and isBlockValid are duplicated across multiple test files (happyPath.test.ts, blockCompiler.test.ts, routing.test.ts, conflictPath.test.ts, liveness.test.ts, byzantineFault.test.ts).

Consider creating a shared test utilities file:

// testing/petri/helpers.ts
export function mockPeers(count: number): any[] { ... }
export function bftThreshold(totalMembers: number): number { ... }
export function isBlockValid(pro: number, totalVotes: number): boolean { ... }

This would reduce duplication and ensure consistent behavior across all test suites.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/petri/happyPath.test.ts` around lines 18 - 37, Extract the duplicated
helpers into a single module and update tests to import them: create
testing/petri/helpers.ts that exports mockPeers, bftThreshold, and isBlockValid
(copy the existing implementations from happyPath.test.ts), then replace the
local declarations in happyPath.test.ts, blockCompiler.test.ts, routing.test.ts,
conflictPath.test.ts, liveness.test.ts, and byzantineFault.test.ts with imports
from that module (import { mockPeers, bftThreshold, isBlockValid } from
"testing/petri/helpers"), ensuring the exported signatures and behavior remain
identical to the originals (including mockPeers return shape and bftThreshold
logic).
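Assembled from the implementations quoted elsewhere in this review, a self-contained version of that helpers module might look like this (the `MockPeer` interface name is illustrative):

```typescript
// testing/petri/helpers.ts — consolidated from the copies in the test files.
export interface MockPeer { identity: string; connection: { string: string } }

export function mockPeers(count: number): MockPeer[] {
    return Array.from({ length: count }, (_, i) => ({
        identity: `peer_${String(i).padStart(3, "0")}`,
        connection: { string: `localhost:${3000 + i}` },
    }))
}

// Standard BFT supermajority: strictly more than 2/3 of members.
export function bftThreshold(totalMembers: number): number {
    return Math.floor((totalMembers * 2) / 3) + 1
}

export function isBlockValid(pro: number, totalVotes: number): boolean {
    return pro >= bftThreshold(totalVotes)
}
```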

196-256: Duplicate coverage with continuousForge.test.ts.

The "Forge State Machine" tests here overlap significantly with the dedicated continuousForge.test.ts suite. Consider either:

  1. Removing these tests and relying on continuousForge.test.ts for forge lifecycle coverage
  2. Keeping only the happy-path-specific integration aspects here

This would reduce test maintenance burden and clarify test ownership.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/petri/happyPath.test.ts` around lines 196 - 256, These tests
duplicate lifecycle coverage from continuousForge.test.ts; either delete the
overlapping "Happy Path — Forge State Machine" test block or trim it to only
keep higher-level integration assertions (e.g., only the end-to-end happy path
flows that exercise ContinuousForge.start/shard interaction and getState
transitions) while removing unit-like cases already covered (reset,
double-start, and lifecycle sub-assertions). Locate the tests referencing
ContinuousForge, mockPeers, and methods start/pause/resume/reset/stop in
testing/petri/happyPath.test.ts and remove or reduce them to a single
integration test that asserts state transitions for a real shard scenario,
leaving detailed lifecycle specs in continuousForge.test.ts.
testing/petri/blockCompiler.test.ts (3)

200-223: Lifecycle state sequence test has limited value.

This test constructs an array of strings and then asserts it equals the same hardcoded array. It validates the expected sequence as documentation but doesn't exercise any real code.

Consider either:

  1. Importing and testing the actual runBlockPeriod() implementation with mocks
  2. Moving this to inline documentation in the production code
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/petri/blockCompiler.test.ts` around lines 200 - 223, The test
currently only asserts a hardcoded array and doesn't exercise behavior; either
replace it with an integration/unit test that invokes runBlockPeriod() and
verifies the actual pause/compile/finalize/reset/resume lifecycle using
mocks/stubs for pause(), compileBlock(), finalizeBlock(), reset(), and resume()
(asserting the sequence and side-effects), or remove the test and move the
sequence example into inline documentation near runBlockPeriod() as a commented
example; update the test named "forge pause/resume pattern for block
compilation" in testing/petri/blockCompiler.test.ts accordingly and ensure mocks
verify call order on the real functions rather than comparing two identical
arrays.
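Option 1 could be sketched with an injected lifecycle recorder; `runBlockPeriod` below is a stand-in for the real implementation, which would need dependency injection or module mocking to be tested this way.

```typescript
// Hypothetical lifecycle interface; the real functions live on the forge.
type Lifecycle = {
    pause(): void; compileBlock(): void; finalizeBlock(): void
    reset(): void; resume(): void
}

// Each stub appends its name to a shared log, so call order is observable.
function recordedLifecycle(calls: string[]): Lifecycle {
    return {
        pause: () => calls.push("pause"),
        compileBlock: () => calls.push("compile"),
        finalizeBlock: () => calls.push("finalize"),
        reset: () => calls.push("reset"),
        resume: () => calls.push("resume"),
    }
}

// Stand-in for the production runBlockPeriod(), wired to the injected lifecycle.
function runBlockPeriod(lc: Lifecycle): void {
    lc.pause()
    lc.compileBlock()
    lc.finalizeBlock()
    lc.reset()
    lc.resume()
}
```

Asserting on the recorded sequence exercises real ordering behavior instead of comparing two identical hardcoded arrays.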

173-195: Consider removing trivial dispatch tests.

These tests only verify that a JavaScript ternary operator works correctly (flag ? "petri" : "porbft"). They don't actually test any production code paths or integration with the real dispatch logic.

If the intent is to document the expected dispatch behavior, consider importing and testing the actual dispatch function, or convert these to documentation comments instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/petri/blockCompiler.test.ts` around lines 173 - 195, The tests under
"Consensus dispatch switching" are trivially exercising a JavaScript ternary
rather than production logic; remove or replace them with real integration/unit
tests that call the actual dispatch code (e.g., import and invoke the dispatch
function used in mainLoop.ts) instead of local ternary checks in the tests
"petriConsensus flag gates dispatch" and "dispatch function selection is
deterministic"; if keeping a lightweight check for documentation, convert the
two tests into a short comment explaining expected behavior and add one real
test that imports the dispatch implementation and asserts that passing
petriConsensus=true selects the "petri" handler and false selects the "porbft"
handler.

83-169: Result structure tests use inline objects instead of actual types.

These tests construct plain objects matching expected shapes but don't import CompilationResult, ArbitrationResult, or FinalizationResult types from the production code. This means type changes in production won't cause test failures.

Consider importing and using the actual types:

import type { CompilationResult, ArbitrationResult, FinalizationResult } from "@/libs/consensus/petri/block/types"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/petri/blockCompiler.test.ts` around lines 83 - 169, Import the actual
types (CompilationResult, ArbitrationResult, FinalizationResult) and annotate
the test objects with those types instead of using plain inline objects: add
"import type { CompilationResult, ArbitrationResult, FinalizationResult } from
'@/libs/consensus/petri/block/types'" at the top, then change each test's result
declaration to use explicit typing (e.g., "const result: CompilationResult = {
... }", "const result: ArbitrationResult = { ... }", "const result:
FinalizationResult = { ... }") so the tests will fail if the production types
change; update the three describe blocks named "CompilationResult structure",
"ArbitrationResult structure", and "FinalizationResult structure" accordingly.
testing/petri/conflictPath.test.ts (1)

273-275: Move helper function definition to top of file.

isBlockValid is defined at the end of the file but used earlier (line 242). While function hoisting makes this work, it's cleaner and more readable to define helpers near the top with bftThreshold:

♻️ Suggested reordering
 // ---- Helpers ----
 
 function bftThreshold(totalMembers: number): number {
     return Math.floor((totalMembers * 2) / 3) + 1
 }
+
+function isBlockValid(pro: number, total: number): boolean {
+    return pro >= bftThreshold(total)
+}
 
 // ---- Conflict Path: Delta Disagreement → PROBLEMATIC ----
 ...
-
-function isBlockValid(pro: number, total: number): boolean {
-    return pro >= bftThreshold(total)
-}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/petri/conflictPath.test.ts` around lines 273 - 275, Move the helper
function isBlockValid so it's declared near the top of the file alongside
bftThreshold before any tests that call it; update the file by cutting the
isBlockValid(pro: number, total: number): boolean { return pro >=
bftThreshold(total) } definition from the end and pasting it above the tests
(near bftThreshold) so readers see the helper before it's used.
testing/petri/finality.test.ts (1)

200-215: Simplify classification timestamp logic test.

The loop with nested conditionals is harder to read than necessary. Consider a more direct approach:

♻️ Suggested simplification
     test("only PRE_APPROVED classification triggers soft_finality_at", () => {
-        const classifications = [
-            "TO_APPROVE",
-            "PROBLEMATIC",
-            "PRE_APPROVED",
-        ]
-
-        for (const cls of classifications) {
-            const shouldSetTimestamp = cls === "PRE_APPROVED"
-            if (cls === "PRE_APPROVED") {
-                expect(shouldSetTimestamp).toBe(true)
-            } else {
-                expect(shouldSetTimestamp).toBe(false)
-            }
-        }
+        const shouldSet = (cls: string) => cls === "PRE_APPROVED"
+        
+        expect(shouldSet("TO_APPROVE")).toBe(false)
+        expect(shouldSet("PROBLEMATIC")).toBe(false)
+        expect(shouldSet("PRE_APPROVED")).toBe(true)
     })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/petri/finality.test.ts` around lines 200 - 215, The test "only
PRE_APPROVED classification triggers soft_finality_at" currently uses an
unnecessary inner if; simplify by asserting the boolean expression directly for
each classification. Replace the loop body so that for each cls in
classifications you call expect(cls === "PRE_APPROVED").toBe(expectedValue) (or
build an expected array like [false, false, true] and assert
classifications.map(c => c === "PRE_APPROVED") equals that array). Update the
test function (the test case with that name) to use this direct assertion
pattern rather than the nested conditional.
testing/petri/routing.test.ts (1)

61-78: Probabilistic test assertion is reasonable but could be more robust.

The test checks that "at least some should differ" with diffCount > 5 out of 20 pairs. This is reasonable for a PRNG-based selection, but the threshold is somewhat arbitrary.

Consider documenting the expected collision probability or using a higher iteration count for more statistical confidence:

// With 10 peers and 2 selections, there are C(10,2) = 45 possible pairs, so
// two independent selections match with probability ~1/45. Over 20 pairs we
// expect ~19-20 to differ; a threshold of 5 gives very high confidence.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/petri/routing.test.ts` around lines 61 - 78, The probabilistic
assertion in the test using selectMembers and mockPeers is weak; increase
confidence by running more trials and tightening the threshold or documenting
the expected collision probability—e.g., run 100 iterations by looping i from
0..99, compute diffCount as you do now, and assert a higher minimum (such as
expect(diffCount).toBeGreaterThan(80)); additionally add a brief comment near
the test explaining the expected collision probability given 10 peers and 2
selections so the threshold is understandable.
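A sketch of the strengthened test, using a stand-in `selectMembers` (the real one is PRNG-based in production; the signature here is illustrative):

```typescript
// Stand-in for the production selectMembers: uniform random pick of k peers.
function selectMembers(peers: string[], k: number): string[] {
    const pool = [...peers]
    const picked: string[] = []
    for (let i = 0; i < k; i++) {
        const idx = Math.floor(Math.random() * pool.length)
        picked.push(pool.splice(idx, 1)[0])
    }
    return picked.sort() // canonical order so pairs compare by content
}

// With 10 peers and k = 2 there are C(10,2) = 45 possible pairs, so two
// independent selections collide with probability ~1/45. Over 100 trials we
// expect ~98 to differ; 80 is a very conservative lower bound.
function countDiffering(trials: number): number {
    const peers = Array.from({ length: 10 }, (_, i) => `peer_${i}`)
    let diffCount = 0
    for (let i = 0; i < trials; i++) {
        const a = selectMembers(peers, 2).join(",")
        const b = selectMembers(peers, 2).join(",")
        if (a !== b) diffCount++
    }
    return diffCount
}
```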
testing/petri/benchmark.test.ts (1)

24-29: Consider adding proper type for mock peers.

The any[] return type loses type safety. Consider defining an interface that matches what the actual peer structure expects to catch type mismatches early.

+interface MockPeer {
+    identity: string
+    connection: { string: string }
+}
+
-function mockPeers(count: number): any[] {
+function mockPeers(count: number): MockPeer[] {
     return Array.from({ length: count }, (_, i) => ({
         identity: `peer_${String(i).padStart(3, "0")}`,
         connection: { string: `localhost:${3000 + i}` },
     }))
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/petri/benchmark.test.ts` around lines 24 - 29, The mockPeers function
currently returns any[] losing type safety; define a proper interface (e.g.,
Peer or PeerInfo) matching the shape used by tests — at least identity: string
and connection: { string: string } — or import the existing peer type from the
codebase, then change function mockPeers(count: number): any[] to
mockPeers(count: number): Peer[] (or the imported type) and update any
references so the tests use the strongly typed peer objects.
petri/01-implementation-plan.md (1)

60-60: Minor: Add period after "etc".

In American English, abbreviations like "etc." require a period.

-| Chain (blocks) | `src/libs/blockchain/chainBlocks.ts` | `insertBlock()`, `getBlockByNumber()` etc |
+| Chain (blocks) | `src/libs/blockchain/chainBlocks.ts` | `insertBlock()`, `getBlockByNumber()`, etc. |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@petri/01-implementation-plan.md` at line 60, Minor punctuation fix: update
the table entry for "Chain (blocks)" in petri/01-implementation-plan.md so the
abbreviation "etc" is written as "etc." (add the missing period). Specifically,
edit the cell showing `src/libs/blockchain/chainBlocks.ts` and the function list
(`insertBlock()`, `getBlockByNumber()` etc) to read `insertBlock()`,
`getBlockByNumber()`, etc.
petri/VADEMECUM.md (1)

32-34: Add language specifiers to fenced code blocks.

Several fenced code blocks lack language specifiers. Adding them improves syntax highlighting and satisfies markdown linting rules.

✏️ Suggested fixes

For file path listings (lines 32-34, 149-164, etc.), use `text`, or leave them untagged but be consistent:

-```
+```text
 testing/petri/

Command examples (lines 65-73, 82-94) already correctly use `bash`, so no change is needed there.

For status format templates (lines 97-101), use `text`:
```diff
-```
+```text
 [PHASE X] Starting: <phase name>

Also applies to: 97-101, 149-164

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@petri/VADEMECUM.md` around lines 32 - 34, Several fenced code blocks (e.g.,
the file-path listing "testing/petri/" and the status-template "[PHASE X]
Starting: <phase name>") are missing language specifiers; update those
triple-backtick blocks to include appropriate languages (use "text" for file
listings and status templates) while keeping existing "bash" specifiers for
command examples, so all fenced blocks have explicit language markers and
satisfy markdown linting.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@petri/03-secretary-coordinated-signing.md`:
- Line 54: The doc declares petri_submitBlockHash returns { status: "collected"
| "mismatch" | "error" } but receiveBlockHashSubmission in petriSecretary.ts
currently always returns { status: "collected" }; either implement the missing
branches or align the doc: update receiveBlockHashSubmission to detect and
return "mismatch" when the submitted hash disagrees with the expected/previously
collected hash and return "error" on exceptional conditions (e.g., validation
failure or internal exception), ensuring these branches emit the correct status
string, or instead update petri/03-secretary-coordinated-signing.md to only list
{ status: "collected" } if you choose not to implement mismatch/error; reference
petri_submitBlockHash and receiveBlockHashSubmission when making the change.

In `@testing/petri/benchmark.test.ts`:
- Around line 268-286: The test "forge getCurrentDeltas returns copy (no
reference leak)" currently calls forge.start(...) and then forge.stop() but may
leak timers if an assertion throws; wrap the test body so that the
ContinuousForge instance created from ContinuousForge
(require("@/libs/consensus/petri/forge/continuousForge").ContinuousForge) is
always cleaned up by using a try/finally where forge.start(mockPeers(3)) is
before the try and forge.stop() is invoked in the finally block to ensure
forge.stop() runs even when expect(...) assertions fail.

In `@testing/petri/byzantineFault.test.ts`:
- Around line 29-30: Remove the redundant explicit type annotations on the two
test variables: change the declarations of ttlRounds: number = 5 and
currentRound: number = 1 to rely on type inference (ttlRounds = 5 and
currentRound = 1) so ESLint no longer flags trivially inferred types; update the
declarations where ttlRounds and currentRound are defined in
byzantineFault.test.ts accordingly.

In `@testing/petri/finality.test.ts`:
- Around line 112-125: The test currently only assigns softFinalityAt locally
and doesn't verify idempotency; change it to call the real updateClassification
twice and assert the stored timestamp stays the same: make the test async,
create or use a txHash, call updateClassification(txHash, "PRE_APPROVED"), read
the stored value via getTransactionFinality(txHash) (or the equivalent getter),
call updateClassification(txHash, "PRE_APPROVED") again, read the value a second
time, and assert that first.softFinalityAt === second.softFinalityAt to ensure
the timestamp is not overwritten.

In `@testing/petri/happyPath.test.ts`:
- Line 11: Remove the unused import "beforeEach" from the import statement that
currently imports describe, expect, test, mock, beforeEach (e.g., the import
line referencing bun:test) so the file only imports the symbols actually used
(describe, expect, test, mock); update the import to omit beforeEach to avoid
the unused-import lint/error.

In `@testing/petri/speculativeExecutor.test.ts`:
- Around line 12-32: The test helper computeDeltaHash should mirror production
fallback behavior: when mapping gcrEdits to editsForHashing use e.operation ??
"" and e.account ?? "" for operation and account (keep the existing
bigint/string handling for amount), then pass that through canonicalJson and
Hashing.sha256 as before so the test helper matches speculativeExecutor.ts
behavior exactly.

---

Duplicate comments:
In `@src/libs/consensus/petri/coordination/petriSecretary.ts`:
- Around line 142-148: Calling resetCollection() at the start of
collectBlockHashes wipes the global inbox and drops early-arriving submissions
from receiveBlockHashSubmission; instead, stop clearing the shared buffer and
scope collection state per-invocation: remove the resetCollection() call from
collectBlockHashes and either (a) create a new per-call collection
map/identifier that collectBlockHashes uses to gather responses while
receiveBlockHashSubmission appends to the shared inbox, or (b) if using a shared
inbox data structure, filter/consume only entries matching the current block id
within collectBlockHashes rather than clearing everything; update
collectBlockHashes to initialize its own local result-tracking (e.g., a local
Map or collectionId) and leave resetCollection() for teardown or explicit global
resets.
- Around line 112-118: The electSecretary function currently returns shard[0] as
a fallback when getSecretaryIdentity(...) resolves to the local node (found ===
undefined), which misleads callers and causes probing the wrong peer (see
handleSecretaryOffline); change electSecretary to stop returning shard[0] and
instead return an optional Peer (e.g., Peer | undefined or null) when the
secretary identity is not present in shard, update the function signature and
callers (including places using isWeSecretary and handleSecretaryOffline) to
handle the undefined case explicitly, and ensure getSecretaryIdentity,
electSecretary, and any code assuming a guaranteed Peer are adjusted to check
for absence before proceeding.
- Around line 175-218: The loop in petriSecretary.ts re-processes the same
pubkeys across iterations because only valid matches are recorded in signatures,
causing mismatchCount to inflate; fix by marking a submission as processed
regardless of outcome: introduce a processedSenders Set (or remove entries from
pendingSubmissions) and, inside the pendingSubmissions for-loop, add pubkey to
processedSenders (or delete from pendingSubmissions) when you skip for wrong
blockNumber, on hash mismatch, or on invalid signature so each sender increments
mismatchCount at most once; keep signatures mapping only for valid signatures
and use processedSenders when deciding whether to skip already-handled senders.

In `@testing/petri/secretaryCoordination.test.ts`:
- Around line 17-19: The test helper electSecretary in
testing/petri/secretaryCoordination.test.ts is inconsistent with production;
replace the stubbed logic by importing and using the real electSecretary (and
isWeSecretary if needed) from the production module (petriSecretary.ts), or
update the helper to precisely replicate production: include the local pubkey in
the identity list, sort deterministically the full set, then select the first
element as production does; update tests to call the imported
electSecretary/isWeSecretary so test election outcomes match production
behavior.

---

Nitpick comments:
In `@petri/01-implementation-plan.md`:
- Line 60: Minor punctuation fix: update the table entry for "Chain (blocks)" in
petri/01-implementation-plan.md so the abbreviation "etc" is written as "etc."
(add the missing period). Specifically, edit the cell showing
`src/libs/blockchain/chainBlocks.ts` and the function list (`insertBlock()`,
`getBlockByNumber()` etc) to read `insertBlock()`, `getBlockByNumber()`, etc.

In `@petri/03-secretary-coordinated-signing.md`:
- Around line 13-28: The fenced code blocks in the document (the block starting
with "All 10 shard members compile block independently (deterministic)" and the
other fenced blocks around lines 123-140 and 180-192) lack language identifiers;
update each triple-backtick fence to include an appropriate language tag such as
text or plaintext (or typescript for the code-like block at ~123 if relevant) so
markdownlint passes and syntax highlighting is applied consistently.

In `@petri/VADEMECUM.md`:
- Around line 32-34: Several fenced code blocks (e.g., the file-path listing
"testing/petri/" and the status-template "[PHASE X] Starting: <phase name>") are
missing language specifiers; update those triple-backtick blocks to include
appropriate languages (use "text" for file listings and status templates) while
keeping existing "bash" specifiers for command examples, so all fenced blocks
have explicit language markers and satisfy markdown linting.

In `@src/libs/consensus/petri/coordination/petriSecretary.ts`:
- Around line 55-56: The module-level mutable variables pendingSubmissions and
collectionResolve cause test pollution; encapsulate that state so tests don't
need global cleanup. Either (A) move pendingSubmissions and collectionResolve
into an instance of a PetriSecretary class (make existing exported functions
become methods on that class and update callers to use a singleton or new
instance), or (B) keep the module functions but add and export a test-only
resetCollection() that clears pendingSubmissions (pendingSubmissions.clear())
and sets collectionResolve = null; reference the symbols pendingSubmissions,
collectionResolve, and resetCollection in your change so tests can call the
reset or callers can use the new class instance.
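Option (B) could look roughly like the following; `resetCollection`, `pendingSubmissions`, and `collectionResolve` come from the comment, while `receiveSubmission` and `pendingCount` are illustrative stand-ins for the module's real exports:

```typescript
// Sketch of module-level collection state plus a test-only reset. The real
// petriSecretary.ts keeps more state; this shows only the reset pattern.
type HashSubmission = { blockHash: string; signature: string; blockNumber: number }

const pendingSubmissions = new Map<string, HashSubmission>()
let collectionResolve: ((subs: Map<string, HashSubmission>) => void) | null = null

export function receiveSubmission(pubkey: string, sub: HashSubmission): void {
    pendingSubmissions.set(pubkey, sub)
    collectionResolve?.(pendingSubmissions) // wake any registered waiter
}

export function pendingCount(): number {
    return pendingSubmissions.size
}

/** Test-only: clear collection state so suites don't pollute each other. */
export function resetCollection(): void {
    pendingSubmissions.clear()
    collectionResolve = null
}
```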

In `@testing/petri/benchmark.test.ts`:
- Around line 24-29: The mockPeers function currently returns any[] losing type
safety; define a proper interface (e.g., Peer or PeerInfo) matching the shape
used by tests — at least identity: string and connection: { string: string } —
or import the existing peer type from the codebase, then change function
mockPeers(count: number): any[] to mockPeers(count: number): Peer[] (or the
imported type) and update any references so the tests use the strongly typed
peer objects.
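A minimal typed version of the helper, assuming the shape named in the comment (`identity` plus `connection.string`); prefer importing the codebase's own peer type if one exists:

```typescript
// Sketch of a strongly typed mockPeers; the Peer interface is an assumption
// matching the minimal shape the tests use, not the production peer type.
interface Peer {
    identity: string
    connection: { string: string }
}

function mockPeers(count: number): Peer[] {
    return Array.from({ length: count }, (_, i) => ({
        identity: `peer-${i}`,
        connection: { string: `mock://peer-${i}` },
    }))
}
```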

In `@testing/petri/blockCompiler.test.ts`:
- Around line 200-223: The test currently only asserts a hardcoded array and
doesn't exercise behavior; either replace it with an integration/unit test that
invokes runBlockPeriod() and verifies the actual
pause/compile/finalize/reset/resume lifecycle using mocks/stubs for pause(),
compileBlock(), finalizeBlock(), reset(), and resume() (asserting the sequence
and side-effects), or remove the test and move the sequence example into inline
documentation near runBlockPeriod() as a commented example; update the test
named "forge pause/resume pattern for block compilation" in
testing/petri/blockCompiler.test.ts accordingly and ensure mocks verify call
order on the real functions rather than comparing two identical arrays.
- Around line 173-195: The tests under "Consensus dispatch switching" are
trivially exercising a JavaScript ternary rather than production logic; remove
or replace them with real integration/unit tests that call the actual dispatch
code (e.g., import and invoke the dispatch function used in mainLoop.ts) instead
of local ternary checks in the tests "petriConsensus flag gates dispatch" and
"dispatch function selection is deterministic"; if keeping a lightweight check
for documentation, convert the two tests into a short comment explaining
expected behavior and add one real test that imports the dispatch implementation
and asserts that passing petriConsensus=true selects the "petri" handler and
false selects the "porbft" handler.
- Around line 83-169: Import the actual types (CompilationResult,
ArbitrationResult, FinalizationResult) and annotate the test objects with those
types instead of using plain inline objects: add "import type {
CompilationResult, ArbitrationResult, FinalizationResult } from
'@/libs/consensus/petri/block/types'" at the top, then change each test's result
declaration to use explicit typing (e.g., "const result: CompilationResult = {
... }", "const result: ArbitrationResult = { ... }", "const result:
FinalizationResult = { ... }") so the tests will fail if the production types
change; update the three describe blocks named "CompilationResult structure",
"ArbitrationResult structure", and "FinalizationResult structure" accordingly.
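The call-order verification suggested in the first bullet can be done with recorded hooks; `runBlockPeriod` below is a stand-in wired to injected functions, not the production implementation, which is async and calls the real pause/compile/finalize/reset/resume:

```typescript
// Sketch of asserting lifecycle order with recorded mocks. Hook names mirror
// the functions the comment lists; the body is illustrative only.
type Hooks = {
    pause: () => void
    compileBlock: () => void
    finalizeBlock: () => void
    reset: () => void
    resume: () => void
}

function runBlockPeriod(hooks: Hooks): void {
    hooks.pause()
    hooks.compileBlock()
    hooks.finalizeBlock()
    hooks.reset()
    hooks.resume()
}

const calls: string[] = []
const record = (name: string) => () => { calls.push(name) }

runBlockPeriod({
    pause: record("pause"),
    compileBlock: record("compileBlock"),
    finalizeBlock: record("finalizeBlock"),
    reset: record("reset"),
    resume: record("resume"),
})
// calls now records the exact order the lifecycle ran in
```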

In `@testing/petri/classifier.test.ts`:
- Around line 16-40: The helper function mockTx is dead code and not used by any
tests that call classifyFromEdits; either delete the mockTx function declaration
to remove the unused helper, or update tests that currently construct edit
arrays directly to instead create a full transaction via mockTx(...) and pass
its content.gcr_edits (or equivalent) into classifyFromEdits so tests exercise
the full transaction shape; locate the mockTx function and adjust or remove it
and update any tests referencing edit arrays to use mockTx when opting to keep
the helper.

In `@testing/petri/conflictPath.test.ts`:
- Around line 273-275: Move the helper function isBlockValid so it's declared
near the top of the file alongside bftThreshold before any tests that call it;
update the file by cutting the isBlockValid(pro: number, total: number): boolean
{ return pro >= bftThreshold(total) } definition from the end and pasting it
above the tests (near bftThreshold) so readers see the helper before it's used.

In `@testing/petri/featureFlagRollback.test.ts`:
- Around line 34-62: Current tests only assert the ternary expression; update
them to mock the actual shared state and exercise the real dispatch logic:
import and spy/mock getSharedState to return petriConsensus true/false (and
sequences), then call the real routing/dispatch function (the module that
consumes getSharedState—e.g., the dispatcher or router entrypoint used in
production) and assert it chose the "petri" or "porbft" path; also add an
integration-style test that toggles the mocked getSharedState values over time
and verifies the dispatcher resolves to the current value. Ensure you reference
and spy on getSharedState and invoke the actual dispatch function used in your
codebase rather than asserting the ternary expression directly.

In `@testing/petri/finality.test.ts`:
- Around line 200-215: The test "only PRE_APPROVED classification triggers
soft_finality_at" currently uses an unnecessary inner if; simplify by asserting
the boolean expression directly for each classification. Replace the loop body
so that for each cls in classifications you call expect(cls ===
"PRE_APPROVED").toBe(expectedValue) (or build an expected array like [false,
false, true] and assert classifications.map(c => c === "PRE_APPROVED") equals
that array). Update the test function (the test case with that name) to use this
direct assertion pattern rather than the nested conditional.
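The direct-assertion pattern, sketched with the three classification values from the comment:

```typescript
// Sketch: one boolean per classification, compared against an expected
// array instead of a nested if inside the loop body.
type Classification = "PRE_APPROVED" | "TO_APPROVE" | "PROBLEMATIC"

function triggersSoftFinality(cls: Classification): boolean {
    return cls === "PRE_APPROVED"
}

const classifications: Classification[] = ["PROBLEMATIC", "TO_APPROVE", "PRE_APPROVED"]
const results = classifications.map(triggersSoftFinality)
```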

In `@testing/petri/happyPath.test.ts`:
- Around line 18-37: Extract the duplicated helpers into a single module and
update tests to import them: create testing/petri/helpers.ts that exports
mockPeers, bftThreshold, and isBlockValid (copy the existing implementations
from happyPath.test.ts), then replace the local declarations in
happyPath.test.ts, blockCompiler.test.ts, routing.test.ts, conflictPath.test.ts,
liveness.test.ts, and byzantineFault.test.ts with imports from that module
(import { mockPeers, bftThreshold, isBlockValid } from "testing/petri/helpers"),
ensuring the exported signatures and behavior remain identical to the originals
(including mockPeers return shape and bftThreshold logic).
- Around line 196-256: These tests duplicate lifecycle coverage from
continuousForge.test.ts; either delete the overlapping "Happy Path — Forge State
Machine" test block or trim it to only keep higher-level integration assertions
(e.g., only the end-to-end happy path flows that exercise
ContinuousForge.start/shard interaction and getState transitions) while removing
unit-like cases already covered (reset, double-start, and lifecycle
sub-assertions). Locate the tests referencing ContinuousForge, mockPeers, and
methods start/pause/resume/reset/stop in testing/petri/happyPath.test.ts and
remove or reduce them to a single integration test that asserts state
transitions for a real shard scenario, leaving detailed lifecycle specs in
continuousForge.test.ts.
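The shared module proposed in the first bullet could be shaped like this; the bodies shown are illustrative (the usual >2/3 supermajority for `bftThreshold`), and the real implementations should be copied verbatim from happyPath.test.ts:

```typescript
// testing/petri/helpers.ts (sketch): shared test helpers extracted from the
// duplicated declarations. Bodies are assumptions; keep the originals' logic.
export interface Peer {
    identity: string
    connection: { string: string }
}

export function mockPeers(count: number): Peer[] {
    return Array.from({ length: count }, (_, i) => ({
        identity: `peer-${i}`,
        connection: { string: `mock://peer-${i}` },
    }))
}

// BFT supermajority: strictly more than two thirds of the shard must agree.
export function bftThreshold(total: number): number {
    return Math.floor((2 * total) / 3) + 1
}

export function isBlockValid(pro: number, total: number): boolean {
    return pro >= bftThreshold(total)
}
```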

In `@testing/petri/routing.test.ts`:
- Around line 61-78: The probabilistic assertion in the test using selectMembers
and mockPeers is weak; increase confidence by running more trials and tightening
the threshold or documenting the expected collision probability—e.g., run 100
iterations by looping i from 0..99, compute diffCount as you do now, and assert
a higher minimum (such as expect(diffCount).toBeGreaterThan(80)); additionally
add a brief comment near the test explaining the expected collision probability
given 10 peers and 2 selections so the threshold is understandable.
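Sketch of the tightened check. With 10 peers and 2 picks per draw there are C(10,2) = 45 possible selections, so two independent uniform draws coincide only about 2% of the time, and well over 80 of 100 trials should differ; `selectMembers` below is a stand-in for the real implementation:

```typescript
// Stand-in for the production selectMembers: uniform sampling without
// replacement. Used only to illustrate the 100-trial assertion.
function selectMembers(peers: string[], k: number): string[] {
    const pool = [...peers]
    const picked: string[] = []
    for (let i = 0; i < k && pool.length > 0; i++) {
        const idx = Math.floor(Math.random() * pool.length)
        picked.push(pool.splice(idx, 1)[0])
    }
    return picked
}

const peers = Array.from({ length: 10 }, (_, i) => `peer-${i}`)
let diffCount = 0
for (let i = 0; i < 100; i++) {
    // Sort before joining so selection order doesn't count as a difference.
    const a = selectMembers(peers, 2).sort().join(",")
    const b = selectMembers(peers, 2).sort().join(",")
    if (a !== b) diffCount++
}
// With a ~1/45 collision probability per trial, diffCount > 80 is a safe bound.
```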

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 01c9238a-e2ec-4f83-9b93-f64e3d8b3e00

📥 Commits

Reviewing files that changed from the base of the PR and between 5e9ded4 and 5d2fffd.

📒 Files selected for processing (22)
  • package.json
  • petri/01-implementation-plan.md
  • petri/03-secretary-coordinated-signing.md
  • petri/VADEMECUM.md
  • petri/architecture-diagram.md
  • src/libs/consensus/petri/coordination/petriSecretary.ts
  • src/libs/consensus/petri/utils/canonicalJson.ts
  • testing/petri/benchmark.test.ts
  • testing/petri/blockCompiler.test.ts
  • testing/petri/byzantineFault.test.ts
  • testing/petri/canonicalJson.test.ts
  • testing/petri/classifier.test.ts
  • testing/petri/conflictPath.test.ts
  • testing/petri/continuousForge.test.ts
  • testing/petri/deltaTracker.test.ts
  • testing/petri/featureFlagRollback.test.ts
  • testing/petri/finality.test.ts
  • testing/petri/happyPath.test.ts
  • testing/petri/liveness.test.ts
  • testing/petri/routing.test.ts
  • testing/petri/secretaryCoordination.test.ts
  • testing/petri/speculativeExecutor.test.ts
✅ Files skipped from review due to trivial changes (2)
  • src/libs/consensus/petri/utils/canonicalJson.ts
  • package.json

New consensus submethod: `petri_submitBlockHash`
- **Direction**: member -> secretary
- **Request params**: `[blockHash, signature, blockNumber]`
- **Response**: `{ status: "collected" | "mismatch" | "error" }`

⚠️ Potential issue | 🟡 Minor

RPC response status "mismatch" documented but not implemented.

The doc specifies petri_submitBlockHash returns { status: "collected" | "mismatch" | "error" }, but receiveBlockHashSubmission in petriSecretary.ts (line 80) always returns { status: "collected" }. Either update the implementation to return appropriate status codes, or update the doc to reflect actual behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@petri/03-secretary-coordinated-signing.md` at line 54, The doc declares
petri_submitBlockHash returns { status: "collected" | "mismatch" | "error" } but
receiveBlockHashSubmission in petriSecretary.ts currently always returns {
status: "collected" }; either implement the missing branches or align the doc:
update receiveBlockHashSubmission to detect and return "mismatch" when the
submitted hash disagrees with the expected/previously collected hash and return
"error" on exceptional conditions (e.g., validation failure or internal
exception), ensuring these branches emit the correct status string, or instead
update petri/03-secretary-coordinated-signing.md to only list { status:
"collected" } if you choose not to implement mismatch/error; reference
petri_submitBlockHash and receiveBlockHashSubmission when making the change.
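If the implementation route is chosen, the branching could look like this; the signature is hypothetical and assumes the secretary already knows its own expected hash for the round, which the real receiveBlockHashSubmission may obtain differently:

```typescript
// Sketch of the three status branches for petri_submitBlockHash handling.
// Arguments and the injected signature check are assumptions for illustration.
type SubmitStatus = { status: "collected" | "mismatch" | "error" }

function receiveBlockHashSubmission(
    expectedHash: string,
    expectedBlockNumber: number,
    blockHash: string,
    blockNumber: number,
    verifySignature: () => boolean,
): SubmitStatus {
    try {
        if (!verifySignature()) return { status: "error" } // validation failure
        if (blockNumber !== expectedBlockNumber || blockHash !== expectedHash) {
            return { status: "mismatch" } // submitted hash disagrees
        }
        return { status: "collected" }
    } catch {
        return { status: "error" } // internal exception
    }
}
```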

Comment on lines +268 to +286
test("forge getCurrentDeltas returns copy (no reference leak)", () => {
const config = {
forgeIntervalMs: 60000,
blockIntervalMs: 10000,
agreementThreshold: 7,
problematicTTLRounds: 5,
}
const forge = new (require("@/libs/consensus/petri/forge/continuousForge").ContinuousForge)(config)
forge.start(mockPeers(3))

const deltas1 = forge.getCurrentDeltas()
const deltas2 = forge.getCurrentDeltas()

// Should be different object references (spread copy)
expect(deltas1).not.toBe(deltas2)
expect(deltas1).toEqual(deltas2)

forge.stop()
})

⚠️ Potential issue | 🟡 Minor

Ensure forge cleanup on test failure.

If assertions fail before forge.stop() is called, the forge's internal timers may continue running, potentially affecting subsequent tests. Consider wrapping in try/finally:

🛡️ Proposed fix
     test("forge getCurrentDeltas returns copy (no reference leak)", () => {
         const config = {
             forgeIntervalMs: 60000,
             blockIntervalMs: 10000,
             agreementThreshold: 7,
             problematicTTLRounds: 5,
         }
         const forge = new (require("@/libs/consensus/petri/forge/continuousForge").ContinuousForge)(config)
         forge.start(mockPeers(3))

-        const deltas1 = forge.getCurrentDeltas()
-        const deltas2 = forge.getCurrentDeltas()
-
-        // Should be different object references (spread copy)
-        expect(deltas1).not.toBe(deltas2)
-        expect(deltas1).toEqual(deltas2)
-
-        forge.stop()
+        try {
+            const deltas1 = forge.getCurrentDeltas()
+            const deltas2 = forge.getCurrentDeltas()
+
+            // Should be different object references (spread copy)
+            expect(deltas1).not.toBe(deltas2)
+            expect(deltas1).toEqual(deltas2)
+        } finally {
+            forge.stop()
+        }
     })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/petri/benchmark.test.ts` around lines 268 - 286, The test "forge
getCurrentDeltas returns copy (no reference leak)" currently calls
forge.start(...) and then forge.stop() but may leak timers if an assertion
throws; wrap the test body so that the ContinuousForge instance created from
ContinuousForge
(require("@/libs/consensus/petri/forge/continuousForge").ContinuousForge) is
always cleaned up by using a try/finally where forge.start(mockPeers(3)) is
before the try and forge.stop() is invoked in the finally block to ensure
forge.stop() runs even when expect(...) assertions fail.

Comment on lines +29 to +30
ttlRounds: number = 5,
currentRound: number = 1,

⚠️ Potential issue | 🟡 Minor

Remove redundant type annotations.

ESLint correctly flags that these types are trivially inferred from the default values:

🔧 Fix type annotations
 function simulateRound(
     shardSize: number,
     byzantineCount: number,
     txHash: string,
     correctDelta: string,
-    ttlRounds: number = 5,
-    currentRound: number = 1,
+    ttlRounds = 5,
+    currentRound = 1,
 ): { promoted: string[]; flagged: string[] } {
🧰 Tools
🪛 ESLint

[error] 29-29: Type number trivially inferred from a number literal, remove type annotation.

(@typescript-eslint/no-inferrable-types)


[error] 30-30: Type number trivially inferred from a number literal, remove type annotation.

(@typescript-eslint/no-inferrable-types)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/petri/byzantineFault.test.ts` around lines 29 - 30, Remove the
redundant explicit type annotations on the two test variables: change the
declarations of ttlRounds: number = 5 and currentRound: number = 1 to rely on
type inference (ttlRounds = 5 and currentRound = 1) so ESLint no longer flags
trivially inferred types; update the declarations where ttlRounds and
currentRound are defined in byzantineFault.test.ts accordingly.

Comment on lines +112 to +125
test("soft finality is only set once (first PRE_APPROVED)", () => {
const firstTime = 1700000000000
const laterTime = 1700000002000

// First promotion sets the timestamp
let softFinalityAt: number | null = null
softFinalityAt = firstTime

// Second call should not overwrite (simulating idempotency)
// In practice, updateClassification always sets it,
// but the first call is what matters
expect(softFinalityAt).toBe(firstTime)
expect(softFinalityAt).not.toBe(laterTime)
})

⚠️ Potential issue | 🟡 Minor

Test doesn't actually verify idempotency behavior.

The test comment states it's "simulating idempotency" but the code only assigns a value once (softFinalityAt = firstTime) and never attempts a second assignment. The assertion at line 124 (expect(softFinalityAt).not.toBe(laterTime)) is trivially true since laterTime was never assigned.

To actually test idempotency, you would need to call the production updateClassification function twice and verify the timestamp doesn't change:

test("soft finality is only set once (first PRE_APPROVED)", async () => {
    // Create a tx
    await updateClassification(txHash, "PRE_APPROVED")
    const first = await getTransactionFinality(txHash)
    
    // Attempt to update again
    await updateClassification(txHash, "PRE_APPROVED")
    const second = await getTransactionFinality(txHash)
    
    expect(second.softFinalityAt).toBe(first.softFinalityAt)
})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/petri/finality.test.ts` around lines 112 - 125, The test currently
only assigns softFinalityAt locally and doesn't verify idempotency; change it to
call the real updateClassification twice and assert the stored timestamp stays
the same: make the test async, create or use a txHash, call
updateClassification(txHash, "PRE_APPROVED"), read the stored value via
getTransactionFinality(txHash) (or the equivalent getter), call
updateClassification(txHash, "PRE_APPROVED") again, read the value a second
time, and assert that first.softFinalityAt === second.softFinalityAt to ensure
the timestamp is not overwritten.

* Mocks: Mempool, Chain, broadcastBlockHash, insertBlock, BroadcastManager, peer RPCs
* Real: DeltaAgreementTracker, ContinuousForge state machine, isBlockValid logic
*/
import { describe, expect, test, mock, beforeEach } from "bun:test"

⚠️ Potential issue | 🟡 Minor

Remove unused import.

beforeEach is imported but never used in this file.

-import { describe, expect, test, mock, beforeEach } from "bun:test"
+import { describe, expect, test, mock } from "bun:test"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/petri/happyPath.test.ts` at line 11, Remove the unused import
"beforeEach" from the import statement that currently imports describe, expect,
test, mock, beforeEach (e.g., the import line referencing bun:test) so the file
only imports the symbols actually used (describe, expect, test, mock); update
the import to omit beforeEach to avoid the unused-import lint/error.

Comment on lines +12 to +32
function computeDeltaHash(
gcrEdits: Array<{
type: string
operation: string
account: string
amount?: number | bigint | string
}>,
): string {
const editsForHashing = gcrEdits.map(edit => ({
type: edit.type,
operation: edit.operation,
account: edit.account,
amount:
typeof edit.amount === "bigint"
? edit.amount.toString()
: String(edit.amount ?? ""),
}))

const canonicalEdits = canonicalJson(editsForHashing)
return Hashing.sha256(canonicalEdits)
}

⚠️ Potential issue | 🟡 Minor

Test helper doesn't match production fallback handling.

The computeDeltaHash helper has required operation and account fields, but production code (speculativeExecutor.ts lines 104-109) applies ?? "" fallbacks for potentially undefined values:

// Production:
operation: e.operation ?? "",
account: e.account ?? "",

Consider aligning the test helper to match production exactly, or add test cases with undefined operation/account to validate fallback behavior.

🔧 Proposed fix to match production
 function computeDeltaHash(
     gcrEdits: Array<{
         type: string
-        operation: string
-        account: string
+        operation?: string
+        account?: string
         amount?: number | bigint | string
     }>,
 ): string {
     const editsForHashing = gcrEdits.map(edit => ({
         type: edit.type,
-        operation: edit.operation,
-        account: edit.account,
+        operation: edit.operation ?? "",
+        account: edit.account ?? "",
         amount:
             typeof edit.amount === "bigint"
                 ? edit.amount.toString()
                 : String(edit.amount ?? ""),
     }))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/petri/speculativeExecutor.test.ts` around lines 12 - 32, The test
helper computeDeltaHash should mirror production fallback behavior: when mapping
gcrEdits to editsForHashing use e.operation ?? "" and e.account ?? "" for
operation and account (keep the existing bigint/string handling for amount),
then pass that through canonicalJson and Hashing.sha256 as before so the test
helper matches speculativeExecutor.ts behavior exactly.

tcsenpai added 11 commits March 23, 2026 15:32
Add PETRI_AGREEMENT_THRESHOLD, PETRI_PROBLEMATIC_TTL_ROUNDS, and
PETRI_SHARD_SIZE with their defaults so operators can discover all
tuning knobs. Closes mycelium #148 (Q-6).
The hash is computed over normalized edits ({type, operation, account,
amount}), not the full GCREdit. Closes mycelium #146 (CR-9).
The ok boolean required hardFinalityObserved but not
softFinalityObserved, despite the scenario claiming to verify
softFinalityAt. Closes mycelium #145 (CR-19).
Reject invalid Petri config values (non-positive intervals, shardSize,
agreementThreshold out of range, negative TTL) at startup instead of
silently breaking consensus. Closes mycelium #147 (CR-2).
blockNumber=0 (genesis) is a valid reference but was filtered out by
truthy check. Change to !== undefined in both getAll() and
getByClassification(). Closes mycelium #142 (Q-5).
trackedCount is a getter property, not a function — the typeof check
for "function" always failed and petri_tracker_tx_count was never set.
Add getTrackerCount() on ContinuousForge and use it in MetricsCollector
instead of the unsafe (forge as any).tracker cast.
Closes mycelium #143 (Q-4/CR-3).
A resolved TX from arbitration could still be in the mempool, causing
duplicates in the block. Use a hash-keyed Map to deduplicate before
ordering. Closes mycelium #144 (CR-4).
If getCommonValidatorSeed() or getShard() threw before entering
petriConsensusRoutine, startingConsensus stayed true permanently,
blocking all future consensus rounds. Wrap in try/finally.
Closes mycelium #136 (CR-17).
Mempool cleanup used transactionEntities (all TXs) instead of
committedTxHashes (only successfully inserted). Skipped TXs were
silently removed from mempool despite not being persisted.
Closes mycelium #138 (Q-3).
Unhandled exceptions from getTransactionFinality() propagated without
a structured error response, unlike neighboring RPC cases.
Closes mycelium #139 (CR-16).
If a round had no TO_APPROVE txs and returned early, stale deltas
from the previous round were served via getCurrentDeltas().
Closes mycelium #141 (CR-10).
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 8

♻️ Duplicate comments (1)
src/libs/consensus/petri/forge/continuousForge.ts (1)

154-173: ⚠️ Potential issue | 🟠 Major

Reset currentRoundDeltas before any early return.

If a round finds no TO_APPROVE txs or aborts before Line 203, getCurrentDeltas() keeps serving the previous round's hashes. The petri_exchangeDeltas handler currently reads that map without a round guard, so peers can record stale deltas against the new round.

🔧 Suggested fix
     async runForgeRound(): Promise<void> {
         this.state.currentRound++
         this.state.lastRoundStartedAt = Date.now()
         const round = this.state.currentRound
+        this.currentRoundDeltas = {}
 
         log.debug(`[ContinuousForge] Round ${round} starting`)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/consensus/petri/forge/continuousForge.ts` around lines 154 - 173,
The issue is stale per-round deltas persisting when runForgeRound returns early;
in runForgeRound (continuousForge.ts) reset/clear the currentRoundDeltas map at
the start of the round (or immediately before any early return) so
getCurrentDeltas() and the petri_exchangeDeltas handler never serve
previous-round hashes; update the method that maintains currentRoundDeltas
(referenced as currentRoundDeltas and getCurrentDeltas()) inside runForgeRound
to clear it (e.g., new Map() or clear()) before checking TO_APPROVE txs or
aborting.
🧹 Nitpick comments (4)
testing/loadgen/src/features/consensus/petri_tx_inclusion.ts (3)

201-206: Latency calculation may be affected by clock skew.

softFinalityAt and hardFinalityAt are server-side timestamps from the blockchain, while txSubmittedAt is a client-side Date.now(). Clock drift between client and server could produce misleading or negative latency values.

For devnet testing this is likely acceptable, but worth noting if these metrics are used for performance analysis.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/loadgen/src/features/consensus/petri_tx_inclusion.ts` around lines
201 - 206, Latency fields softFinalityLatencyMs and hardFinalityLatencyMs use
client-side txSubmittedAt which can be skewed vs server finality timestamps
(finalityResult.finality.softFinalityAt/hardFinalityAt), causing
misleading/negative values; update the calculation in the code that builds these
fields (refer to softFinalityLatencyMs, hardFinalityLatencyMs, finalityResult,
txSubmittedAt) to either (a) compute latency using a server-side submission
timestamp if available (e.g., use finalityResult.submittedAt or a
serverTimestamp included in finalityResult) or (b) derive a clock offset by
comparing a server time value from the same response and adjust txSubmittedAt
before subtraction, and as a safeguard clamp negative results to null (or 0) to
avoid reporting negative latencies.
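Option (b)'s clamp safeguard can be as small as the following; the field names follow the comment, and returning null marks an unusable measurement rather than guessing at the skew:

```typescript
// Clamp safeguard for cross-clock latency: null when finality is unknown or
// when client/server skew makes the difference negative.
function finalityLatencyMs(
    finalityAt: number | null | undefined,
    txSubmittedAt: number,
): number | null {
    if (finalityAt == null) return null
    const latency = finalityAt - txSubmittedAt
    return latency < 0 ? null : latency
}
```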

94-94: Consider extracting helper functions to reduce cognitive complexity.

SonarCloud reports cognitive complexity of 23, exceeding the allowed threshold of 15. The function handles setup, transaction submission, and multiple verification phases sequentially.

Consider extracting logical sections into helper functions:

  • submitTransaction(demos, recipientAddress, nonce, amount)
  • verifyInclusion(rpcUrls, txHash, senderAddress, expectedNonce, timeoutSec, pollMs)

This would improve readability and testability while reducing complexity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/loadgen/src/features/consensus/petri_tx_inclusion.ts` at line 94, The
runPetriTxInclusion function is too cognitively complex; split its setup,
submission, and multi-phase verification logic into clear helpers: implement
submitTransaction(demos, recipientAddress, nonce, amount) to encapsulate
creating and sending the transaction and returning the txHash and any submission
metadata, and implement verifyInclusion(rpcUrls, txHash, senderAddress,
expectedNonce, timeoutSec, pollMs) to encapsulate the polling/validation across
RPC endpoints until the transaction and nonce are observed; refactor
runPetriTxInclusion to call these helpers (and additional small helpers for
initial setup or teardown if needed) so each helper has a single responsibility
and the main function just orchestrates the steps.

133-137: The as any casts are unnecessary—confirm and broadcast methods are properly typed in the SDK.

The same Demos import from @kynesyslabs/demosdk/websdk is used in testing/loadgen/src/transfer_loadgen.ts without any type casting (lines 280 and 284), which proves these methods are available in the SDK's public type definitions. Remove the casts to improve type safety.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/loadgen/src/features/consensus/petri_tx_inclusion.ts` around lines
133 - 137, Remove the unnecessary any casts by calling the strongly-typed SDK
methods directly: replace (demos as any).confirm(signedTx) with
demos.confirm(signedTx) and (demos as any).broadcast(validity) with
demos.broadcast(validity); keep the existing variables (signedTx, validity) and
existing error check but use demos.confirm and demos.broadcast to restore type
safety and rely on the SDK's public definitions.
src/libs/blockchain/mempool_v2.ts (1)

297-300: Preserve the existing mempool tie-breakers in classification queries.

getMempool() already orders by timestamp, reference_block, then hash, but getByClassification() only sorts by timestamp. Reusing the same tie-breakers avoids nondeterministic iteration when two txs share a timestamp, especially for callers like getPreApproved() that can feed consensus flows.

♻️ Suggested change
         return await this.repo.find({
             where,
-            order: { timestamp: "ASC" },
+            order: {
+                timestamp: "ASC",
+                reference_block: "ASC",
+                hash: "ASC",
+            },
         })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/blockchain/mempool_v2.ts` around lines 297 - 300,
getByClassification() currently orders only by timestamp which can produce
nondeterministic ordering when timestamps tie; update the repository query in
getByClassification() to reuse the same tie-breaker ordering used by
getMempool() (i.e., order by timestamp, then reference_block, then hash, all
ASC) so callers like getPreApproved() see deterministic iteration. Locate
getByClassification() in mempool_v2.ts and change its this.repo.find(...)
options to include the additional order fields timestamp -> reference_block ->
hash matching getMempool().
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.env.example:
- Around line 88-93: The PETRI_... keys are triggering dotenv-linter
UnorderedKey warnings because they are not in alphabetical order; reorder the
block so the env keys are sorted lexicographically (e.g.,
PETRI_AGREEMENT_THRESHOLD, PETRI_BLOCK_INTERVAL_MS, PETRI_CONSENSUS,
PETRI_FORGE_INTERVAL_MS, PETRI_PROBLEMATIC_TTL_ROUNDS, PETRI_SHARD_SIZE) and
update the .env.example block containing PETRI_CONSENSUS,
PETRI_FORGE_INTERVAL_MS, PETRI_BLOCK_INTERVAL_MS, PETRI_AGREEMENT_THRESHOLD,
PETRI_PROBLEMATIC_TTL_ROUNDS, PETRI_SHARD_SIZE accordingly to resolve the linter
errors.
- Around line 36-38: Reorder the OMNI variables in .env.example so they follow
the dotenv-linter expected alphabetical order: move OMNI_MODE to come
immediately after OMNI_ENABLED and before OMNI_PORT; specifically adjust the
block containing OMNI_ENABLED, OMNI_MODE, and OMNI_PORT to read OMNI_ENABLED,
OMNI_MODE, OMNI_PORT to resolve the UnorderedKey warning.

In `@src/features/metrics/MetricsCollector.ts`:
- Around line 757-761: collectPetriMetrics() is reading
state.pendingTransactions.size which ContinuousForge never populates so
petri_pending_tx_count stays 0; replace this dead source by querying a live
pending-tx count (either add/expose a method like
ContinuousForge.getPendingTransactionCount() that returns the mempool count or
call the existing mempool/mempoolManager API directly) and use that value for
the "petri_pending_tx_count" gauge, or alternatively update
runForgeRound()/ContinuousForge to maintain state.pendingTransactions correctly
if you prefer keeping the gauge tied to the forge state.

In `@src/libs/blockchain/mempool_v2.ts`:
- Around line 117-125: The current logic leaves transactions as
TransactionClassification.TO_APPROVE when executeSpeculatively(...) returns
success: false, causing them to never reach delta agreement; update the handling
in the block that calls executeSpeculatively (the code that reads
result.classification === TransactionClassification.TO_APPROVE and assigns
deltaHash from specResult.delta) so that when specResult.success is false you do
not persist a stuck TO_APPROVE—either reject the transaction or mark it with a
terminal failure classification (e.g., TransactionClassification.TO_REJECT or a
FAILED_SIMULATION status), clear any delta hash, and persist that updated
classification immediately; ensure this mirrors how
ContinuousForge.runForgeRound() gates recording on success && delta so the
mempool won’t retain non-progressing entries.

In `@src/libs/consensus/petri/forge/continuousForge.ts`:
- Around line 166-200: The code only fetches txs already marked
TransactionClassification.TO_APPROVE via Mempool.getByClassification so
transactions inserted without a persisted classification are never processed;
update the flow to also handle unclassified mempool rows by fetching
TransactionClassification.UNCLASSIFIED (or a combined query) alongside
TO_APPROVE, run classifyTransaction for those entries, persist the derived
classification and any computed delta via Mempool.updateClassification, and then
continue with the existing speculative execution path (symbols to edit:
Mempool.getByClassification, TransactionClassification.TO_APPROVE/UNCLASSIFIED,
classifyTransaction, executeSpeculatively, and Mempool.updateClassification).
- Around line 93-95: pause() currently only sets state.isPaused and doesn't wait
for an active round; ensure in ContinuousForge you track the currently running
round promise (e.g., add a member like currentRoundPromise or
activeRoundPromise) and assign it at the start of runForgeRound() and clear it
on completion. Update pause() to set state.isPaused then await the tracked
currentRoundPromise (if present) so it drains the in-flight round before
returning. Also update runBlockPeriod() to check/await the same tracked promise
before proceeding to arbitration/block compilation to prevent mempool rewrites
during an active round. Ensure runForgeRound() uses finally to clear the promise
so pause() and runBlockPeriod() never await a dangling promise.

In `@testing/devnet/.env.example`:
- Around line 24-29: Reorder the PETRI_* entries in the devnet .env example to
satisfy dotenv-linter's UnorderedKey rule by sorting them consistently (e.g.,
alphabetically or matching the project's canonical env order); specifically
locate the block containing PETRI_CONSENSUS, PETRI_FORGE_INTERVAL_MS,
PETRI_BLOCK_INTERVAL_MS, PETRI_AGREEMENT_THRESHOLD,
PETRI_PROBLEMATIC_TTL_ROUNDS, and PETRI_SHARD_SIZE and rearrange those keys into
the agreed ordering (for example: PETRI_AGREEMENT_THRESHOLD,
PETRI_BLOCK_INTERVAL_MS, PETRI_CONSENSUS, PETRI_FORGE_INTERVAL_MS,
PETRI_PROBLEMATIC_TTL_ROUNDS, PETRI_SHARD_SIZE) so dotenv-linter stops reporting
UnorderedKey.

In `@testing/loadgen/src/features/consensus/petri_tx_inclusion.ts`:
- Around line 217-222: The thrown error in petri_tx_inclusion omits a reason
when soft finality isn't observed; update the error-reasons collection logic to
push a message when finalityResult?.softFinalityObserved is false (e.g., add
"soft finality not observed" alongside the existing checks for nonceWait,
blockAdvance, txByHash and finalityResult?.hardFinalityObserved) so the Error
thrown from petri_tx_inclusion includes that failure cause.

---

Duplicate comments:
In `@src/libs/consensus/petri/forge/continuousForge.ts`:
- Around line 154-173: The issue is stale per-round deltas persisting when
runForgeRound returns early; in runForgeRound (continuousForge.ts) reset/clear
the currentRoundDeltas map at the start of the round (or immediately before any
early return) so getCurrentDeltas() and the petri_exchangeDeltas handler never
serve previous-round hashes; update the method that maintains currentRoundDeltas
(referenced as currentRoundDeltas and getCurrentDeltas()) inside runForgeRound
to clear it (e.g., new Map() or clear()) before checking TO_APPROVE txs or
aborting.

---

Nitpick comments:
In `@src/libs/blockchain/mempool_v2.ts`:
- Around line 297-300: getByClassification() currently orders only by timestamp
which can produce nondeterministic ordering when timestamps tie; update the
repository query in getByClassification() to reuse the same tie-breaker ordering
used by getMempool() (i.e., order by timestamp, then reference_block, then hash,
all ASC) so callers like getPreApproved() see deterministic iteration. Locate
getByClassification() in mempool_v2.ts and change its this.repo.find(...)
options to include the additional order fields timestamp -> reference_block ->
hash matching getMempool().

In `@testing/loadgen/src/features/consensus/petri_tx_inclusion.ts`:
- Around line 201-206: Latency fields softFinalityLatencyMs and
hardFinalityLatencyMs use client-side txSubmittedAt which can be skewed vs
server finality timestamps
(finalityResult.finality.softFinalityAt/hardFinalityAt), causing
misleading/negative values; update the calculation in the code that builds these
fields (refer to softFinalityLatencyMs, hardFinalityLatencyMs, finalityResult,
txSubmittedAt) to either (a) compute latency using a server-side submission
timestamp if available (e.g., use finalityResult.submittedAt or a
serverTimestamp included in finalityResult) or (b) derive a clock offset by
comparing a server time value from the same response and adjust txSubmittedAt
before subtraction, and as a safeguard clamp negative results to null (or 0) to
avoid reporting negative latencies.
- Line 94: The runPetriTxInclusion function is too cognitively complex; split
its setup, submission, and multi-phase verification logic into clear helpers:
implement submitTransaction(demos, recipientAddress, nonce, amount) to
encapsulate creating and sending the transaction and returning the txHash and
any submission metadata, and implement verifyInclusion(rpcUrls, txHash,
senderAddress, expectedNonce, timeoutSec, pollMs) to encapsulate the
polling/validation across RPC endpoints until the transaction and nonce are
observed; refactor runPetriTxInclusion to call these helpers (and additional
small helpers for initial setup or teardown if needed) so each helper has a
single responsibility and the main function just orchestrates the steps.
- Around line 133-137: Remove the unnecessary any casts by calling the
strongly-typed SDK methods directly: replace (demos as any).confirm(signedTx)
with demos.confirm(signedTx) and (demos as any).broadcast(validity) with
demos.broadcast(validity); keep the existing variables (signedTx, validity) and
existing error check but use demos.confirm and demos.broadcast to restore type
safety and rely on the SDK's public definitions.
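The clock-skew latency point above can be sketched as a small helper. This is illustrative only: the parameter names (`finalityAt`, `serverSubmittedAt`) are assumptions about the loadgen result shape, not the actual API.

```typescript
// Hypothetical skew-safe latency helper: prefer a server-side submission
// timestamp when one is available, and clamp negative results to null so
// client/server clock skew never yields a negative latency.
function finalityLatencyMs(
    finalityAt: number | undefined,
    clientSubmittedAt: number,
    serverSubmittedAt?: number,
): number | null {
    if (finalityAt === undefined) return null
    const base = serverSubmittedAt ?? clientSubmittedAt
    const latency = finalityAt - base
    return latency >= 0 ? latency : null // clamp skew artifacts
}
```

Clamping to null (rather than 0) keeps skew artifacts distinguishable from genuinely instant finality in the reported metrics.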

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 3167fdf1-44b0-44ff-907e-e01c2c6bb126

📥 Commits

Reviewing files that changed from the base of the PR and between 5d2fffd and 27a1fa1.

📒 Files selected for processing (9)
  • .env.example
  • src/config/loader.ts
  • src/features/metrics/MetricsCollector.ts
  • src/libs/blockchain/mempool_v2.ts
  • src/libs/consensus/petri/block/petriBlockCompiler.ts
  • src/libs/consensus/petri/forge/continuousForge.ts
  • src/libs/consensus/petri/types/stateDelta.ts
  • testing/devnet/.env.example
  • testing/loadgen/src/features/consensus/petri_tx_inclusion.ts
✅ Files skipped from review due to trivial changes (1)
  • src/libs/consensus/petri/types/stateDelta.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • src/libs/consensus/petri/block/petriBlockCompiler.ts
  • src/config/loader.ts

Comment on lines +36 to +38
 OMNI_ENABLED=true
 OMNI_PORT=3001
-OMNI_MODE=OMNI_ONLY
+OMNI_MODE=OMNI_PREFERRED

⚠️ Potential issue | 🟡 Minor

Reorder Omni keys to satisfy dotenv-linter.

OMNI_MODE is currently placed after OMNI_PORT, which triggers the configured UnorderedKey warning.

Proposed fix
 OMNI_ENABLED=true
-OMNI_PORT=3001
 OMNI_MODE=OMNI_PREFERRED
+OMNI_PORT=3001
 OMNI_FATAL=false
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Before:
OMNI_ENABLED=true
OMNI_PORT=3001
OMNI_MODE=OMNI_PREFERRED
After:
OMNI_ENABLED=true
OMNI_MODE=OMNI_PREFERRED
OMNI_PORT=3001
🧰 Tools
🪛 dotenv-linter (4.0.0)

[warning] 38-38: [UnorderedKey] The OMNI_MODE key should go before the OMNI_PORT key

(UnorderedKey)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.env.example around lines 36 - 38, Reorder the OMNI variables in
.env.example so they follow the dotenv-linter expected alphabetical order: move
OMNI_MODE to come immediately after OMNI_ENABLED and before OMNI_PORT;
specifically adjust the block containing OMNI_ENABLED, OMNI_MODE, and OMNI_PORT
to read OMNI_ENABLED, OMNI_MODE, OMNI_PORT to resolve the UnorderedKey warning.

Comment on lines +88 to +93
PETRI_CONSENSUS=true
PETRI_FORGE_INTERVAL_MS=2000
PETRI_BLOCK_INTERVAL_MS=10000
PETRI_AGREEMENT_THRESHOLD=7
PETRI_PROBLEMATIC_TTL_ROUNDS=5
PETRI_SHARD_SIZE=10

⚠️ Potential issue | 🟡 Minor

Petri env keys are valid, but ordering currently fails dotenv-linter.

Please reorder this block to resolve the UnorderedKey warnings.

Proposed fix
-PETRI_CONSENSUS=true
-PETRI_FORGE_INTERVAL_MS=2000
-PETRI_BLOCK_INTERVAL_MS=10000
-PETRI_AGREEMENT_THRESHOLD=7
+PETRI_AGREEMENT_THRESHOLD=7
+PETRI_BLOCK_INTERVAL_MS=10000
+PETRI_CONSENSUS=true
+PETRI_FORGE_INTERVAL_MS=2000
 PETRI_PROBLEMATIC_TTL_ROUNDS=5
 PETRI_SHARD_SIZE=10
Suggested change
Before:
PETRI_CONSENSUS=true
PETRI_FORGE_INTERVAL_MS=2000
PETRI_BLOCK_INTERVAL_MS=10000
PETRI_AGREEMENT_THRESHOLD=7
PETRI_PROBLEMATIC_TTL_ROUNDS=5
PETRI_SHARD_SIZE=10
After:
PETRI_AGREEMENT_THRESHOLD=7
PETRI_BLOCK_INTERVAL_MS=10000
PETRI_CONSENSUS=true
PETRI_FORGE_INTERVAL_MS=2000
PETRI_PROBLEMATIC_TTL_ROUNDS=5
PETRI_SHARD_SIZE=10
🧰 Tools
🪛 dotenv-linter (4.0.0)

[warning] 90-90: [UnorderedKey] The PETRI_BLOCK_INTERVAL_MS key should go before the PETRI_CONSENSUS key

(UnorderedKey)


[warning] 91-91: [UnorderedKey] The PETRI_AGREEMENT_THRESHOLD key should go before the PETRI_BLOCK_INTERVAL_MS key

(UnorderedKey)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.env.example around lines 88 - 93, The PETRI_... keys are triggering
dotenv-linter UnorderedKey warnings because they are not in alphabetical order;
reorder the block so the env keys are sorted lexicographically (e.g.,
PETRI_AGREEMENT_THRESHOLD, PETRI_BLOCK_INTERVAL_MS, PETRI_CONSENSUS,
PETRI_FORGE_INTERVAL_MS, PETRI_PROBLEMATIC_TTL_ROUNDS, PETRI_SHARD_SIZE) and
update the .env.example block containing PETRI_CONSENSUS,
PETRI_FORGE_INTERVAL_MS, PETRI_BLOCK_INTERVAL_MS, PETRI_AGREEMENT_THRESHOLD,
PETRI_PROBLEMATIC_TTL_ROUNDS, PETRI_SHARD_SIZE accordingly to resolve the linter
errors.

Comment on lines +757 to +761
const state = forge.getState()
this.metricsService.setGauge("petri_forge_running", state.isRunning ? 1 : 0)
this.metricsService.setGauge("petri_forge_paused", state.isPaused ? 1 : 0)
this.metricsService.setGauge("petri_forge_round", state.currentRound)
this.metricsService.setGauge("petri_pending_tx_count", state.pendingTransactions.size)

⚠️ Potential issue | 🟡 Minor

petri_pending_tx_count is wired to dead forge state.

collectPetriMetrics() reads state.pendingTransactions.size, but ContinuousForge never populates that map during runForgeRound(). This gauge will stay at 0 even while Petri has work queued unless the forge maintains that state or the metric is derived from a live source.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/metrics/MetricsCollector.ts` around lines 757 - 761,
collectPetriMetrics() is reading state.pendingTransactions.size which
ContinuousForge never populates so petri_pending_tx_count stays 0; replace this
dead source by querying a live pending-tx count (either add/expose a method like
ContinuousForge.getPendingTransactionCount() that returns the mempool count or
call the existing mempool/mempoolManager API directly) and use that value for
the "petri_pending_tx_count" gauge, or alternatively update
runForgeRound()/ContinuousForge to maintain state.pendingTransactions correctly
if you prefer keeping the gauge tied to the forge state.
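The "live source" option from the prompt above could look roughly like the sketch below. The interface and method names are assumptions for illustration, not the node's actual APIs.

```typescript
// Illustrative sketch of deriving petri_pending_tx_count from a live
// mempool query instead of the never-populated forge map.
interface MempoolLike {
    countByClassification(classification: string): Promise<number>
}

class ForgeMetricsSource {
    constructor(private readonly mempool: MempoolLike) {}

    // Delegates to the mempool so the gauge reflects real queued work.
    async getPendingTransactionCount(): Promise<number> {
        return this.mempool.countByClassification("TO_APPROVE")
    }
}
```

collectPetriMetrics() would then await this count when setting the gauge, rather than reading dead forge state.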

Comment on lines +117 to +125
if (result.classification === TransactionClassification.TO_APPROVE) {
const specResult = await executeSpeculatively(
transaction,
result.gcrEdits,
)
if (specResult.success && specResult.delta) {
deltaHash = specResult.delta.hash
}
}

⚠️ Potential issue | 🟠 Major

Handle speculative-execution failures instead of persisting a stuck TO_APPROVE tx.

If executeSpeculatively() returns success: false, this still saves the tx as TO_APPROVE with no delta_hash. ContinuousForge.runForgeRound() has the same success && delta gate before it records a hash, so these txs never enter delta agreement and can sit in mempool indefinitely. Reject them here, or persist a terminal classification on simulation failure.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/blockchain/mempool_v2.ts` around lines 117 - 125, The current logic
leaves transactions as TransactionClassification.TO_APPROVE when
executeSpeculatively(...) returns success: false, causing them to never reach
delta agreement; update the handling in the block that calls
executeSpeculatively (the code that reads result.classification ===
TransactionClassification.TO_APPROVE and assigns deltaHash from
specResult.delta) so that when specResult.success is false you do not persist a
stuck TO_APPROVE—either reject the transaction or mark it with a terminal
failure classification (e.g., TransactionClassification.TO_REJECT or a
FAILED_SIMULATION status), clear any delta hash, and persist that updated
classification immediately; ensure this mirrors how
ContinuousForge.runForgeRound() gates recording on success && delta so the
mempool won’t retain non-progressing entries.
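A minimal sketch of the terminal-state handling described above. The `FAILED_SIMULATION` value is illustrative; the real enum may name its terminal state differently.

```typescript
// Hypothetical resolution of a speculative-execution result into a
// persisted classification: success keeps TO_APPROVE with its delta hash,
// failure gets a terminal state and a cleared delta hash so the tx can
// never sit in the mempool as a stuck TO_APPROVE.
interface SpecResult {
    success: boolean
    deltaHash?: string
}

function resolveClassification(spec: SpecResult): {
    classification: "TO_APPROVE" | "FAILED_SIMULATION"
    deltaHash: string | null
} {
    if (spec.success && spec.deltaHash) {
        return { classification: "TO_APPROVE", deltaHash: spec.deltaHash }
    }
    // Terminal: this tx never re-enters delta agreement.
    return { classification: "FAILED_SIMULATION", deltaHash: null }
}
```

Treating a success without a delta as a failure mirrors the `success && delta` gate in ContinuousForge.runForgeRound(), so both paths agree on what counts as progress.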

Comment on lines +93 to +95
pause(): void {
this.state.isPaused = true
log.debug("[ContinuousForge] Paused")

⚠️ Potential issue | 🔴 Critical

pause() doesn't drain an in-flight round.

runBlockPeriod() pauses the forge and immediately starts arbitration/block compilation, but pause() only flips a flag for the next timer tick. A round already inside runForgeRound() can still reach Lines 227-239 and rewrite mempool classifications while the block is being compiled. Please add a drain/await path for the active round before the block boundary proceeds.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/consensus/petri/forge/continuousForge.ts` around lines 93 - 95,
pause() currently only sets state.isPaused and doesn't wait for an active round;
ensure in ContinuousForge you track the currently running round promise (e.g.,
add a member like currentRoundPromise or activeRoundPromise) and assign it at
the start of runForgeRound() and clear it on completion. Update pause() to set
state.isPaused then await the tracked currentRoundPromise (if present) so it
drains the in-flight round before returning. Also update runBlockPeriod() to
check/await the same tracked promise before proceeding to arbitration/block
compilation to prevent mempool rewrites during an active round. Ensure
runForgeRound() uses finally to clear the promise so pause() and
runBlockPeriod() never await a dangling promise.
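The drain pattern the prompt describes can be sketched as follows, assuming a forge shaped roughly like ContinuousForge. The class and member names here are illustrative, not the real implementation.

```typescript
// Illustrative drain-aware forge: pause() awaits the in-flight round so
// block compilation can never race with a round's mempool rewrites.
class ForgeSketch {
    private isPaused = false
    private activeRound: Promise<void> | null = null
    rounds = 0

    private async runForgeRound(): Promise<void> {
        // Stand-in for real round work (speculative execution, delta exchange).
        await new Promise(resolve => setTimeout(resolve, 10))
        this.rounds++
    }

    // Called on each forge-interval timer tick.
    tick(): void {
        if (this.isPaused || this.activeRound) return
        // Track the round; `finally` clears it so pause() never awaits a
        // dangling promise.
        this.activeRound = this.runForgeRound().finally(() => {
            this.activeRound = null
        })
    }

    // Flip the flag, then drain any round already in flight.
    async pause(): Promise<void> {
        this.isPaused = true
        if (this.activeRound) await this.activeRound
    }
}
```

runBlockPeriod() would then `await forge.pause()` before starting arbitration and block compilation, guaranteeing no round is mid-flight at the block boundary.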

Comment on lines +166 to +200
// Step 2: Get TO_APPROVE transactions
const toApproveTxs = await Mempool.getByClassification(
TransactionClassification.TO_APPROVE,
)

if (toApproveTxs.length === 0) {
log.debug(`[ContinuousForge] Round ${round}: no TO_APPROVE txs`)
return
}

// Step 3: Speculatively execute each and build local delta map
const localDeltas: Record<string, string> = {}

for (const mempoolTx of toApproveTxs) {
// Use existing delta_hash if already computed at insertion
if (mempoolTx.delta_hash) {
localDeltas[mempoolTx.hash] = mempoolTx.delta_hash
continue
}

// Otherwise compute now (for txs received via merge without classification)
const tx = mempoolTx as unknown as import("@kynesyslabs/demosdk/types").Transaction
const classResult = await classifyTransaction(tx)
if (classResult.classification === TransactionClassification.TO_APPROVE) {
const specResult = await executeSpeculatively(tx, classResult.gcrEdits)
if (specResult.success && specResult.delta) {
localDeltas[mempoolTx.hash] = specResult.delta.hash
// Update mempool with computed delta
await Mempool.updateClassification(
mempoolTx.hash,
TransactionClassification.TO_APPROVE,
specResult.delta.hash,
)
}
}

⚠️ Potential issue | 🟠 Major

Handle unclassified mempool rows before filtering to TO_APPROVE.

The fallback at Line 186 only helps rows that already have classification = TO_APPROVE. Transactions merged into the mempool without a persisted classification—for example via the direct repo.save() path in src/libs/blockchain/mempool_v2.ts, Lines 222-224—are excluded by Line 167 and will never be classified, promoted, or flagged. Either classify on ingest or include unclassified rows here and persist the derived classification first.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/consensus/petri/forge/continuousForge.ts` around lines 166 - 200,
The code only fetches txs already marked TransactionClassification.TO_APPROVE
via Mempool.getByClassification so transactions inserted without a persisted
classification are never processed; update the flow to also handle unclassified
mempool rows by fetching TransactionClassification.UNCLASSIFIED (or a combined
query) alongside TO_APPROVE, run classifyTransaction for those entries, persist
the derived classification and any computed delta via
Mempool.updateClassification, and then continue with the existing speculative
execution path (symbols to edit: Mempool.getByClassification,
TransactionClassification.TO_APPROVE/UNCLASSIFIED, classifyTransaction,
executeSpeculatively, and Mempool.updateClassification).
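As a sketch of the combined flow: classify unclassified rows first, persist the derived classification, then proceed with TO_APPROVE rows. The enum members and the classify callback are placeholders for the real TransactionClassification and classifyTransaction symbols.

```typescript
// Illustrative round gathering: unclassified rows are classified and the
// derived classification persisted (here mutated in place; the real code
// would call Mempool.updateClassification), then TO_APPROVE rows proceed
// to the speculative-execution path.
enum Classification {
    UNCLASSIFIED = "UNCLASSIFIED",
    TO_APPROVE = "TO_APPROVE",
    PRE_APPROVED = "PRE_APPROVED",
}

interface MempoolTx {
    hash: string
    classification: Classification
}

function gatherRoundTxs(
    mempool: MempoolTx[],
    classify: (tx: MempoolTx) => Classification,
): MempoolTx[] {
    const round: MempoolTx[] = []
    for (const tx of mempool) {
        if (tx.classification === Classification.UNCLASSIFIED) {
            tx.classification = classify(tx)
        }
        if (tx.classification === Classification.TO_APPROVE) {
            round.push(tx)
        }
    }
    return round
}
```

This way rows merged via the direct repo.save() path are still picked up in the next round instead of being silently excluded forever.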

Comment on lines +24 to +29
PETRI_CONSENSUS=true
PETRI_FORGE_INTERVAL_MS=2000
PETRI_BLOCK_INTERVAL_MS=10000
PETRI_AGREEMENT_THRESHOLD=7
PETRI_PROBLEMATIC_TTL_ROUNDS=5
PETRI_SHARD_SIZE=10

⚠️ Potential issue | 🟡 Minor

Devnet Petri block should be reordered to pass dotenv-linter.

This section has the same UnorderedKey issues; reordering avoids lint noise and keeps env templates consistent.

Proposed fix
-PETRI_CONSENSUS=true
-PETRI_FORGE_INTERVAL_MS=2000
-PETRI_BLOCK_INTERVAL_MS=10000
-PETRI_AGREEMENT_THRESHOLD=7
+PETRI_AGREEMENT_THRESHOLD=7
+PETRI_BLOCK_INTERVAL_MS=10000
+PETRI_CONSENSUS=true
+PETRI_FORGE_INTERVAL_MS=2000
 PETRI_PROBLEMATIC_TTL_ROUNDS=5
 PETRI_SHARD_SIZE=10
Suggested change
Before:
PETRI_CONSENSUS=true
PETRI_FORGE_INTERVAL_MS=2000
PETRI_BLOCK_INTERVAL_MS=10000
PETRI_AGREEMENT_THRESHOLD=7
PETRI_PROBLEMATIC_TTL_ROUNDS=5
PETRI_SHARD_SIZE=10
After:
PETRI_AGREEMENT_THRESHOLD=7
PETRI_BLOCK_INTERVAL_MS=10000
PETRI_CONSENSUS=true
PETRI_FORGE_INTERVAL_MS=2000
PETRI_PROBLEMATIC_TTL_ROUNDS=5
PETRI_SHARD_SIZE=10
🧰 Tools
🪛 dotenv-linter (4.0.0)

[warning] 26-26: [UnorderedKey] The PETRI_BLOCK_INTERVAL_MS key should go before the PETRI_CONSENSUS key

(UnorderedKey)


[warning] 27-27: [UnorderedKey] The PETRI_AGREEMENT_THRESHOLD key should go before the PETRI_BLOCK_INTERVAL_MS key

(UnorderedKey)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/devnet/.env.example` around lines 24 - 29, Reorder the PETRI_*
entries in the devnet .env example to satisfy dotenv-linter's UnorderedKey rule
by sorting them consistently (e.g., alphabetically or matching the project's
canonical env order); specifically locate the block containing PETRI_CONSENSUS,
PETRI_FORGE_INTERVAL_MS, PETRI_BLOCK_INTERVAL_MS, PETRI_AGREEMENT_THRESHOLD,
PETRI_PROBLEMATIC_TTL_ROUNDS, and PETRI_SHARD_SIZE and rearrange those keys into
the agreed ordering (for example: PETRI_AGREEMENT_THRESHOLD,
PETRI_BLOCK_INTERVAL_MS, PETRI_CONSENSUS, PETRI_FORGE_INTERVAL_MS,
PETRI_PROBLEMATIC_TTL_ROUNDS, PETRI_SHARD_SIZE) so dotenv-linter stops reporting
UnorderedKey.

Comment on lines +217 to +222
const reasons: string[] = []
if (!nonceWait.ok) reasons.push("nonce did not advance")
if (!blockAdvance.ok) reasons.push("block height did not advance")
if (txHash && !txByHash?.ok) reasons.push("tx not found by hash")
if (!finalityResult?.hardFinalityObserved) reasons.push("hard finality not observed via getTransactionFinality RPC")
throw new Error(`petri_tx_inclusion failed: ${reasons.join("; ")}`)

⚠️ Potential issue | 🟡 Minor

Missing failure reason when soft finality is not observed.

The ok check at line 181 requires softFinalityObserved, but the error reasons don't include a message for this case. If soft finality fails while other checks pass, the thrown error won't explain why the scenario failed.

Proposed fix
         if (!nonceWait.ok) reasons.push("nonce did not advance")
         if (!blockAdvance.ok) reasons.push("block height did not advance")
         if (txHash && !txByHash?.ok) reasons.push("tx not found by hash")
+        if (!finalityResult?.softFinalityObserved) reasons.push("soft finality not observed via getTransactionFinality RPC")
         if (!finalityResult?.hardFinalityObserved) reasons.push("hard finality not observed via getTransactionFinality RPC")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/loadgen/src/features/consensus/petri_tx_inclusion.ts` around lines
217 - 222, The thrown error in petri_tx_inclusion omits a reason when soft
finality isn't observed; update the error-reasons collection logic to push a
message when finalityResult?.softFinalityObserved is false (e.g., add "soft
finality not observed" alongside the existing checks for nonceWait,
blockAdvance, txByHash and finalityResult?.hardFinalityObserved) so the Error
thrown from petri_tx_inclusion includes that failure cause.

tcsenpai added 2 commits March 23, 2026 18:18
…shes

Track processed pubkeys in a dedicated Set instead of checking only the
signatures map. Previously, mismatched or invalid-signature submissions
were never added to signatures{} so they got re-evaluated on every poll
iteration, inflating mismatchCount and potentially preventing the
threshold from being reached even when enough valid signatures existed.
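The bookkeeping described in this commit can be sketched as below: a dedicated Set of processed pubkeys ensures each submission is evaluated exactly once per node, so invalid or mismatched submissions are not re-counted on every poll iteration. The shapes and names are illustrative, not the actual consensus code.

```typescript
// Hypothetical signature tally: `processed` remembers every pubkey seen,
// including invalid ones that never enter `signatures`, preventing
// repeated mismatch counting across poll iterations.
function tallySignatures(
    submissions: Array<{ pubkey: string; valid: boolean }>,
    processed: Set<string>,
    signatures: Map<string, boolean>,
): number {
    let evaluated = 0
    for (const sub of submissions) {
        if (processed.has(sub.pubkey)) continue // already evaluated on a prior poll
        processed.add(sub.pubkey)
        evaluated++
        if (sub.valid) signatures.set(sub.pubkey, true)
    }
    return evaluated
}
```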
When petriRelay() returns success=false, the TX was silently dropped —
the originating node neither relayed it nor added it to its own mempool.
Now falls back to Mempool.addTransaction() so the TX is preserved for
inclusion in the next block compiled locally.
@sonarqubecloud
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (3)
src/utilities/mainLoop.ts (1)

4-6: Consider aligning import path style.

Line 4 uses the @/ alias while lines 5-6 (and most other imports in this file) use the src/ prefix. For consistency, consider using the same path style throughout.

Suggested diff
-import { petriConsensusRoutine } from "@/libs/consensus/petri"
+import { petriConsensusRoutine } from "src/libs/consensus/petri"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utilities/mainLoop.ts` around lines 4 - 6, The imports in mainLoop.ts mix
alias styles — change the import for petriConsensusRoutine to match the
prevailing "src/" prefix (or conversely change other imports if you prefer
"@/"), so update the import line referencing petriConsensusRoutine to use the
same path style as getCommonValidatorSeed and getShard; locate the import for
petriConsensusRoutine and replace "@/libs/consensus/petri" with the consistent
"src/libs/consensus/petri" (or update the other two to
"@/libs/consensus/v2/routines/...") ensuring all imports in this file use the
same alias form.
src/libs/network/endpointExecution.ts (2)

33-33: Inconsistent import path alias.

This import uses @/libs/... while all other local imports in this file use src/libs/... (e.g., lines 1, 2, 4, 5, 16, 18, 31). Consider using a consistent path style for maintainability.

💡 Suggested fix for consistency
-import { relay as petriRelay } from "@/libs/consensus/petri/routing/petriRouter"
+import { relay as petriRelay } from "src/libs/consensus/petri/routing/petriRouter"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/network/endpointExecution.ts` at line 33, The import for petriRelay
currently uses the alias "@/libs/consensus/petri/routing/petriRouter" which is
inconsistent with the other local imports in this file; update the import to use
the same path style as the rest (e.g., change to
"src/libs/consensus/petri/routing/petriRouter") so the symbol petriRelay is
imported via the consistent path convention.

335-337: Response message is misleading after fallback.

When relaySuccess is false, the message says "relay pending", but the transaction has actually been added to the local mempool (not pending relay). Consider updating the message for accuracy.

📝 Suggested message clarification
                     message: relaySuccess
                         ? "Transaction routed to shard members"
-                        : "Transaction accepted locally (relay pending)",
+                        : "Transaction accepted locally (added to mempool)",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/libs/network/endpointExecution.ts` around lines 335 - 337, The response
message uses relaySuccess to choose between "Transaction routed to shard
members" and "Transaction accepted locally (relay pending)" which is inaccurate
when relay fails; update the conditional that sets the message (the message
property assigned based on relaySuccess in endpointExecution.ts) so the false
branch accurately reflects local mempool acceptance — e.g., change to
"Transaction accepted locally (added to local mempool)" or similar wording to
replace "relay pending".
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/libs/consensus/petri/forge/continuousForge.ts`:
- Around line 228-240: The loop in continuousForge.ts calling
Mempool.updateClassification for each txHash (setting
TransactionClassification.PRE_APPROVED and PROBLEMATIC) can race with concurrent
forge rounds or block compilation because Mempool.updateClassification uses
repo.update() without transaction isolation; change the implementation to
perform these classification changes in a single atomic operation—either add a
batched repository method (e.g., Mempool.updateClassifications or
repo.updateMany) that wraps all txHash updates in a DB transaction, or implement
optimistic locking/version checks in Mempool.updateClassification
(compare-and-swap on a version/timestamp) so concurrent rounds cannot overwrite
each other; update continuousForge.ts to call the new batched/transactional API
instead of per-tx await calls.
- Around line 127-129: getState currently returns a shallow copy of this.state
which leaks the internal pendingTransactions Map; callers can mutate the Map via
the returned object. Fix by returning a new ForgeState where pendingTransactions
is cloned (e.g., construct a new Map from this.state.pendingTransactions) and
keep other fields copied (use the existing spread of this.state but replace
pendingTransactions), so the returned object contains a fresh Map instance and
does not expose internal mutable state; update getState to return that cloned
Map version.
- Around line 280-284: Validate and sanitize peer responses before trusting the
shape used by recordDelta: when handling response.response in continuousForge
(the block assigning to peerDeltas[peer.identity]), check that response.response
is an object, that it has a deltas property which is an object, and that each
key and value in deltas is a string (or otherwise matches the expected type)
before adding it to peerDeltas; if validation fails, log/warn and skip that
peer's deltas to avoid passing malformed data into recordDelta.
- Around line 266-294: The delta exchange in continuousForge.ts currently builds
promises from peers (using peer.longCall) and awaits Promise.all, which can hang
a round if any peer call never resolves; wrap each peer.longCall (or the entire
Promise.all) in a Promise.race that rejects/resolves after an absolute timeout
(e.g., 2-3s) so dangling calls won't block the round, ensure you only write into
peerDeltas from the timed/settled responses (the promises array where
peer.longCall is invoked), and keep existing error handling (log.warn) for
timeouts so late responses are ignored and the function returns promptly with
whatever peerDeltas were gathered.
- Around line 187-201: Remove the unnecessary double-cast of mempoolTx to
Transaction: pass the existing mempoolTx (already typed as MempoolTx which
implements Transaction) directly into classifyTransaction and
executeSpeculatively instead of using "as unknown as Transaction"; keep the
existing logic that checks classResult.classification ===
TransactionClassification.TO_APPROVE, stores specResult.delta.hash into
localDeltas[mempoolTx.hash], and calls
Mempool.updateClassification(mempoolTx.hash,
TransactionClassification.TO_APPROVE, specResult.delta.hash).

In `@src/libs/network/endpointExecution.ts`:
- Around line 326-329: The fallback call to Mempool.addTransaction (using
queriedTx and validatedData.data.reference_block) lacks error handling and can
cause silent loss while the function still returns success: wrap that await in a
try/catch, log the error (including context like the transaction id or
queriedTx), and set the response/result flag to failure (e.g., result.success =
false or equivalent) - mirror the error handling approach used in the default
mempool insertion path (lines that handle errors and set result.success = false)
so failures are propagated correctly.

---

Nitpick comments:
In `@src/libs/network/endpointExecution.ts`:
- Line 33: The import for petriRelay currently uses the alias
"@/libs/consensus/petri/routing/petriRouter" which is inconsistent with the
other local imports in this file; update the import to use the same path style
as the rest (e.g., change to "src/libs/consensus/petri/routing/petriRouter") so
the symbol petriRelay is imported via the consistent path convention.
- Around line 335-337: The response message uses relaySuccess to choose between
"Transaction routed to shard members" and "Transaction accepted locally (relay
pending)" which is inaccurate when relay fails; update the conditional that sets
the message (the message property assigned based on relaySuccess in
endpointExecution.ts) so the false branch accurately reflects local mempool
acceptance — e.g., change to "Transaction accepted locally (added to local
mempool)" or similar wording to replace "relay pending".

In `@src/utilities/mainLoop.ts`:
- Around line 4-6: The imports in mainLoop.ts mix alias styles — change the
import for petriConsensusRoutine to match the prevailing "src/" prefix (or
conversely change other imports if you prefer "@/"), so update the import line
referencing petriConsensusRoutine to use the same path style as
getCommonValidatorSeed and getShard; locate the import for petriConsensusRoutine
and replace "@/libs/consensus/petri" with the consistent
"src/libs/consensus/petri" (or update the other two to
"@/libs/consensus/v2/routines/...") ensuring all imports in this file use the
same alias form.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 2066d833-ec81-4566-9067-912f2120ee9f

📥 Commits

Reviewing files that changed from the base of the PR and between 27a1fa1 and 7f6ce72.

📒 Files selected for processing (6)
  • src/libs/blockchain/chainBlocks.ts
  • src/libs/consensus/petri/coordination/petriSecretary.ts
  • src/libs/consensus/petri/forge/continuousForge.ts
  • src/libs/network/endpointExecution.ts
  • src/libs/network/rpcDispatch.ts
  • src/utilities/mainLoop.ts
✅ Files skipped from review due to trivial changes (1)
  • src/libs/consensus/petri/coordination/petriSecretary.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • src/libs/network/rpcDispatch.ts
  • src/libs/blockchain/chainBlocks.ts

Comment on lines +127 to +129
    getState(): Readonly<ForgeState> {
        return { ...this.state }
    }

⚠️ Potential issue | 🟡 Minor

Shallow copy leaks mutable pendingTransactions Map reference.

The spread operator creates a shallow copy, but pendingTransactions is a Map. Callers receive a reference to the internal Map and can mutate it.

🔧 Proposed fix
     getState(): Readonly<ForgeState> {
-        return { ...this.state }
+        return {
+            ...this.state,
+            pendingTransactions: new Map(this.state.pendingTransactions),
+        }
     }
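The leak is easy to reproduce in isolation. A minimal sketch (the ForgeState shape here is a simplified stand-in, not the real interface) showing that a spread copy shares the inner Map, while cloning the Map gives callers an independent snapshot:

```typescript
interface ForgeState {
    roundNumber: number
    pendingTransactions: Map<string, string>
}

const state: ForgeState = {
    roundNumber: 1,
    pendingTransactions: new Map([["tx1", "delta1"]]),
}

// Shallow copy: the spread copies the Map *reference*, not its entries.
const leaky = { ...state }
leaky.pendingTransactions.set("tx2", "delta2")
console.log(state.pendingTransactions.size) // 2 — internal state was mutated

// Defensive copy: clone the Map so the snapshot is independent.
const safe = { ...state, pendingTransactions: new Map(state.pendingTransactions) }
safe.pendingTransactions.clear()
console.log(state.pendingTransactions.size) // still 2 — internal state untouched
```

Note that Readonly<ForgeState> does not help here: it freezes the property bindings at the type level but does nothing to the Map's own mutating methods.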

Comment on lines +187 to +201
// Otherwise compute now (for txs received via merge without classification)
const tx = mempoolTx as unknown as import("@kynesyslabs/demosdk/types").Transaction
const classResult = await classifyTransaction(tx)
if (classResult.classification === TransactionClassification.TO_APPROVE) {
    const specResult = await executeSpeculatively(tx, classResult.gcrEdits)
    if (specResult.success && specResult.delta) {
        localDeltas[mempoolTx.hash] = specResult.delta.hash
        // Update mempool with computed delta
        await Mempool.updateClassification(
            mempoolTx.hash,
            TransactionClassification.TO_APPROVE,
            specResult.delta.hash,
        )
    }
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check the Transaction type definition and mempool entity to compare shapes
echo "=== Transaction type from demosdk ==="
rg -n "export (interface|type) Transaction" --type=ts -A 20 | head -50

echo "=== Mempool entity definition ==="
ast-grep --pattern $'class MempoolEntity {
  $$$
}'

Repository: kynesyslabs/node

Length of output: 3805


🏁 Script executed:

# Find mempool-related definitions
echo "=== Searching for mempool-related types/classes ==="
fd -e ts -e tsx | xargs rg "mempoolTx|MempoolTx|Mempool" --type=ts -B2 -A2 | grep -E "(class|interface|type|const|let|var).*[Mm]empool" | head -30

echo -e "\n=== Searching for classifyTransaction function ==="
rg -n "function classifyTransaction|const classifyTransaction|async.*classifyTransaction" --type=ts

echo -e "\n=== Searching for executeSpeculatively function ==="
rg -n "function executeSpeculatively|const executeSpeculatively|async.*executeSpeculatively" --type=ts

Repository: kynesyslabs/node

Length of output: 3728


🏁 Script executed:

# Look for the file containing continuousForge.ts to understand context
echo "=== File structure around continuousForge.ts ==="
ls -la src/libs/consensus/petri/forge/

echo -e "\n=== Search for MempoolTx type or class ==="
rg "type MempoolTx|interface MempoolTx|class MempoolTx" --type=ts -A 10

echo -e "\n=== Look at imports in continuousForge.ts ==="
head -50 src/libs/consensus/petri/forge/continuousForge.ts | grep -E "^import|^export|from"

Repository: kynesyslabs/node

Length of output: 1982


🏁 Script executed:

echo "=== classifyTransaction function signature ==="
sed -n '28,50p' src/libs/consensus/petri/classifier/transactionClassifier.ts

echo -e "\n=== executeSpeculatively function signature ==="
sed -n '42,60p' src/libs/consensus/petri/execution/speculativeExecutor.ts

echo -e "\n=== Check if Transaction type is used elsewhere ==="
rg "Transaction\b" tests/mocks/demosdk-types.ts -A 1

Repository: kynesyslabs/node

Length of output: 1912


🏁 Script executed:

echo "=== Full MempoolTx definition ==="
sed -n '1,50p' src/model/entities/Mempool.ts

echo -e "\n=== Check where Transaction interface is defined ==="
rg "interface Transaction|type Transaction" --type=ts -B 2 -A 5

Repository: kynesyslabs/node

Length of output: 6435


🏁 Script executed:

echo "=== Context around lines 187-201 in continuousForge.ts ==="
sed -n '180,210p' src/libs/consensus/petri/forge/continuousForge.ts

Repository: kynesyslabs/node

Length of output: 1685


🏁 Script executed:

echo "=== Check loop or context that assigns mempoolTx ==="
sed -n '170,205p' src/libs/consensus/petri/forge/continuousForge.ts

Repository: kynesyslabs/node

Length of output: 1717


🏁 Script executed:

echo "=== Find toApproveTxs definition ==="
sed -n '160,185p' src/libs/consensus/petri/forge/continuousForge.ts

Repository: kynesyslabs/node

Length of output: 1115


🏁 Script executed:

echo "=== Search for toApproveTxs in the file ==="
grep -n "toApproveTxs" src/libs/consensus/petri/forge/continuousForge.ts

Repository: kynesyslabs/node

Length of output: 280


🏁 Script executed:

echo "=== Find Mempool.getByClassification method ==="
rg -n "getByClassification" --type=ts -B 2 -A 5

echo -e "\n=== Check Mempool imports and class definition ==="
head -100 src/model/entities/Mempool.ts | tail -50

Repository: kynesyslabs/node

Length of output: 4369


Remove unnecessary type cast.

mempoolTx is already typed as MempoolTx (from Mempool.getByClassification), which implements the Transaction interface. The double cast as unknown as Transaction is redundant. Simply pass mempoolTx directly to classifyTransaction and executeSpeculatively.

-                const tx = mempoolTx as unknown as import("@kynesyslabs/demosdk/types").Transaction
-                const classResult = await classifyTransaction(tx)
+                const classResult = await classifyTransaction(mempoolTx)

Also pass mempoolTx directly to executeSpeculatively on line 193.


Comment on lines +228 to +240
for (const txHash of promoted) {
    await Mempool.updateClassification(
        txHash,
        TransactionClassification.PRE_APPROVED,
    )
}

for (const txHash of flagged) {
    await Mempool.updateClassification(
        txHash,
        TransactionClassification.PROBLEMATIC,
    )
}

⚠️ Potential issue | 🟠 Major

Sequential mempool updates may race with concurrent forge rounds or block compilation.

Per the Mempool.updateClassification implementation (relevant snippet 3), updates call repo.update() without transaction isolation. If a round overlaps with block compilation (due to the pause() drain issue) or another round on a different node, classification states could become inconsistent.

Consider batching updates within a single transaction or adding optimistic locking.

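One way to make the race harmless without wrapping every write in a DB transaction is optimistic locking. A self-contained sketch of the compare-and-swap idea, with an in-memory Map standing in for the mempool repository (casUpdateClassification and the version field are illustrative names, not the existing node API):

```typescript
type Classification = "PENDING" | "PRE_APPROVED" | "TO_APPROVE" | "PROBLEMATIC"

interface MempoolRow {
    classification: Classification
    version: number // bumped on every successful write
}

// In-memory stand-in for the mempool table.
const rows = new Map<string, MempoolRow>()

// Compare-and-swap: the write only lands if the row still carries the version
// we read. In SQL this would be an UPDATE ... SET version = version + 1
// WHERE hash = ? AND version = ?, checking the affected-row count.
function casUpdateClassification(
    txHash: string,
    expectedVersion: number,
    next: Classification,
): boolean {
    const row = rows.get(txHash)
    if (!row || row.version !== expectedVersion) return false // lost the race
    rows.set(txHash, { classification: next, version: row.version + 1 })
    return true
}

rows.set("tx1", { classification: "PENDING", version: 0 })
const first = casUpdateClassification("tx1", 0, "PRE_APPROVED") // true: version matched
const stale = casUpdateClassification("tx1", 0, "PROBLEMATIC")  // false: version is now 1
```

A concurrent forge round holding a stale version gets a false return instead of silently overwriting, and can re-read and retry or skip the transaction.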

Comment on lines +266 to +294
const promises = peers.map(async peer => {
    try {
        const response = await peer.longCall(
            {
                method: "consensus_routine",
                params: [{
                    method: "petri_exchangeDeltas",
                    params: [{ roundNumber, deltas: localDeltas }],
                }],
            },
            true,
            { sleepTime: 250, retries: 2 },
        )

        if (response.result === 200 && response.response) {
            const data = response.response as { deltas?: Record<string, string> }
            if (data.deltas) {
                peerDeltas[peer.identity] = data.deltas
            }
        }
    } catch (error) {
        log.warn(
            `[ContinuousForge] Delta exchange failed with ${peer.identity.substring(0, 16)}...: ${error}`,
        )
    }
})

await Promise.all(promises)
return peerDeltas

⚠️ Potential issue | 🟠 Major

No overall timeout on delta exchange; round can stall indefinitely.

Promise.all waits for all peer calls. If longCall lacks an absolute timeout (retries only help transient failures), a hanging peer blocks the entire round. This could cascade into missed forge cycles.

🔧 Proposed fix: wrap with Promise.race timeout
-        await Promise.all(promises)
+        const EXCHANGE_TIMEOUT_MS = this.config.forgeIntervalMs * 0.8
+        await Promise.race([
+            Promise.all(promises),
+            new Promise<void>((_, reject) =>
+                setTimeout(() => reject(new Error("Delta exchange timeout")), EXCHANGE_TIMEOUT_MS)
+            ),
+        ]).catch(err => {
+            log.warn(`[ContinuousForge] Delta exchange incomplete: ${err}`)
+        })
         return peerDeltas
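The per-call variant can be factored into a small helper that races each promise against an absolute deadline. A minimal sketch (withTimeout and the 2s figure are illustrative assumptions, not existing node APIs):

```typescript
// Race a promise against an absolute deadline. A hung promise yields the
// fallback instead of stalling the caller; the timer is cleared so a fast
// resolution doesn't leave a stray timeout keeping the process alive.
async function withTimeout<T>(p: Promise<T>, ms: number, fallback: T): Promise<T> {
    let timer: ReturnType<typeof setTimeout> | undefined
    const deadline = new Promise<T>(resolve => {
        timer = setTimeout(() => resolve(fallback), ms)
    })
    try {
        return await Promise.race([p, deadline])
    } finally {
        clearTimeout(timer)
    }
}

// Usage inside the peer map: a hung longCall yields null instead of
// blocking Promise.all, so the round returns with whatever deltas arrived.
// const response = await withTimeout(peer.longCall(...), 2000, null)
```

Resolving with a fallback (rather than rejecting) keeps the existing catch/log.warn path for real errors while letting timeouts degrade quietly to "no deltas from this peer".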

Comment on lines +280 to +284
if (response.result === 200 && response.response) {
    const data = response.response as { deltas?: Record<string, string> }
    if (data.deltas) {
        peerDeltas[peer.identity] = data.deltas
    }

⚠️ Potential issue | 🟡 Minor

Peer response structure is not validated.

The code trusts that response.response has the expected shape { deltas?: Record<string, string> }. A malicious or buggy peer could send arbitrary data, potentially causing unexpected behavior when iterating over deltas in recordDelta.

🔧 Proposed fix
                 if (response.result === 200 && response.response) {
                     const data = response.response as { deltas?: Record<string, string> }
-                    if (data.deltas) {
+                    if (data.deltas && typeof data.deltas === "object") {
                         peerDeltas[peer.identity] = data.deltas
                     }
                 }
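The check described above amounts to a user-defined type guard. A hedged sketch (isDeltaRecord is an illustrative name; the real validation could live wherever peer responses are parsed):

```typescript
// Narrow an untrusted value to Record<string, string>: a plain object whose
// values are all strings. Rejects null, arrays, and mixed-type payloads.
// (Object keys are always strings in JS, so only values need checking.)
function isDeltaRecord(value: unknown): value is Record<string, string> {
    if (typeof value !== "object" || value === null || Array.isArray(value)) {
        return false
    }
    return Object.values(value).every(v => typeof v === "string")
}

// Guarded version of the assignment sketched for continuousForge.ts:
// const data = response.response as { deltas?: unknown }
// if (isDeltaRecord(data.deltas)) {
//     peerDeltas[peer.identity] = data.deltas
// } else {
//     log.warn(`[ContinuousForge] Malformed deltas from ${peer.identity}, skipping`)
// }
```

After the guard, TypeScript narrows the value, so no cast is needed at the assignment site.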

Comment on lines +326 to +329
await Mempool.addTransaction({
    ...queriedTx,
    reference_block: validatedData.data.reference_block,
})

⚠️ Potential issue | 🟡 Minor

Missing error handling on fallback mempool insertion.

If Mempool.addTransaction throws or fails during the fallback path, the transaction is silently lost and the function still returns success: true. The default mempool insertion path (lines 408-442) handles errors and sets result.success = false on failure.

🛡️ Suggested fix to handle mempool errors
             if (!relaySuccess) {
                 // Fallback: add to local mempool so the TX is not lost
                 log.warn(
                     `[handleExecuteTransaction] Petri relay failed for ${queriedTx.hash}, adding to local mempool`,
                 )
-                await Mempool.addTransaction({
-                    ...queriedTx,
-                    reference_block: validatedData.data.reference_block,
-                })
+                try {
+                    const { error } = await Mempool.addTransaction({
+                        ...queriedTx,
+                        reference_block: validatedData.data.reference_block,
+                    })
+                    if (error) {
+                        log.error(`[handleExecuteTransaction] Fallback mempool add failed: ${error}`)
+                        return {
+                            success: false,
+                            response: { message: "Failed to add transaction to mempool" },
+                            extra: { error, routing: "petri" },
+                            require_reply: false,
+                        }
+                    }
+                } catch (e) {
+                    log.error(`[handleExecuteTransaction] Fallback mempool add threw: ${e}`)
+                    return {
+                        success: false,
+                        response: { message: "Failed to add transaction to mempool" },
+                        extra: { error: String(e), routing: "petri" },
+                        require_reply: false,
+                    }
+                }
             }
