Inference statistics #26
base: main
Conversation
Walkthrough
A comprehensive update that adds batch query capabilities across the smart contracts (models, nodes, and tasks with history tracking), introduces a centralized address manager for chain-specific deployments, implements multi-backend caching infrastructure, extends the aggregator server with REST API endpoints for data retrieval, and renames the task completion function while enriching reward accumulation events with token metadata.
Sequence Diagram(s)
sequenceDiagram
actor Client
participant Server as Aggregator<br/>Server
participant Contract as Smart<br/>Contracts
participant Cache as Cache<br/>Backend
rect rgb(240, 248, 255)
Note over Server: API Request Flow with Caching
Client->>Server: GET /models
alt Cache Hit
Server->>Cache: get(cache_key)
Cache-->>Server: data
else Cache Miss
Server->>Contract: call getActiveModelsWithDetails()
Contract-->>Server: ModelDetails[]
Server->>Cache: set(cache_key, data, ttl)
Cache-->>Server: ✓
end
Server->>Server: _format_task_data()
Server-->>Client: models list (JSON)
end
rect rgb(240, 248, 255)
Note over Server: Task History Query with Filtering
Client->>Server: GET /operator-inference-history
Server->>Server: _get_filtered_task_ids(operator)
alt Cached Task History
Server->>Cache: get(history_key)
Cache-->>Server: [task_ids]
else Fresh Query
Server->>Contract: call getTasksByOperator()
Contract-->>Server: [task_ids]
Server->>Cache: set(history_key, data, ttl)
end
alt include_details=true
loop Per Task
Server->>Contract: call getTask(task_id)
Contract-->>Server: Task details
Server->>Server: _format_task_data()
end
end
Server-->>Client: paginated results (JSON)
end
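The cache-aside flow in the diagram above can be sketched in a few lines of Python. This is purely illustrative: the backend, cache key format, and TTL below are assumptions for the sketch, not the aggregator's actual implementation.
import json
import time

# Toy in-memory backend standing in for whatever cache backend is configured.
_cache: dict[str, tuple[float, str]] = {}

def cache_get(key: str):
    entry = _cache.get(key)
    if entry is None:
        return None
    expires_at, payload = entry
    if time.monotonic() > expires_at:   # an expired entry counts as a miss
        _cache.pop(key, None)
        return None
    return json.loads(payload)

def cache_set(key: str, value, ttl: int = 300) -> None:
    _cache[key] = (time.monotonic() + ttl, json.dumps(value))

def get_models(fetch_from_contract) -> list:
    # Cache-aside read: try the cache first, fall back to the contract call.
    key = "models:active"               # assumed key format
    cached = cache_get(key)
    if cached is not None:              # cache hit
        return cached
    models = fetch_from_contract()      # cache miss: query the chain
    cache_set(key, models, ttl=300)
    return models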
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.
🔧 Pylint (4.0.2): client/src/common/addresses.py, client/src/common/cache.py, client/src/aggregator/main.py
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
contracts/src/SertnServiceManager.sol (1)
64-69: Restore onlyAggregators authorization check
The modifier now reverts when msg.sender is an authorized aggregator and silently allows everyone else, breaking every aggregator-protected call. Flip the condition back so authorized aggregators pass and outsiders revert.
 modifier onlyAggregators() {
-    if (aggregators.contains(msg.sender)) {
+    if (!aggregators.contains(msg.sender)) {
         revert NotAggregator();
     }
     _;
 }
🧹 Nitpick comments (5)
contracts/anvil/deploy-sertn.sh (1)
17-34: Consider adding error handling for JSON formatting failures.
The script now correctly handles multiple chain-specific deployment files using nullglob and an array loop. However, the Python formatting step within the loop doesn't check for failures. Consider adding error handling to catch JSON formatting issues:
 for json_file in "${json_files[@]}"; do
-    python3 -c "
+    if ! python3 -c "
 import json
 with open('$json_file', 'r') as f:
     data = json.load(f)
 with open('$json_file', 'w') as f:
     json.dump(data, f, indent=4)
-"
+"
+    then
+        echo "Error: Failed to format contracts/$json_file"
+        exit 1
+    fi
     echo "Formatted contracts/$json_file"
 done
contracts/src/ModelRegistry.sol (1)
175-192: LGTM! Batch query function provides efficient model details retrieval.
The implementation correctly iterates active model IDs and populates comprehensive details for each. Hardcoding isActive: true is appropriate since all returned models are from the activeModelIds set.
For deployments with very large numbers of active models (100+), consider documenting gas costs or providing a paginated variant to avoid potential out-of-gas scenarios. However, for typical use cases, this implementation is perfectly adequate.
client/tests/conftest.py (1)
103-115: Replace fixed sleep with readiness check
time.sleep(5) makes the fixture nondeterministic: if the server takes longer than five seconds to bind (or fails fast), tests will flake. Uvicorn exposes a server.started flag precisely so callers can wait until the server is actually accepting connections when running in a background thread. Consider looping on server.started (or probing the socket) before yielding, so the fixture only proceeds once the server is ready. (bugfactory.io)
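A minimal sketch of that readiness check, assuming the fixture drives uvicorn.Server from a background thread (the function name, port, and timeout here are illustrative, not the project's conftest code):
import threading
import time

import uvicorn

def start_server_and_wait(app, host="127.0.0.1", port=8090, timeout=10.0):
    # Run uvicorn in a background thread instead of blocking the fixture.
    config = uvicorn.Config(app, host=host, port=port, log_level="warning")
    server = uvicorn.Server(config)
    thread = threading.Thread(target=server.run, daemon=True)
    thread.start()

    # Poll server.started rather than sleeping a fixed five seconds.
    deadline = time.monotonic() + timeout
    while not server.started:
        if time.monotonic() > deadline:
            raise RuntimeError("uvicorn server did not start within the timeout")
        time.sleep(0.05)
    return server, thread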
client/tests/test_caches.py (1)
1-14: Remove unused pytest import.
The pytest import on line 3 is flagged as unused by static analysis. If pytest fixtures aren't being used in this test class structure, the import can be safely removed. Apply this diff:
-import pytest
-
 from common.cache import (
client/tests/test_model_registry.py (1)
24-27: Consider adding error handling to the API helper.
The request_models_from_api() helper assumes the API is always available and returns 200. If the server isn't ready or encounters an error, the assertion will fail without clear context. Consider adding a more descriptive error message:
def request_models_from_api():
    try:
        resp = requests.get("http://localhost:8090/models", timeout=5)
        assert resp.status_code == 200, f"API returned status {resp.status_code}: {resp.text}"
        return {m["name"]: m for m in resp.json()}
    except requests.RequestException as e:
        raise AssertionError(f"Failed to connect to API: {e}")
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
client/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (40)
abis/ModelRegistry.abi.json (1 hunk), abis/SertnNodesManager.abi.json (1 hunk), abis/SertnServiceManager.abi.json (2 hunks), abis/SertnTaskManager.abi.json (2 hunks), client/pyproject.toml (1 hunk), client/src/aggregator/main.py (2 hunks), client/src/aggregator/server.py (3 hunks), client/src/avs_operator/main.py (2 hunks), client/src/common/addresses.py (1 hunk), client/src/common/cache.py (1 hunk), client/src/common/config.py (3 hunks), client/src/common/constants.py (2 hunks), client/src/common/eth.py (3 hunks), client/src/main.py (1 hunk), client/src/management/owner.py (2 hunks), client/tests/conftest.py (3 hunks), client/tests/test_caches.py (1 hunk), client/tests/test_model_registry.py (9 hunks), client/tests/test_nodes.py (1 hunk), client/tests/test_workflow.py (9 hunks), contracts/anvil/deploy-sertn.sh (1 hunk), contracts/deployments/sertnDeployment.json (0 hunks), contracts/deployments/sertnDeployment_31337.json (1 hunk), contracts/interfaces/IModelRegistry.sol (2 hunks), contracts/interfaces/ISertnNodesManager.sol (1 hunk), contracts/interfaces/ISertnServiceManager.sol (2 hunks), contracts/interfaces/ISertnTaskManager.sol (1 hunk), contracts/script/InitLocalWorkers.t.sol (3 hunks), contracts/script/SertnDeployer.s.sol (1 hunk), contracts/src/ModelRegistry.sol (1 hunk), contracts/src/SertnNodesManager.sol (1 hunk), contracts/src/SertnServiceManager.sol (2 hunks), contracts/src/SertnTaskManager.sol (6 hunks), contracts/test/ModelRegistry.t.sol (1 hunk), contracts/test/SertnNodesManager.t.sol (1 hunk), contracts/test/SertnServiceManager.t.sol (6 hunks), contracts/test/SertnTaskManager.t.sol (1 hunk), contracts/test/mockContracts/SertnNodesManagerMock.sol (1 hunk), contracts/test/mockContracts/SertnServiceManagerMock.sol (1 hunk), cspell.json (3 hunks)
💤 Files with no reviewable changes (1)
- contracts/deployments/sertnDeployment.json
🧰 Additional context used
🧬 Code graph analysis (13)
client/src/aggregator/main.py (1)
- client/src/common/addresses.py (1): init_addresses(36-66)
client/tests/conftest.py (5)
- client/src/aggregator/main.py (2): Aggregator(54-467), start_server(76-80)
- client/src/avs_operator/main.py (1): TaskOperator(40-227)
- client/src/management/owner.py (1): AvsOwner(17-48)
- client/src/aggregator/server.py (1): start(57-59)
- client/src/common/addresses.py (1): STRATEGIES_ADDRESSES(87-88)
client/src/avs_operator/main.py (2)
- client/src/common/eth.py (2): EthereumClient(42-220), load_ecdsa_private_key(28-39)
- client/src/common/addresses.py (1): init_addresses(36-66)
client/src/management/owner.py (1)
- client/src/common/addresses.py (1): init_addresses(36-66)
client/tests/test_nodes.py (4)
- client/tests/conftest.py (3): aggregator(63-72), operator(118-130), aggregator_server(76-114)
- client/src/avs_operator/main.py (1): TaskOperator(40-227)
- client/src/common/config.py (1): OperatorConfig(165-187)
- client/src/avs_operator/nodes.py (1): sync_nodes(49-83)
client/src/main.py (1)
- client/src/common/config.py (1): load_config(215-260)
client/tests/test_model_registry.py (2)
- client/tests/conftest.py (4): models_root(186-196), write_model(200-244), dummy_address(248-251), aggregator_server(76-114)
- client/src/models/execution_layer/model_registry.py (1): sync_models(149-240)
client/tests/test_workflow.py (5)
- client/tests/conftest.py (4): aggregator(63-72), aggregator_server(76-114), operator(118-130), strategies(255-260)
- client/src/aggregator/main.py (2): Aggregator(54-467), listen_for_events(269-290)
- client/src/avs_operator/main.py (1): listen_for_events(61-90)
- client/src/common/contract_constants.py (2): TaskStateMap(23-45), TaskStructMap(4-20)
- client/src/common/addresses.py (1): STRATEGIES_ADDRESSES(87-88)
client/src/common/cache.py (3)
- client/src/common/config.py (1): CacheConfig(76-122)
- client/src/common/logging.py (4): get_logger(41-45), debug(49-51), info(54-56), warning(59-61)
- client/tests/test_caches.py (1): method(146-148)
client/src/common/eth.py (1)
- client/src/common/addresses.py (3): SERVICE_MANAGER_ADDRESS(79-80), TASK_MANAGER_ADDRESS(75-76), ALLOCATION_MANAGER_ADDRESS(83-84)
client/tests/test_caches.py (2)
- client/src/common/cache.py (26): CloudflareKVCache(173-313), InMemoryCache(126-170), MemcachedCache(51-123), NoOpCache(316-335), cached_method(408-481), get_cache(338-405), exists(41-43), exists(102-112), exists(161-162), exists(270-281), exists(331-332), get(26-28), get(69-77), get(135-140), get(198-220), get(322-323), delete(36-38), delete(91-100), delete(152-159), delete(250-268), delete(328-329), clear(46-48), clear(114-123), clear(164-170), clear(283-313), clear(334-335)
- client/src/common/config.py (1): CacheConfig(76-122)
client/src/aggregator/server.py (6)
- client/tests/conftest.py (2): aggregator(63-72), operator(118-130)
- client/src/common/cache.py (8): CacheBackend(22-48), cached_method(408-481), get_cache(338-405), get(26-28), get(69-77), get(135-140), get(198-220), get(322-323)
- client/src/common/contract_constants.py (3): TaskStateMap(23-45), TaskStructMap(4-20), from_int(40-45)
- client/src/common/logging.py (1): get_logger(41-45)
- client/src/aggregator/main.py (1): Aggregator(54-467)
- client/src/common/addresses.py (3): STRATEGIES_ADDRESSES(87-88), ETH_STRATEGY_ADDRESSES(91-92), SERVICE_MANAGER_ADDRESS(79-80)
client/src/common/addresses.py (1)
- client/tests/conftest.py (1): strategies(255-260)
🪛 Flake8 (7.3.0)
client/tests/test_nodes.py
[error] 103-103: local variable 'expected_result' is assigned to but never used
(F841)
[error] 108-108: f-string is missing placeholders
(F541)
client/tests/test_caches.py
[error] 3-3: 'pytest' imported but unused
(F401)
🪛 Ruff (0.14.2)
client/tests/test_nodes.py
103-103: Local variable expected_result is assigned to but never used
Remove assignment to unused variable expected_result
(F841)
108-108: f-string without any placeholders
Remove extraneous f prefix
(F541)
🔇 Additional comments (30)
contracts/script/SertnDeployer.s.sol (1)
152-155: LGTM! Chain-specific deployment file naming.
The change to include chain ID in the deployment filename enables proper multi-chain deployment support. The string concatenation using Strings.toString(block.chainid) is correct and aligns with the broader refactoring to support chain-specific address management.
cspell.json (1)
10-10: LGTM! Valid technical terms added to dictionary.
The additions are legitimate technical terms: "bips" (basis points), "Memorystore" (caching service), and "urandom" (Python's secure random module).
Also applies to: 28-28, 43-43
client/src/common/constants.py (3)
5-6: LGTM! Well-defined address utilities.
The ZERO_ADDRESS and ADDRESS_REGEXP constants are correctly defined and provide useful utilities for address validation and zero-address checks.
42-46: LGTM! Clear documentation for OPERATOR_SET_ID.
The constant is well-documented with clear explanation of its purpose and future considerations regarding multiple AVSs.
1-46: Address constants migration verified successfully.
All references to the removed constants (TASK_MANAGER_ADDRESS, SERVICE_MANAGER_ADDRESS, ALLOCATION_MANAGER_ADDRESS, STRATEGIES_ADDRESSES, ETH_STRATEGY_ADDRESSES) have been properly updated to use the new addresses module. No stray references remain in the codebase.
contracts/deployments/sertnDeployment_31337.json (1)
1-12: LGTM! Chain-specific deployment configuration.
This deployment manifest for chain ID 31337 (Anvil local network) contains properly formatted Ethereum addresses and aligns with the new chain-specific deployment strategy introduced in this PR.
client/src/avs_operator/main.py (2)
15-15: LGTM! Import additions align with centralized address management.
The new imports support the refactored address initialization pattern and improve logging consistency.
Also applies to: 20-20
43-43: Consider error handling for address initialization.
The addresses.init_addresses(chain_id=self.config.chain_id) call can raise FileNotFoundError or KeyError if the deployment file is missing or malformed. While the exceptions are descriptive, consider whether TaskOperator initialization should catch and log these errors with additional context about the operator's configuration. The current implementation will propagate exceptions, which may be the desired behavior. Verify that calling code (e.g., run_operator in lines 31-36) handles initialization failures appropriately.
contracts/script/InitLocalWorkers.t.sol (3)
7-8: LGTM! Imports support chain-specific deployment loading.
The IERC20 and Strings imports enable the chain-ID-based file path construction introduced later in the file.
24-25: LGTM! Interface imports align with updated contract structure.
These imports provide access to the updated task manager and model registry interfaces.
104-110: LGTM! Chain-specific deployment files enable multi-chain support.
The dynamic file path construction using Strings.toString(block.chainid) aligns with the centralized address management pattern introduced across the codebase, matching the Python-side implementation.
contracts/test/mockContracts/SertnNodesManagerMock.sol (1)
205-235: LGTM! Mock implementation provides comprehensive test data.
The getAllNodesWithDetails() function returns well-structured mock data with a single node containing realistic field values. The nested array structure properly represents node details, supported models, and model allocations, enabling effective testing of consumers of this interface.
contracts/test/ModelRegistry.t.sol (2)
302-315: LGTM! Empty state test provides important edge case coverage.
The test correctly validates that getActiveModelsWithDetails() returns an empty array when all models are disabled, ensuring proper handling of the empty state.
317-382: LGTM! Comprehensive test validates model details and updates.
This test thoroughly validates:
- Creation of multiple models
- Updates to model properties (name, computeCost, requiredFUCUs, verificationStrategy)
- Accurate reflection of changes in getActiveModelsWithDetails() output
- Preservation of unchanged model properties
The test methodology using a loop to find and verify each model by ID is robust and maintainable.
client/src/main.py (1)
client/src/main.py (1)
60-60: LGTM! Config loading consolidation improves code maintainability.Moving the single
load_config()call before mode dispatch eliminates code duplication and ensures consistent configuration handling across operator and aggregator modes. This refactor properly applies the DRY principle.contracts/interfaces/IModelRegistry.sol (2)
15-26: LGTM! ModelDetails struct provides comprehensive model metadata.The struct includes all essential model attributes, enabling efficient batch retrieval of model information. The field selection is well-balanced between completeness and gas efficiency.
186-190: LGTM! Batch query function signature is well-documented.The function signature properly declares a view function returning a memory array, ensuring gas-free external calls. The NatSpec documentation clearly describes the purpose and return value.
contracts/test/SertnNodesManager.t.sol (1)
806-937: LGTM! Comprehensive test coverage for getAllNodesWithDetails.The test suite thoroughly validates the new batch query function across multiple scenarios:
- Empty state handling
- Active-only node filtering (excluding inactive nodes)
- Nodes without model support
- Multi-model configurations with proper ordering
The assertions correctly verify the uint256[8] packing format and the parallel arrays for supported models and allocations.
abis/ModelRegistry.abi.json (1)
99-148: LGTM! Well-structured ABI for batch model details retrieval.The new
getActiveModelsWithDetails()function provides a consolidated view of active models, returning a structured tuple array with all relevant model metadata (modelId, modelName, modelVerifier, verificationStrategy, computeCost, requiredFUCUs, isActive).abis/SertnNodesManager.abi.json (1)
80-102: LGTM! Efficient batch query structure for node details.The function returns three parallel arrays enabling efficient bulk retrieval of node information. The uint256[8] packing for node details is space-efficient, though consumers will need to properly unpack the packed data.
abis/SertnServiceManager.abi.json (2)
427-427: Function rename: taskCompleted → taskResolvedThe rename clarifies the semantic meaning of task completion. Note that this is a breaking change for any off-chain consumers of this contract interface.
559-564: Enhanced event payload with token metadata.Adding the token address to the
TaskRewardAccumulatedevent provides better transparency for reward tracking, enabling consumers to distinguish rewards by token type.contracts/interfaces/ISertnServiceManager.sol (2)
21-26: LGTM! Event signature enriched with token parameter.
The addition of the
IERC20 token parameter between fee and currentInterval enables better reward tracking and aligns with the implementation changes in SertnServiceManager.sol.
71-76: LGTM! Function rename reflects improved semantics.
The rename from
taskCompleted to taskResolved better expresses the intent of finalizing task resolution. The function signature remains otherwise unchanged.
contracts/test/SertnServiceManager.t.sol (3)
275-314: LGTM! Test properly updated for taskResolved and enriched event.
The test correctly:
- Renames the test function to match the new interface
- Updates the event expectation to include mockToken1 as the token parameter
- Calls taskResolved instead of taskCompleted
- Maintains proper test coverage with appropriate assertions
316-325: LGTM! Negative test case properly updated.
The test correctly verifies that non-task-manager callers are rejected when calling taskResolved.
432-437: LGTM! Integration test updated for new function name.
The complex workflow test properly uses taskResolved in the end-to-end scenario.
client/tests/test_caches.py (2)
16-133: LGTM! Comprehensive cache backend test coverage.
The test suite thoroughly validates all cache backend implementations:
- InMemoryCache: lifecycle operations (set, get, exists, delete)
- MemcachedCache: proper mocking and error handling (MemcacheError)
- CloudflareKVCache: HTTP-based operations with mocked requests and status code handling
- NoOpCache: ensures no-op behavior when caching is disabled
The tests appropriately use mocks to isolate external dependencies and verify error scenarios.
136-186: LGTM! Thorough validation of cached_method decorator.
The tests correctly verify the following (a small illustrative stand-in of the decorator follows the list below):
- Graceful handling when the cache attribute is absent
- Proper caching behavior with multiple arguments
- Cache key generation from method arguments
- Cache hit/miss tracking via call counts
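A self-contained toy stand-in for the behaviour these tests exercise (this is not the project's cached_method; the cache get/set interface and key format are assumed for illustration):
import functools

def cached_method(ttl: int = 300):
    # Toy method-caching decorator: uses self.cache when present, no-ops otherwise.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            cache = getattr(self, "cache", None)
            if cache is None:                      # graceful fallback without a cache
                return func(self, *args, **kwargs)
            key = f"{func.__name__}:{args}:{sorted(kwargs.items())}"
            hit = cache.get(key)
            if hit is not None:                    # treats None as a miss (toy simplification)
                return hit
            result = func(self, *args, **kwargs)
            cache.set(key, result, ttl)
            return result
        return wrapper
    return decorator

class Demo:
    def __init__(self, cache=None):
        self.cache = cache
        self.calls = 0

    @cached_method(ttl=60)
    def expensive(self, a, b):
        self.calls += 1                            # call count reveals hits vs. misses
        return a + b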
client/tests/test_model_registry.py (1)
59-163: LGTM! Enhanced test with API validation.
The test now validates both the blockchain state and the aggregator API responses at each lifecycle stage:
- Model creation
- Model updates (cost, fucus, verifier)
- Model deactivation
- Model reactivation
- Model removal
This dual validation ensures consistency between the on-chain registry and the off-chain API layer. The use of the aggregator_server fixture properly ensures the server is available during testing.
| "cachetools>=6.2.1", | ||
| "pymemcache>=4.0.0", | ||
| "starlette>=0.49.1", |
🧩 Analysis chain
Verify Starlette dependency necessity.
Starlette is already a transitive dependency of FastAPI (line 14). Adding it explicitly could be intentional for version pinning, but may also introduce version conflicts if FastAPI expects a different Starlette version.
Please verify:
- Whether this explicit dependency is necessary or if FastAPI's transitive dependency suffices
- That the minimum version (0.49.1) is compatible with the FastAPI version in use (0.115.12)
🏁 Script executed:
#!/bin/bash
# Check if FastAPI 0.115.12 has any Starlette version constraints
pip index versions fastapi | grep -A 10 "0.115.12" || echo "Checking FastAPI dependencies..."
python3 -c "
import urllib.request
import json
try:
    with urllib.request.urlopen('https://pypi.org/pypi/fastapi/0.115.12/json') as response:
        data = json.loads(response.read())
    deps = data.get('info', {}).get('requires_dist', [])
    starlette_deps = [d for d in deps if 'starlette' in d.lower()]
    if starlette_deps:
        print('FastAPI 0.115.12 Starlette requirements:')
        for dep in starlette_deps:
            print(f'  {dep}')
    else:
        print('No explicit Starlette dependency found')
except Exception as e:
    print(f'Error: {e}')
"
Length of output: 2168
I need to search more specifically for FastAPI 0.115.12's Starlette requirements to get a definitive answer.
Based on my verification, I've found a critical issue with the Starlette dependency constraint.
Critical: Starlette version 0.49.1 is incompatible with FastAPI 0.115.12.
FastAPI 0.115.12 requires starlette >=0.40.0,<0.47.0, but the added constraint specifies starlette>=0.49.1, which falls outside the upper bound. This will cause dependency resolution failures during installation.
This appears to be a typo. Verify the intended Starlette version—likely should be >=0.40.1 or similar to align with FastAPI's compatibility range.
🤖 Prompt for AI Agents
In client/pyproject.toml around lines 23 to 25, the Starlette constraint
"starlette>=0.49.1" is incompatible with FastAPI 0.115.12 (which requires
starlette >=0.40.0,<0.47.0); change the Starlette requirement to a compatible
range (for example "starlette>=0.40.0,<0.47.0" or the specific patch you intend
such as "starlette>=0.40.1,<0.47.0") so dependency resolution succeeds, then run
dependency resolution (pip/poetry) to verify no conflicts.
def run_aggregator(config: AggregatorConfig) -> None:
    addresses.init_addresses(chain_id=config.chain_id)
🧩 Analysis chain
Verify error handling for address initialization.
The address initialization is correctly placed before any component that might need addresses. However, ensure proper error handling for initialization failures.
Please verify:
- That config.chain_id is always set (not None or invalid)
- That init_addresses raises appropriate exceptions on failure (the FileNotFoundError from the addresses module is visible in snippets)
- Whether calling init_addresses multiple times (e.g., in tests) is safe
🏁 Script executed:
#!/bin/bash
# Check if init_addresses handles being called multiple times
ast-grep --pattern 'def init_addresses($$$) {
$$$
}'
# Check for any guard against multiple initialization
rg -n -A5 -B5 'def init_addresses' client/src/common/addresses.py
# Look for chain_id validation in config
rg -n -C3 'chain_id' client/src/common/config.py
Length of output: 776
I need to gather more detailed information about the init_addresses implementation and how it's being called.
Now I need to examine the actual implementation of init_addresses:
I need to examine the actual implementation more carefully. Let me search for the complete context:
Based on my verification through the script output and web research, I can now generate the rewritten review comment:
Add error handling for address initialization failures and verify initialization safety.
The config.chain_id parameter is validated with a constraint (ge=1) and has a default value (31337), so it should always be available. However, two concerns need addressing:
- Error handling: The init_addresses method attempts to load a deployment file based on chain_id. If the file doesn't exist (invalid chain, missing deployment), FileNotFoundError will be raised. Consider wrapping the call with appropriate exception handling or documenting expected exceptions.
Multiple initialization: Verify that calling
init_addressesmultiple times (e.g., in tests) is safe and idempotent. If state is stored, ensure it can be reinitialized without side effects.
🏁 Script executed:
#!/bin/bash
# Get full init_addresses implementation
rg -n -A20 'def init_addresses' client/src/common/addresses.py
# Check how addresses is imported and used in main.py
rg -n -B2 -A10 'addresses' client/src/aggregator/main.py | head -50
Length of output: 140
Let me search for these files in the repository:
🏁 Script executed:
#!/bin/bash
# Find addresses-related files
fd -name "addresses.py"
# Find aggregator main.py
fd -name "main.py" -path "*/aggregator/*"
# List directory structure to understand repo layout
find . -maxdepth 3 -type d -name "client" -o -name "aggregator" -o -name "common" 2>/dev/null | head -20
Length of output: 514
Let me search for these files with corrected paths:
🏁 Script executed:
#!/bin/bash
# Find addresses.py
fd addresses.py ./client/src/common
# Find main.py in aggregator
fd main.py ./client/src/aggregator
# Get full init_addresses implementation
cat ./client/src/common/addresses.py | grep -n -A 30 "def init_addresses"
# Check how addresses is used in main.py
cat ./client/src/aggregator/main.py | grep -n -B 3 -A 5 "addresses"
Length of output: 3322
Add guard clause to prevent multiple initializations.
Error handling for init_addresses is already well-implemented with specific FileNotFoundError and KeyError exceptions. However, the method sets an _initialized flag but lacks a guard clause to check it before re-initializing. This means multiple calls will re-execute the initialization logic, which is inefficient and could fail if called again after the deployment file becomes unavailable.
Suggested fix: Add a guard at the start of init_addresses:
def init_addresses(self, chain_id: int) -> None:
    if self._initialized:
        return
    # ... rest of implementation
The config.chain_id is always valid (has constraint ge=1 and default value 31337).
🤖 Prompt for AI Agents
In client/src/aggregator/main.py around line 35, init_addresses is called but
the init_addresses implementation lacks a guard to prevent re-initialization;
add an early-return guard at the top of init_addresses that checks
self._initialized and returns immediately if True, leaving the existing
FileNotFoundError/KeyError handling and setting of _initialized unchanged so
subsequent calls are no-ops.
def __init__(
    self, memcached_host, memcached_port, connect_timeout=2.0, timeout=2.0
):
    self.client = MemcacheClient(
        (memcached_host, memcached_port),
        connect_timeout=connect_timeout,
        timeout=timeout,
    )

    # Test connection with a simple operation
    self.client.version()
    logger.debug(
        f"Successfully connected to Memcached at {memcached_host}:{memcached_port}"
    )

def get(self, key: str) -> Optional[Any]:
    try:
        return self.client.get(key)
    except MemcacheError as e:
        logger.error(f"Memcached get error for key {key}: {e}")
        return None
    except Exception as e:
        logger.error(f"Unexpected error getting key {key}: {e}")
        return None

def set(self, key: str, value: Any, ttl: int = 300) -> bool:
    try:
        # Memcached expects TTL as expire time (0 = never expire, >0 = seconds)
        self.client.set(key=key, value=value, expire=ttl)
        return True
    except MemcacheError as e:
        logger.error(f"Memcached set error for key {key}: {e}")
        return False
    except Exception as e:
Memcached backend can’t cache complex objects
pymemcache.Client rejects non-bytes values unless you configure a serializer. Most of the new @cached_method call sites stash dicts/lists, so set(...) will raise MemcacheClientError, we log the failure, and Memcached never stores anything. Please wire in a serde (e.g. pickle) so we can round-trip Python objects.
-from pymemcache.client.base import Client as MemcacheClient
-from pymemcache.exceptions import MemcacheError
+from pymemcache.client.base import Client as MemcacheClient
+from pymemcache import serde
+from pymemcache.exceptions import MemcacheError
@@
self.client = MemcacheClient(
(memcached_host, memcached_port),
connect_timeout=connect_timeout,
timeout=timeout,
+ serde=serde.pickle_serde,
)
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In client/src/common/cache.py around lines 54 to 87, the Memcache client is
failing to store non-bytes Python objects because no serializer is configured;
update the client initialization to provide a serializer/deserializer (or
implement manual pickle.dumps/pickle.loads in set/get) so dicts/lists
round-trip: use pickle.dumps for values (and return a bytes marker flag) and
pickle.loads on get when the flag indicates pickled data, handle string/bytes
pass-through, import pickle, and add error handling/logging for
serialization/deserialization failures so we continue to return False/None on
errors and avoid unhandled exceptions.
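If the serde wiring above is not used, the manual pickle approach described in this prompt might look roughly like the following sketch (assuming values are bytes, str, or picklable Python objects; error handling is omitted for brevity):
import pickle

PICKLE_FLAG = 1  # arbitrary flag value used to mark pickled payloads

def serialize(key, value):
    # pymemcache passes (key, value) and expects (bytes, flags) back.
    if isinstance(value, bytes):
        return value, 0
    if isinstance(value, str):
        return value.encode("utf-8"), 0
    return pickle.dumps(value), PICKLE_FLAG

def deserialize(key, value, flags):
    if flags == PICKLE_FLAG:
        return pickle.loads(value)
    return value

# Wiring the pair into the client keeps get()/set() round-tripping dicts and lists:
# client = MemcacheClient((host, port), serializer=serialize, deserializer=deserialize)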
chain_id: int = 31337,
gas_strategy: GasStrategy = GasStrategy.STANDARD,
):
    addresses.init_addresses(chain_id=chain_id)
🧩 Analysis chain
Verify chain_id default and initialization behavior.
The default chain_id=31337 (Anvil local network) is added, and addresses are initialized in the constructor.
Please verify:
- Whether 31337 is an appropriate default for production use cases, or if callers should be required to provide this explicitly
- How multiple AvsOwner instances behave with the singleton addresses module: would creating multiple instances with different chain_ids cause conflicts?
🏁 Script executed:
#!/bin/bash
# Check if addresses is a singleton and how it handles re-initialization
ast-grep --pattern 'class AddressManager {
$$$
}'
# Look for usage patterns of AvsOwner to see if multiple instances are created
rg -n -C3 'AvsOwner\(' --type=py --glob='!**/test_*'
# Check test files for multiple instantiation patterns
rg -n -C3 'AvsOwner\(' --type=py --glob='**/test_*'
Length of output: 871
🏁 Script executed:
#!/bin/bash
# Find and display the AvsOwner class definition and __init__ method
ast-grep --pattern 'class AvsOwner {
$$$
__init__($$$) {
$$$
}
$$$
}'
# Also search with rg to see the actual class and init
rg -n -A20 'class AvsOwner' --type=py
Length of output: 1473
🏁 Script executed:
#!/bin/bash
# Check the addresses module to understand if it's a singleton
rg -n -B2 -A10 'def init_addresses' --type=py
# Look for the addresses module implementation
fd -e py -x grep -l 'init_addresses\|AddressManager' {} \;
Length of output: 1129
🏁 Script executed:
#!/bin/bash
# Check if commands.py creates multiple AvsOwner instances or just one
rg -n -B5 -A10 'avs_owner = AvsOwner' --type=py
Length of output: 1184
🏁 Script executed:
#!/bin/bash
# Check how addresses is imported and defined in owner.py
rg -n -B5 -A5 'import.*addresses' client/src/management/owner.py
# Check addresses module initialization - is it a singleton?
rg -n -B2 -A15 'class.*AddressManager\|^addresses\s*=' --type=py client/src/common/addresses.py | head -40
# Find all instances of addresses module to understand the singleton
rg -n 'addresses\s*=' --type=py client/src/common/addresses.py
Length of output: 263
🏁 Script executed:
#!/bin/bash
# Check what eth_rpc_url values are used in config to understand production networks
rg -n 'eth_rpc_url' --type=py client/src/management/commands.py -B2 -A2
# Look for config definitions
fd -e py -name 'config*' client/src/
Length of output: 629
🏁 Script executed:
#!/bin/bash
# Look for config class definition
rg -n 'class.*Config\|eth_rpc_url' --type=py client/src/common/ -A2
# Check aggregator/main.py to see pattern there (mentioned in original review)
rg -n -B5 -A10 'AvsOwner\|chain_id' --type=py client/src/aggregator/main.py | head -50
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check if there's any validation of chain_id vs eth_rpc_url
rg -n 'chain_id\|eth_rpc_url' --type=py client/src/common/eth.py -B2 -A2
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Search for where config_obj is created and what eth_rpc_url values it can have
rg -n 'config_obj\|eth_rpc_url' --type=py client/src/management/ -B3 -A1 | head -80
# Check how config is instantiated
rg -n 'class.*Config' --type=py client/src/
Length of output: 537
🏁 Script executed:
#!/bin/bash
# Look at the full context of commands.py to understand config setup
cat client/src/management/commands.py | head -100
Length of output: 2628
🏁 Script executed:
#!/bin/bash
# Check OwnerConfig definition to see what eth_rpc_url values are used
rg -n -A30 'class OwnerConfig' --type=py client/src/common/config.py
# Check if there's any chain_id in config
rg -n 'chain_id' --type=py client/src/common/config.py
Length of output: 1022
🏁 Script executed:
#!/bin/bash
# Check how load_config works
rg -n -B5 -A15 'def load_config' --type=py client/src/common/config.py
Length of output: 659
🏁 Script executed:
#!/bin/bash
# Check BaseConfig to see what fields it has including eth_rpc_url and chain_id
rg -n -B5 -A50 'class BaseConfig' --type=py client/src/common/config.py
Length of output: 2028
🏁 Script executed:
#!/bin/bash
# Check if OwnerConfig overrides chain_id or eth_rpc_url
cat client/src/common/config.py | grep -A50 'class OwnerConfig'
Length of output: 1610
🏁 Script executed:
#!/bin/bash
# Check ModelRegistry to see if it has same issue with chain_id
rg -n -B5 -A20 'class ModelRegistry' --type=py
# Verify if ModelRegistry __init__ also has chain_id parameter with default
rg -n -A15 'def __init__' --type=py client/src/models/execution_layer/model_registry.py | head -40
Length of output: 2774
🏁 Script executed:
#!/bin/bash
# Check the full commands.py to see if there are other AvsOwner creations or similar patterns
cat client/src/management/commands.py | grep -n 'AvsOwner\|ModelRegistry\|chain_id'
Length of output: 213
Pass config_obj.chain_id to AvsOwner constructor.
The chain_id parameter is not being passed from the loaded config to AvsOwner. Although config_obj.chain_id is populated from the config file (with default 31337), the call at line 147 ignores it, causing AvsOwner to always use the hardcoded default. This means if a user configures a different chain ID (e.g., Sepolia, Mainnet), it will be silently ignored and addresses.init_addresses() will load the wrong deployment file for the wrong network.
Fix: Change line 147 in client/src/management/commands.py from:
avs_owner = AvsOwner(private_key, config_obj.eth_rpc_url)
to:
avs_owner = AvsOwner(private_key, config_obj.eth_rpc_url, chain_id=config_obj.chain_id)
🤖 Prompt for AI Agents
In client/src/management/owner.py around lines 22-25 and in
client/src/management/commands.py at line 147, AvsOwner is constructed without
passing the loaded config's chain_id so addresses.init_addresses() always uses
the hardcoded default; modify the AvsOwner instantiation in commands.py to pass
chain_id=config_obj.chain_id (i.e., call AvsOwner(private_key,
config_obj.eth_rpc_url, chain_id=config_obj.chain_id)) so the configured chain
ID is propagated into owner initialization and address loading.
expected_result = get_expected_nodes(nodes_config)
resp = get_nodes_list()

assert len(resp) == 1, "Node count mismatch"
node = resp[0]
assert node["total_fucus"] == 1000, f"Total fucus mismatch"
Fix lint blockers in test_operator_nodes_update
ruff/Flake8 choke on the unused expected_result (F841) and the f-string without placeholders (F541), so the suite never reaches runtime. Reuse expected_result in the assertions and drop the redundant f-string.
- expected_result = get_expected_nodes(nodes_config)
- resp = get_nodes_list()
-
- assert len(resp) == 1, "Node count mismatch"
+ expected_result = get_expected_nodes(nodes_config)
+ resp = get_nodes_list()
+
+ assert len(resp) == len(expected_result), "Node count mismatch"
node = resp[0]
- assert node["total_fucus"] == 1000, f"Total fucus mismatch"
+ assert node["name"] in expected_result, "Unexpected node returned"
+ expected_node = expected_result[node["name"]]
+ assert node["total_fucus"] == expected_node["total_fucus"], "Total fucus mismatch"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
expected_result = get_expected_nodes(nodes_config)
resp = get_nodes_list()

assert len(resp) == len(expected_result), "Node count mismatch"
node = resp[0]
assert node["name"] in expected_result, "Unexpected node returned"
expected_node = expected_result[node["name"]]
assert node["total_fucus"] == expected_node["total_fucus"], "Total fucus mismatch"
🧰 Tools
🪛 Flake8 (7.3.0)
[error] 103-103: local variable 'expected_result' is assigned to but never used
(F841)
[error] 108-108: f-string is missing placeholders
(F541)
🪛 Ruff (0.14.2)
103-103: Local variable expected_result is assigned to but never used
Remove assignment to unused variable expected_result
(F841)
108-108: f-string without any placeholders
Remove extraneous f prefix
(F541)
🤖 Prompt for AI Agents
In client/tests/test_nodes.py around lines 103 to 109, the test defines
expected_result but doesn't use it (F841) and uses an f-string with no
placeholders (F541); change the assertions to compare against expected_result
(e.g., assert node["total_fucus"] == expected_result["total_fucus"]) and replace
the f-string assertion message with a normal string literal (e.g., "Total fucus
mismatch") so expected_result is used and the redundant f-string is removed.
function _updateTaskStateInHistory(
    uint256 taskId,
    TaskState oldState,
    TaskState newState
) internal {
    // Remove from old state array
    uint256 oldIndex = taskIndexInState[taskId][uint8(oldState)];
    uint256[] storage oldStateArray = tasksByState[oldState];
    uint256 lastTaskId = oldStateArray[oldStateArray.length - 1];

    // Move last element to the position of the element to remove
    oldStateArray[oldIndex] = lastTaskId;
    taskIndexInState[lastTaskId][uint8(oldState)] = oldIndex;

    // Remove last element
    oldStateArray.pop();

    // Add to new state array
    tasksByState[newState].push(taskId);
    taskIndexInState[taskId][uint8(newState)] = tasksByState[newState].length - 1;
}
Legacy tasks will revert on state transition
After the upgrade, tasksByState / taskIndexInState are empty for every pre-existing task. When one of those tasks completes or is challenged, _updateTaskStateInHistory immediately indexes into an empty array and the call reverts, bricking the flow for any in-flight work. We need to tolerate “untracked” tasks by short-circuiting the removal path (or seeding state during upgrade) before we can safely move them into the new bookkeeping.
function _updateTaskStateInHistory(
uint256 taskId,
TaskState oldState,
TaskState newState
) internal {
- // Remove from old state array
- uint256 oldIndex = taskIndexInState[taskId][uint8(oldState)];
- uint256[] storage oldStateArray = tasksByState[oldState];
- uint256 lastTaskId = oldStateArray[oldStateArray.length - 1];
+ uint256[] storage oldStateArray = tasksByState[oldState];
+ if (oldStateArray.length == 0) {
+ // Legacy task with no history entry — just add to the new state.
+ tasksByState[newState].push(taskId);
+ taskIndexInState[taskId][uint8(newState)] = tasksByState[newState].length - 1;
+ return;
+ }
+
+ uint256 oldIndex = taskIndexInState[taskId][uint8(oldState)];
+ if (oldIndex >= oldStateArray.length || oldStateArray[oldIndex] != taskId) {
+ // History is out of sync; fall back to appending.
+ tasksByState[newState].push(taskId);
+ taskIndexInState[taskId][uint8(newState)] = tasksByState[newState].length - 1;
+ return;
+ }
+
+ // Remove from old state array
+ uint256 lastTaskId = oldStateArray[oldStateArray.length - 1];
// Move last element to the position of the element to remove
oldStateArray[oldIndex] = lastTaskId;
    taskIndexInState[lastTaskId][uint8(oldState)] = oldIndex;
function taskResolved(address operator, uint256 fee, address strategy, uint32) external {
    emit TaskCompleted(operator, fee, strategy);
🧩 Analysis chain
Verify event naming consistency after function rename.
The function has been renamed from taskCompleted to taskResolved, but it still emits the TaskCompleted event. While this may be intentional for testing backward compatibility or transition scenarios, consider:
- If the actual contract also renamed its event, this mock should match
- If this discrepancy is intentional for testing, add a comment explaining why
Run this script to check if the actual SertnServiceManager contract uses TaskCompleted or TaskResolved event:
🏁 Script executed:
#!/bin/bash
# Search for TaskCompleted and TaskResolved event emissions in SertnServiceManager
echo "=== Searching for event definitions ==="
rg -n "event Task(Completed|Resolved)" --type=sol
echo ""
echo "=== Searching for event emissions ==="
rg -n "emit Task(Completed|Resolved)" --type=solLength of output: 134
🏁 Script executed:
#!/bin/bash
# Search for Solidity files and event definitions
echo "=== Finding Solidity files in contracts directory ==="
fd -e sol -x grep -l "TaskCompleted\|TaskResolved" {} \;
echo ""
echo "=== Direct search for TaskCompleted event definition ==="
rg "event TaskCompleted" -n
echo ""
echo "=== Direct search for TaskResolved event definition ==="
rg "event TaskResolved" -n
echo ""
echo "=== Search for emit statements ==="
rg "emit Task" -nLength of output: 1647
🏁 Script executed:
#!/bin/bash
# Find the actual function that taskResolved is mocking
echo "=== Search for function signatures in SertnServiceManager ==="
rg "function task(Resolved|Completed)" -n -A 2
echo ""
echo "=== Check the interface ISertnServiceManager if it exists ==="
fd -e sol -type f | xargs grep -l "interface.*SertnServiceManager"
echo ""
echo "=== Look at the complete taskResolved function in actual contract if it exists ==="
rg "function taskResolved" -n -A 5Length of output: 2623
🏁 Script executed:
#!/bin/bash
# Get the complete implementation of taskResolved in SertnServiceManager
echo "=== Complete taskResolved implementation in SertnServiceManager ==="
rg "function taskResolved" -n -A 20 contracts/src/SertnServiceManager.sol
echo ""
echo "=== Search for emit statements within SertnServiceManager.taskResolved context ==="
sed -n '216,250p' contracts/src/SertnServiceManager.sol
Length of output: 3016
Fix mock contract to emit the correct event.
The mock's taskResolved function emits TaskCompleted, but the actual SertnServiceManager.taskResolved implementation emits TaskRewardAccumulated(_operator, _fee, _strategy.underlyingToken(), currentInterval) instead. Update the mock to emit the correct event to accurately reflect production behavior.
🤖 Prompt for AI Agents
In contracts/test/mockContracts/SertnServiceManagerMock.sol around lines 23–24,
the mock currently emits TaskCompleted but should mirror the real implementation
by emitting TaskRewardAccumulated; change the emit call to emit
TaskRewardAccumulated(operator, fee, strategy.underlyingToken(), /* the uint32
interval param */) using the provided uint32 parameter as currentInterval and
ensure the event signature matches the real contract's parameters.