DSperse Integration #168
Conversation
Walkthrough

This PR introduces TruthTorchLM demonstration notebooks with dynamic function discovery and example workflows, implements the DSperse proof handler for model_net deployment with a witness/proof/verification lifecycle, and adds supporting configuration, input schema, and documentation for the model_net deployment layer.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Session as VerifiedModelSession
    participant Handler as DsperseHandler
    participant DSperse as DSperse Runner
    participant FS as Filesystem
    rect rgb(200, 220, 255)
    Note over Session,FS: Input & Witness Generation
    Session->>Handler: gen_input_file(session)
    Handler->>FS: write input.json
    Session->>Handler: generate_witness(session)
    Handler->>Handler: _resolve_config()
    Handler->>Handler: _ensure_dsperse_available()
    Handler->>DSperse: run dslice
    DSperse->>FS: create run_{timestamp}/
    Handler->>Handler: _select_latest_run()
    Handler->>FS: read run results
    Handler-->>Session: return witness content
    end
    rect rgb(220, 255, 220)
    Note over Session,FS: Proof Generation
    Session->>Handler: gen_proof(session)
    Handler->>Handler: _ensure_dsperse_available()
    Handler->>Handler: _select_latest_run()
    Handler->>Handler: _locate_proof_json()
    Handler->>DSperse: run prover
    Handler->>FS: load proof.json & instances
    Handler-->>Session: return (proof_str, instances_str)
    end
    rect rgb(255, 230, 200)
    Note over Session,FS: Verification
    Session->>Handler: verify_proof(session, validator_inputs, proof)
    Handler->>Handler: _ensure_dsperse_available()
    Handler->>DSperse: run verifier
    DSperse-->>Handler: verification metrics
    Handler->>Handler: interpret results
    Handler-->>Session: return bool (success/failure)
    end
```
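For orientation, the flow above maps onto a handler with three entry points. Below is a minimal, runnable Python sketch of that shape; the method names come from the diagram, while the stub bodies, session handling, and return values are illustrative assumptions rather than the PR's actual implementation:

```python
# Sketch of the three-phase lifecycle from the diagram. All bodies are
# stand-in stubs; the real handler shells out to DSperse and reads run dirs.
class DsperseHandlerSketch:
    def _ensure_dsperse_available(self) -> None:
        # Real handler raises ImportError when the DSperse library is missing.
        pass

    def _select_latest_run(self) -> str:
        # Real handler picks the newest run_{timestamp}/ directory on disk.
        return "run_20240101_000000"

    def generate_witness(self, session) -> str:
        self._ensure_dsperse_available()
        run_dir = self._select_latest_run()  # created by the dslice run
        return f"witness content from {run_dir}"  # real handler reads run results

    def gen_proof(self, session) -> tuple[str, str]:
        self._ensure_dsperse_available()
        run_dir = self._select_latest_run()
        return f"proof from {run_dir}", "[]"  # (proof_str, instances_str)

    def verify_proof(self, session, validator_inputs, proof) -> bool:
        self._ensure_dsperse_available()
        return True  # real handler interprets DSperse verifier metrics


handler = DsperseHandlerSketch()
print(handler.gen_proof(session=None))  # ('proof from run_20240101_000000', '[]')
```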
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
```python
                    ve = entry.get("verification_execution", {})
                    if ve and (ve.get("verified") or ve.get("success") or ve.get("success") is True):
                        return True
            except Exception:
```
Check notice (Code scanning / CodeQL): Empty except

Copilot Autofix (AI, 15 days ago)
To fix the problem, the empty `except Exception:` block in `verify_proof` (line 138) should handle exceptions in a safer way. The best approach, while preserving the original intent (returning False when verification fails), is to log the exception using the existing logging system (`bt.logging.error`). This keeps the original flow while making unexpected exceptions visible during debugging. Only the block at line 138 in `verify_proof` needs to change; nothing new needs importing, because `bt.logging` is already available in the file.
```diff
@@ -135,8 +135,8 @@
             ve = entry.get("verification_execution", {})
             if ve and (ve.get("verified") or ve.get("success") or ve.get("success") is True):
                 return True
-        except Exception:
-            pass
+        except Exception as e:
+            bt.logging.error(f"[DSperse] Exception during fallback verification: {e}", exc_info=True)
         return False


     def aggregate_proofs(self, session: VerifiedModelSession, proofs: list[str]) -> tuple[str, float]:
```
```python
                candidate = os.path.join(run_dir, name, "proof.json")
                if os.path.exists(candidate):
                    return candidate
        except Exception:
```
Check notice (Code scanning / CodeQL): Empty except

Copilot Autofix (AI, 15 days ago)
The ideal fix is to handle the caught exception in a way that preserves debugging information without altering program flow. When an exception occurs during the directory scan, the intended behaviour is to fall back to the alternate search method, so re-raising is not appropriate. The best approach is therefore to log the exception using the standard logging mechanism (here, `bt.logging.error()` is available via the bittensor import), then continue as before. This makes exceptions discoverable via logs without changing control flow.

Make the following changes in neurons/execution_layer/proof_handlers/dsperse_handler.py:

- In the `except Exception:` block (line 206), replace `pass` with a call to `bt.logging.error()` that logs the exception with a descriptive message.
- Add exception details using `exc_info=True` to include the traceback.

No additional imports are needed, since logging is already available via the bittensor library.
```diff
@@ -203,8 +203,8 @@
             candidate = os.path.join(run_dir, name, "proof.json")
             if os.path.exists(candidate):
                 return candidate
-        except Exception:
-            pass
+        except Exception as e:
+            bt.logging.error(f"Exception searching for proof.json in {run_dir}: {e}", exc_info=True)
         # As a fallback, consult run_results.json for recorded proof paths
         rr = os.path.join(run_dir, "run_results.json")
         if os.path.exists(rr):
```
```python
                    path = pe.get("proof_file") or pe.get("proof_path")
                    if path and os.path.exists(path):
                        return path
        except Exception:
```
Check notice (Code scanning / CodeQL): Empty except

Copilot Autofix (AI, 15 days ago)
The best way to fix this issue is to avoid silently swallowing exceptions. Instead, log the error when an exception occurs so that developers and operators are aware of the nature and source of failures. The `import bittensor as bt` statement means you can use `bt.logging.warning` or `bt.logging.error` to log the problem. Add a message within the except block at line 220 (and ideally at 207 as well, as they are identical issues in the same function) describing what operation failed, including the exception details. This preserves debugging information without breaking the existing fallback logic and maintains functional parity.

Modify only the except blocks (lines 207 and 220) in neurons/execution_layer/proof_handlers/dsperse_handler.py within the `_locate_proof_json` function to log exceptions rather than pass silently.

No new imports are needed, as `bt` is already imported and `bt.logging.warning` is a standard logging call.
```diff
@@ -203,8 +203,8 @@
             candidate = os.path.join(run_dir, name, "proof.json")
             if os.path.exists(candidate):
                 return candidate
-        except Exception:
-            pass
+        except Exception as e:
+            bt.logging.warning(f"Exception during search for proof.json under {run_dir}: {e}")
         # As a fallback, consult run_results.json for recorded proof paths
         rr = os.path.join(run_dir, "run_results.json")
         if os.path.exists(rr):
@@ -217,8 +217,8 @@
                 path = pe.get("proof_file") or pe.get("proof_path")
                 if path and os.path.exists(path):
                     return path
-        except Exception:
-            pass
+        except Exception as e:
+            bt.logging.warning(f"Exception while reading run_results.json in {run_dir}: {e}")
         return None


     def _ensure_dsperse_available(self) -> None:
```
Actionable comments posted: 5
🧹 Nitpick comments (5)
docs/notebooks/TruthTorchLM_quickstart.ipynb (1)
86-97: Remove unused `json` import.

The `json` module is imported but never used in this cell or subsequent cells in this notebook. Apply this diff:

```diff
- "import textwrap, json\n",
+ "import textwrap\n",
```

docs/notebooks/TruthTorchLM_quickstart_one_command.ipynb (1)
89-94: Consider using `def` instead of lambda for generator functions.

Using `def` improves readability and debugging (named functions in tracebacks) and follows PEP 8 recommendations. Apply this diff:

```diff
- "gen_a = lambda prompt: _gen_llama(prompt, path_a)\n",
- "gen_b = lambda prompt: _gen_llama(prompt, path_b)\n",
+ "def gen_a(prompt): return _gen_llama(prompt, path_a)\n",
+ "def gen_b(prompt): return _gen_llama(prompt, path_b)\n",
```

neurons/execution_layer/proof_handlers/dsperse_handler.py (3)
198-222: Add logging for exception handling in `_locate_proof_json`.

The empty `except` blocks on lines 206 and 220 suppress all errors, making it difficult to diagnose issues with proof file discovery. Consider logging at trace/debug level:

```diff
     try:
         for name in sorted(os.listdir(run_dir)):
             if name.startswith("slice_"):
                 candidate = os.path.join(run_dir, name, "proof.json")
                 if os.path.exists(candidate):
                     return candidate
-    except Exception:
-        pass
+    except Exception as e:
+        bt.logging.trace(f"[DSperse] Error scanning slice dirs: {e}")
     # As a fallback, consult run_results.json for recorded proof paths
     rr = os.path.join(run_dir, "run_results.json")
     if os.path.exists(rr):
         try:
             with open(rr, "r", encoding="utf-8") as f:
                 run_results = json.load(f)
             for entry in run_results.get("execution_chain", {}).get("execution_results", []):
                 pe = entry.get("proof_execution", {})
                 # different keys observed: proof_file or proof_path
                 path = pe.get("proof_file") or pe.get("proof_path")
                 if path and os.path.exists(path):
                     return path
-        except Exception:
-            pass
+        except Exception as e:
+            bt.logging.trace(f"[DSperse] Error reading run_results.json: {e}")
     return None
```
108-113: Consider an underscore prefix for intentionally unused parameters.

The `validator_inputs` and `proof` parameters are unused but required by the base interface. Underscore prefixes (`_validator_inputs`, `_proof`) clearly signal intent and satisfy linters:

```diff
 def verify_proof(
     self,
     session: VerifiedModelSession,
-    validator_inputs: GenericInput,  # not used by DSperse verify
-    proof: dict | str,  # not used; verify reads from run_dir + dslice
+    _validator_inputs: GenericInput,  # not used by DSperse verify
+    _proof: dict | str,  # not used; verify reads from run_dir + dslice
 ) -> bool:
```
142-149: Clarify the meaning of the `0.0` return value.

The method returns `tuple[str, float]` where the float is always `0.0`. Document what this value represents (e.g., aggregation time, confidence score, or placeholder) for maintainability.
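A minimal docstring sketch of one way to address this; the method name and signature come from the review comment above, while the stated meaning of the float is an assumed placeholder that the author would need to confirm:

```python
import json


def aggregate_proofs(self, session, proofs: list[str]) -> tuple[str, float]:
    """Aggregate per-slice proofs into a single payload.

    Returns:
        (aggregated_proof_json, aggregation_metric): the float is currently
        always 0.0 -- documenting it as a placeholder (rather than a timing
        or confidence value) keeps the base-class contract explicit.
    """
    # Assumed aggregation: concatenate proofs as a JSON array.
    return json.dumps(proofs), 0.0
```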
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- docs/notebooks/TruthTorchLM_demo.ipynb (1 hunks)
- docs/notebooks/TruthTorchLM_quickstart.ipynb (1 hunks)
- docs/notebooks/TruthTorchLM_quickstart_one_command.ipynb (1 hunks)
- neurons/deployment_layer/model_net/README.md (1 hunks)
- neurons/deployment_layer/model_net/input.py (1 hunks)
- neurons/deployment_layer/model_net/metadata.json (1 hunks)
- neurons/execution_layer/proof_handlers/dsperse_handler.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
neurons/deployment_layer/model_net/input.py (3)
neurons/execution_layer/base_input.py (1)
- BaseInput (7-53)

neurons/execution_layer/input_registry.py (2)
- InputRegistry (6-48)
- register (12-19)

neurons/_validator/models/request_type.py (1)
- RequestType (4-14)
🪛 Flake8 (7.3.0)
docs/notebooks/TruthTorchLM_demo.ipynb
[error] 2-2: block comment should start with '# '
(E265)
docs/notebooks/TruthTorchLM_quickstart_one_command.ipynb
[error] 89-89: do not assign a lambda expression, use a def
(E731)
[error] 90-90: do not assign a lambda expression, use a def
(E731)
[error] 233-233: undefined name 'method_value'
(F821)
[error] 234-234: undefined name 'method_value'
(F821)
[error] 286-286: undefined name 'method_value'
(F821)
[error] 286-286: undefined name 'method_value'
(F821)
docs/notebooks/TruthTorchLM_quickstart.ipynb
[error] 1-1: 'json' imported but unused
(F401)
🪛 GitHub Check: CodeQL
neurons/execution_layer/proof_handlers/dsperse_handler.py
[notice] 138-138: Empty except
'except' clause does nothing but pass and there is no explanatory comment.
[notice] 206-206: Empty except
'except' clause does nothing but pass and there is no explanatory comment.
[notice] 220-220: Empty except
'except' clause does nothing but pass and there is no explanatory comment.
🪛 markdownlint-cli2 (0.18.1)
neurons/deployment_layer/model_net/README.md
24-24: Reference links and images should use a label that is defined
Missing link or image reference definition: ""dslicepath""
(MD052, reference-links-images)
25-25: Reference links and images should use a label that is defined
Missing link or image reference definition: ""runroot""
(MD052, reference-links-images)
🔇 Additional comments (19)
docs/notebooks/TruthTorchLM_demo.ipynb (6)
1-31: LGTM! The notebook introduction and installation cell are well-documented, with clear instructions for package installation and API key requirements.

43-81: LGTM! The package discovery logic is robust, with multiple candidate names, proper exception handling, and helpful error messages when the package isn't found.

143-166: LGTM! The `find_callable` helper provides a sensible approach, with exact matching first followed by fuzzy fallback, and `describe_signature` handles inspection gracefully.
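The helper's body isn't shown in this review; as a rough sketch of the "exact match first, then fuzzy fallback" approach it describes (all implementation details below are assumed):

```python
import inspect


def find_callable(module, candidates):
    # Pass 1: exact attribute-name match.
    for name in candidates:
        fn = getattr(module, name, None)
        if callable(fn):
            return name, fn
    # Pass 2: fuzzy fallback over public attributes (substring match).
    for attr in dir(module):
        if attr.startswith("_"):
            continue
        fn = getattr(module, attr)
        if callable(fn) and any(c.lower() in attr.lower() for c in candidates):
            return attr, fn
    return None, None


def describe_signature(fn):
    # Handle objects whose signature cannot be inspected (e.g., some builtins).
    try:
        return str(inspect.signature(fn))
    except (TypeError, ValueError):
        return "(signature unavailable)"


# Usage: the exact match resolves immediately.
name, fn = find_callable(inspect, ["signature"])
print(name, describe_signature(fn))
```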
186-265: LGTM! The multi-LLM truthfulness check implementation adapts well to varying function signatures, with proper fallback handling when API keys are missing.

280-335: LGTM! The long-form generation workflow follows the same robust pattern, with signature-based argument building and graceful degradation.

346-370: LGTM! The `to_jsonable` helper provides a reasonable fallback for non-serializable objects, and the result printing logic properly handles empty or None results.

docs/notebooks/TruthTorchLM_quickstart.ipynb (3)
112-158: LGTM! The `find_callable` and `extract_score` helper functions are well-implemented, with appropriate fallbacks and recursion for nested structures.

176-251: LGTM! The multi-LLM truthfulness check workflow has good error handling, with fallback to positional arguments, and includes a useful smoke test for score validation.

264-331: LGTM! The long-form generation workflow follows the established pattern, with signature-based argument construction and reasonable smoke tests for both text length and score validity.
docs/notebooks/TruthTorchLM_quickstart_one_command.ipynb (5)
19-26: LGTM! The `_ensure` helper for lightweight package installation is a clean pattern for notebook dependency management.

52-68: LGTM! The model download logic with fallback candidates is robust and provides helpful feedback when models are unavailable.

105-127: LGTM! The Transformers model loading with multiple candidates and graceful fallback to the model ID string is well-implemented.

253-276: LGTM! The `_extract_score` helper effectively handles various result shapes, with appropriate recursion for nested structures.
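The notebook's helper isn't reproduced in this review; the sketch below illustrates the recursive walk over nested result shapes that the comment describes, with the candidate key names being assumptions:

```python
def _extract_score(obj):
    # Recursively search nested dicts/lists for the first numeric score.
    if isinstance(obj, bool):
        return None  # bools are ints in Python; exclude them
    if isinstance(obj, (int, float)):
        return float(obj)
    if isinstance(obj, dict):
        # Candidate key names are assumed for illustration.
        for key in ("truth_value", "normalized_truth_value", "score"):
            v = obj.get(key)
            if isinstance(v, (int, float)) and not isinstance(v, bool):
                return float(v)
        children = obj.values()
    elif isinstance(obj, (list, tuple)):
        children = obj
    else:
        return None
    for child in children:
        score = _extract_score(child)
        if score is not None:
            return score
    return None


print(_extract_score({"results": [{"meta": {}, "truth_value": 0.82}]}))  # 0.82
```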
354-375: LGTM! The result normalization and smoke test logic provides helpful diagnostics and reasonable validation for the expected false claim.
neurons/deployment_layer/model_net/metadata.json (1)
1-14: Well-structured deployment metadata.

The configuration correctly defines the DSperse deployment layer, with appropriate fields for slice management and proof-system identification.
neurons/deployment_layer/model_net/README.md (1)
1-46: Comprehensive documentation for DSperse deployment.

The README clearly explains the deployment structure, expected directory layout, configuration requirements, and typical workflow. The static analysis warnings about reference links are false positives: lines 24-25 contain Python code examples, not Markdown link references.
neurons/deployment_layer/model_net/input.py (1)
27-34: Input generation logic is appropriate for benchmarking.

The `generate()` method correctly produces a 1×16 float vector wrapped under `"input_data"`. The comment appropriately notes that the length should be adjusted if the model expects different dimensions.
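Only the output shape (a 1×16 float vector under `"input_data"`) is stated in the review; here is a minimal standalone sketch of such a generator, with the value range and function wiring assumed:

```python
import random


def generate(input_len: int = 16) -> dict:
    # One batch row of `input_len` floats, wrapped under "input_data".
    # Adjust input_len if the model expects different dimensions.
    return {"input_data": [[random.random() for _ in range(input_len)]]}


print(generate())
```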
53-81: Witness generation logic is sound.

The method properly creates the run directory, executes the runner, and handles the case where no run directory is created. The flexible return type accommodates both content-based and path-based responses.

188-196: Run selection logic is correct.

Using modification time to select the latest run directory is a reasonable approach for identifying the most recent execution.
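A short sketch of mtime-based selection as described; the run-directory naming convention and missing-directory behavior below are assumptions:

```python
import os


def select_latest_run(run_root: str) -> str | None:
    # Choose the most recently modified run_* directory under run_root.
    candidates = [
        os.path.join(run_root, name)
        for name in os.listdir(run_root)
        if name.startswith("run_") and os.path.isdir(os.path.join(run_root, name))
    ]
    return max(candidates, key=os.path.getmtime) if candidates else None
```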
| " elif any(k in p for k in ['dataset', 'data', 'claims', 'questions', 'texts', 'samples']):\n", | ||
| " kwargs[pname] = [CLAIM]\n", | ||
| " # Method selectors\n", | ||
| " elif ('truth_methods' in p or 'methods' == p):\n", | ||
| " if tm_list:\n", | ||
| " kwargs[pname] = tm_list\n", | ||
| " elif 'method' in p and method_value is not None:\n", | ||
| " kwargs[pname] = method_value\n", |
Undefined variable method_value will cause NameError.
The variable method_value is referenced on lines 233-234 but is never defined anywhere in this notebook. This will cause a runtime error when the condition 'method' in p is true.
Either define method_value before use, or remove the condition if it's not needed:
- " elif 'method' in p and method_value is not None:\n",
- " kwargs[pname] = method_value\n",
+ " # Note: Single 'method' parameter handling removed - use 'truth_methods' instead\n",Alternatively, if method_value should be derived from tm_list:
+ " method_value = tm_list[0] if tm_list else None\n",
" for pname in sig.parameters:\n",📝 Committable suggestion
🧰 Tools
🪛 Flake8 (7.3.0)
[error] 233-233: undefined name 'method_value'
(F821)
[error] 234-234: undefined name 'method_value'
(F821)
🤖 Prompt for AI Agents
In docs/notebooks/TruthTorchLM_quickstart_one_command.ipynb, around lines 230 to 237, the variable method_value is referenced but never defined, which will raise a NameError. To fix, define method_value before this block (for example, derive it from tm_list or set a sensible default such as None or a first-method fallback), or remove the `'method' in p and method_value is not None` branch if it's unnecessary. Document any chosen default/derivation and ensure method_value is in scope when used.
| " if used_name == 'evaluate_truth_method':\n", | ||
| " patterns = []\n", | ||
| " # Build dataset candidates\n", | ||
| " ds1 = [CLAIM]\n", | ||
| " ds2 = [{'claim': CLAIM}]\n", | ||
| " ds3 = [{'question': CLAIM}]\n", | ||
| " ds4 = [{'text': CLAIM}]\n", | ||
| " tm_list = [method_value] if method_value is not None else []\n", |
Same undefined `method_value` issue in the `evaluate_truth_method` branch.

This block also references `method_value`, which is undefined and will cause a NameError when executed.

Apply this diff to use `tm_list` directly (which is already populated earlier):

```diff
- "        tm_list = [method_value] if method_value is not None else []\n",
+ "        # tm_list already populated from _select_truth_methods\n",
```

Or define `method_value` at the beginning of the else block (after line 181):

```python
method_value = tm_list[0] if tm_list else None
```

🧰 Tools
🪛 Flake8 (7.3.0)
[error] 286-286: undefined name 'method_value'
(F821)
[error] 286-286: undefined name 'method_value'
(F821)
🤖 Prompt for AI Agents
In docs/notebooks/TruthTorchLM_quickstart_one_command.ipynb, around lines 282 to 289, the branch for used_name == 'evaluate_truth_method' references an undefined variable method_value, which will raise a NameError. Fix this either by using the existing tm_list directly (instead of rebuilding it via `tm_list = [method_value] if method_value is not None else []`) or by defining method_value earlier (after line 181) as `method_value = tm_list[0] if tm_list else None` so subsequent code can safely reference it. Ensure tm_list is populated as intended before method_value is derived from it.
```python
    @staticmethod
    def validate(data: dict[str, object]) -> None:
        return NetInputSchema(**data)
```
validate() should return None per the base class contract.

The `validate()` method returns the `NetInputSchema` instance, but the base class signature specifies `-> None`. Additionally, the base class declares `validate(self, data)` with `self`, but this implementation uses `@staticmethod`. Either raise a `ValidationError` on failure and return `None` on success, or remove the return statement.

```diff
 @staticmethod
 def validate(data: dict[str, object]) -> None:
-    return NetInputSchema(**data)
+    NetInputSchema(**data)  # Raises ValidationError if invalid
```

🤖 Prompt for AI Agents
In neurons/deployment_layer/model_net/input.py, around lines 35 to 37, the validate method is currently a @staticmethod that returns a NetInputSchema instance, which violates the base class contract (it should be an instance method that returns None and raises on validation failure). Remove the @staticmethod decorator, change the signature to `def validate(self, data: dict[str, object]) -> None`, instantiate NetInputSchema(**data) (letting it raise ValidationError on failure), and ensure the method returns None (either by removing the return or adding an explicit return None).
```python
        except Exception:
            # Fallback: check if any entry has verification_execution.verified truthy
            try:
                for entry in results.get("execution_chain", {}).get("execution_results", []):
                    ve = entry.get("verification_execution", {})
                    if ve and (ve.get("verified") or ve.get("success") or ve.get("success") is True):
                        return True
            except Exception:
                pass
            return False
```
Empty except blocks silently swallow errors.
The nested exception handlers hide failures that could indicate bugs in the verification logic or unexpected response formats. At minimum, log the exceptions for debugging.
```diff
 except Exception:
     # Fallback: check if any entry has verification_execution.verified truthy
     try:
         for entry in results.get("execution_chain", {}).get("execution_results", []):
             ve = entry.get("verification_execution", {})
-            if ve and (ve.get("verified") or ve.get("success") or ve.get("success") is True):
+            if ve and (ve.get("verified") or ve.get("success")):
                 return True
-    except Exception:
-        pass
+    except Exception as e:
+        bt.logging.warning(f"[DSperse] Fallback verification parsing failed: {e}")
     return False
```

Also note: `ve.get("success") or ve.get("success") is True` is redundant; the second clause is subsumed by the first.
🧰 Tools
🪛 GitHub Check: CodeQL
[notice] 138-138: Empty except
'except' clause does nothing but pass and there is no explanatory comment.
🤖 Prompt for AI Agents
In neurons/execution_layer/proof_handlers/dsperse_handler.py, around lines 131 to 140, the nested empty except blocks silently swallow errors, and the condition `ve.get("success") or ve.get("success") is True` is redundant. Replace the bare excepts with explicit exception capture (`except Exception as e`) and log the exception (e.g., logger.exception or logging.exception) so that failures in verification parsing are visible, and simplify the truthiness check to `ve.get("verified") or ve.get("success")`. Optionally, narrow the caught exception types if you expect specific parsing issues.
```python
    def _ensure_dsperse_available(self) -> None:
        if DsperseRunner is None or DsperseProver is None or DsperseVerifier is None:
            raise ImportError(
                "DSperse library not available. Please ensure 'dsperse' is installed and importable."
            )
```
_ensure_dsperse_available() check is ineffective.
This check will never trigger because if the imports at lines 13-15 fail, the module itself won't load. The check against None won't catch the ImportError. Consider using a lazy-import pattern if you want to support optional DSperse availability.
```diff
+# At top of file, replace direct imports:
+DsperseRunner = None
+DsperseProver = None
+DsperseVerifier = None
+
+try:
+    from dsperse.src.run.runner import Runner as DsperseRunner
+    from dsperse.src.prover import Prover as DsperseProver
+    from dsperse.src.verifier import Verifier as DsperseVerifier
+except ImportError:
+    pass  # DSperse not available; _ensure_dsperse_available() will raise
```
+ pass # DSperse not available; _ensure_dsperse_available() will raiseCommittable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In neurons/execution_layer/proof_handlers/dsperse_handler.py, around lines 224 to 228, the check comparing DsperseRunner/DsperseProver/DsperseVerifier to None is ineffective because failed imports prevent the module from loading at all. Replace it with a lazy-import pattern: move the dsperse imports into _ensure_dsperse_available (or a module-level initializer), wrap them in try/except ImportError, and either set a boolean flag or raise a clear ImportError with guidance. Ensure subsequent code checks that flag or calls the initializer before referencing the DSperse classes.
Integrate DSperse into Subnet 2
Summary by CodeRabbit
Documentation
New Features