
Conversation

0pendansor (Collaborator) commented Nov 27, 2025

Integrate DSperse into Subnet 2

Summary by CodeRabbit

  • Documentation

    • Added comprehensive TruthTorchLM demonstration notebook with multi-LLM truthfulness checks
    • Added TruthTorchLM quickstart guide for easy setup and evaluation
    • Added offline quickstart option using local GGUF models for local testing
  • New Features

    • Added Model Net deployment layer with support for DSperse-based proofs
    • Introduced DSperse proof handler supporting proof generation, verification, and proof aggregation


coderabbitai bot commented Nov 27, 2025

Walkthrough

This PR introduces TruthTorchLM demonstration notebooks with dynamic function discovery and example workflows, implements the DSperse proof handler for model_net deployment with witness/proof/verification lifecycle, and adds supporting configuration, input schema, and documentation for the model_net deployment layer.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **TruthTorchLM documentation notebooks**<br>`docs/notebooks/TruthTorchLM_demo.ipynb`, `docs/notebooks/TruthTorchLM_quickstart.ipynb`, `docs/notebooks/TruthTorchLM_quickstart_one_command.ipynb` | Three Jupyter notebooks demonstrating TruthTorchLM functionality: multi-LLM truthfulness checks, long-form generation with truth values, function-discovery helpers (find_callable, describe_signature), signature inspection, provider-key gating, and fallback error handling. The quickstart_one_command notebook uses local GGUF models via llama.cpp for offline execution. |
| **Model Net deployment configuration**<br>`neurons/deployment_layer/model_net/metadata.json`, `neurons/deployment_layer/model_net/README.md` | Static metadata configuration and documentation for the DSperse-based model_net deployment, including the proof_system declaration, slice configuration, and typical workflow structure. |
| **Model Net input schema**<br>`neurons/deployment_layer/model_net/input.py` | Introduces NetInputSchema and a NetInput class registered with InputRegistry, providing 1×16 random float vector generation and validation for model_net deployment inputs. |
| **DSperse proof handler**<br>`neurons/execution_layer/proof_handlers/dsperse_handler.py` | New proof-system handler implementing DsperseConfig and a DsperseHandler subclass with input generation, witness generation, proof generation/verification, proof aggregation, run discovery, and DSperse availability checks. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Session as VerifiedModelSession
    participant Handler as DsperseHandler
    participant DSperse as DSperse Runner
    participant FS as Filesystem
    
    rect rgb(200, 220, 255)
    Note over Session,FS: Input & Witness Generation
    Session->>Handler: gen_input_file(session)
    Handler->>FS: write input.json
    
    Session->>Handler: generate_witness(session)
    Handler->>Handler: _resolve_config()
    Handler->>Handler: _ensure_dsperse_available()
    Handler->>DSperse: run dslice
    DSperse->>FS: create run_{timestamp}/
    Handler->>Handler: _select_latest_run()
    Handler->>FS: read run results
    Handler-->>Session: return witness content
    end
    
    rect rgb(220, 255, 220)
    Note over Session,FS: Proof Generation
    Session->>Handler: gen_proof(session)
    Handler->>Handler: _ensure_dsperse_available()
    Handler->>Handler: _select_latest_run()
    Handler->>Handler: _locate_proof_json()
    Handler->>DSperse: run prover
    Handler->>FS: load proof.json & instances
    Handler-->>Session: return (proof_str, instances_str)
    end
    
    rect rgb(255, 230, 200)
    Note over Session,FS: Verification
    Session->>Handler: verify_proof(session, validator_inputs, proof)
    Handler->>Handler: _ensure_dsperse_available()
    Handler->>DSperse: run verifier
    DSperse-->>Handler: verification metrics
    Handler->>Handler: interpret results
    Handler-->>Session: return bool (success/failure)
    end
```
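For orientation, here is a minimal sketch of the handler surface implied by the diagram above; the method names match the calls shown, but the signatures and docstrings are assumptions rather than the PR's actual code:

```python
class DsperseHandlerSketch:
    """Hypothetical skeleton inferred from the sequence diagram above."""

    def gen_input_file(self, session) -> None:
        """Write input.json into the session's storage directory."""

    def generate_witness(self, session) -> str:
        """Resolve config, run dslice, select the latest run_{timestamp}/
        directory, and return the recorded run results."""

    def gen_proof(self, session) -> tuple[str, str]:
        """Run the prover over the latest run and return
        (proof, instances) as strings."""

    def verify_proof(self, session, validator_inputs, proof) -> bool:
        """Run the verifier and interpret its metrics as pass/fail."""
```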

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Areas requiring extra attention:

  • dsperse_handler.py: Core proof handler logic with multiple interacting methods (config resolution, run discovery, proof JSON location), DSperse subprocess orchestration, and error-handling paths across the witness/proof/verification workflows.
  • TruthTorchLM notebooks: Dynamic function discovery, signature introspection, and provider-key gating across multiple candidate function names; verify fallback logic and error messaging consistency.
  • Model Net input schema and metadata: Ensure alignment with DSperse slice structure and session storage conventions referenced in handler.

Poem

🐰 New proofs bloom like carrots in the ground,
DSperse slices dance, witnesses abound!
TruthTorch shines bright with notebooks so keen,
Function discovery—the cleverest scene!
Verification paths hop, skip, and bound! ✨

Pre-merge checks

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 30.77%, below the required threshold of 80.00%. | Run `@coderabbitai generate docstrings` to improve coverage. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title 'DSperse Integration' directly aligns with the PR's core objective of integrating DSperse into Subnet 2, covering the handler implementation, deployment configuration, and demonstration notebooks. |


```python
                    ve = entry.get("verification_execution", {})
                    if ve and (ve.get("verified") or ve.get("success") or ve.get("success") is True):
                        return True
            except Exception:
```

Check notice (Code scanning / CodeQL): Empty except

'except' clause does nothing but pass and there is no explanatory comment.

Copilot Autofix (AI, 15 days ago)

To fix the problem, the empty except Exception: block in verify_proof (line 138) should be modified to handle exceptions in a safer way. The best way, while maintaining the original intent (that is, returning False when verification fails), is to log the exception using the existing logging system (bt.logging.error). This will preserve the original flow but ensure any unexpected exceptions are visible during debugging. Only change the block at line 138 in verify_proof. You do not need to import anything because bt.logging is already available in the file.


Suggested changeset 1: neurons/execution_layer/proof_handlers/dsperse_handler.py

Run the following command in your local git repository to apply this patch:

```sh
cat << 'EOF' | git apply
diff --git a/neurons/execution_layer/proof_handlers/dsperse_handler.py b/neurons/execution_layer/proof_handlers/dsperse_handler.py
--- a/neurons/execution_layer/proof_handlers/dsperse_handler.py
+++ b/neurons/execution_layer/proof_handlers/dsperse_handler.py
@@ -135,8 +135,8 @@
                     ve = entry.get("verification_execution", {})
                     if ve and (ve.get("verified") or ve.get("success") or ve.get("success") is True):
                         return True
-            except Exception:
-                pass
+            except Exception as e:
+                bt.logging.error(f"[DSperse] Exception during fallback verification: {e}", exc_info=True)
             return False
 
     def aggregate_proofs(self, session: VerifiedModelSession, proofs: list[str]) -> tuple[str, float]:
EOF
```
```python
                    candidate = os.path.join(run_dir, name, "proof.json")
                    if os.path.exists(candidate):
                        return candidate
        except Exception:
```

Check notice (Code scanning / CodeQL): Empty except

'except' clause does nothing but pass and there is no explanatory comment.

Copilot Autofix (AI, 15 days ago)

The ideal fix is to handle the caught exception in a way that preserves important debugging information without altering existing program flow. In this case, when an exception occurs during the directory scan, the intended behaviour is to fall back to the alternate search method, so raising the exception is not appropriate. Therefore, the best approach is to log the exception using the standard logging mechanism (in this code, bt.logging.error() appears to be available via the bittensor import), then continue as before. This makes exceptions discoverable via logs without changing control flow.

Make the following change in neurons/execution_layer/proof_handlers/dsperse_handler.py:

  • In the except Exception: block (line 206), replace pass with a call to bt.logging.error() that logs the exception with a descriptive message.
  • Add exception details using exc_info=True to include the traceback.

No additional imports are needed, since logging is already available via the bittensor library.


Suggested changeset 1: neurons/execution_layer/proof_handlers/dsperse_handler.py

Run the following command in your local git repository to apply this patch:

```sh
cat << 'EOF' | git apply
diff --git a/neurons/execution_layer/proof_handlers/dsperse_handler.py b/neurons/execution_layer/proof_handlers/dsperse_handler.py
--- a/neurons/execution_layer/proof_handlers/dsperse_handler.py
+++ b/neurons/execution_layer/proof_handlers/dsperse_handler.py
@@ -203,8 +203,8 @@
                     candidate = os.path.join(run_dir, name, "proof.json")
                     if os.path.exists(candidate):
                         return candidate
-        except Exception:
-            pass
+        except Exception as e:
+            bt.logging.error(f"Exception searching for proof.json in {run_dir}: {e}", exc_info=True)
         # As a fallback, consult run_results.json for recorded proof paths
         rr = os.path.join(run_dir, "run_results.json")
         if os.path.exists(rr):
EOF
```
```python
                    path = pe.get("proof_file") or pe.get("proof_path")
                    if path and os.path.exists(path):
                        return path
            except Exception:
```

Check notice (Code scanning / CodeQL): Empty except

'except' clause does nothing but pass and there is no explanatory comment.

Copilot Autofix (AI, 15 days ago)

The best way to fix this issue is to avoid silently swallowing exceptions. Instead, log the error when an exception occurs so that developers and operators are aware of the nature and source of failures. The bittensor as bt import suggests you can use bt.logging.warning or bt.logging.error to log the problem. You should add a message within the except block at line 220 (and ideally at 207 as well, as they are identical issues in the same function) describing what operation failed and including the exception details. This preserves debugging information without breaking the existing fallback logic, and maintains functional parity.

Modify only the except blocks (lines 207 and 220) in neurons/execution_layer/proof_handlers/dsperse_handler.py within the _locate_proof_json function to log exceptions rather than pass silently.

No new imports are needed, as bt is already imported, and bt.logging.warning is a standard logging call.

Suggested changeset 1: neurons/execution_layer/proof_handlers/dsperse_handler.py

Run the following command in your local git repository to apply this patch:

```sh
cat << 'EOF' | git apply
diff --git a/neurons/execution_layer/proof_handlers/dsperse_handler.py b/neurons/execution_layer/proof_handlers/dsperse_handler.py
--- a/neurons/execution_layer/proof_handlers/dsperse_handler.py
+++ b/neurons/execution_layer/proof_handlers/dsperse_handler.py
@@ -203,8 +203,8 @@
                     candidate = os.path.join(run_dir, name, "proof.json")
                     if os.path.exists(candidate):
                         return candidate
-        except Exception:
-            pass
+        except Exception as e:
+            bt.logging.warning(f"Exception during search for proof.json under {run_dir}: {e}")
         # As a fallback, consult run_results.json for recorded proof paths
         rr = os.path.join(run_dir, "run_results.json")
         if os.path.exists(rr):
@@ -217,8 +217,8 @@
                     path = pe.get("proof_file") or pe.get("proof_path")
                     if path and os.path.exists(path):
                         return path
-            except Exception:
-                pass
+            except Exception as e:
+                bt.logging.warning(f"Exception while reading run_results.json in {run_dir}: {e}")
         return None
 
     def _ensure_dsperse_available(self) -> None:
EOF
```
Copilot is powered by AI and may make mistakes. Always verify output.
coderabbitai bot left a comment

Actionable comments posted: 5

🧹 Nitpick comments (5)
docs/notebooks/TruthTorchLM_quickstart.ipynb (1)

86-97: Remove unused json import.

The json module is imported but never used in this cell or subsequent cells in this notebook.

Apply this diff:

```diff
-    "import textwrap, json\n",
+    "import textwrap\n",
```
docs/notebooks/TruthTorchLM_quickstart_one_command.ipynb (1)

89-94: Consider using def instead of lambda for generator functions.

Using def improves readability, debugging (named functions in tracebacks), and follows PEP 8 recommendations.

Apply this diff:

```diff
-    "gen_a = lambda prompt: _gen_llama(prompt, path_a)\n",
-    "gen_b = lambda prompt: _gen_llama(prompt, path_b)\n",
+    "def gen_a(prompt): return _gen_llama(prompt, path_a)\n",
+    "def gen_b(prompt): return _gen_llama(prompt, path_b)\n",
```
neurons/execution_layer/proof_handlers/dsperse_handler.py (3)

198-222: Add logging for exception handling in _locate_proof_json.

The empty except blocks on lines 206 and 220 suppress all errors, making it difficult to diagnose issues with proof file discovery. Consider logging at trace/debug level.

```diff
         try:
             for name in sorted(os.listdir(run_dir)):
                 if name.startswith("slice_"):
                     candidate = os.path.join(run_dir, name, "proof.json")
                     if os.path.exists(candidate):
                         return candidate
-        except Exception:
-            pass
+        except Exception as e:
+            bt.logging.trace(f"[DSperse] Error scanning slice dirs: {e}")
         # As a fallback, consult run_results.json for recorded proof paths
         rr = os.path.join(run_dir, "run_results.json")
         if os.path.exists(rr):
             try:
                 with open(rr, "r", encoding="utf-8") as f:
                     run_results = json.load(f)
                 for entry in run_results.get("execution_chain", {}).get("execution_results", []):
                     pe = entry.get("proof_execution", {})
                     # different keys observed: proof_file or proof_path
                     path = pe.get("proof_file") or pe.get("proof_path")
                     if path and os.path.exists(path):
                         return path
-            except Exception:
-                pass
+            except Exception as e:
+                bt.logging.trace(f"[DSperse] Error reading run_results.json: {e}")
         return None
```

108-113: Consider underscore prefix for intentionally unused parameters.

The validator_inputs and proof parameters are unused but required by the base interface. Using underscore prefixes (_validator_inputs, _proof) clearly signals intent and satisfies linters.

```diff
     def verify_proof(
         self,
         session: VerifiedModelSession,
-        validator_inputs: GenericInput,  # not used by DSperse verify
-        proof: dict | str,               # not used; verify reads from run_dir + dslice
+        _validator_inputs: GenericInput,  # not used by DSperse verify
+        _proof: dict | str,               # not used; verify reads from run_dir + dslice
     ) -> bool:
```

142-149: Clarify the meaning of the 0.0 return value.

The method returns tuple[str, float] where the float is always 0.0. Document what this value represents (e.g., aggregation time, confidence score, or placeholder) for maintainability.
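A hedged sketch of how that documentation might read; the stated meaning of the float is an assumption to be confirmed by the author, not something established by the PR:

```python
def aggregate_proofs(self, session, proofs: list[str]) -> tuple[str, float]:
    """Aggregate per-slice proofs into a single artifact.

    Returns:
        (aggregated_proof, score), where the float is currently a
        placeholder fixed at 0.0 (assumption: reserved for a future
        aggregation metric rather than a meaningful value today).
    """
    ...
```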

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 11f1b83 and dcdd183.

📒 Files selected for processing (7)
  • docs/notebooks/TruthTorchLM_demo.ipynb (1 hunks)
  • docs/notebooks/TruthTorchLM_quickstart.ipynb (1 hunks)
  • docs/notebooks/TruthTorchLM_quickstart_one_command.ipynb (1 hunks)
  • neurons/deployment_layer/model_net/README.md (1 hunks)
  • neurons/deployment_layer/model_net/input.py (1 hunks)
  • neurons/deployment_layer/model_net/metadata.json (1 hunks)
  • neurons/execution_layer/proof_handlers/dsperse_handler.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
neurons/deployment_layer/model_net/input.py (3)
neurons/execution_layer/base_input.py (1)
  • BaseInput (7-53)
neurons/execution_layer/input_registry.py (2)
  • InputRegistry (6-48)
  • register (12-19)
neurons/_validator/models/request_type.py (1)
  • RequestType (4-14)
🪛 Flake8 (7.3.0)

docs/notebooks/TruthTorchLM_demo.ipynb
  • [error] 2-2: block comment should start with '# ' (E265)

docs/notebooks/TruthTorchLM_quickstart_one_command.ipynb
  • [error] 89-89: do not assign a lambda expression, use a def (E731)
  • [error] 90-90: do not assign a lambda expression, use a def (E731)
  • [error] 233-233: undefined name 'method_value' (F821)
  • [error] 234-234: undefined name 'method_value' (F821)
  • [error] 286-286: undefined name 'method_value' (F821)
  • [error] 286-286: undefined name 'method_value' (F821)

docs/notebooks/TruthTorchLM_quickstart.ipynb
  • [error] 1-1: 'json' imported but unused (F401)

🪛 GitHub Check: CodeQL

neurons/execution_layer/proof_handlers/dsperse_handler.py
  • [notice] 138-138: Empty except: 'except' clause does nothing but pass and there is no explanatory comment.
  • [notice] 206-206: Empty except: 'except' clause does nothing but pass and there is no explanatory comment.
  • [notice] 220-220: Empty except: 'except' clause does nothing but pass and there is no explanatory comment.

🪛 markdownlint-cli2 (0.18.1)

neurons/deployment_layer/model_net/README.md
  • 24-24: MD052 (reference-links-images): reference links and images should use a label that is defined; missing definition for "dslicepath"
  • 25-25: MD052 (reference-links-images): reference links and images should use a label that is defined; missing definition for "runroot"

🔇 Additional comments (19)
docs/notebooks/TruthTorchLM_demo.ipynb (6)

1-31: LGTM!

The notebook introduction and installation cell are well-documented with clear instructions for package installation and API key requirements.


43-81: LGTM!

The package discovery logic is robust with multiple candidate names, proper exception handling, and helpful error messages when the package isn't found.


143-166: LGTM!

The find_callable helper provides a sensible approach with exact matching first followed by fuzzy fallback, and describe_signature handles inspection gracefully.
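For readers skimming the review, a minimal sketch of that pattern (exact match first, then fuzzy substring fallback); this is an illustration under assumed names, not the notebook's actual code:

```python
import inspect

def find_callable(module, candidates):
    """Return (name, fn) for the first matching callable in `module`."""
    # Exact-name match first.
    for name in candidates:
        fn = getattr(module, name, None)
        if callable(fn):
            return name, fn
    # Fuzzy fallback: any public callable whose name contains a candidate.
    for attr in dir(module):
        if attr.startswith("_"):
            continue
        fn = getattr(module, attr)
        if callable(fn) and any(c in attr for c in candidates):
            return attr, fn
    return None, None

def describe_signature(fn):
    """Best-effort signature string; some builtins have no signature."""
    try:
        return str(inspect.signature(fn))
    except (TypeError, ValueError):
        return "<signature unavailable>"
```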


186-265: LGTM!

The multi-LLM truthfulness check implementation adapts well to varying function signatures with proper fallback handling when API keys are missing.


280-335: LGTM!

The long-form generation workflow follows the same robust pattern with signature-based argument building and graceful degradation.


346-370: LGTM!

The to_jsonable helper provides a reasonable fallback for non-serializable objects, and the result printing logic properly handles empty or None results.

docs/notebooks/TruthTorchLM_quickstart.ipynb (3)

112-158: LGTM!

The find_callable and extract_score helper functions are well-implemented with appropriate fallbacks and recursion for nested structures.
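A sketch of the recursive shape such an extract_score helper typically has; the key names below are assumptions, and the notebook's actual keys may differ:

```python
def extract_score(result):
    """Recursively search common result shapes for a numeric truth score."""
    if isinstance(result, bool):  # bools are ints in Python; skip them
        return None
    if isinstance(result, (int, float)):
        return float(result)
    if isinstance(result, dict):
        # Assumed key names for illustration only.
        for key in ("truth_value", "score", "normalized_truth_value"):
            if key in result:
                found = extract_score(result[key])
                if found is not None:
                    return found
        for value in result.values():
            found = extract_score(value)
            if found is not None:
                return found
    if isinstance(result, (list, tuple)):
        for item in result:
            found = extract_score(item)
            if found is not None:
                return found
    return None
```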


176-251: LGTM!

The multi-LLM truthfulness check workflow has good error handling with fallback to positional arguments and includes a useful smoke test for score validation.


264-331: LGTM!

The long-form generation workflow follows the established pattern with signature-based argument construction and reasonable smoke tests for both text length and score validity.

docs/notebooks/TruthTorchLM_quickstart_one_command.ipynb (5)

19-26: LGTM!

The _ensure helper for lightweight package installation is a clean pattern for notebook dependency management.
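The pattern presumably looks something like this (a sketch; the notebook's exact arguments may differ):

```python
import importlib
import subprocess
import sys

def _ensure(package, module_name=None):
    """Install `package` via pip only if its module can't be imported."""
    name = module_name or package
    try:
        importlib.import_module(name)
    except ImportError:
        subprocess.check_call([sys.executable, "-m", "pip", "install", package])
```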


52-68: LGTM!

The model download logic with fallback candidates is robust and provides helpful feedback when models are unavailable.


105-127: LGTM!

The Transformers model loading with multiple candidates and graceful fallback to model ID string is well-implemented.


253-276: LGTM!

The _extract_score helper effectively handles various result shapes with appropriate recursion for nested structures.


354-375: LGTM!

The result normalization and smoke test logic provides helpful diagnostics and reasonable validation for the expected false claim.

neurons/deployment_layer/model_net/metadata.json (1)

1-14: Well-structured deployment metadata.

The configuration correctly defines the DSperse deployment layer with appropriate fields for slice management and proof system identification.
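A hedged sketch of the shape such a metadata file might take, shown here as a Python dict; the keys are inferred only from the fields this review mentions (proof_system) and the ["dslicepath"]/["runroot"] lookups the markdownlint warnings point at, so the actual file may differ:

```python
# Hypothetical contents of neurons/deployment_layer/model_net/metadata.json,
# inferred from the review; actual keys and values may differ.
metadata = {
    "proof_system": "DSPERSE",                   # name/case is an assumption
    "dslicepath": "slices/model_net.dslice",     # assumed path
    "runroot": "runs/",                          # assumed run directory root
}
```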

neurons/deployment_layer/model_net/README.md (1)

1-46: Comprehensive documentation for DSperse deployment.

The README clearly explains the deployment structure, expected directory layout, configuration requirements, and typical workflow. The static analysis warnings about reference links are false positives: lines 24-25 contain Python code examples, not Markdown link references, as the snippet below illustrates.
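Bracketed quoted strings in Python read to markdownlint's MD052 rule like undefined reference-link labels; hypothetical lookups like these would trigger exactly the warnings above:

```python
# Plain dict indexing, not Markdown reference links:
dslice_path = config["dslicepath"]
run_root = config["runroot"]
```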

neurons/deployment_layer/model_net/input.py (1)

27-34: Input generation logic is appropriate for benchmarking.

The generate() method correctly produces a 1×16 float vector wrapped under "input_data". The comment appropriately notes that the length should be adjusted if the model expects different dimensions.
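A minimal sketch consistent with that description; the class and method below are illustrative stand-ins, not the PR's code:

```python
import random

class NetInputSketch:
    """Illustrative stand-in for NetInput's generate()."""

    @staticmethod
    def generate() -> dict:
        # 1x16 random float vector under "input_data"; adjust the length
        # if the model expects different input dimensions.
        return {"input_data": [[random.random() for _ in range(16)]]}
```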

neurons/execution_layer/proof_handlers/dsperse_handler.py (2)

53-81: Witness generation logic is sound.

The method properly creates the run directory, executes the runner, and handles the case where no run directory is created. The flexible return type accommodates both content and path-based responses.


188-196: Run selection logic is correct.

Using modification time to select the latest run directory is a reasonable approach for identifying the most recent execution.
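A sketch of mtime-based selection, under the assumptions visible in the diagram (run directories named run_{timestamp} beneath a common run root):

```python
import os

def select_latest_run(run_root: str) -> str | None:
    """Return the most recently modified run_* directory, or None."""
    candidates = [
        os.path.join(run_root, name)
        for name in os.listdir(run_root)
        if name.startswith("run_") and os.path.isdir(os.path.join(run_root, name))
    ]
    return max(candidates, key=os.path.getmtime) if candidates else None
```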

Comment on lines +230 to +237
" elif any(k in p for k in ['dataset', 'data', 'claims', 'questions', 'texts', 'samples']):\n",
" kwargs[pname] = [CLAIM]\n",
" # Method selectors\n",
" elif ('truth_methods' in p or 'methods' == p):\n",
" if tm_list:\n",
" kwargs[pname] = tm_list\n",
" elif 'method' in p and method_value is not None:\n",
" kwargs[pname] = method_value\n",

⚠️ Potential issue | 🔴 Critical

Undefined variable method_value will cause NameError.

The variable method_value is referenced on lines 233-234 but is never defined anywhere in this notebook. This will cause a runtime error when the condition 'method' in p is true.

Either define method_value before use, or remove the condition if it's not needed:

```diff
-    "            elif 'method' in p and method_value is not None:\n",
-    "                kwargs[pname] = method_value\n",
+    "            # Note: Single 'method' parameter handling removed - use 'truth_methods' instead\n",
```

Alternatively, if method_value should be derived from tm_list:

```diff
+    "        method_value = tm_list[0] if tm_list else None\n",
     "        for pname in sig.parameters:\n",
```
🧰 Tools

🪛 Flake8 (7.3.0)
  • [error] 233-233: undefined name 'method_value' (F821)
  • [error] 234-234: undefined name 'method_value' (F821)

🤖 Prompt for AI Agents
In docs/notebooks/TruthTorchLM_quickstart_one_command.ipynb around lines 230 to
237, the variable method_value is referenced but never defined which will raise
a NameError; to fix, define method_value before this block (for example derive
it from tm_list or set a sensible default such as None or a first-method
fallback) or remove the `'method' in p and method_value is not None` branch if
it’s unnecessary; ensure any chosen default/derivation is documented and that
method_value is in scope when used.

Comment on lines +282 to +289
" if used_name == 'evaluate_truth_method':\n",
" patterns = []\n",
" # Build dataset candidates\n",
" ds1 = [CLAIM]\n",
" ds2 = [{'claim': CLAIM}]\n",
" ds3 = [{'question': CLAIM}]\n",
" ds4 = [{'text': CLAIM}]\n",
" tm_list = [method_value] if method_value is not None else []\n",

⚠️ Potential issue | 🔴 Critical

Same undefined method_value issue in evaluate_truth_method branch.

This block also references method_value which is undefined, and will cause a NameError when executed.

Apply this diff to use tm_list directly (which is already populated earlier):

```diff
-    "        tm_list = [method_value] if method_value is not None else []\n",
+    "        # tm_list already populated from _select_truth_methods\n",
```

Or define method_value at the beginning of the else block (after line 181):

```python
method_value = tm_list[0] if tm_list else None
```
🧰 Tools

🪛 Flake8 (7.3.0)
  • [error] 286-286: undefined name 'method_value' (F821)
  • [error] 286-286: undefined name 'method_value' (F821)

🤖 Prompt for AI Agents
In docs/notebooks/TruthTorchLM_quickstart_one_command.ipynb around lines 282 to
289, the branch for used_name == 'evaluate_truth_method' references an undefined
variable method_value which will raise a NameError; fix this by either replacing
the usage with the existing tm_list (e.g., use tm_list directly to produce
tm_list = [method_value] if method_value is not None else [] -> instead build
tm_list from existing value) or define method_value earlier (after line 181)
with method_value = tm_list[0] if tm_list else None so the subsequent code can
safely reference it; ensure tm_list is populated as intended before using
method_value.

Comment on lines +35 to +37
```python
    @staticmethod
    def validate(data: dict[str, object]) -> None:
        return NetInputSchema(**data)
```

⚠️ Potential issue | 🟡 Minor

validate() should return None per the base class contract.

The validate() method returns the NetInputSchema instance, but the base class signature specifies -> None. Additionally, the base class declares validate(self, data) with self, but this implementation uses @staticmethod. Either raise a ValidationError on failure and return None on success, or remove the return statement.

```diff
     @staticmethod
     def validate(data: dict[str, object]) -> None:
-        return NetInputSchema(**data)
+        NetInputSchema(**data)  # Raises ValidationError if invalid
```
🤖 Prompt for AI Agents
In neurons/deployment_layer/model_net/input.py around lines 35 to 37, the
validate method currently is a @staticmethod that returns a NetInputSchema
instance, which violates the base class contract (should be an instance method
that returns None and raises on validation failure). Remove the @staticmethod
decorator, change the signature to def validate(self, data: dict[str, object])
-> None, instantiate NetInputSchema(**data) (letting it raise ValidationError on
failure) and ensure the method returns None (either by removing the return or
adding an explicit return None).

Comment on lines +131 to +140
```python
        except Exception:
            # Fallback: check if any entry has verification_execution.verified truthy
            try:
                for entry in results.get("execution_chain", {}).get("execution_results", []):
                    ve = entry.get("verification_execution", {})
                    if ve and (ve.get("verified") or ve.get("success") or ve.get("success") is True):
                        return True
            except Exception:
                pass
            return False
```

⚠️ Potential issue | 🟡 Minor

Empty except blocks silently swallow errors.

The nested exception handlers hide failures that could indicate bugs in the verification logic or unexpected response formats. At minimum, log the exceptions for debugging.

```diff
         except Exception:
             # Fallback: check if any entry has verification_execution.verified truthy
             try:
                 for entry in results.get("execution_chain", {}).get("execution_results", []):
                     ve = entry.get("verification_execution", {})
-                    if ve and (ve.get("verified") or ve.get("success") or ve.get("success") is True):
+                    if ve and (ve.get("verified") or ve.get("success")):
                         return True
-            except Exception:
-                pass
+            except Exception as e:
+                bt.logging.warning(f"[DSperse] Fallback verification parsing failed: {e}")
             return False
```

Also note: `ve.get("success") or ve.get("success") is True` is redundant; the second clause is subsumed by the first.

🧰 Tools

🪛 GitHub Check: CodeQL
  • [notice] 138-138: Empty except: 'except' clause does nothing but pass and there is no explanatory comment.
🤖 Prompt for AI Agents
In neurons/execution_layer/proof_handlers/dsperse_handler.py around lines 131 to
140, the nested empty except blocks silently swallow errors and the condition
ve.get("success") or ve.get("success") is True is redundant; replace the bare
excepts with explicit exception capture (except Exception as e) and log the
exception (e.g., logger.exception or logging.exception) so failures in
verification parsing are visible, and simplify the truthiness check to use
ve.get("verified") or ve.get("success") only; optionally narrow the caught
exception types if you expect specific parsing issues.

Comment on lines +224 to +228
```python
    def _ensure_dsperse_available(self) -> None:
        if DsperseRunner is None or DsperseProver is None or DsperseVerifier is None:
            raise ImportError(
                "DSperse library not available. Please ensure 'dsperse' is installed and importable."
            )
```

⚠️ Potential issue | 🟠 Major

_ensure_dsperse_available() check is ineffective.

This check will never trigger because if the imports at lines 13-15 fail, the module itself won't load. The check against None won't catch the ImportError. Consider using a lazy-import pattern if you want to support optional DSperse availability.

```diff
+# At top of file, replace direct imports:
+DsperseRunner = None
+DsperseProver = None
+DsperseVerifier = None
+
+try:
+    from dsperse.src.run.runner import Runner as DsperseRunner
+    from dsperse.src.prover import Prover as DsperseProver
+    from dsperse.src.verifier import Verifier as DsperseVerifier
+except ImportError:
+    pass  # DSperse not available; _ensure_dsperse_available() will raise
```

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In neurons/execution_layer/proof_handlers/dsperse_handler.py around lines 224 to
228, the current check comparing DsperseRunner/DsperseProver/DsperseVerifier to
None is ineffective because failed imports prevent module loading; replace it
with a lazy-import pattern: move the dsperse imports into
_ensure_dsperse_available (or a module-level initializer) and wrap them in
try/except ImportError to set a boolean flag or raise a clear ImportError with
guidance; ensure subsequent code uses that flag or calls the initializer before
referencing DSperse classes.
