26 changes: 17 additions & 9 deletions neurons/validator.py
@@ -403,16 +403,24 @@ async def run_step(self):
                     )
                     normalized_score = constants.DEFAULT_SCORE
                 else:
-                    normalized_score = compute_score(
-                        scores_per_uid[uid],
-                        competition.bench,
-                        competition.minb,
-                        competition.maxb,
-                        competition.pow,
-                        competition.bheight,
-                        metadata.id.competition_id,
-                        competition.id,
+                    # Get metadata for this UID
+                    metadata = retrieve_model_metadata(
+                        self.subtensor, self.config.netuid, self.metagraph.hotkeys[uid]
                     )
+                    if metadata is None:
+                        bt.logging.warning(f"No metadata found for UID {uid}; defaulting score to 0")
+                        normalized_score = constants.DEFAULT_SCORE
+                    else:
+                        normalized_score = compute_score(
+                            scores_per_uid[uid],
+                            competition.bench,
+                            competition.minb,
+                            competition.maxb,
+                            competition.pow,
+                            competition.bheight,
+                            metadata.id.competition_id,
+                            competition.id,
+                        )
Comment on lines +406 to +423
Contributor


🛠️ Refactor suggestion

Optimize metadata retrieval to avoid duplicate network calls.

While the added metadata existence check and fallback handling are a good safety improvement, this implementation retrieves metadata twice for the same UID: once during evaluation (around line 250) and again during normalization. This duplicates network calls and can produce inconsistent results if the metadata changes between the two retrievals.

Consider storing the metadata from the evaluation loop and reusing it during normalization:

# In the evaluation loop (around line 250), store metadata in a dictionary
metadata_per_uid = {}
# ... existing evaluation code ...
metadata = retrieve_model_metadata(
    self.subtensor, self.config.netuid, self.metagraph.hotkeys[uid]
)
metadata_per_uid[uid] = metadata  # Store for later use

# In the normalization loop, replace the new retrieval with stored metadata:
-                    # Get metadata for this UID
-                    metadata = retrieve_model_metadata(
-                        self.subtensor, self.config.netuid, self.metagraph.hotkeys[uid]
-                    )
+                    # Use previously retrieved metadata
+                    metadata = metadata_per_uid.get(uid)

This approach would:

  1. Eliminate redundant network calls
  2. Ensure consistency between evaluation and normalization phases
  3. Maintain the same safety checks for missing metadata

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In neurons/validator.py around lines 406 to 423, the code retrieves metadata
twice for the same UID, causing redundant network calls and potential
inconsistencies. To fix this, modify the evaluation loop to store the retrieved
metadata in a dictionary keyed by UID. Then, during normalization, reuse the
stored metadata from this dictionary instead of calling retrieve_model_metadata
again. This eliminates duplicate calls, ensures consistent metadata usage, and
retains the existing safety checks for missing metadata.
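The fetch-once-then-reuse pattern the review proposes can be sketched in isolation. The sketch below is hypothetical and self-contained: `retrieve_model_metadata` is replaced by a stub with a call counter (not the real chain helper), and `compute_score` is replaced by a constant, so only the caching behavior and the missing-metadata fallback are shown.

```python
# Hypothetical sketch of the suggested caching pattern; names mirror the PR
# but the helpers are stubs, not the real validator/bittensor APIs.

call_count = {"n": 0}

def retrieve_model_metadata(subtensor, netuid, hotkey):
    """Stub: count calls and pretend 'missing' hotkeys have no metadata."""
    call_count["n"] += 1
    return None if hotkey == "missing" else {"hotkey": hotkey}

DEFAULT_SCORE = 0.0  # stand-in for constants.DEFAULT_SCORE

def normalize_scores(uids, hotkeys):
    # Evaluation phase: fetch metadata exactly once per UID and cache it.
    metadata_per_uid = {
        uid: retrieve_model_metadata(None, 0, hotkeys[uid]) for uid in uids
    }

    normalized_scores = {}
    # Normalization phase: reuse cached metadata instead of re-fetching.
    for uid in uids:
        metadata = metadata_per_uid.get(uid)
        if metadata is None:
            # Same safety fallback as the PR: default the score.
            normalized_scores[uid] = DEFAULT_SCORE
        else:
            normalized_scores[uid] = 1.0  # stand-in for compute_score(...)
    return normalized_scores

scores = normalize_scores([0, 1], {0: "hk0", 1: "missing"})
print(scores)      # {0: 1.0, 1: 0.0}
print(call_count)  # {'n': 2} -- one retrieval per UID, not two
```

Because the cache holds `None` for UIDs whose retrieval failed, the fallback path behaves identically to the PR's inline check while the network call count stays at one per UID.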

                     normalized_scores[uid] = normalized_score
                 else:
                     bt.logging.debug(f"Setting zero normalized score for UID {uid}")