
Add syntax-based generated test for InferredBugs dataset #23

Open

sklisa wants to merge 3 commits into open-thoughts:main from sklisa:inferredbugs

Conversation

@sklisa (Contributor) commented Mar 18, 2026

Let GPT-5 nano generate syntax-based tests as the verifier for the InferredBugs dataset.

Generated dataset https://huggingface.co/datasets/DCAgent/inferredbugs-sandboxes-verifier

@gemini-code-assist (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a robust system to enhance the InferredBugs dataset by automatically generating and injecting LLM-authored structural verifiers. The primary goal is to create a 'verified' version of the dataset, where each bug-fixing task is accompanied by a custom-generated test harness capable of statically validating the correctness of a submitted fix. This automation streamlines the creation of high-quality, verifiable tasks for evaluating code-fixing agents.

Highlights

  • Automated Verifier Generation: Introduced a new Python script (generate_with_verifier.py) to automate the end-to-end process of creating a verified InferredBugs dataset. This includes downloading the base dataset, extracting tasks, updating instructions, injecting LLM-authored verifiers, and uploading the result to Hugging Face.
  • LLM-Authored Structural Verifiers: Implemented a module (structural_verifier.py) that leverages an LLM (GPT-5 nano) to author structural verifiers (test.sh and test_state.py) for each InferredBugs task. These verifiers statically inspect code to confirm bug fixes without execution.
  • Instruction Augmentation: Added functionality to append 'Deliverable Requirement' sections to existing instruction.md files within each task. This guides agents on where to save their corrected files, ensuring compatibility with the structural verifiers.
  • Parallel Processing: The verifier authoring process now utilizes a ThreadPoolExecutor to process tasks concurrently, significantly speeding up the generation of verifiers for multiple tasks.
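The "structural verifier" idea described above — statically confirming a fix without executing the patched code — can be illustrated with a minimal sketch. The deliverable path, the `conn` identifier, and the None-guard check are all hypothetical illustrations, not the PR's actual generated `test_state.py`:

```python
import ast
import sys
from pathlib import Path

# Hypothetical deliverable location; the real tasks define their own paths
# via the appended "Deliverable Requirement" section.
FIXED_FILE = Path("/workspace/fixed_solution.py")

def has_none_check(tree: ast.AST, name: str) -> bool:
    """True if any comparison in the tree tests `name` against None."""
    for node in ast.walk(tree):
        if not isinstance(node, ast.Compare):
            continue
        involves_name = any(isinstance(n, ast.Name) and n.id == name
                            for n in ast.walk(node))
        against_none = any(isinstance(c, ast.Constant) and c.value is None
                           for c in node.comparators)
        if involves_name and against_none:
            return True
    return False

def main() -> int:
    # Static inspection only: parse the submitted file, never run it.
    if not FIXED_FILE.exists():
        print(f"FAIL: deliverable {FIXED_FILE} not found")
        return 1
    tree = ast.parse(FIXED_FILE.read_text(encoding="utf-8"))
    if not has_none_check(tree, "conn"):
        print("FAIL: fix does not guard `conn` against None")
        return 1
    print("PASS")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

In the real pipeline, the LLM authors a task-specific check like `has_none_check` for each bug, and `test.sh` invokes the Python harness.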
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a new feature to generate a verified InferredBugs dataset using LLM-authored verifiers. The changes involve a new script to orchestrate the process, including downloading tasks, updating instructions, and injecting verifiers, along with a module for authoring these structural verifiers. The overall approach is clear and well-structured. My feedback focuses on improving maintainability, configurability, and robustness in error handling.

Comment on lines +156 to +157
print(f"ERROR [{task_name}]: Model response is missing <bash_script> tags.")
bash_script = bash_match.group(1).strip() if bash_match else "# Error: No bash script authored"
high

When the <bash_script> tag is missing, an error message is printed to stdout. For critical parsing failures like this, it might be more robust to raise a specific exception (e.g., ValueError) to halt execution or to use a proper logging mechanism with a higher severity level, rather than just printing to stdout.

Comment on lines +160 to +161
python_match = re.search(r"<python_script>(.*?)</python_script>", content, re.DOTALL)
if not python_match:

high

Similar to the <bash_script> tag, if the <python_script> tag is missing, an error message is printed to stdout. Consider raising a ValueError or logging this as a critical error to ensure proper handling of malformed LLM responses.
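Both failure modes could be handled by a small helper that raises instead of printing, as the reviewer suggests. This is a sketch, not the PR's code; the exception class name is an assumption:

```python
import re

class VerifierParseError(ValueError):
    """Raised when the LLM response is missing a required script tag."""

def extract_tag(content: str, tag: str) -> str:
    """Extract the body of <tag>...</tag>, raising on absence."""
    match = re.search(rf"<{tag}>(.*?)</{tag}>", content, re.DOTALL)
    if not match:
        raise VerifierParseError(f"Model response is missing <{tag}> tags.")
    return match.group(1).strip()

# Usage (replacing the print-and-fallback pattern flagged above):
# bash_script = extract_tag(content, "bash_script")
# python_script = extract_tag(content, "python_script")
```

The caller can then catch `VerifierParseError` per task, mark that task as failed, and continue with the rest of the batch.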

content = instr_path.read_text()

# Simple heuristic to find the target file from the instruction text
import re

medium

The import re statement is placed inside the update_instructions_with_requirement function. It's generally a best practice in Python to place all imports at the top of the file, outside of any functions. This improves readability and ensures that modules are imported only once, avoiding potential performance overhead if the function is called multiple times.

import sys
import argparse
from pathlib import Path
import re

Comment on lines +68 to +69
source_repo = "mlfoundations-dev/inferredbugs-sandboxes"
target_repo = "DCAgent/inferredbugs-sandboxes-verifier"

medium

The source_repo and target_repo names are hardcoded. It would be more flexible and maintainable to define these as command-line arguments, allowing users to specify different repositories without modifying the code. This adheres to the general rule of extracting magic numbers/strings into named variables or configuration.

    parser.add_argument("--source_repo", type=str, default="mlfoundations-dev/inferredbugs-sandboxes", help="Source Hugging Face repository for InferredBugs tasks")
    parser.add_argument("--target_repo", type=str, default="DCAgent/inferredbugs-sandboxes-verifier", help="Target Hugging Face repository for verified InferredBugs tasks")
    args = parser.parse_args()
    
    source_repo = args.source_repo
    target_repo = args.target_repo

Comment on lines +82 to +83
output_dir = PROJECT_ROOT / "data" / "inferredbugs" / "workdir"
output_dir.mkdir(parents=True, exist_ok=True)

medium

The output_dir path is hardcoded. Making this configurable via a command-line argument would allow for greater flexibility in specifying where the working directory for extracted tasks should be located, especially in different deployment environments. This aligns with the general rule of extracting magic numbers/strings into named variables or configuration.

    parser.add_argument("--output_dir", type=str, default=None, help="Directory to store extracted tasks. Defaults to a workdir inside data/inferredbugs.")
    args = parser.parse_args()
    
    # ... existing code ...

    output_dir = Path(args.output_dir) if args.output_dir else PROJECT_ROOT / "data" / "inferredbugs" / "workdir"

print(f"Error calling LLM for verifier authoring on {task_name}: {e}")
return f"#!/bin/bash\necho 'Error: {e}'\nexit 1", f"# Error: {str(e)}"

def inject_inferredbugs_verifier(dataset_dir: str, questions: List[str], model_name: str = "gpt-5-nano", max_workers: int = 30):

medium

The max_workers parameter for the ThreadPoolExecutor is hardcoded to 30. This is a "magic number" that could be made configurable, perhaps as an argument to inject_inferredbugs_verifier, to allow for tuning performance based on available resources or specific use cases.

Suggested change
def inject_inferredbugs_verifier(dataset_dir: str, questions: List[str], model_name: str = "gpt-5-nano", max_workers: int = 30):
    """Orchestrates the authoring and injection of verifiers into tasks."""
    from concurrent.futures import ThreadPoolExecutor, as_completed
    tasks_root = Path(dataset_dir)
    task_dirs = sorted([d for d in tasks_root.iterdir() if d.is_dir()], key=lambda x: x.name)
    print(f"Authoring verifier harnesses for {len(task_dirs)} tasks using {model_name} (workers={max_workers})...")
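One way to surface `max_workers` as a tunable, in the spirit of this review comment, is to thread a CLI flag through to the executor. A sketch with hypothetical names, not the PR's actual wiring:

```python
import argparse
from concurrent.futures import ThreadPoolExecutor, as_completed

def author_all(task_names, author_fn, max_workers=30):
    """Author verifiers concurrently; max_workers is now caller-controlled."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(author_fn, name): name for name in task_names}
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()
    return results

def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--max_workers", type=int, default=30,
                        help="Concurrent LLM calls for verifier authoring")
    return parser.parse_args(argv)
```

The default of 30 is preserved, so existing invocations behave unchanged while operators can dial concurrency down for rate-limited API keys.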

Comment on lines +196 to +197
with open(test_py_path, "w") as f:
f.write(python_code)
medium

When writing test_state.py and test.sh files, it's good practice to explicitly specify the encoding, typically encoding="utf-8", to ensure consistent behavior across different environments and prevent potential UnicodeEncodeError issues.

Suggested change
- with open(test_py_path, "w") as f:
-     f.write(python_code)
+ with open(test_py_path, "w", encoding="utf-8") as f:
+     f.write(python_code)
+ with open(test_sh_path, "w", encoding="utf-8") as f:
+     f.write(bash_code)
