Add syntax-based generated test for InferredBugs dataset #23

sklisa wants to merge 3 commits into open-thoughts:main
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces a robust system to enhance the InferredBugs dataset by automatically generating and injecting LLM-authored structural verifiers. The primary goal is to create a 'verified' version of the dataset, where each bug-fixing task is accompanied by a custom-generated test harness capable of statically validating the correctness of a submitted fix. This automation streamlines the creation of high-quality, verifiable tasks for evaluating code-fixing agents.

Highlights
Code Review
This pull request introduces a new feature to generate a verified InferredBugs dataset using LLM-authored verifiers. The changes involve a new script to orchestrate the process, including downloading tasks, updating instructions, and injecting verifiers, along with a module for authoring these structural verifiers. The overall approach is clear and well-structured. My feedback focuses on improving maintainability, configurability, and robustness in error handling.
```python
print(f"ERROR [{task_name}]: Model response is missing <bash_script> tags.")
bash_script = bash_match.group(1).strip() if bash_match else "# Error: No bash script authored"
```
When the <bash_script> tag is missing, an error message is printed to stdout. For critical parsing failures like this, it might be more robust to raise a specific exception (e.g., ValueError) to halt execution or to use a proper logging mechanism with a higher severity level, rather than just printing to stdout.
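As a sketch of what this suggestion could look like, a small helper might log at error severity and raise a `ValueError` instead of printing; the function name `extract_bash_script` and its signature are illustrative, not taken from the PR:

```python
import logging
import re

logger = logging.getLogger(__name__)

def extract_bash_script(content: str, task_name: str) -> str:
    """Parse the <bash_script> block from a model response.

    Raises ValueError on missing tags instead of printing to stdout,
    so callers can halt, retry, or mark the task as failed explicitly.
    """
    match = re.search(r"<bash_script>(.*?)</bash_script>", content, re.DOTALL)
    if not match:
        logger.error("Model response for %s is missing <bash_script> tags", task_name)
        raise ValueError(f"[{task_name}] missing <bash_script> tags")
    return match.group(1).strip()
```

The caller can then catch `ValueError` per task and record a failure rather than silently writing a placeholder script into the dataset.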
```python
python_match = re.search(r"<python_script>(.*?)</python_script>", content, re.DOTALL)
if not python_match:
```

```python
content = instr_path.read_text()

# Simple heuristic to find the target file from the instruction text
import re
```
The import re statement is placed inside the update_instructions_with_requirement function. It's generally a best practice in Python to place all imports at the top of the file, outside of any functions. This improves readability and ensures that modules are imported only once, avoiding potential performance overhead if the function is called multiple times.
```python
import sys
import argparse
from pathlib import Path
import re
```

```python
source_repo = "mlfoundations-dev/inferredbugs-sandboxes"
target_repo = "DCAgent/inferredbugs-sandboxes-verifier"
```
The source_repo and target_repo names are hardcoded. It would be more flexible and maintainable to define these as command-line arguments, allowing users to specify different repositories without modifying the code. This adheres to the general rule of extracting magic numbers/strings into named variables or configuration.
```python
parser.add_argument("--source_repo", type=str, default="mlfoundations-dev/inferredbugs-sandboxes", help="Source Hugging Face repository for InferredBugs tasks")
parser.add_argument("--target_repo", type=str, default="DCAgent/inferredbugs-sandboxes-verifier", help="Target Hugging Face repository for verified InferredBugs tasks")
args = parser.parse_args()
source_repo = args.source_repo
target_repo = args.target_repo
```

```python
output_dir = PROJECT_ROOT / "data" / "inferredbugs" / "workdir"
output_dir.mkdir(parents=True, exist_ok=True)
```
The output_dir path is hardcoded. Making this configurable via a command-line argument would allow for greater flexibility in specifying where the working directory for extracted tasks should be located, especially in different deployment environments. This aligns with the general rule of extracting magic numbers/strings into named variables or configuration.
```python
parser.add_argument("--output_dir", type=str, default=None, help="Directory to store extracted tasks. Defaults to a workdir inside data/inferredbugs.")
args = parser.parse_args()
# ... existing code ...
output_dir = Path(args.output_dir) if args.output_dir else PROJECT_ROOT / "data" / "inferredbugs" / "workdir"
```

```python
print(f"Error calling LLM for verifier authoring on {task_name}: {e}")
return f"#!/bin/bash\\necho 'Error: {e}'\\nexit 1", f"# Error: {str(e)}"

def inject_inferredbugs_verifier(dataset_dir: str, questions: List[str], model_name: str = "gpt-5-nano", max_workers: int = 30):
```
The max_workers parameter for the ThreadPoolExecutor is hardcoded to 30. This is a "magic number" that could be made configurable, perhaps as an argument to inject_inferredbugs_verifier, to allow for tuning performance based on available resources or specific use cases.
```python
def inject_inferredbugs_verifier(dataset_dir: str, questions: List[str], model_name: str = "gpt-5-nano", max_workers: int = 30):
    """Orchestrates the authoring and injection of verifiers into tasks."""
    from concurrent.futures import ThreadPoolExecutor, as_completed
    tasks_root = Path(dataset_dir)
    task_dirs = sorted([d for d in tasks_root.iterdir() if d.is_dir()], key=lambda x: x.name)
    print(f"Authoring verifier harnesses for {len(task_dirs)} tasks using {model_name} (workers={max_workers})...")
```
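To illustrate how a configurable `max_workers` would flow into the thread pool, here is a minimal sketch of the fan-out; `run_tasks` and `author_fn` are hypothetical names used for illustration, not identifiers from the PR:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_tasks(task_dirs, author_fn, max_workers=30):
    """Run author_fn over each task directory with a tunable pool size."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        # Submit one authoring job per task, keyed back to its directory
        futures = {executor.submit(author_fn, d): d for d in task_dirs}
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()
    return results
```

A `--max_workers` command-line flag could then feed this parameter directly, which matches the reviewer's configurability suggestion for tuning throughput against API rate limits.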
```python
with open(test_py_path, "w") as f:
    f.write(python_code)
```
When writing test_state.py and test.sh files, it's good practice to explicitly specify the encoding, typically encoding="utf-8", to ensure consistent behavior across different environments and prevent potential UnicodeEncodeError issues.
Suggested change:

```python
# Before
with open(test_py_path, "w") as f:
    f.write(python_code)

# After
with open(test_py_path, "w", encoding="utf-8") as f:
    f.write(python_code)
with open(test_sh_path, "w", encoding="utf-8") as f:
    f.write(bash_code)
```
Let GPT-5-nano generate syntax-based tests as verifiers for the InferredBugs dataset.
Generated dataset: https://huggingface.co/datasets/DCAgent/inferredbugs-sandboxes-verifier