
Add patcher and oracle solver for Nemotron-Terminal-Synthetic-Tasks#22

Open
harvenstar wants to merge 4 commits into open-thoughts:main from harvenstar:hanwen/nemotron-patcher

Conversation

@harvenstar

Summary

Adds two scripts to prepare nvidia/Nemotron-Terminal-Synthetic-Tasks for RL training on Daytona:

  • data/patchers/patch_nemotron_synthetic_tasks.py — Patches tasks to use shared base images instead of per-task unique images. Removes private Nvidia registry references from task.toml, removes COPY files/ /app/ from Dockerfiles, and moves task data files to setup_files/ for Harbor injection. Result: ~10 unique Dockerfiles (one per category), within Daytona's snapshot limit.

  • data/patchers/synthesize_oracle_solvers.py — Generates solution/solve.sh for each task using an LLM (gpt-5-mini by default). Only reads instruction.md (never test files) to prevent the LLM from cheating. Supports parallel generation with configurable concurrency and automatic retry with exponential backoff.
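The parallel generation with retry described above can be sketched roughly as follows. This is a minimal illustration, not the script's actual code; `generate_one` stands in for whatever function calls the LLM for a single task, and the backoff constants are assumptions.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor


def with_retries(fn, max_attempts=5, base_delay=1.0):
    """Call fn(), retrying on failure with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # delays of ~1s, ~2s, ~4s, ... with jitter to spread out retries
            time.sleep(base_delay * (2 ** attempt + random.random()))


def generate_all(tasks, generate_one, concurrency=8):
    """Generate a solver for each task with bounded parallelism."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = {pool.submit(with_retries, lambda t=t: generate_one(t)): t
                   for t in tasks}
        return {task: fut.result() for fut, task in futures.items()}
```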

Test Results (200 tasks: 100 easy + 100 medium)

  • Easy: 52/100 passed (52%)
  • Medium: 0/100 passed (0%)
  • Overall: 52/200 (26%)

Breakdown of medium failures: 77 solve.sh quality issues, 18 environment dependency issues (e.g. model_training tests require torch, but the Dockerfile only has data science packages; this appears to be an issue in the original dataset), 5 other.

Notes

  • The Synthetic-Tasks dataset is larger than expected (~249k+ tasks vs Corpus's ~140k skill_based). Easy count matches Corpus (~45k), but medium has ~204k (Corpus reports ~89.3k). Synthetic-Tasks appears to be the pre-filtering pool.
  • Easy and medium have different category sets: medium includes model_training and system_administration (not in easy), while easy includes data_processing, data_querying, and dependency_management (not in medium).

Patches nvidia/Nemotron-Terminal-Synthetic-Tasks to use shared base
images instead of per-task unique images, enabling RL training on
Daytona.

Two sources of unique images are fixed:
- Removes `docker_image` from task.toml (pointed to private Nvidia
  Gitlab registry, inaccessible outside Nvidia)
- Removes `COPY files/ /app/` from Dockerfiles (baked per-task data
  files into images, making every image unique)

Task-specific data files are moved from environment/files/ to
setup_files/, which Harbor uploads to /setup_files/ in the container
before the agent runs. A setup preamble is prepended to instruction.md
for tasks that have data files.

Result: ~10 unique Dockerfiles (one per category: data_science,
security, debugging, etc.) — within Daytona's snapshot limit.
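The two rewrites described above could look roughly like this. This is a simplified sketch of the idea, not the patcher's actual implementation; the exact regexes and file layouts in the script may differ.

```python
import re


def strip_copy_files(dockerfile_text: str) -> str:
    """Drop `COPY files/ /app/` lines so per-task data is no longer
    baked into the image (which made every image unique)."""
    return re.sub(r"(?m)^\s*COPY\s+files/\s+/app/\s*\n", "", dockerfile_text)


def strip_docker_image(task_toml_text: str) -> str:
    """Remove the `docker_image = "..."` key that points at the
    private, externally inaccessible registry."""
    return re.sub(r'(?m)^docker_image\s*=\s*".*"\s*\n', "", task_toml_text)
```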

Generates solution/solve.sh for each task using GPT-5-mini, reading only
instruction.md (never test files) to prevent the LLM from cheating by
hardcoding expected outputs or modifying verifiers.
- Remove temperature param (gpt-5-mini only supports default)
- Use max_completion_tokens instead of max_tokens
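
The two parameter fixes above amount to something like the following, sketched here as a request builder (the prompt text and the 8192 default are illustrative assumptions, not the script's actual values):

```python
def build_request(instruction_md: str, model: str = "gpt-5-mini",
                  max_completion_tokens: int = 8192) -> dict:
    """Build kwargs for an OpenAI chat.completions.create() call.

    Note: no `temperature` key (gpt-5-mini only supports the default),
    and `max_completion_tokens` rather than the older `max_tokens`.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Write a bash solve.sh that completes this task."},
            {"role": "user", "content": instruction_md},
        ],
        "max_completion_tokens": max_completion_tokens,
    }
```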
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces two essential Python scripts designed to streamline the preparation of the nvidia/Nemotron-Terminal-Synthetic-Tasks dataset for Reinforcement Learning (RL) training within the Daytona environment. The first script optimizes Docker image usage by standardizing task environments, significantly reducing the number of unique images required. The second script automates the creation of oracle solutions for each task using a large language model, ensuring that solutions are generated purely from task instructions without access to test files.

Highlights

  • Task Patcher for RL Training: Introduced a Python script (patch_nemotron_synthetic_tasks.py) that modifies Nemotron-Terminal-Synthetic-Tasks to enable shared Docker images for RL training on Daytona, optimizing resource usage and compatibility.
  • Oracle Solver Generation: Added a Python script (synthesize_oracle_solvers.py) that leverages an LLM to generate solution/solve.sh for each task, ensuring solutions are derived solely from task instructions without access to test files.


@gemini-code-assist bot left a comment


Code Review

This pull request introduces two new Python scripts: patch_nemotron_synthetic_tasks.py and synthesize_oracle_solvers.py. The first script patches Nemotron-Terminal-Synthetic-Tasks to optimize Docker image usage for Daytona's snapshot system by modifying Dockerfiles, task configurations, and instruction files. The second script generates oracle solver scripts for these tasks using an LLM. Review comments suggest improving the robustness of a regex in the patching script, enhancing the guidance provided in a Dockerfile consolidation warning, and including exception messages in the error reporting for the solver synthesis script for better debugging.

gpt-5-mini uses reasoning tokens that count against
max_completion_tokens. With 2048 the model often exhausted
the budget on thinking, leaving zero tokens for output.
