feat: Add EcoTune-based inference tuning module #1156
Open
ust-xu wants to merge 2 commits into flagos-ai:main
Conversation
ust-xu
commented
Mar 19, 2026
- Adds a lightweight EcoTune core for inference-time hyperparameter optimization
- Keeps the contribution minimal: no benchmarks, demos, datasets, or docs in this PR
- Designed to be reused by future inference-tuning entrypoints and config-driven workflows
- Supports bootstrap from default decoding parameters
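The bootstrap-then-suggest flow the bullets describe could be sketched roughly as below. This is an illustrative stand-in, not the actual flagscale API: `AskTellTuner`, the parameter bounds, and the default decoding values are all hypothetical; the real module's class and method names may differ.

```python
import random

# Assumed defaults and bounds for illustration only.
DEFAULT_DECODING = {"temperature": 0.7, "top_p": 0.9}
SPACE = {"temperature": (0.0, 2.0), "top_p": (0.1, 1.0)}


class AskTellTuner:
    """Minimal ask/tell skeleton: first suggestion bootstraps from the
    default decoding parameters, later ones are sampled from the space."""

    def __init__(self, space, defaults, budget):
        self.space, self.budget = space, budget
        self.history = []                 # (config, score) pairs
        self._pending = dict(defaults)    # bootstrap config served first

    def ask(self):
        """Return the next config to evaluate, or None when out of budget."""
        if self.budget <= 0:
            return None
        cfg = self._pending or {
            name: random.uniform(lo, hi) for name, (lo, hi) in self.space.items()
        }
        self._pending = None
        return cfg

    def tell(self, config, score):
        """Record an observed score and consume one unit of budget."""
        self.budget -= 1
        self.history.append((config, score))

    def best(self):
        """Best (config, score) seen so far (higher score is better)."""
        return max(self.history, key=lambda t: t[1])
```

A caller would loop `ask`/evaluate/`tell` until `ask` returns `None`; the real optimizer replaces the random proposal with a surrogate-guided one.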
Pull request overview
Introduces an EcoTune-based inference-time tuning core under flagscale.inference.tuning, implementing a small Bayesian-optimization-style loop with a multi-fidelity GP surrogate and a token-cost-aware acquisition function to propose decoding/config suggestions under a budget.
Changes:
- Added multi-fidelity GP surrogate model for score prediction across config + fidelity.
- Added token-aware Expected Improvement acquisition and an optimizer implementing ask/tell with promotion to max fidelity.
- Added search space utilities and public package exports for flagscale.inference.tuning.ecotune.
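The token-cost-aware Expected Improvement mentioned above presumably weighs each candidate's improvement potential against its predicted token cost. A minimal sketch of that idea, using the standard closed-form EI for a Gaussian posterior and dividing by expected tokens (the function names and the exact cost normalization are assumptions, not the PR's implementation):

```python
import math


def expected_improvement(mu: float, sigma: float, best: float) -> float:
    """Closed-form EI for a Gaussian posterior N(mu, sigma^2), maximizing:
    EI = (mu - best) * Phi(z) + sigma * phi(z), with z = (mu - best) / sigma."""
    if sigma <= 0.0:
        return max(mu - best, 0.0)
    z = (mu - best) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # phi(z)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # Phi(z)
    return (mu - best) * cdf + sigma * pdf


def token_cost_aware_ei(mu, sigma, best, expected_tokens, min_tokens=1.0):
    """EI per unit of predicted token cost: prefers cheap, promising configs."""
    return expected_improvement(mu, sigma, best) / max(expected_tokens, min_tokens)
```

Under this formulation, two configs with equal predicted improvement are ranked by how few tokens they are expected to consume.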
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| flagscale/inference/tuning/ecotune/surrogate.py | Implements a multi-fidelity GP surrogate with fit/predict and incumbent querying. |
| flagscale/inference/tuning/ecotune/search_space.py | Defines parameter dimensions, sampling, and config↔vector transforms. |
| flagscale/inference/tuning/ecotune/optimizer.py | Implements EcoTune optimizer loop (budgeting, suggestion, promotion, history). |
| flagscale/inference/tuning/ecotune/acquisition.py | Adds token-cost-aware Expected Improvement acquisition. |
| flagscale/inference/tuning/ecotune/__init__.py | Exposes EcoTune public API symbols. |
| flagscale/inference/tuning/__init__.py | Exposes tuning API from flagscale.inference.tuning. |
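The config↔vector transforms that search_space.py provides (per the table above) likely map each decoding parameter into a unit cube for the surrogate and back. A hedged sketch, assuming hypothetical bounds and a round-back rule for integer dimensions; the real module defines its own dimension types:

```python
# Hypothetical bounds for illustration; integer bounds mark integer dimensions.
BOUNDS = {"temperature": (0.0, 2.0), "top_p": (0.1, 1.0), "top_k": (1, 100)}


def config_to_vector(config):
    """Map a decoding config to a unit-cube vector, one dimension per parameter."""
    return [(config[name] - lo) / (hi - lo) for name, (lo, hi) in BOUNDS.items()]


def vector_to_config(vec):
    """Inverse transform; dimensions with integer bounds are rounded back."""
    out = {}
    for x, (name, (lo, hi)) in zip(vec, BOUNDS.items()):
        val = lo + x * (hi - lo)
        out[name] = round(val) if isinstance(lo, int) and isinstance(hi, int) else val
    return out
```

Keeping the surrogate's inputs in [0, 1] per dimension is a common normalization so that GP length scales are comparable across parameters.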