fix: llm-info preferred provider #7962
Conversation
AI Code Review: AI review failed due to service initialization issues. Please check the Continue API key and configuration. No specific line comments generated.
1 issue found across 2 files
Prompt for AI agents (1 issue):
Understand the root cause of the following issue and fix it.
```xml
<file name="packages/llm-info/src/index.ts">
  <violation number="1" location="packages/llm-info/src/index.ts:44">
    Returning the provider-specific model without provider causes an inconsistent runtime shape; include provider on this path for consistency.
  </violation>
</file>
```
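A minimal sketch of the kind of fix the reviewer suggests, assuming the provider-specific lookup should carry the provider id just like the fallback path does. The names here (ModelInfo, Provider, findInProvider) are illustrative, not the actual identifiers in packages/llm-info/src/index.ts:

```ts
// Hypothetical shapes; the real types in llm-info may differ.
interface ModelInfo {
  name: string;
  contextLength?: number;
  provider?: string;
}

interface Provider {
  id: string;
  models: ModelInfo[];
}

// Return the provider-specific entry, attaching the provider id so this
// path yields the same runtime shape as the fallback path.
function findInProvider(
  provider: Provider,
  modelName: string,
): ModelInfo | undefined {
  const match = provider.models.find((m) => m.name === modelName);
  return match ? { ...match, provider: provider.id } : undefined;
}
```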
🎉 This PR is included in version 1.22.0 🎉

The release is available on:

Your semantic-release bot 📦🚀
Description
When two providers offer the same model but have different API limitations or parameters (e.g. context length), conflicts can arise. The lookup should first try to find the model info within the active provider and only then fall back; see the sketch after the description.
Fixes https://github.com/continuedev/continue/actions/runs/17993730030/job/51188936181
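One way this preferred-provider lookup could look, as a hedged sketch: getModelInfo and the record-of-arrays shape are assumptions for illustration, not the actual API of packages/llm-info:

```ts
// Hypothetical shapes; the real types in packages/llm-info differ.
interface ModelInfo {
  name: string;
  contextLength?: number;
  provider?: string;
}

function getModelInfo(
  activeProvider: string,
  modelName: string,
  providers: Record<string, ModelInfo[]>,
): ModelInfo | undefined {
  // Prefer the active provider, so its limits (e.g. context length)
  // win when another provider lists a model with the same name.
  const preferred = providers[activeProvider]?.find(
    (m) => m.name === modelName,
  );
  if (preferred) {
    return { ...preferred, provider: activeProvider };
  }
  // Fall back to the first provider that knows this model.
  for (const [id, models] of Object.entries(providers)) {
    const match = models.find((m) => m.name === modelName);
    if (match) {
      return { ...match, provider: id };
    }
  }
  return undefined;
}
```

With this ordering, a model offered by two providers resolves to the active provider's limits rather than to whichever entry happens to be found first.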
Summary by cubic
Fixes model info conflicts when multiple providers expose the same model by preferring the active provider during lookup. Ensures correct limits and templates are applied.