feat(scripts): add support for more LLM providers in check_llm_key.py #6833

Hundao merged 2 commits into aden-hive:main
Conversation
Walkthrough: The change expands the LLM provider validation support by adding five new OpenAI-compatible providers (DeepSeek, Together AI, Mistral, xAI, and Perplexity).
Estimated code review effort: 1 (Trivial), ~3 minutes.
JiwaniZakir left a comment:
The Perplexity entry uses `https://api.perplexity.ai/models` while every other provider in this block (including the newly added DeepSeek, Together, Mistral, and xAI) uses a `/v1/models` path. It is worth double-checking whether the Perplexity models endpoint actually omits the `/v1/` prefix or if this is a copy-paste inconsistency.

Additionally, the lambdas accept `**kw` but don't forward it to `check_openai_compatible`. This matches the existing Cerebras pattern, but it means any caller-supplied keyword arguments are silently dropped; if that's intentional it should probably be documented, or the signature simplified to `lambda key, **_`.

Finally, there are no corresponding test cases added for the five new providers; even a basic fixture asserting that a valid/invalid key returns the expected shape would prevent regressions if `check_openai_compatible` changes its response format.
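The basic fixture suggested above could be sketched as follows. Everything here is hypothetical: the stubbed `check_openai_compatible` only mirrors an assumed response shape, and the trimmed `PROVIDERS` map is not the script's actual dictionary.

```python
# Stub standing in for the real check; it only mirrors an assumed result shape.
def check_openai_compatible(key, url, name):
    return {"ok": bool(key), "provider": name, "endpoint": url}

# Trimmed copy of the provider map under test (two entries for brevity).
PROVIDERS = {
    "deepseek": lambda key, **_: check_openai_compatible(
        key, "https://api.deepseek.com/v1/models", "DeepSeek"
    ),
    "xai": lambda key, **_: check_openai_compatible(
        key, "https://api.x.ai/v1/models", "xAI"
    ),
}

def test_provider_result_shape():
    for name, check in PROVIDERS.items():
        valid = check("dummy-key")
        invalid = check("")
        assert valid["ok"] and not invalid["ok"], name
        assert {"ok", "provider"} <= set(valid), name

test_provider_result_shape()
```

A fixture in this shape would catch a change in `check_openai_compatible`'s return format for all mapped providers at once.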
@JiwaniZakir's right about the Perplexity endpoint. @BHUVANAN8
…lambda kwargs to **_
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/check_llm_key.py`:
- Around line 321-335: The provider-to-env-var map in the function
_get_api_key_env_var is missing entries for the newly supported providers; add
mappings for "deepseek" -> "DEEPSEEK_API_KEY", "xai" -> "XAI_API_KEY", and
"perplexity" -> "PERPLEXITY_API_KEY" (alongside existing entries like "together"
etc.) so those providers resolve to their own API key env vars instead of
falling back to OPENAI_API_KEY; update the mapping logic in _get_api_key_env_var
to return these specific env var names for the corresponding provider keys.
```python
"deepseek": lambda key, **_: check_openai_compatible(
    key, "https://api.deepseek.com/v1/models", "DeepSeek"
),
"together": lambda key, **_: check_openai_compatible(
    key, "https://api.together.xyz/v1/models", "Together AI"
),
"mistral": lambda key, **_: check_openai_compatible(
    key, "https://api.mistral.ai/v1/models", "Mistral"
),
"xai": lambda key, **_: check_openai_compatible(
    key, "https://api.x.ai/v1/models", "xAI"
),
"perplexity": lambda key, **_: check_openai_compatible(
    key, "https://api.perplexity.ai/v1/models", "Perplexity"
),
```
Add matching API-key env-var resolution for the newly supported providers.
These new provider routes are fine, but core/framework/runner/runner.py::_get_api_key_env_var is still missing deepseek, xai, and perplexity mappings, so those models fall back to OPENAI_API_KEY. That can cause incorrect credential detection despite native support being added here.
Suggested follow-up patch (outside this file)
```diff
--- a/core/framework/runner/runner.py
+++ b/core/framework/runner/runner.py
@@
     elif model_lower.startswith("openrouter/"):
         return "OPENROUTER_API_KEY"
+    elif model_lower.startswith("deepseek/") or model_lower.startswith("deepseek-"):
+        return "DEEPSEEK_API_KEY"
@@
     elif model_lower.startswith("together/"):
         return "TOGETHER_API_KEY"
+    elif model_lower.startswith("xai/") or model_lower.startswith("grok-"):
+        return "XAI_API_KEY"
+    elif model_lower.startswith("perplexity/"):
+        return "PERPLEXITY_API_KEY"
```
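The suggested mappings can be sketched as a standalone function. This is a hedged approximation of `_get_api_key_env_var`'s prefix logic, not the actual code in `core/framework/runner/runner.py`.

```python
def get_api_key_env_var(model: str) -> str:
    """Map a model identifier to the env var holding its API key (sketch)."""
    model_lower = model.lower()
    if model_lower.startswith(("deepseek/", "deepseek-")):
        return "DEEPSEEK_API_KEY"
    if model_lower.startswith("together/"):
        return "TOGETHER_API_KEY"
    if model_lower.startswith(("xai/", "grok-")):
        return "XAI_API_KEY"
    if model_lower.startswith("perplexity/"):
        return "PERPLEXITY_API_KEY"
    return "OPENAI_API_KEY"  # the fallback the review warns about

assert get_api_key_env_var("deepseek-chat") == "DEEPSEEK_API_KEY"
assert get_api_key_env_var("grok-2-latest") == "XAI_API_KEY"
assert get_api_key_env_var("gpt-4o") == "OPENAI_API_KEY"
```

Without the new branches, all three of these model names would fall through to `OPENAI_API_KEY`, which is exactly the incorrect credential detection the comment describes.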
Thanks @Hundao and @JiwaniZakir for the careful review and for catching this. To clarify, I verified the endpoints against each provider's official API documentation. However, I didn't have active API keys for all providers to validate them via live cURL calls, so I relied on their OpenAI-compatible specs, and I missed that Perplexity's hosted endpoint resolves to a different path.

To avoid this class of issues going forward, I've opened #6834. I'll update this PR with the fixes, push the changes shortly, and ping once ready.
Description
Fixes #6063
This PR expands the native API validation capabilities of the scripts/check_llm_key.py health check script. Previously, the script only natively supported 5 providers, requiring users to manually specify custom `api_base` variables for other popular models.

Changes made:

- Integrated `deepseek`, `together`, `mistral`, `xai`, and `perplexity` natively into the `PROVIDERS` configuration dictionary.
- Each provider is validated against its `v1/models` (or `/models` for Perplexity) target endpoint to ensure swift authentication checks without manual configuration.

Verification:

- Ran `python3 -m py_compile` to strictly assure that all lambda map references and dictionary mappings resolve correctly without runtime configuration syntax errors.
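For illustration, here is a minimal sketch of what an OpenAI-compatible key check like `check_openai_compatible` might look like. The signature is inferred from the diff above; the script's actual implementation may differ.

```python
import json
import urllib.error
import urllib.request

def build_models_request(key: str, url: str) -> urllib.request.Request:
    """Build an authenticated GET request for a provider's models endpoint."""
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {key}"})

def check_openai_compatible(key: str, url: str, name: str) -> dict:
    """Probe the models endpoint; a 401/403 means the key was rejected."""
    try:
        with urllib.request.urlopen(build_models_request(key, url), timeout=10) as resp:
            data = json.load(resp)
        return {"provider": name, "valid": True, "models": len(data.get("data", []))}
    except urllib.error.HTTPError as e:
        return {"provider": name, "valid": False, "status": e.code}
    except urllib.error.URLError as e:
        return {"provider": name, "valid": False, "error": str(e.reason)}
```

Listing models is a cheap, read-only call that nearly all OpenAI-compatible APIs expose, which is why it works as a uniform authentication probe across providers.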