
feat(scripts): add support for more LLM providers in check_llm_key.py #6833

Merged
Hundao merged 2 commits into aden-hive:main from BHUVANAN8:feat/issue-6063-llm-providers
Mar 29, 2026

Conversation

Contributor

@BHUVANAN8 commented Mar 27, 2026

Description

Fixes #6063

This PR expands the native API validation capabilities of the scripts/check_llm_key.py health check script. Previously, the script natively supported only five providers, forcing users to manually set custom api_base variables for other popular providers.

Changes made:

  • Added deepseek, together, mistral, xai, and perplexity entries to the PROVIDERS configuration dictionary.
  • Mapped all five providers to the existing check_openai_compatible validation function, since their APIs are OpenAI-compatible.
  • Pointed each provider at its v1/models endpoint (/models for Perplexity) so authentication checks run without manual configuration.
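Since all five providers speak the OpenAI API shape, the shared validator only needs a Bearer-token GET against the models endpoint. The function name check_openai_compatible comes from the PR; the body below is a hypothetical sketch of that pattern, not the script's actual implementation:

```python
import urllib.error
import urllib.request


def check_openai_compatible(key: str, url: str, name: str) -> bool:
    """Sketch of an OpenAI-compatible key check: GET the models endpoint
    with a Bearer token. The real helper in scripts/check_llm_key.py may
    differ in signature and return type."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {key}"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200  # key accepted
    except urllib.error.HTTPError as e:
        if e.code in (401, 403):  # key rejected
            return False
        raise  # 5xx, 429, etc. are not a verdict on the key
```

A 2xx response means the key authenticated; 401/403 means it was rejected; anything else (rate limits, outages) is deliberately re-raised so it is not misread as an invalid key.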

Verification:

  • Checked syntax with python3 -m py_compile to confirm that the new dictionary entries and lambda references parse cleanly (this validates syntax only, not runtime behavior).
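The same gate can be run from Python. Note that py_compile only proves the file parses; it never executes the lambdas, so a wrong URL string (like the Perplexity path flagged later in this thread) sails straight through. A hypothetical helper:

```python
import py_compile


def syntax_ok(path: str) -> bool:
    """Rough equivalent of `python3 -m py_compile <path>`: True when the
    file parses. It never runs the code, so endpoint URLs, key lookups,
    and lambda bodies are not exercised."""
    try:
        py_compile.compile(path, doraise=True)
        return True
    except py_compile.PyCompileError:
        return False
```

For example, syntax_ok("scripts/check_llm_key.py"). A typo inside a URL string still passes this check, which is why mocked or live endpoint tests are the stronger verification.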

Summary by CodeRabbit

  • New Features
    • Added API key validation support for DeepSeek, Together AI, Mistral, xAI, and Perplexity providers.

@BHUVANAN8
Contributor Author

@Hundao I've submitted the PR to natively add the deepseek, mistral, together, xai, and perplexity providers as requested in #6063!

Let me know if this looks good to you or if you need any adjustments.

@coderabbitai

coderabbitai bot commented Mar 28, 2026

📝 Walkthrough

The change expands the LLM provider validation support by adding five new OpenAI-compatible providers—deepseek, together, mistral, xai, and perplexity—to the PROVIDERS dispatch dictionary. Each provider entry reuses the existing check_openai_compatible function with its specific API endpoint. The minimax entry syntax was also refined.

Changes

  • Provider Dispatch Extensions (scripts/check_llm_key.py): Added five new OpenAI-compatible providers (deepseek, together, mistral, xai, perplexity), each invoking check_openai_compatible with provider-specific endpoints. Refined minimax entry parameter handling from **kw to **_.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

Poem

🐰 Five new providers hopping into view,
OpenAI-compatible through and through,
With DeepSeek, Together, Mistral in tow,
xAI and Perplexity steal the show! ✨
The dispatch keys now let many more play,
What a wonderful LLM buffet! 🎉

🚥 Pre-merge checks (5 passed)

  • Description Check: ✅ Passed. Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title check: ✅ Passed. The title accurately describes the main change: adding support for more LLM providers in the check_llm_key.py script.
  • Linked Issues check: ✅ Passed. The PR adds five of the proposed OpenAI-compatible providers (deepseek, together, mistral, xai, perplexity) as requested in issue #6063, with correct endpoint mappings.
  • Out of Scope Changes check: ✅ Passed. All changes are directly related to expanding LLM provider support in check_llm_key.py; no unrelated modifications were introduced.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.



@JiwaniZakir left a comment

  • The Perplexity entry uses https://api.perplexity.ai/models while every other provider in this block (including the newly added DeepSeek, Together, Mistral, and xAI) uses a /v1/models path. Worth double-checking whether the Perplexity models endpoint actually omits the /v1/ prefix or whether this is a copy-paste inconsistency.
  • The lambdas accept **kw but don't forward it to check_openai_compatible. That matches the existing Cerebras pattern, but it means any caller-supplied keyword arguments are silently dropped; if that's intentional it should be documented, or the signature simplified to lambda key, **_.
  • There are no corresponding test cases for the five new providers. Even a basic fixture asserting that a valid/invalid key returns the expected shape would prevent regressions if check_openai_compatible changes its response format.
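A minimal fixture along the lines suggested could look like the following. The names check_openai_compatible and PROVIDERS come from the PR; the stand-in definitions and the test scaffolding itself are hypothetical:

```python
import unittest
from unittest import mock

# Stand-ins mirroring the (assumed) layout of scripts/check_llm_key.py.
def check_openai_compatible(key, url, name):
    raise NotImplementedError("replaced by a mock in tests")

PROVIDERS = {
    "perplexity": lambda key, **_: check_openai_compatible(
        key, "https://api.perplexity.ai/v1/models", "Perplexity"
    ),
}


class TestNewProviderDispatch(unittest.TestCase):
    def test_forwards_key_and_endpoint(self):
        with mock.patch(f"{__name__}.check_openai_compatible") as m:
            m.return_value = {"valid": True}
            # The **_ signature silently drops extra kwargs, as noted above.
            result = PROVIDERS["perplexity"]("sk-test", extra="ignored")
            m.assert_called_once_with(
                "sk-test", "https://api.perplexity.ai/v1/models", "Perplexity"
            )
            self.assertEqual(result, {"valid": True})
```

Because the dispatch is mocked at the boundary, no live API keys are needed; CI would catch an endpoint typo or a changed response shape.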

@Hundao
Collaborator

Hundao commented Mar 28, 2026

@JiwaniZakir's right about the Perplexity endpoint. /models 404s, should be /v1/models:

$ curl -H "Authorization: Bearer fake-key" "https://api.perplexity.ai/models"
HTTP/2 404
(empty body)

$ curl -H "Authorization: Bearer fake-key" "https://api.perplexity.ai/v1/models"
{"object":"list","data":[{"id":"anthropic/claude-haiku-4-5","object":"model",...}]}

@BHUVANAN8
How did you verify these endpoints? Do you have reference docs?


@coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/check_llm_key.py`:
- Around line 321-335: The provider-to-env-var map in the function
_get_api_key_env_var is missing entries for the newly supported providers; add
mappings for "deepseek" -> "DEEPSEEK_API_KEY", "xai" -> "XAI_API_KEY", and
"perplexity" -> "PERPLEXITY_API_KEY" (alongside existing entries like "together"
etc.) so those providers resolve to their own API key env vars instead of
falling back to OPENAI_API_KEY; update the mapping logic in _get_api_key_env_var
to return these specific env var names for the corresponding provider keys.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: bed1a0f3-8f9f-4de9-b6c5-c4626e2c8428

📥 Commits

Reviewing files that changed from the base of the PR and between 86ef6fd and 0901690.

📒 Files selected for processing (1)
  • scripts/check_llm_key.py

Comment on lines +321 to +335
"deepseek": lambda key, **_: check_openai_compatible(
key, "https://api.deepseek.com/v1/models", "DeepSeek"
),
"together": lambda key, **_: check_openai_compatible(
key, "https://api.together.xyz/v1/models", "Together AI"
),
"mistral": lambda key, **_: check_openai_compatible(
key, "https://api.mistral.ai/v1/models", "Mistral"
),
"xai": lambda key, **_: check_openai_compatible(
key, "https://api.x.ai/v1/models", "xAI"
),
"perplexity": lambda key, **_: check_openai_compatible(
key, "https://api.perplexity.ai/v1/models", "Perplexity"
),


⚠️ Potential issue | 🟠 Major

Add matching API-key env-var resolution for the newly supported providers.

These new provider routes are fine, but core/framework/runner/runner.py::_get_api_key_env_var is still missing deepseek, xai, and perplexity mappings, so those models fall back to OPENAI_API_KEY. That can cause incorrect credential detection despite native support being added here.

Suggested follow-up patch (outside this file)
--- a/core/framework/runner/runner.py
+++ b/core/framework/runner/runner.py
@@
     elif model_lower.startswith("openrouter/"):
         return "OPENROUTER_API_KEY"
+    elif model_lower.startswith("deepseek/") or model_lower.startswith("deepseek-"):
+        return "DEEPSEEK_API_KEY"
@@
     elif model_lower.startswith("together/"):
         return "TOGETHER_API_KEY"
+    elif model_lower.startswith("xai/") or model_lower.startswith("grok-"):
+        return "XAI_API_KEY"
+    elif model_lower.startswith("perplexity/"):
+        return "PERPLEXITY_API_KEY"

@BHUVANAN8
Contributor Author

Thanks @Hundao and @JiwaniZakir for the careful review and for catching this!

To clarify: I verified the endpoints against each provider's official API documentation. However, I didn't have active API keys for every provider to validate them via live cURL calls, so I relied on their OpenAI-compatible specs.

I missed that Perplexity's hosted endpoint resolves to /v1/models (instead of /models) in practice - that's on me.

To avoid this class of issue going forward, I've opened #6834 to add unittest mock coverage for check_llm_key.py, so CI can automatically validate endpoints and payload structures without requiring live API keys.

I'll update this PR with the following fixes:

  • Correct Perplexity endpoint → https://api.perplexity.ai/v1/models
  • Update lambda signatures to use **_ to explicitly ignore unused kwargs (as suggested)

Will push the changes shortly and ping once ready

@Hundao merged commit c889ffd into aden-hive:main Mar 29, 2026
9 checks passed
@JiwaniZakir

The approach of reusing check_openai_compatible for all five providers is clean — worth double-checking that Perplexity's /models endpoint (without the v1/ prefix) is intentional and consistent with how the function constructs the full URL. Also, it would be worth adding a test entry or two in the README/docs so users know these providers are now natively supported without needing to set api_base manually.



Development

Successfully merging this pull request may close these issues.

[Feature]: Add support for more LLM providers in check_llm_key.py

3 participants