docs: document 5 new natively supported LLM providers#6865

Open
BHUVANAN8 wants to merge 1 commit into aden-hive:main from BHUVANAN8:docs/new-llm-providers

Conversation

Contributor

@BHUVANAN8 BHUVANAN8 commented Mar 30, 2026

Summary

Following maintainer feedback on PR #6833, unit tests for check_llm_key.py were skipped in favour of this documentation update (tracked in #6860).

PR #6833 added native API key validation support for 5 new LLM providers. This PR updates docs/configuration.md to document them for users.

Changes

  • Added a Native Supported Providers section to docs/configuration.md.
  • Documents DeepSeek, Mistral, Together AI, xAI, and Perplexity as natively supported providers.
  • Includes a JSON configuration example and usage notes (no api_base required for these providers).

Related

Closes #6860

Summary by CodeRabbit

  • Documentation
    • Added documentation for Native Supported Providers including DeepSeek, Mistral, Together AI, xAI, and Perplexity with configuration examples and model naming guidance.


coderabbitai bot commented Mar 30, 2026

📝 Walkthrough

Walkthrough

Documentation for five native LLM providers (DeepSeek, Mistral, Together AI, xAI, Perplexity) was added to the configuration guide, including provider identifiers, standard model names, and JSON configuration examples with supporting notes.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Documentation - LLM Provider Configuration (`docs/configuration.md`) | Added "Native Supported Providers" section documenting DeepSeek, Mistral, Together AI, xAI, and Perplexity with JSON configuration examples and usage notes. |

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

Possibly related PRs

  • PR #6833: Added native support for DeepSeek, Mistral, Together AI, xAI, and Perplexity providers to the LLM provider check script, which this documentation describes.

Poem

🐰 Five providers join the fold,
DeepSeek, Mistral, bold and bright,
Together, xAI, Perplexity told,
In docs so clear, in JSON's light,
Configuration's path shines white! ✨

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The PR title accurately and concisely describes the main change: documenting 5 newly supported LLM providers in the docs. |
| Linked Issues check | ✅ Passed | The PR fulfills the linked issue #6860 requirements by updating docs/configuration.md with the new providers, providing JSON configuration examples, and noting that api_base is not required. |
| Out of Scope Changes check | ✅ Passed | All changes are directly related to documenting the 5 newly supported LLM providers as specified in issue #6860; no out-of-scope modifications detected. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
docs/configuration.md (1)

74-78: Consider clarifying "standard model name" and mentioning override capability.

The term "standard model name" (line 77) might be unclear to users unfamiliar with LiteLLM. Consider adding a brief clarification, such as "Use the provider's native model identifier" or linking to provider documentation.

Additionally, while the documentation emphasizes that no api_base is required, it might be helpful to mention that users can still provide a custom api_base if needed (e.g., for proxies or custom endpoints), similar to how the OpenRouter section shows this.

📝 Suggested enhancement to the notes

```diff
 Notes:

 - Set `provider` to `deepseek` (or `mistral`, `together`, `xai`, `perplexity`)
-- Use the standard model name in `model`, for example `deepseek-chat`
+- Use the provider's native model identifier in `model`, for example `deepseek-chat`
-- **No `api_base` is required** for these natively supported providers
+- **No `api_base` is required** for these natively supported providers (though you can override with a custom base URL if needed, e.g., for proxies)
```
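As a concrete sketch of that optional override, a config routing traffic through a local proxy could look like the following (the `api_base` URL is a hypothetical proxy endpoint, not something the docs specify):

```json
{
  "llm": {
    "provider": "deepseek",
    "model": "deepseek-chat",
    "max_tokens": 8192,
    "api_key_env_var": "DEEPSEEK_API_KEY",
    "api_base": "http://localhost:4000/v1"
  }
}
```

Omitting `api_base` entirely falls back to the provider's native endpoint, which is the documented default behavior.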
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/configuration.md` around lines 74 - 78, Clarify the phrase "standard
model name" and note optional override of api_base: update the Notes so "Use the
standard model name in `model`, for example `deepseek-chat`" is expanded to say
"Use the provider's native model identifier (the model name used by the
provider's API or docs) — e.g., `deepseek-chat`; link or reference provider docs
if available." Also add a short sentence after "No `api_base` is required"
stating that users may still supply a custom `api_base` for proxies or custom
endpoints (as supported in the OpenRouter example), and show that this is
optional rather than required.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c4424681-f4ee-4ec6-a436-9eb9396980fd

📥 Commits

Reviewing files that changed from the base of the PR and between eba7524 and 6ff9d57.

📒 Files selected for processing (1)
  • docs/configuration.md

Comment on lines +61 to +79
Native Supported Providers (DeepSeek, Mistral, Together AI, xAI, Perplexity):

```json
{
  "llm": {
    "provider": "deepseek",
    "model": "deepseek-chat",
    "max_tokens": 8192,
    "api_key_env_var": "DEEPSEEK_API_KEY"
  }
}
```

Notes:

- Set `provider` to `deepseek` (or `mistral`, `together`, `xai`, `perplexity`)
- Use the standard model name in `model`, for example `deepseek-chat`
- **No `api_base` is required** for these natively supported providers



⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

What are the correct provider identifiers, model names, API base URL requirements, and environment variable names for DeepSeek, Mistral, Together AI, xAI, and Perplexity in LiteLLM? Does LiteLLM support these providers natively without requiring explicit api_base configuration?

💡 Result:

LiteLLM natively supports DeepSeek, Mistral, Together AI, xAI, and Perplexity as providers without requiring explicit api_base configuration for their standard endpoints. Users set the provider-specific API key environment variable and use the provider-prefixed model name (e.g., "deepseek/deepseek-chat"). Custom or beta endpoints may need api_base.

DeepSeek:
  • Provider identifier: deepseek
  • Model names: deepseek-chat, deepseek-coder, deepseek-reasoner (all deepseek/* supported)
  • API base URL: https://api.deepseek.com (native, DEEPSEEK_API_BASE optional for beta)
  • Env var: DEEPSEEK_API_KEY

Mistral:
  • Provider identifier: mistral
  • Model names: mistral-small-latest, mistral-large-latest, mistral/mistral-embed, etc. (all mistral/*)
  • API base URL: https://api.mistral.ai (native)
  • Env var: MISTRAL_API_KEY

Together AI:
  • Provider identifier: together_ai
  • Model names: togethercomputer/llama-2-70b-chat, etc. (all together_ai/*)
  • API base URL: https://api.together.xyz (native)
  • Env var: TOGETHERAI_API_KEY

xAI:
  • Provider identifier: xai
  • Model names: xai/grok-beta, xai/grok-2, xai/grok-3, xai/grok-4 (all xai/*)
  • API base URL: https://api.x.ai/v1 (native)
  • Env var: XAI_API_KEY

Perplexity:
  • Provider identifier: perplexity
  • Model names: perplexity/sonar-pro, perplexity/sonar-deep-research, etc. (all perplexity/*)
  • API base URL: https://api.perplexity.ai (native)
  • Env var: PERPLEXITYAI_API_KEY


Fix provider identifier and model name format in the native providers example.

The claim that no api_base is required is correct—LiteLLM natively supports all five providers without explicit api_base for standard endpoints. However, the documentation contains critical configuration errors:

  1. Provider identifier "together" is incorrect. LiteLLM expects "together_ai" as the provider identifier.
  2. Model names require provider-specific prefixes. The example shows "deepseek-chat", but LiteLLM expects "deepseek/deepseek-chat". Similarly, other model names must use their provider prefix (e.g., "mistral/mistral-small-latest", "xai/grok-beta", "perplexity/sonar-pro").

Update the configuration example and the provider list accordingly:

Suggested changes

```json
{
  "llm": {
    "provider": "deepseek",
    "model": "deepseek/deepseek-chat",
    "max_tokens": 8192,
    "api_key_env_var": "DEEPSEEK_API_KEY"
  }
}
```

Provider identifiers: deepseek, mistral, together_ai (not together), xai, perplexity.
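Applying the same correction to another provider, a config for Mistral might look like the sketch below; the specific model choice (`mistral-small-latest`) is illustrative, not prescribed by the docs under review. The same pattern extends to the others: Together AI would use provider `together_ai` with a `together_ai/`-prefixed model, xAI `xai` with e.g. `xai/grok-beta`, and Perplexity `perplexity` with e.g. `perplexity/sonar-pro`.

```json
{
  "llm": {
    "provider": "mistral",
    "model": "mistral/mistral-small-latest",
    "max_tokens": 8192,
    "api_key_env_var": "MISTRAL_API_KEY"
  }
}
```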

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/configuration.md` around lines 61 - 79, The docs example uses incorrect
provider identifier and model format: change the provider identifier "together"
to "together_ai" and update model values to include provider prefixes in the
"model" field (e.g., use "deepseek/deepseek-chat" instead of "deepseek-chat",
and similarly "mistral/mistral-small-latest", "xai/grok-beta",
"perplexity/sonar-pro"); also update the provider list to `deepseek`, `mistral`,
`together_ai`, `xai`, `perplexity` while keeping the note that no api_base is
required for these native providers.



Development

Successfully merging this pull request may close these issues.

Update documentation for new LLM providers
