docs: document 5 new natively supported LLM providers #6865
BHUVANAN8 wants to merge 1 commit into aden-hive:main
Conversation
📝 Walkthrough: Documentation for five native LLM providers (DeepSeek, Mistral, Together AI, xAI, Perplexity) was added to the configuration guide, including provider identifiers, standard model names, and JSON configuration examples with supporting notes.
Actionable comments posted: 1
🧹 Nitpick comments (1)
docs/configuration.md (1)
74-78: Consider clarifying "standard model name" and mentioning override capability.

The term "standard model name" (line 77) might be unclear to users unfamiliar with LiteLLM. Consider adding a brief clarification, such as "Use the provider's native model identifier" or linking to provider documentation.

Additionally, while the documentation emphasizes that no `api_base` is required, it might be helpful to mention that users can still provide a custom `api_base` if needed (e.g., for proxies or custom endpoints), similar to how the OpenRouter section shows this.

📝 Suggested enhancement to the notes

```diff
 Notes:
 - Set `provider` to `deepseek` (or `mistral`, `together`, `xai`, `perplexity`)
-- Use the standard model name in `model`, for example `deepseek-chat`
+- Use the provider's native model identifier in `model`, for example `deepseek-chat`
-- **No `api_base` is required** for these natively supported providers
+- **No `api_base` is required** for these natively supported providers (though you can override with a custom base URL if needed, e.g., for proxies)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/configuration.md` around lines 74-78: clarify the phrase "standard model name" and note the optional override of `api_base`. Update the Notes so "Use the standard model name in `model`, for example `deepseek-chat`" is expanded to say "Use the provider's native model identifier (the model name used by the provider's API or docs), e.g., `deepseek-chat`; link or reference provider docs if available." Also add a short sentence after "No `api_base` is required" stating that users may still supply a custom `api_base` for proxies or custom endpoints (as supported in the OpenRouter example), and show that this is optional rather than required.
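To illustrate the override the reviewer suggests, here is a minimal sketch of what an optional `api_base` might look like under the same config schema. This assumes the schema accepts an `api_base` field alongside the native-provider settings (as the OpenRouter section reportedly shows); the proxy URL is purely a placeholder, not a real endpoint:

```json
{
  "llm": {
    "provider": "deepseek",
    "model": "deepseek-chat",
    "max_tokens": 8192,
    "api_key_env_var": "DEEPSEEK_API_KEY",
    "api_base": "https://my-proxy.example.internal/v1"
  }
}
```

Omitting `api_base` would fall back to the provider's native endpoint; including it would route requests through the custom base URL instead.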
Native Supported Providers (DeepSeek, Mistral, Together AI, xAI, Perplexity):

```json
{
  "llm": {
    "provider": "deepseek",
    "model": "deepseek-chat",
    "max_tokens": 8192,
    "api_key_env_var": "DEEPSEEK_API_KEY"
  }
}
```

Notes:

- Set `provider` to `deepseek` (or `mistral`, `together`, `xai`, `perplexity`)
- Use the standard model name in `model`, for example `deepseek-chat`
- **No `api_base` is required** for these natively supported providers
🧩 Analysis chain
🌐 Web query:
What are the correct provider identifiers, model names, API base URL requirements, and environment variable names for DeepSeek, Mistral, Together AI, xAI, and Perplexity in LiteLLM? Does LiteLLM support these providers natively without requiring explicit api_base configuration?
💡 Result:
LiteLLM natively supports DeepSeek, Mistral, Together AI, xAI, and Perplexity as providers without requiring explicit api_base configuration for their standard endpoints. Users set the provider-specific API key environment variable and use the provider-prefixed model name (e.g., "deepseek/deepseek-chat"). Custom or beta endpoints may need api_base.

DeepSeek:
- Provider identifier: deepseek
- Model names: deepseek-chat, deepseek-coder, deepseek-reasoner (all deepseek/* supported)
- API base URL: https://api.deepseek.com (native, DEEPSEEK_API_BASE optional for beta)
- Env var: DEEPSEEK_API_KEY

Mistral:
- Provider identifier: mistral
- Model names: mistral-small-latest, mistral-large-latest, mistral/mistral-embed, etc. (all mistral/*)
- API base URL: https://api.mistral.ai (native)
- Env var: MISTRAL_API_KEY

Together AI:
- Provider identifier: together_ai
- Model names: togethercomputer/llama-2-70b-chat, etc. (all together_ai/*)
- API base URL: https://api.together.xyz (native)
- Env var: TOGETHERAI_API_KEY

xAI:
- Provider identifier: xai
- Model names: xai/grok-beta, xai/grok-2, xai/grok-3, xai/grok-4 (all xai/*)
- API base URL: https://api.x.ai/v1 (native)
- Env var: XAI_API_KEY

Perplexity:
- Provider identifier: perplexity
- Model names: perplexity/sonar-pro, perplexity/sonar-deep-research, etc. (all perplexity/*)
- API base URL: https://api.perplexity.ai (native)
- Env var: PERPLEXITYAI_API_KEY
Citations:
- 1: https://docs.litellm.ai/docs/providers
- 2: https://docs.litellm.ai/docs/providers/deepseek
- 3: https://docs.litellm.ai/docs/providers/mistral
- 4: https://docs.litellm.ai/docs/providers/togetherai
- 5: https://docs.litellm.ai/docs/providers/xai
- 6: https://docs.litellm.ai/docs/providers/perplexity
Fix provider identifier and model name format in the native providers example.
The claim that no `api_base` is required is correct: LiteLLM natively supports all five providers without explicit `api_base` for standard endpoints. However, the documentation contains critical configuration errors:

- Provider identifier `"together"` is incorrect. LiteLLM expects `"together_ai"` as the provider identifier.
- Model names require provider-specific prefixes. The example shows `"deepseek-chat"`, but LiteLLM expects `"deepseek/deepseek-chat"`. Similarly, other model names must use their provider prefix (e.g., `"mistral/mistral-small-latest"`, `"xai/grok-beta"`, `"perplexity/sonar-pro"`).
Update the configuration example and the provider list accordingly:
Suggested changes
```json
{
  "llm": {
    "provider": "deepseek",
    "model": "deepseek/deepseek-chat",
    "max_tokens": 8192,
    "api_key_env_var": "DEEPSEEK_API_KEY"
  }
}
```

Provider identifiers: `deepseek`, `mistral`, `together_ai` (not `together`), `xai`, `perplexity`.
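To make the `together_ai` identifier fix concrete, a hedged example under the same config schema might look like the following. The model name and environment variable are taken from the web query result earlier in this review (togethercomputer/llama-2-70b-chat, TOGETHERAI_API_KEY); whether this application reads that exact env var name is an assumption to verify against the codebase:

```json
{
  "llm": {
    "provider": "together_ai",
    "model": "together_ai/togethercomputer/llama-2-70b-chat",
    "max_tokens": 8192,
    "api_key_env_var": "TOGETHERAI_API_KEY"
  }
}
```

Note the provider prefix appears both in `provider` and at the front of `model`, matching the prefixed-model convention the review describes for LiteLLM.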
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/configuration.md` around lines 61 - 79, The docs example uses incorrect
provider identifier and model format: change the provider identifier "together"
to "together_ai" and update model values to include provider prefixes in the
"model" field (e.g., use "deepseek/deepseek-chat" instead of "deepseek-chat",
and similarly "mistral/mistral-small-latest", "xai/grok-beta",
"perplexity/sonar-pro"); also update the provider list to `deepseek`, `mistral`,
`together_ai`, `xai`, `perplexity` while keeping the note that no api_base is
required for these native providers.
Summary
Following maintainer feedback on PR #6833, unit tests for `check_llm_key.py` were skipped in favour of this documentation update (tracked in #6860). PR #6833 added native API key validation support for 5 new LLM providers. This PR updates `docs/configuration.md` to document them for users.

Changes

- Documented the five native providers in `docs/configuration.md` (no `api_base` required for these providers).

Related
Closes #6860