19 changes: 19 additions & 0 deletions docs/configuration.md
@@ -58,6 +58,25 @@ The framework supports 100+ LLM providers through [LiteLLM](https://docs.litellm

### Provider Examples

Native Supported Providers (DeepSeek, Mistral, Together AI, xAI, Perplexity):

```json
{
"llm": {
"provider": "deepseek",
"model": "deepseek-chat",
"max_tokens": 8192,
"api_key_env_var": "DEEPSEEK_API_KEY"
}
}
```

Notes:

- Set `provider` to `deepseek` (or `mistral`, `together`, `xai`, `perplexity`)
- Use the standard model name in `model`, for example `deepseek-chat`
- **No `api_base` is required** for these natively supported providers

Comment on lines +61 to +79

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

What are the correct provider identifiers, model names, API base URL requirements, and environment variable names for DeepSeek, Mistral, Together AI, xAI, and Perplexity in LiteLLM? Does LiteLLM support these providers natively without requiring explicit api_base configuration?

💡 Result:

LiteLLM natively supports DeepSeek, Mistral, Together AI, xAI, and Perplexity as providers without requiring explicit `api_base` configuration for their standard endpoints. Users set the provider-specific API key environment variable and use the provider-prefixed model name (e.g., `deepseek/deepseek-chat`). Custom or beta endpoints may need `api_base`.

- DeepSeek: provider identifier `deepseek`; model names `deepseek-chat`, `deepseek-coder`, `deepseek-reasoner` (all `deepseek/*` supported); API base `https://api.deepseek.com` (native, `DEEPSEEK_API_BASE` optional for beta); env var `DEEPSEEK_API_KEY`
- Mistral: provider identifier `mistral`; model names `mistral-small-latest`, `mistral-large-latest`, `mistral/mistral-embed`, etc. (all `mistral/*`); API base `https://api.mistral.ai` (native); env var `MISTRAL_API_KEY`
- Together AI: provider identifier `together_ai`; model names `togethercomputer/llama-2-70b-chat`, etc. (all `together_ai/*`); API base `https://api.together.xyz` (native); env var `TOGETHERAI_API_KEY`
- xAI: provider identifier `xai`; model names `xai/grok-beta`, `xai/grok-2`, `xai/grok-3`, `xai/grok-4` (all `xai/*`); API base `https://api.x.ai/v1` (native); env var `XAI_API_KEY`
- Perplexity: provider identifier `perplexity`; model names `perplexity/sonar-pro`, `perplexity/sonar-deep-research`, etc. (all `perplexity/*`); API base `https://api.perplexity.ai` (native); env var `PERPLEXITYAI_API_KEY`

Fix provider identifier and model name format in the native providers example.

The claim that no `api_base` is required is correct: LiteLLM natively supports all five providers without explicit `api_base` for standard endpoints. However, the documentation contains critical configuration errors:

  1. Provider identifier "together" is incorrect. LiteLLM expects "together_ai" as the provider identifier.
  2. Model names require provider-specific prefixes. The example shows "deepseek-chat", but LiteLLM expects "deepseek/deepseek-chat". Similarly, other model names must use their provider prefix (e.g., "mistral/mistral-small-latest", "xai/grok-beta", "perplexity/sonar-pro").

Update the configuration example and the provider list accordingly:

Suggested changes:

```json
{
  "llm": {
    "provider": "deepseek",
    "model": "deepseek/deepseek-chat",
    "max_tokens": 8192,
    "api_key_env_var": "DEEPSEEK_API_KEY"
  }
}
```

Provider identifiers: `deepseek`, `mistral`, `together_ai` (not `together`), `xai`, `perplexity`.
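The correction above amounts to a simple normalization rule, sketched here as a hypothetical helper (not part of the framework under review; the `together` → `together_ai` alias follows this review's finding):

```python
# Hypothetical helper illustrating the suggested fix: LiteLLM expects
# provider-prefixed model strings, and "together" must become "together_ai".
PROVIDER_ALIASES = {"together": "together_ai"}

def litellm_model(config: dict) -> str:
    """Build a LiteLLM "provider/model" string from an "llm" config block."""
    llm = config["llm"]
    provider = PROVIDER_ALIASES.get(llm["provider"], llm["provider"])
    model = llm["model"]
    # Add the provider prefix only if the model name lacks one already.
    return model if model.startswith(f"{provider}/") else f"{provider}/{model}"

config = {"llm": {"provider": "deepseek", "model": "deepseek-chat",
                  "max_tokens": 8192, "api_key_env_var": "DEEPSEEK_API_KEY"}}
print(litellm_model(config))  # deepseek/deepseek-chat
```

With such a normalization step, the unprefixed model names in the original diff would resolve correctly; without it, the docs example as written would fail against LiteLLM.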

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/configuration.md` around lines 61-79, the docs example uses an incorrect
provider identifier and model format: change the provider identifier "together"
to "together_ai" and update model values to include provider prefixes in the
"model" field (e.g., use "deepseek/deepseek-chat" instead of "deepseek-chat",
and similarly "mistral/mistral-small-latest", "xai/grok-beta",
"perplexity/sonar-pro"); also update the provider list to `deepseek`, `mistral`,
`together_ai`, `xai`, `perplexity` while keeping the note that no api_base is
required for these native providers.

OpenRouter:
