diff --git a/docs/en/configuration/providers.md b/docs/en/configuration/providers.md index 5f5b9160c..ba3701ec3 100644 --- a/docs/en/configuration/providers.md +++ b/docs/en/configuration/providers.md @@ -7,33 +7,34 @@ Kimi Code CLI supports multiple LLM platforms, which can be configured via confi The easiest way to configure is to run the `/login` command (alias `/setup`) in shell mode and follow the wizard to select platform and model: 1. Select an API platform -2. Enter your API key +2. For **AWS Bedrock Mantle**, select an AWS Region, then enter your API key; for other platforms, enter your API key 3. Select a model from the available list After configuration, Kimi Code CLI will automatically save settings to `~/.kimi/config.toml` and reload. `/login` currently supports the following platforms: -| Platform | Description | -| --- | --- | -| Kimi Code | Kimi Code platform, supports search and fetch services | -| Moonshot AI Open Platform (moonshot.cn) | China region API endpoint | -| Moonshot AI Open Platform (moonshot.ai) | Global region API endpoint | +| Platform | Description | +| --------------------------------------- | ---------------------------------------------------------------------------- | +| AWS Bedrock Mantle (OpenAI-compatible) | Amazon Bedrock Mantle OpenAI API; uses `openai_legacy` and a Bedrock API key | +| Kimi Code | Kimi Code platform, supports search and fetch services | +| Moonshot AI Open Platform (moonshot.cn) | China region API endpoint | +| Moonshot AI Open Platform (moonshot.ai) | Global region API endpoint | -For other platforms, please manually edit the configuration file. +For other platforms, please manually edit the configuration file. See also [Bedrock Mantle example](../../../examples/bedrock-mantle.md). ## Provider types The `type` field in `providers` configuration specifies the API provider type. Different types use different API protocols and client implementations. 
-| Type | Description | -| --- | --- | -| `kimi` | Kimi API | -| `openai_legacy` | OpenAI Chat Completions API | -| `openai_responses` | OpenAI Responses API | -| `anthropic` | Anthropic Claude API | -| `gemini` | Google Gemini API | -| `vertexai` | Google Vertex AI | +| Type | Description | +| ------------------ | --------------------------- | +| `kimi` | Kimi API | +| `openai_legacy` | OpenAI Chat Completions API | +| `openai_responses` | OpenAI Responses API | +| `anthropic` | Anthropic Claude API | +| `gemini` | Google Gemini API | +| `vertexai` | Google Vertex AI | ### `kimi` @@ -57,6 +58,20 @@ base_url = "https://api.openai.com/v1" api_key = "sk-xxx" ``` +#### AWS Bedrock Mantle (OpenAI-compatible API) + +[Bedrock Mantle](https://docs.aws.amazon.com/bedrock/latest/userguide/bedrock-mantle.html) exposes an OpenAI-compatible endpoint per AWS Region, for example: + +`https://bedrock-mantle.<region>.api.aws/v1` + +Use a **Bedrock API key** (not IAM access keys) with `type = "openai_legacy"`. Model IDs look like `moonshotai.kimi-k2.5` (catalog varies by region). + +**`/login` flow:** choose **AWS Bedrock Mantle (OpenAI-compatible)**, pick a region, enter the API key, then select a model. This writes a managed provider `managed:bedrock-mantle` and clears Moonshot search/fetch (those tools are Kimi Code–specific). + +**Environment overrides** (optional): `OPENAI_BASE_URL` and `OPENAI_API_KEY` override the saved `base_url` and `api_key` for `openai_legacy` providers only when set; they do not change other providers’ URLs. + +**Example:** see [`examples/bedrock-mantle.md`](../../../examples/bedrock-mantle.md). + ### `openai_responses` For OpenAI Responses API (newer API format). @@ -106,12 +121,12 @@ env = { GOOGLE_CLOUD_PROJECT = "your-project-id" } The `capabilities` field in model configuration declares the capabilities supported by the model. This affects feature availability in Kimi Code CLI. 
-| Capability | Description | -| --- | --- | -| `thinking` | Supports thinking mode (deep reasoning), can be toggled | -| `always_thinking` | Always uses thinking mode (cannot be disabled) | -| `image_in` | Supports image input | -| `video_in` | Supports video input | +| Capability | Description | +| ----------------- | ------------------------------------------------------- | +| `thinking` | Supports thinking mode (deep reasoning), can be toggled | +| `always_thinking` | Always uses thinking mode (cannot be disabled) | +| `image_in` | Supports image input | +| `video_in` | Supports video input | ```toml [models.gemini-3-pro-preview] @@ -143,9 +158,9 @@ The `SearchWeb` and `FetchURL` tools depend on external services, currently only When selecting the Kimi Code platform using `/login`, search and fetch services are automatically configured. -| Service | Corresponding tool | Behavior when not configured | -| --- | --- | --- | -| `moonshot_search` | `SearchWeb` | Tool unavailable | -| `moonshot_fetch` | `FetchURL` | Falls back to local fetching | +| Service | Corresponding tool | Behavior when not configured | +| ----------------- | ------------------ | ---------------------------- | +| `moonshot_search` | `SearchWeb` | Tool unavailable | +| `moonshot_fetch` | `FetchURL` | Falls back to local fetching | When using other platforms, the `FetchURL` tool is still available but will fall back to local fetching. diff --git a/docs/zh/configuration/providers.md b/docs/zh/configuration/providers.md index 7e5c34655..54dd5fd46 100644 --- a/docs/zh/configuration/providers.md +++ b/docs/zh/configuration/providers.md @@ -7,33 +7,34 @@ Kimi Code CLI 支持多种 LLM 平台,可以通过配置文件或 `/login` 命 最简单的配置方式是在 Shell 模式下运行 `/login` 命令(别名 `/setup`),按照向导完成平台和模型的选择: 1. 选择 API 平台 -2. 输入 API 密钥 +2. 若选择 **AWS Bedrock Mantle(OpenAI 兼容)**,先选择 AWS 区域,再输入 API 密钥;其他平台直接输入 API 密钥 3. 
从可用模型列表中选择模型 配置完成后,Kimi Code CLI 会自动保存设置到 `~/.kimi/config.toml` 并重新加载。 `/login` 目前支持以下平台: -| 平台 | 说明 | -| --- | --- | -| Kimi Code | Kimi Code 平台,支持搜索和抓取服务 | -| Moonshot AI 开放平台 (moonshot.cn) | 中国区 API 端点 | -| Moonshot AI Open Platform (moonshot.ai) | 全球区 API 端点 | +| 平台 | 说明 | +| --------------------------------------- | ---------------------------------------------------------------------------------- | +| AWS Bedrock Mantle(OpenAI 兼容) | Amazon Bedrock Mantle 的 OpenAI 兼容 API;使用 `openai_legacy` 与 Bedrock API 密钥 | +| Kimi Code | Kimi Code 平台,支持搜索和抓取服务 | +| Moonshot AI 开放平台 (moonshot.cn) | 中国区 API 端点 | +| Moonshot AI Open Platform (moonshot.ai) | 全球区 API 端点 | -如需使用其他平台,请手动编辑配置文件。 +如需使用其他平台,请手动编辑配置文件。示例见仓库内 [`examples/bedrock-mantle.md`](../../../examples/bedrock-mantle.md)。 ## 供应商类型 `providers` 配置中的 `type` 字段指定 API 供应商类型。不同类型使用不同的 API 协议和客户端实现。 -| 类型 | 说明 | -| --- | --- | -| `kimi` | Kimi API | -| `openai_legacy` | OpenAI Chat Completions API | -| `openai_responses` | OpenAI Responses API | -| `anthropic` | Anthropic Claude API | -| `gemini` | Google Gemini API | -| `vertexai` | Google Vertex AI | +| 类型 | 说明 | +| ------------------ | --------------------------- | +| `kimi` | Kimi API | +| `openai_legacy` | OpenAI Chat Completions API | +| `openai_responses` | OpenAI Responses API | +| `anthropic` | Anthropic Claude API | +| `gemini` | Google Gemini API | +| `vertexai` | Google Vertex AI | ### `kimi` @@ -57,6 +58,20 @@ base_url = "https://api.openai.com/v1" api_key = "sk-xxx" ``` +#### AWS Bedrock Mantle(OpenAI 兼容 API) + +[Bedrock Mantle](https://docs.aws.amazon.com/bedrock/latest/userguide/bedrock-mantle.html) 在每个 AWS 区域提供 OpenAI 兼容端点,例如: + +`https://bedrock-mantle.<region>.api.aws/v1` + +请使用 **Bedrock API 密钥**(不是 IAM 访问密钥),`type` 设为 `openai_legacy`。模型 ID 形如 `moonshotai.kimi-k2.5`(实际目录随区域变化)。 + +**`/login` 流程:** 选择 **AWS Bedrock Mantle (OpenAI-compatible)**,选择区域,输入 API 密钥,再选模型。将写入托管供应商 `managed:bedrock-mantle`,并清除 Moonshot 搜索/抓取配置(这些能力依赖 Kimi Code)。 + +**环境变量覆盖(可选):** 若设置了 
`OPENAI_BASE_URL` 与 `OPENAI_API_KEY`,会覆盖已保存的 `openai_legacy` 供应商的 `base_url` 与 `api_key`,**不会**影响其他供应商的 URL。 + +**示例:** 见仓库 [`examples/bedrock-mantle.md`](../../../examples/bedrock-mantle.md)(英文说明)。 + ### `openai_responses` 用于 OpenAI Responses API(较新的 API 格式)。 @@ -106,12 +121,12 @@ env = { GOOGLE_CLOUD_PROJECT = "your-project-id" } 模型配置中的 `capabilities` 字段声明模型支持的能力。这会影响 Kimi Code CLI 的功能可用性。 -| 能力 | 说明 | -| --- | --- | -| `thinking` | 支持 Thinking 模式(深度思考),可开关 | -| `always_thinking` | 始终使用 Thinking 模式(不可关闭) | -| `image_in` | 支持图片输入 | -| `video_in` | 支持视频输入 | +| 能力 | 说明 | +| ----------------- | -------------------------------------- | +| `thinking` | 支持 Thinking 模式(深度思考),可开关 | +| `always_thinking` | 始终使用 Thinking 模式(不可关闭) | +| `image_in` | 支持图片输入 | +| `video_in` | 支持视频输入 | ```toml [models.gemini-3-pro-preview] @@ -143,10 +158,9 @@ capabilities = ["thinking", "image_in"] 使用 `/login` 选择 Kimi Code 平台时,搜索和抓取服务会自动配置。 -| 服务 | 对应工具 | 未配置时的行为 | -| --- | --- | --- | -| `moonshot_search` | `SearchWeb` | 工具不可用 | -| `moonshot_fetch` | `FetchURL` | 回退到本地抓取 | +| 服务 | 对应工具 | 未配置时的行为 | +| ----------------- | ----------- | -------------- | +| `moonshot_search` | `SearchWeb` | 工具不可用 | +| `moonshot_fetch` | `FetchURL` | 回退到本地抓取 | 使用其他平台时,`FetchURL` 工具仍可使用,但会回退到本地抓取。 - diff --git a/examples/bedrock-mantle.md b/examples/bedrock-mantle.md new file mode 100644 index 000000000..5dde69285 --- /dev/null +++ b/examples/bedrock-mantle.md @@ -0,0 +1,53 @@ +# AWS Bedrock Mantle with Kimi Code CLI + +Use Kimi models through [Amazon Bedrock Mantle](https://docs.aws.amazon.com/bedrock/latest/userguide/bedrock-mantle.html)’s OpenAI-compatible API. + +## Quick setup (recommended) + +1. Create a Bedrock API key in the AWS console. +2. Start Kimi Code CLI shell mode and run `/login` (or `/setup`). +3. Choose **AWS Bedrock Mantle (OpenAI-compatible)**. +4. 
Pick an AWS Region that supports Mantle and lists the models you need (for Kimi, regions such as `eu-west-2` or `us-east-1` often expose `moonshotai.*` IDs; availability varies by region). +5. Paste your API key and select a model (for example `moonshotai.kimi-k2.5`). + +Configuration is written to `~/.kimi/config.toml` under the managed provider `managed:bedrock-mantle`. + +## Verify (non-interactive) + +```sh +kimi --print --prompt "Say hello in one short sentence." +``` + +## Manual configuration + +If you prefer not to use `/login`, use `openai_legacy` with the Mantle `base_url`: + +```toml +default_model = "bedrock-mantle/moonshotai.kimi-k2.5" + +[providers."managed:bedrock-mantle"] +type = "openai_legacy" +base_url = "https://bedrock-mantle.eu-west-2.api.aws/v1" +api_key = "ABSK..." + +[models."bedrock-mantle/moonshotai.kimi-k2.5"] +provider = "managed:bedrock-mantle" +model = "moonshotai.kimi-k2.5" +max_context_size = 131072 +capabilities = ["thinking", "image_in"] +``` + +Model alias keys must match what `/login` would generate (`<platform-id>/<model-id>`). + +## Environment overrides + +For any `openai_legacy` provider, Kimi Code CLI can override the saved URL and key from the environment: + +- `OPENAI_BASE_URL` — replaces `base_url` when set. +- `OPENAI_API_KEY` — replaces `api_key` when set. + +These apply per run and are useful for CI or switching regions without editing TOML. + +## Search and fetch + +Mantle setup does **not** configure Moonshot Search/Fetch. The `SearchWeb` and `FetchURL` tools behave like other non–Kimi Code providers (search unavailable; fetch may fall back locally). Use Kimi Code via `/login` if you need those services. 
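
The endpoint pattern and the context-length fallback described above can be sketched as two standalone functions. This is a minimal illustration of the behavior in `src/kimi_cli/auth/platforms.py` (`bedrock_mantle_base_url` and the sparse-payload heuristic); the function names here are illustrative, not part of the package API:

```python
def mantle_base_url(region: str) -> str:
    """Build the OpenAI-compatible Bedrock Mantle endpoint (includes the /v1 prefix)."""
    return f"https://bedrock-mantle.{region}.api.aws/v1"


def fallback_context_length(model_id: str) -> int:
    """When /models omits context_length, assume 131072 tokens for Kimi models, else 128000."""
    lower = model_id.lower()
    return 131_072 if "kimi" in lower or "moonshotai" in lower else 128_000


print(mantle_base_url("eu-west-2"))
# https://bedrock-mantle.eu-west-2.api.aws/v1
print(fallback_context_length("moonshotai.kimi-k2.5"))
# 131072
print(fallback_context_length("openai.gpt-oss-120b"))
# 128000
```

The fallback matters because Mantle's `/models` listing may not report a context length, and the CLI still needs `max_context_size` to manage its context window.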
diff --git a/src/kimi_cli/auth/platforms.py b/src/kimi_cli/auth/platforms.py index a474dd5a0..405a45cc5 100644 --- a/src/kimi_cli/auth/platforms.py +++ b/src/kimi_cli/auth/platforms.py @@ -1,7 +1,7 @@ from __future__ import annotations import os -from typing import Any, NamedTuple, cast +from typing import Any, Literal, NamedTuple, cast import aiohttp from pydantic import BaseModel @@ -12,6 +12,28 @@ from kimi_cli.utils.aiohttp import new_client_session from kimi_cli.utils.logging import logger +BEDROCK_MANTLE_PLATFORM_ID = "bedrock-mantle" + +BEDROCK_MANTLE_REGIONS: list[tuple[str, str]] = [ + ("us-east-1", "US East (N. Virginia)"), + ("us-east-2", "US East (Ohio)"), + ("us-west-2", "US West (Oregon)"), + ("eu-west-1", "Europe (Ireland)"), + ("eu-west-2", "Europe (London)"), + ("eu-central-1", "Europe (Frankfurt)"), + ("ap-south-1", "Asia Pacific (Mumbai)"), + ("ap-northeast-1", "Asia Pacific (Tokyo)"), + ("ap-southeast-3", "Asia Pacific (Jakarta)"), + ("sa-east-1", "South America (São Paulo)"), + ("eu-south-1", "Europe (Milan)"), + ("eu-north-1", "Europe (Stockholm)"), +] + + +def bedrock_mantle_base_url(region: str) -> str: + """OpenAI-compatible Bedrock Mantle base URL (includes ``/v1`` prefix).""" + return f"https://bedrock-mantle.{region}.api.aws/v1" + class ModelInfo(BaseModel): """Model information returned from the API.""" @@ -47,6 +69,7 @@ class Platform(NamedTuple): search_url: str | None = None fetch_url: str | None = None allowed_prefixes: list[str] | None = None + llm_provider_type: Literal["kimi", "openai_legacy"] = "kimi" def _kimi_code_base_url() -> str: @@ -56,6 +79,12 @@ def _kimi_code_base_url() -> str: PLATFORMS: list[Platform] = [ + Platform( + id=BEDROCK_MANTLE_PLATFORM_ID, + name="AWS Bedrock Mantle (OpenAI-compatible)", + base_url="", + llm_provider_type="openai_legacy", + ), Platform( id=KIMI_CODE_PLATFORM_ID, name="Kimi Code", @@ -152,8 +181,15 @@ async def refresh_managed_models(config: Config) -> bool: provider=provider_key, ) continue 
+ list_url = (provider.base_url or "").strip() or (platform.base_url or "").strip() + if not list_url: + logger.warning( + "Missing base URL for managed provider: {provider}", + provider=provider_key, + ) + continue try: - models = await list_models(platform, api_key) + models = await list_models(platform, api_key, list_base_url=list_url) except Exception as exc: logger.error( "Failed to refresh models for {platform}: {error}", @@ -177,12 +213,22 @@ async def refresh_managed_models(config: Config) -> bool: return changed -async def list_models(platform: Platform, api_key: str) -> list[ModelInfo]: +async def list_models( + platform: Platform, + api_key: str, + *, + list_base_url: str | None = None, +) -> list[ModelInfo]: + effective_base = (list_base_url if list_base_url is not None else platform.base_url).strip() + if not effective_base: + raise ValueError("base URL is required to list models") + openai_compatible = platform.llm_provider_type == "openai_legacy" async with new_client_session() as session: models = await _list_models( session, - base_url=platform.base_url, + base_url=effective_base, api_key=api_key, + openai_compatible=openai_compatible, ) if platform.allowed_prefixes is None: return models @@ -190,11 +236,48 @@ async def list_models(platform: Platform, api_key: str) -> list[ModelInfo]: return [model for model in models if model.id.startswith(prefixes)] +def _model_info_from_models_payload_item( + item: dict[str, Any], *, openai_compatible: bool +) -> ModelInfo | None: + model_id = item.get("id") + if not model_id: + return None + mid = str(model_id) + if openai_compatible: + raw_ctx = item.get("context_length") + context_length = 0 + if raw_ctx is not None and str(raw_ctx).strip(): + try: + context_length = max(0, int(raw_ctx)) + except (TypeError, ValueError): + context_length = 0 + if context_length == 0: + lower = mid.lower() + context_length = ( + 131_072 if "kimi" in lower or "moonshotai" in lower else 128_000 + ) + return ModelInfo( + id=mid, 
+ context_length=context_length, + supports_reasoning=bool(item.get("supports_reasoning")), + supports_image_in=bool(item.get("supports_image_in")), + supports_video_in=bool(item.get("supports_video_in")), + ) + return ModelInfo( + id=mid, + context_length=int(item.get("context_length") or 0), + supports_reasoning=bool(item.get("supports_reasoning")), + supports_image_in=bool(item.get("supports_image_in")), + supports_video_in=bool(item.get("supports_video_in")), + ) + + async def _list_models( session: aiohttp.ClientSession, *, base_url: str, api_key: str, + openai_compatible: bool = False, ) -> list[ModelInfo]: models_url = f"{base_url.rstrip('/')}/models" try: @@ -213,18 +296,9 @@ async def _list_models( result: list[ModelInfo] = [] for item in cast(list[dict[str, Any]], data): - model_id = item.get("id") - if not model_id: - continue - result.append( - ModelInfo( - id=str(model_id), - context_length=int(item.get("context_length") or 0), - supports_reasoning=bool(item.get("supports_reasoning")), - supports_image_in=bool(item.get("supports_image_in")), - supports_video_in=bool(item.get("supports_video_in")), - ) - ) + info = _model_info_from_models_payload_item(item, openai_compatible=openai_compatible) + if info is not None: + result.append(info) return result diff --git a/src/kimi_cli/ui/shell/setup.py b/src/kimi_cli/ui/shell/setup.py index e44d398c4..89052197a 100644 --- a/src/kimi_cli/ui/shell/setup.py +++ b/src/kimi_cli/ui/shell/setup.py @@ -10,9 +10,12 @@ from kimi_cli import logger from kimi_cli.auth import KIMI_CODE_PLATFORM_ID from kimi_cli.auth.platforms import ( + BEDROCK_MANTLE_PLATFORM_ID, + BEDROCK_MANTLE_REGIONS, PLATFORMS, ModelInfo, Platform, + bedrock_mantle_base_url, get_platform_by_name, list_models, managed_model_key, @@ -59,6 +62,8 @@ async def setup_platform(platform: Platform) -> bool: thinking_label = "on" if result.thinking else "off" console.print("[green]✓ Setup complete![/green]") console.print(f" Platform: 
[bold]{result.platform.name}[/bold]") + if result.mantle_region: + console.print(f" Region: [bold]{result.mantle_region}[/bold]") console.print(f" Model: [bold]{result.selected_model.id}[/bold]") console.print(f" Thinking: [bold]{thinking_label}[/bold]") console.print(" Reloading...") @@ -71,15 +76,65 @@ class _SetupResult(NamedTuple): selected_model: ModelInfo models: list[ModelInfo] thinking: bool + resolved_base_url: str + mantle_region: str | None = None async def _setup_platform(platform: Platform) -> _SetupResult | None: - # enter the API key + if platform.id == BEDROCK_MANTLE_PLATFORM_ID: + return await _setup_bedrock_mantle(platform) + return await _setup_kimi_like_platform(platform) + + +async def _setup_bedrock_mantle(platform: Platform) -> _SetupResult | None: + region_labels = [f"{code} — {title}" for code, title in BEDROCK_MANTLE_REGIONS] + label = await _prompt_choice( + header="Select AWS Region (↑↓ navigate, Enter select, Ctrl+C cancel):", + choices=region_labels, + ) + if not label: + console.print("[red]No region selected[/red]") + return None + # Parse region code from label (e.g., "us-east-1 — US East (N. 
Virginia)" -> "us-east-1") + # Handles both em-dash (—) and regular hyphen (-) separators + region = label.split(" ", 1)[0] if " " in label else label + resolved_base_url = bedrock_mantle_base_url(region) + + api_key = await _prompt_text("Enter your Bedrock API key", is_password=True) + if not api_key: + return None + + try: + with console.status("[cyan]Verifying API key...[/cyan]"): + models = await list_models(platform, api_key, list_base_url=resolved_base_url) + except aiohttp.ClientResponseError as e: + logger.error("Failed to get models: {error}", error=e) + console.print(f"[red]Failed to get models: {e.message}[/red]") + if e.status == 401: + console.print( + "[yellow]Hint: Create a Bedrock API key in the AWS console and ensure " + "this region supports Bedrock Mantle.[/yellow]" + ) + return None + except Exception as e: + logger.error("Failed to get models: {error}", error=e) + console.print(f"[red]Failed to get models: {e}[/red]") + return None + + return await _finalize_model_and_thinking( + platform, + api_key=api_key, + models=models, + resolved_base_url=resolved_base_url, + mantle_region=region, + ) + + +async def _setup_kimi_like_platform(platform: Platform) -> _SetupResult | None: api_key = await _prompt_text("Enter your API key", is_password=True) if not api_key: return None - # list models try: with console.status("[cyan]Verifying API key...[/cyan]"): models = await list_models(platform, api_key) @@ -97,7 +152,23 @@ async def _setup_platform(platform: Platform) -> _SetupResult | None: console.print(f"[red]Failed to get models: {e}[/red]") return None - # select the model + return await _finalize_model_and_thinking( + platform, + api_key=api_key, + models=models, + resolved_base_url=platform.base_url, + mantle_region=None, + ) + + +async def _finalize_model_and_thinking( + platform: Platform, + *, + api_key: str, + models: list[ModelInfo], + resolved_base_url: str, + mantle_region: str | None, +) -> _SetupResult | None: if not models: 
console.print("[red]No models available for the selected platform[/red]") return None @@ -113,7 +184,6 @@ async def _setup_platform(platform: Platform) -> _SetupResult | None: selected_model = model_map[model_id] - # Determine thinking mode based on model capabilities capabilities = selected_model.capabilities thinking: bool @@ -136,6 +206,8 @@ async def _setup_platform(platform: Platform) -> _SetupResult | None: selected_model=selected_model, models=models, thinking=thinking, + resolved_base_url=resolved_base_url, + mantle_region=mantle_region, ) @@ -144,8 +216,8 @@ def _apply_setup_result(result: _SetupResult) -> None: provider_key = managed_provider_key(result.platform.id) model_key = managed_model_key(result.platform.id, result.selected_model.id) config.providers[provider_key] = LLMProvider( - type="kimi", - base_url=result.platform.base_url, + type=result.platform.llm_provider_type, + base_url=result.resolved_base_url, api_key=result.api_key, ) for key, model in list(config.models.items()): @@ -162,17 +234,21 @@ def _apply_setup_result(result: _SetupResult) -> None: config.default_model = model_key config.default_thinking = result.thinking - if result.platform.search_url: - config.services.moonshot_search = MoonshotSearchConfig( - base_url=result.platform.search_url, - api_key=result.api_key, - ) + if result.platform.id == BEDROCK_MANTLE_PLATFORM_ID: + config.services.moonshot_search = None + config.services.moonshot_fetch = None + else: + if result.platform.search_url: + config.services.moonshot_search = MoonshotSearchConfig( + base_url=result.platform.search_url, + api_key=result.api_key, + ) - if result.platform.fetch_url: - config.services.moonshot_fetch = MoonshotFetchConfig( - base_url=result.platform.fetch_url, - api_key=result.api_key, - ) + if result.platform.fetch_url: + config.services.moonshot_fetch = MoonshotFetchConfig( + base_url=result.platform.fetch_url, + api_key=result.api_key, + ) save_config(config) diff --git 
a/tests/auth/test_bedrock_mantle_platform.py b/tests/auth/test_bedrock_mantle_platform.py new file mode 100644 index 000000000..5159a88b3 --- /dev/null +++ b/tests/auth/test_bedrock_mantle_platform.py @@ -0,0 +1,52 @@ +"""Tests for Bedrock Mantle platform helpers and OpenAI-style model list parsing.""" + +from kimi_cli.auth import platforms as auth_platforms +from kimi_cli.auth.platforms import ( + BEDROCK_MANTLE_PLATFORM_ID, + bedrock_mantle_base_url, + get_platform_by_id, +) + + +def test_bedrock_mantle_base_url() -> None: + assert bedrock_mantle_base_url("eu-west-2") == "https://bedrock-mantle.eu-west-2.api.aws/v1" + + +def test_bedrock_mantle_platform_registered() -> None: + p = get_platform_by_id(BEDROCK_MANTLE_PLATFORM_ID) + assert p is not None + assert p.llm_provider_type == "openai_legacy" + assert p.base_url == "" + + +def test_openai_compatible_model_sparse_payload_kimi() -> None: + info = auth_platforms._model_info_from_models_payload_item( + {"id": "moonshotai.kimi-k2.5"}, openai_compatible=True + ) + assert info is not None + assert info.id == "moonshotai.kimi-k2.5" + assert info.context_length == 131_072 + + +def test_openai_compatible_model_sparse_payload_other() -> None: + info = auth_platforms._model_info_from_models_payload_item( + {"id": "openai.gpt-oss-120b"}, openai_compatible=True + ) + assert info is not None + assert info.context_length == 128_000 + + +def test_kimi_payload_unchanged() -> None: + info = auth_platforms._model_info_from_models_payload_item( + { + "id": "kimi-k2-turbo-preview", + "context_length": 65536, + "supports_reasoning": True, + "supports_image_in": False, + "supports_video_in": False, + }, + openai_compatible=False, + ) + assert info is not None + assert info.context_length == 65536 + assert info.supports_reasoning is True
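
The region-selection step in `setup.py` above recovers the region code from a display label by taking the token before the first space, which works regardless of the separator character. As a standalone sketch of that parsing rule (label format assumed from `BEDROCK_MANTLE_REGIONS`):

```python
def region_from_label(label: str) -> str:
    """Extract the region code (the token before the first space) from a label
    like 'us-east-1 — US East (N. Virginia)'; bare codes pass through unchanged."""
    return label.split(" ", 1)[0] if " " in label else label


print(region_from_label("us-east-1 — US East (N. Virginia)"))
# us-east-1
print(region_from_label("eu-west-2"))
# eu-west-2
```

Because AWS region codes never contain spaces, splitting on the first space is safe even if the label separator or descriptive text changes.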