---
name: "nemoclaw-configure-inference"
description: "Lists the inference providers offered during NemoClaw onboarding, switches the active inference model without restarting the sandbox, and connects NemoClaw to a local inference server. Use when explaining which providers are available, what the onboard wizard presents, how inference routing works, or when setting up Ollama, vLLM, TensorRT-LLM, NIM, or any OpenAI-compatible local model server."
---

# NemoClaw Configure Inference

Lists the inference providers offered during NemoClaw onboarding, explains how inference routing works, and covers switching models and connecting local inference servers.

## Context

NemoClaw supports multiple inference providers.
During onboarding, the `nemoclaw onboard` wizard presents a numbered list of providers to choose from.
Your selection determines where the agent's inference traffic is routed.

## How Inference Routing Works

The agent inside the sandbox talks to `inference.local`.
It never connects to a provider directly.
OpenShell intercepts inference traffic on the host and forwards it to the provider you selected.

Provider credentials stay on the host.
The sandbox does not receive your API key.

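The routing rule above can be sketched as a small host-side lookup: the sandbox always addresses `inference.local`, and the host resolves the provider and injects the credential. Everything below (the provider table, base URLs, and function name) is illustrative; OpenShell's actual forwarder is internal and not shown here.

```python
import os

# Hypothetical provider table: credential env var (or None) and upstream base URL.
PROVIDERS = {
    "openai": ("OPENAI_API_KEY", "https://api.openai.com/v1"),
    "ollama": (None, "http://localhost:11434/v1"),
}

def route_request(provider: str, path: str) -> tuple[str, dict]:
    """Return the upstream URL and headers for a sandbox request to inference.local."""
    key_var, base_url = PROVIDERS[provider]
    headers = {}
    if key_var:
        # The credential is read on the host; the sandbox never sees it.
        headers["Authorization"] = f"Bearer {os.environ.get(key_var, '')}"
    return base_url + path, headers

url, headers = route_request("ollama", "/chat/completions")
print(url)  # http://localhost:11434/v1/chat/completions
```

Note that Ollama needs no credential, so the forwarded request carries no auth header at all.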
## Provider Options

The onboard wizard presents the following provider options by default.
The first six are always available.
Ollama appears when it is installed or running on the host.

| Option | Description | Curated models |
|--------|-------------|----------------|
| NVIDIA Endpoints | Routes to models hosted on [build.nvidia.com](https://build.nvidia.com). You can also enter any model ID from the catalog. Set `NVIDIA_API_KEY`. | Nemotron 3 Super 120B, Kimi K2.5, GLM-5, MiniMax M2.5, GPT-OSS 120B |
| OpenAI | Routes to the OpenAI API. Set `OPENAI_API_KEY`. | `gpt-5.4`, `gpt-5.4-mini`, `gpt-5.4-nano`, `gpt-5.4-pro-2026-03-05` |
| Other OpenAI-compatible endpoint | Routes to any server that implements `/v1/chat/completions`. The wizard prompts for a base URL and model name. Works with OpenRouter, LocalAI, llama.cpp, or any compatible proxy. Set `COMPATIBLE_API_KEY`. | You provide the model name. |
| Anthropic | Routes to the Anthropic Messages API. Set `ANTHROPIC_API_KEY`. | `claude-sonnet-4-6`, `claude-haiku-4-5`, `claude-opus-4-6` |
| Other Anthropic-compatible endpoint | Routes to any server that implements the Anthropic Messages API (`/v1/messages`). The wizard prompts for a base URL and model name. Set `COMPATIBLE_ANTHROPIC_API_KEY`. | You provide the model name. |
| Google Gemini | Routes to Google's OpenAI-compatible endpoint. Set `GEMINI_API_KEY`. | `gemini-3.1-pro-preview`, `gemini-3.1-flash-lite-preview`, `gemini-3-flash-preview`, `gemini-2.5-pro`, `gemini-2.5-flash`, `gemini-2.5-flash-lite` |
| Local Ollama | Routes to a local Ollama instance on `localhost:11434`. NemoClaw detects installed models, offers starter models if none are present, pulls and warms the selected model, and validates it. | Selected during onboarding. For more information, refer to Use a Local Inference Server (see the `nemoclaw-configure-inference` skill). |

## Experimental Options

The following local inference options require `NEMOCLAW_EXPERIMENTAL=1`. They appear in the onboarding selection list when their prerequisites are met.

| Option | Condition | Notes |
|--------|-----------|-------|
| Local NVIDIA NIM | NIM-capable GPU detected | Pulls and manages a NIM container. |
| Local vLLM | vLLM running on `localhost:8000` | Auto-detects the loaded model. |

For setup instructions, refer to Use a Local Inference Server (see the `nemoclaw-configure-inference` skill).

## Validation

NemoClaw validates the selected provider and model before creating the sandbox.
If validation fails, the wizard returns to provider selection.

| Provider type | Validation method |
|---|---|
| OpenAI-compatible | Tries `/responses` first, then `/chat/completions`. |
| Anthropic-compatible | Tries `/v1/messages`. |
| NVIDIA Endpoints (manual model entry) | Validates the model name against the catalog API. |
| Compatible endpoints | Sends a real inference request because many proxies do not expose a `/models` endpoint. |

## Prerequisites

- NemoClaw installed.
- A running NemoClaw sandbox.
- The OpenShell CLI on your `PATH`.
- A local model server running, or Ollama installed. The onboard wizard can also start Ollama for you.

Change the active inference model while the sandbox is running.
No restart is required.

The output includes the active provider, model, and endpoint.

- The sandbox continues to use `inference.local`.
- Runtime switching changes the OpenShell route. It does not rewrite your stored credentials.

---

NemoClaw can route inference to a model server running on your machine instead of a cloud API.
This section covers Ollama, compatible-endpoint paths for other servers, and two experimental options for vLLM and NVIDIA NIM.

All approaches use the same `inference.local` routing model.
The agent inside the sandbox never connects to your model server directly.
OpenShell intercepts inference traffic and forwards it to the local endpoint you configure.

## Step 4: Ollama

Ollama is the default local inference option.
The onboard wizard detects Ollama automatically when it is installed or running on the host.

If Ollama is not running, NemoClaw starts it for you.
On macOS, the wizard also offers to install Ollama through Homebrew if it is not present.

Run the onboard wizard.

```console
$ nemoclaw onboard
```

Select **Local Ollama** from the provider list.
NemoClaw lists installed models or offers starter models if none are installed.
It pulls the selected model, loads it into memory, and validates it before continuing.

### Linux with Docker

On Linux hosts that run NemoClaw with Docker, the sandbox reaches Ollama through
`http://host.openshell.internal:11434`, not the host shell's `localhost` socket.
If Ollama is already running, make sure it listens on `0.0.0.0:11434` instead of
`127.0.0.1:11434`.

```console
$ OLLAMA_HOST=0.0.0.0:11434 ollama serve
```

If Ollama only binds loopback, NemoClaw can detect it on the host, but the
sandbox-side validation step fails because containers cannot reach it.

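The host/sandbox split above boils down to picking a different base URL depending on where the code runs. A minimal sketch: `host.openshell.internal` comes from this section, while detecting a container via `/.dockerenv` is a common heuristic of ours, not necessarily NemoClaw's method.

```python
import os

def ollama_base_url(in_container: bool) -> str:
    """Pick the Ollama endpoint reachable from the current environment."""
    if in_container:
        # Inside the sandbox, localhost is the container itself,
        # so Ollama must be reached through the host alias.
        return "http://host.openshell.internal:11434"
    return "http://127.0.0.1:11434"

def running_in_container() -> bool:
    # Heuristic: Docker creates /.dockerenv inside containers.
    return os.path.exists("/.dockerenv")

print(ollama_base_url(running_in_container()))
```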
### Non-Interactive Setup

```console
$ NEMOCLAW_PROVIDER=ollama \
  NEMOCLAW_MODEL=qwen2.5:14b \
  nemoclaw onboard --non-interactive
```

If `NEMOCLAW_MODEL` is not set, NemoClaw selects a default model based on available memory.

| Variable | Purpose |
|---|---|
| `NEMOCLAW_PROVIDER` | Set to `ollama`. |
| `NEMOCLAW_MODEL` | Ollama model tag to use. Optional. |

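Memory-based default selection might look like the sketch below. The documentation says only that NemoClaw picks a default based on available memory; the model tags and thresholds here are made up for illustration and are not NemoClaw's actual table.

```python
def default_ollama_model(free_gib: float) -> str:
    """Pick a default Ollama model tag by free memory (hypothetical thresholds)."""
    if free_gib >= 24:
        return "qwen2.5:14b"   # roomy hosts can hold a mid-size model
    if free_gib >= 12:
        return "llama3.1:8b"
    return "llama3.2:3b"       # small fallback for constrained hosts

print(default_ollama_model(16))  # llama3.1:8b
```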
## Step 5: OpenAI-Compatible Server

This option works with any server that implements `/v1/chat/completions`, including vLLM, TensorRT-LLM, llama.cpp, LocalAI, and others.

Start your model server.
The examples below use vLLM, but any OpenAI-compatible server works.

```console
$ vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
```

Run the onboard wizard.

```console
$ nemoclaw onboard
```

When the wizard asks you to choose an inference provider, select **Other OpenAI-compatible endpoint**.
Enter the base URL of your local server, for example `http://localhost:8000/v1`.

The wizard prompts for an API key.
If your server does not require authentication, enter any non-empty string (for example, `dummy`).

NemoClaw validates the endpoint by sending a test inference request before continuing.

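To check an endpoint by hand before onboarding, you can send a minimal request of the same shape. This is a generic OpenAI-compatible chat request, similar in spirit to NemoClaw's probe; the exact payload NemoClaw sends is not documented here.

```python
import json
import urllib.request

def build_probe(base_url: str, model: str, api_key: str) -> urllib.request.Request:
    """Build a minimal /chat/completions request for an OpenAI-compatible server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 1,  # keep the probe cheap
    }).encode()
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            # Any non-empty value works for servers without authentication.
            "Authorization": f"Bearer {api_key}",
        },
    )

req = build_probe("http://localhost:8000/v1", "meta-llama/Llama-3.1-8B-Instruct", "dummy")
print(req.full_url)  # http://localhost:8000/v1/chat/completions
# urllib.request.urlopen(req) would send it; a 200 response means the endpoint works.
```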
### Non-Interactive Setup

Set the following environment variables for scripted or CI/CD deployments.

```console
$ NEMOCLAW_PROVIDER=custom \
  NEMOCLAW_ENDPOINT_URL=http://localhost:8000/v1 \
  NEMOCLAW_MODEL=meta-llama/Llama-3.1-8B-Instruct \
  COMPATIBLE_API_KEY=dummy \
  nemoclaw onboard --non-interactive
```

| Variable | Purpose |
|---|---|
| `NEMOCLAW_PROVIDER` | Set to `custom` for an OpenAI-compatible endpoint. |
| `NEMOCLAW_ENDPOINT_URL` | Base URL of the local server. |
| `NEMOCLAW_MODEL` | Model ID as reported by the server. |
| `COMPATIBLE_API_KEY` | API key for the endpoint. Use any non-empty value if authentication is not required. |

## Step 6: Anthropic-Compatible Server

If your local server implements the Anthropic Messages API (`/v1/messages`), choose **Other Anthropic-compatible endpoint** during onboarding instead.

```console
$ nemoclaw onboard
```

For non-interactive setup, use `NEMOCLAW_PROVIDER=anthropicCompatible` and set `COMPATIBLE_ANTHROPIC_API_KEY`.

```console
$ NEMOCLAW_PROVIDER=anthropicCompatible \
  NEMOCLAW_ENDPOINT_URL=http://localhost:8080 \
  NEMOCLAW_MODEL=my-model \
  COMPATIBLE_ANTHROPIC_API_KEY=dummy \
  nemoclaw onboard --non-interactive
```

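A Messages-API request has a different shape from the OpenAI one, which matters when probing a compatible server by hand. The header and field names below follow the public Anthropic API (`x-api-key`, `anthropic-version`, required `max_tokens`); a compatible server should accept the same shape. This is an illustrative probe, not NemoClaw's validation code.

```python
import json
import urllib.request

def build_messages_probe(base_url: str, model: str, api_key: str) -> urllib.request.Request:
    """Build a minimal Anthropic-style /v1/messages request."""
    body = json.dumps({
        "model": model,
        "max_tokens": 1,  # required by the Messages API, unlike chat/completions
        "messages": [{"role": "user", "content": "ping"}],
    }).encode()
    return urllib.request.Request(
        base_url.rstrip("/") + "/v1/messages",
        data=body,
        headers={
            "Content-Type": "application/json",
            "x-api-key": api_key,             # Anthropic-style auth header
            "anthropic-version": "2023-06-01",
        },
    )

req = build_messages_probe("http://localhost:8080", "my-model", "dummy")
print(req.full_url)  # http://localhost:8080/v1/messages
```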
## Step 7: vLLM Auto-Detection (Experimental)

When vLLM is already running on `localhost:8000`, NemoClaw can detect it automatically and query the `/v1/models` endpoint to determine the loaded model.

Set the experimental flag and run onboard.

```console
$ NEMOCLAW_EXPERIMENTAL=1 nemoclaw onboard
```

Select **Local vLLM [experimental]** from the provider list.
NemoClaw detects the running model and validates the endpoint.

> **Note:** NemoClaw forces the `chat/completions` API path for vLLM.
> The vLLM `/v1/responses` endpoint does not run the `--tool-call-parser`, so tool calls arrive as raw text.

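Auto-detection can work because a vLLM process serves one model, so the first entry of `/v1/models` identifies it. The sketch below parses a representative OpenAI-style models list; the function is illustrative, not NemoClaw's detector.

```python
import json

def detect_model(models_response: str) -> str:
    """Extract the loaded model ID from a /v1/models response body."""
    payload = json.loads(models_response)
    data = payload.get("data", [])
    if not data:
        raise RuntimeError("no models reported; is vLLM still loading?")
    return data[0]["id"]

# Representative response from a vLLM server's /v1/models endpoint.
sample = '{"object": "list", "data": [{"id": "meta-llama/Llama-3.1-8B-Instruct", "object": "model"}]}'
print(detect_model(sample))  # meta-llama/Llama-3.1-8B-Instruct
```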
### Non-Interactive Setup

```console
$ NEMOCLAW_EXPERIMENTAL=1 \
  NEMOCLAW_PROVIDER=vllm \
  nemoclaw onboard --non-interactive
```

NemoClaw auto-detects the model from the running vLLM instance.
To override the model, set `NEMOCLAW_MODEL`.

## Step 8: NVIDIA NIM (Experimental)

NemoClaw can pull, start, and manage a NIM container on hosts with a NIM-capable NVIDIA GPU.

Set the experimental flag and run onboard.

```console
$ NEMOCLAW_EXPERIMENTAL=1 nemoclaw onboard
```

Select **Local NVIDIA NIM [experimental]** from the provider list.
NemoClaw filters available models by GPU VRAM, pulls the NIM container image, starts it, and waits for it to become healthy before continuing.

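VRAM filtering amounts to dropping models whose minimum requirement exceeds the detected GPU memory. The catalog entries and thresholds below are invented for illustration; only the filtering behavior comes from the description above.

```python
def filter_by_vram(catalog: dict[str, int], vram_gib: int) -> list[str]:
    """Return model names whose minimum VRAM requirement fits the GPU."""
    return [name for name, need in catalog.items() if need <= vram_gib]

# Hypothetical catalog: model name -> minimum VRAM in GiB.
catalog = {"model-8b": 16, "model-70b": 80}
print(filter_by_vram(catalog, 24))  # ['model-8b']
```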
> **Note:** NIM uses vLLM internally.
> The same `chat/completions` API path restriction applies.

### Non-Interactive Setup

```console
$ NEMOCLAW_EXPERIMENTAL=1 \
  NEMOCLAW_PROVIDER=nim \
  nemoclaw onboard --non-interactive
```

To select a specific model, set `NEMOCLAW_MODEL`.

## Step 9: Verify the Configuration

After onboarding completes, confirm the active provider and model.

```console
$ nemoclaw <name> status
```

The output shows the provider label (for example, "Local vLLM" or "Other OpenAI-compatible endpoint") and the active model.

## Step 10: Switch Models at Runtime

You can change the model without re-running onboard.
Refer to Switch Inference Models (see the `nemoclaw-configure-inference` skill) for the full procedure.

For compatible endpoints, the command is:

```console
$ openshell inference set --provider compatible-endpoint --model <model-name>
```

If the provider itself needs to change (for example, switching from vLLM to a cloud API), rerun `nemoclaw onboard`.

## Related Skills

- `nemoclaw-get-started` — Quickstart for first-time installation