-# Next Gen UI MCP Server
+# Next Gen UI MCP Server Container
 
 Next Gen UI MCP Server container image.
 
@@ -32,7 +32,42 @@ The MCP server container can be configured via environment variables. All config
 | `NGUI_MODEL` | `gpt-4o` | Model name (required for non-MCP providers) |
 | `NGUI_PROVIDER_API_BASE_URL` | - | Base URL for OpenAI-compatible API |
 | `NGUI_PROVIDER_API_KEY` | - | API key for the LLM provider |
-| `NGUI_PROVIDER_LLAMA_URL` | - | LlamaStack server URL (if `llamastac` is used) |
+| `NGUI_PROVIDER_LLAMA_URL` | - | LlamaStack server URL (if `llamastack` is used) |
+
+### Providers
+
+The Next Gen UI MCP server supports three inference providers for generating UI components, selected via the `NGUI_PROVIDER` environment variable:
+
+#### Provider **`mcp`**
+
+Uses Model Context Protocol sampling to leverage the client's LLM capabilities. No additional configuration is required, as inference is delegated to the connected MCP client's model.
+
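+A minimal run sketch for this provider; the image name `quay.io/next-gen-ui/mcp-server` is a placeholder for whatever image you build or pull:
+
+```bash
+# No model settings are needed: sampling is delegated to the connected MCP client
+podman run --rm -it -p 5000:5000 \
+  -e NGUI_PROVIDER=mcp \
+  quay.io/next-gen-ui/mcp-server
+```
+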
+#### Provider **`langchain`**
+
+Uses LangChain with OpenAI-compatible APIs.
+
+Requires:
+
+- `NGUI_MODEL`: Model name (e.g., `gpt-4o`, `llama3.2`)
+- `NGUI_PROVIDER_API_KEY`: API key for the provider
+- `NGUI_PROVIDER_API_BASE_URL` (optional): Custom base URL for OpenAI-compatible APIs such as Ollama
+
+Example base URLs (used in the sketch below):
+
+- OpenAI: `https://api.openai.com/v1` (default)
+- Ollama: `http://host.containers.internal:11434/v1`
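+
+A sketch of pointing the `langchain` provider at a local Ollama instance; the model name and image name are illustrative:
+
+```bash
+# Assumes llama3.2 is already pulled in Ollama; Ollama accepts any API key value
+podman run --rm -it -p 5000:5000 \
+  -e NGUI_PROVIDER=langchain \
+  -e NGUI_MODEL=llama3.2 \
+  -e NGUI_PROVIDER_API_KEY=ollama \
+  -e NGUI_PROVIDER_API_BASE_URL=http://host.containers.internal:11434/v1 \
+  quay.io/next-gen-ui/mcp-server
+```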
+
+#### Provider **`llamastack`**
+
+Uses a LlamaStack server for inference.
+
+Requires:
+
+- `NGUI_MODEL`: Model name available on the LlamaStack server
+- `NGUI_PROVIDER_LLAMA_URL`: URL of the LlamaStack server
+
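+A corresponding sketch for LlamaStack; the URL assumes LlamaStack's default port `8321` on the host, and the image name is again a placeholder:
+
+```bash
+podman run --rm -it -p 5000:5000 \
+  -e NGUI_PROVIDER=llamastack \
+  -e NGUI_MODEL=llama3.2 \
+  -e NGUI_PROVIDER_LLAMA_URL=http://host.containers.internal:8321 \
+  quay.io/next-gen-ui/mcp-server
+```
+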
 
 ### Usage Examples
 
@@ -84,6 +119,7 @@ podman run --rm -it -p 5000:5000 \
 ### Network Configuration
 
 For local development connecting to services running on the host machine:
+
 - Use `host.containers.internal` to access host services (works with Podman and Docker Desktop)
 - For Linux with Podman, you may need to use `host.docker.internal` or the host's IP address
 - Ensure the target services (like Ollama) are accessible from containers
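+
+As a quick smoke test (hypothetical, not part of the image), you can check from a throwaway container that a host-side Ollama is reachable:
+
+```bash
+# Prints Ollama's model list if the host mapping works; curlimages/curl runs curl as its entrypoint
+podman run --rm curlimages/curl \
+  -sf http://host.containers.internal:11434/v1/models
+```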