podman run --rm -it -p 5100:5100 --env MCP_PORT="5100" \
  --env NGUI_MODEL="llama3.2" --env NGUI_PROVIDER_API_BASE_URL=http://host.containers.internal:11434 --env NGUI_PROVIDER_API_KEY="ollama" \
  quay.io/next-gen-ui/mcp
```

## Configuration

The MCP server container can be configured via environment variables. All configuration options available to the standalone server are supported.

### Environment Variables Reference

| Environment Variable          | Default Value     | Description                                             |
| ----------------------------- | ----------------- | ------------------------------------------------------- |
| `MCP_TRANSPORT`               | `streamable-http` | Transport protocol (`stdio`, `sse`, `streamable-http`)  |
| `MCP_HOST`                    | `0.0.0.0`         | Host to bind to (for HTTP transports)                   |
| `MCP_PORT`                    | `5000`            | Port to bind to (for HTTP transports)                   |
| `NGUI_COMPONENT_SYSTEM`       | `json`            | Component system (`json`, `rhds`)                       |
| `NGUI_PROVIDER`               | `langchain`       | Inference provider (`mcp`, `llamastack`, `langchain`)   |
| `NGUI_MODEL`                  | `gpt-4o`          | Model name (required for non-MCP providers)             |
| `NGUI_PROVIDER_API_BASE_URL`  | -                 | Base URL for an OpenAI-compatible API                   |
| `NGUI_PROVIDER_API_KEY`       | -                 | API key for the LLM provider                            |
| `NGUI_PROVIDER_LLAMA_URL`     | -                 | LlamaStack server URL (when `llamastack` is used)       |

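Because every option is an ordinary environment variable, you can check what a running container actually received. A minimal sketch, assuming a detached container named `ngui-mcp` (the name is only an example):

```bash
# Start detached under a throwaway name, then dump its environment.
podman run -d --name ngui-mcp -p 5000:5000 --env MCP_PORT="5000" quay.io/next-gen-ui/mcp
podman exec ngui-mcp env | grep -E 'MCP_|NGUI_'
podman rm -f ngui-mcp   # clean up afterwards
```
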
### Usage Examples

#### Basic Usage with Ollama (Local LLM)
```bash
podman run --rm -it -p 5000:5000 \
  --env MCP_PORT="5000" \
  --env NGUI_MODEL="llama3.2" \
  --env NGUI_PROVIDER_API_BASE_URL="http://host.containers.internal:11434/v1" \
  --env NGUI_PROVIDER_API_KEY="ollama" \
  quay.io/next-gen-ui/mcp
```
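
Before starting the container, it helps to confirm that Ollama is actually serving the model on the host. A quick sketch, assuming Ollama is installed locally and listening on its default port 11434:

```bash
# Pull the model and confirm the OpenAI-compatible endpoint responds.
ollama pull llama3.2
curl -s http://localhost:11434/v1/models
```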

#### OpenAI Configuration
```bash
podman run --rm -it -p 5000:5000 \
  --env NGUI_MODEL="gpt-4o" \
  --env NGUI_PROVIDER_API_KEY="your-openai-api-key" \
  quay.io/next-gen-ui/mcp
```
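
To avoid typing the literal key on the command line, you can forward it from the host environment instead. The same command, assuming the key is exported on the host as `OPENAI_API_KEY` (the variable name is just an example):

```bash
podman run --rm -it -p 5000:5000 \
  --env NGUI_MODEL="gpt-4o" \
  --env NGUI_PROVIDER_API_KEY="$OPENAI_API_KEY" \
  quay.io/next-gen-ui/mcp
```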

#### Using with Environment File
Create a `.env` file:
```bash
# .env file
MCP_PORT=5000
MCP_HOST=0.0.0.0
MCP_TRANSPORT=streamable-http
NGUI_COMPONENT_SYSTEM=json
NGUI_PROVIDER=langchain
NGUI_MODEL=gpt-4o
NGUI_PROVIDER_API_KEY=your-api-key-here
```

Run with the environment file:
```bash
podman run --rm -it -p 5000:5000 --env-file .env quay.io/next-gen-ui/mcp
```
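
Individual values can still be overridden on the command line. With Podman, as with Docker, an explicit `--env` is generally expected to take precedence over values from `--env-file`, but verify this against your engine version:

```bash
# Keep the .env defaults but swap the model for this run.
podman run --rm -it -p 5000:5000 --env-file .env \
  --env NGUI_MODEL="llama3.2" \
  quay.io/next-gen-ui/mcp
```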

#### LlamaStack Provider
```bash
podman run --rm -it -p 5000:5000 \
  --env NGUI_PROVIDER="llamastack" \
  --env NGUI_MODEL="llama3.2-3b" \
  --env NGUI_PROVIDER_LLAMA_URL="http://host.containers.internal:5001" \
  quay.io/next-gen-ui/mcp
```

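#### stdio Transport

The `MCP_TRANSPORT` values in the table above include `stdio`, which is useful when an MCP client launches the server process directly instead of connecting over HTTP. A minimal sketch built from those variables (no port mapping is needed; check your client's documentation for how it spawns container commands):

```bash
podman run --rm -i \
  --env MCP_TRANSPORT="stdio" \
  --env NGUI_MODEL="gpt-4o" \
  --env NGUI_PROVIDER_API_KEY="your-openai-api-key" \
  quay.io/next-gen-ui/mcp
```
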
### Network Configuration

For local development connecting to services running on the host machine:
- Use `host.containers.internal` to access host services (works with Podman and Docker Desktop)
- On Linux with Podman, you may need to use `host.docker.internal` or the host's IP address instead (see the sketch below)
- Ensure the target services (such as Ollama) are reachable from inside containers
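
If `host.containers.internal` does not resolve inside the container, one option is to map the name to the host's IP address explicitly with `--add-host`. A sketch, with `192.168.1.10` standing in for your actual host IP:

```bash
podman run --rm -it -p 5000:5000 \
  --add-host host.containers.internal:192.168.1.10 \
  --env NGUI_MODEL="llama3.2" \
  --env NGUI_PROVIDER_API_BASE_URL="http://host.containers.internal:11434/v1" \
  --env NGUI_PROVIDER_API_KEY="ollama" \
  quay.io/next-gen-ui/mcp
```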