
Commit c38f116: "tweaks"
Parent: 29a747b

File tree: 3 files changed (+74, -4 lines)

libs/next_gen_ui_mcp/BUILD

Lines changed: 1 addition & 0 deletions

@@ -81,6 +81,7 @@ docker_image(
     dependencies=[
         ":dist",
         "libs/next_gen_ui_agent:dist",
+        "libs/next_gen_ui_llama_stack:dist",
         "libs/next_gen_ui_rhds_renderer:dist",
     ],
 )
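
A quick way to check that the newly added llama_stack distribution actually ends up inside the published image is to list the installed packages. This is only a sketch; the exact pip project names are assumptions derived from the libs/ directory names:

```bash
# List the next_gen_ui distributions bundled in the image
# (package names are assumptions; adjust the pattern if pip reports them differently)
podman run --rm --entrypoint pip quay.io/next-gen-ui/mcp list | grep -iE 'next[-_]gen[-_]ui'
```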

libs/next_gen_ui_mcp/Containerfile

Lines changed: 1 addition & 4 deletions

@@ -15,21 +15,18 @@ ENV PYTHONDONTWRITEBYTECODE=1 \
     PYTHONUNBUFFERED=1 \
     PIP_NO_CACHE_DIR=1
 
-
 # Copy Python Project Files (Container context must be the `python` directory)
 COPY --chown=1001:root . /opt/app-root/src/
 
 USER root
 
 # Install dependencies
 RUN pip install *.whl \
-    llama-stack-client==0.2.20 \
     langchain_openai
 
 # Allow non-root user to access the everything in app-root
 RUN chgrp -R root /opt/app-root/src && chmod -R g+rwx /opt/app-root/src
 
-
 ENV MCP_TRANSPORT="streamable-http"
 ENV MCP_HOST="0.0.0.0"
 ENV MCP_PORT="5000"
@@ -40,13 +37,13 @@ ENV NGUI_PROVIDER_API_BASE_URL=""
 ENV NGUI_MODEL="gpt-4o"
 ENV NGUI_PROVIDER_LLAMA_URL=""
 
-
 # Expose default port (change if needed)
 EXPOSE $MCP_PORT
 
 USER 1001
 
 # Run the MCP
+# TODO: Support natively env variables in the next_gen_ui_mcp module
 CMD python -m next_gen_ui_mcp --host "$MCP_HOST" --port "$MCP_PORT" --transport "$MCP_TRANSPORT" --component-system "$NGUI_COMPONENT_SYSTEM" \
     --provider "$NGUI_PROVIDER" --model "$NGUI_MODEL" --api-key "$NGUI_PROVIDER_API_KEY" --base-url "$NGUI_PROVIDER_API_BASE_URL" \
     --llama-url "$NGUI_PROVIDER_LLAMA_URL"
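
The module does not yet read these environment variables natively (see the TODO added above), so the CMD expands each MCP_*/NGUI_* variable into a CLI flag. For orientation, a roughly equivalent direct invocation with the image defaults would look like the sketch below; the API key is a placeholder and the empty-by-default flags are omitted:

```bash
# Approximate local equivalent of the container CMD, using the default values
# baked into the image (placeholder API key; --base-url and --llama-url omitted
# because they default to empty strings)
python -m next_gen_ui_mcp \
  --transport streamable-http --host 0.0.0.0 --port 5000 \
  --component-system json \
  --provider langchain --model gpt-4o \
  --api-key "your-api-key"
```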

libs/next_gen_ui_mcp/README-containers.md

Lines changed: 72 additions & 0 deletions

@@ -15,3 +15,75 @@ podman run --rm -it -p 5100:5100 --env MCP_PORT="5100" \
   --env NGUI_MODEL="llama3.2" --env NGUI_PROVIDER_API_BASE_URL=http://host.containers.internal:11434 --env NGUI_PROVIDER_API_KEY="ollama" \
   quay.io/next-gen-ui/mcp
 ```
+
+## Configuration
+
+The MCP server container can be configured via environment variables. All configuration options available to the standalone server are supported.
+
+### Environment Variables Reference
+
+| Environment Variable         | Default Value     | Description                                             |
+| ---------------------------- | ----------------- | ------------------------------------------------------- |
+| `MCP_TRANSPORT`              | `streamable-http` | Transport protocol (`stdio`, `sse`, `streamable-http`)  |
+| `MCP_HOST`                   | `0.0.0.0`         | Host to bind to (for HTTP transports)                   |
+| `MCP_PORT`                   | `5000`            | Port to bind to (for HTTP transports)                   |
+| `NGUI_COMPONENT_SYSTEM`      | `json`            | Component system (`json`, `rhds`)                       |
+| `NGUI_PROVIDER`              | `langchain`       | Inference provider (`mcp`, `llamastack`, `langchain`)   |
+| `NGUI_MODEL`                 | `gpt-4o`          | Model name (required for non-MCP providers)             |
+| `NGUI_PROVIDER_API_BASE_URL` | -                 | Base URL for OpenAI-compatible API                      |
+| `NGUI_PROVIDER_API_KEY`      | -                 | API key for the LLM provider                            |
+| `NGUI_PROVIDER_LLAMA_URL`    | -                 | LlamaStack server URL (if `llamastack` is used)         |
+
+### Usage Examples
+
+#### Basic Usage with Ollama (Local LLM)
+```bash
+podman run --rm -it -p 5000:5000 \
+  --env MCP_PORT="5000" \
+  --env NGUI_MODEL="llama3.2" \
+  --env NGUI_PROVIDER_API_BASE_URL="http://host.containers.internal:11434/v1" \
+  --env NGUI_PROVIDER_API_KEY="ollama" \
+  quay.io/next-gen-ui/mcp
+```
+
+#### OpenAI Configuration
+```bash
+podman run --rm -it -p 5000:5000 \
+  --env NGUI_MODEL="gpt-4o" \
+  --env NGUI_PROVIDER_API_KEY="your-openai-api-key" \
+  quay.io/next-gen-ui/mcp
+```
+
+#### Using with Environment File
+Create a `.env` file:
+```bash
+# .env file
+MCP_PORT=5000
+MCP_HOST=0.0.0.0
+MCP_TRANSPORT=streamable-http
+NGUI_COMPONENT_SYSTEM=json
+NGUI_PROVIDER=langchain
+NGUI_MODEL=gpt-4o
+NGUI_PROVIDER_API_KEY=your-api-key-here
+```
+
+Run with environment file:
+```bash
+podman run --rm -it -p 5000:5000 --env-file .env quay.io/next-gen-ui/mcp
+```
+
+#### LlamaStack Provider
+```bash
+podman run --rm -it -p 5000:5000 \
+  --env NGUI_PROVIDER="llamastack" \
+  --env NGUI_MODEL="llama3.2-3b" \
+  --env NGUI_PROVIDER_LLAMA_URL="http://host.containers.internal:5001" \
+  quay.io/next-gen-ui/mcp
+```
+
+### Network Configuration
+
+For local development connecting to services running on the host machine:
+- Use `host.containers.internal` to access host services (works with Podman and Docker Desktop)
+- For Linux with Podman, you may need to use `host.docker.internal` or the host's IP address
+- Ensure the target services (like Ollama) are accessible from containers
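
Before starting the server it can help to confirm that the host service is actually reachable from inside a container. The sketch below reuses the Python interpreter already present in the MCP image and assumes Ollama is listening on port 11434 on the host:

```bash
# Print the HTTP status returned by the host-side Ollama API from inside a container
podman run --rm --entrypoint python quay.io/next-gen-ui/mcp -c \
  "import urllib.request; print(urllib.request.urlopen('http://host.containers.internal:11434/v1/models').status)"
```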
