18 changes: 17 additions & 1 deletion .github/workflows/pants.yaml
@@ -63,7 +63,7 @@ jobs:
- name: Package / Run
run: |
# We also smoke test that our release process will work by running `package`.
pants package ::
pants package --filter-target-type=python_distribution ::
- name: Test / distribution-all
run: |
# We also smoke test that our release process will work by running `package`.
@@ -130,6 +130,22 @@ jobs:
with:
name: distribution
path: dist/
- name: Push image to quay.io
id: push-to-quay
uses: redhat-actions/push-to-registry@v2
with:
tags: quay.io/next-gen-ui/mcp:dev
username: ${{ secrets.QUAY_USERNAME }}
password: ${{ secrets.QUAY_TOKEN }}
# - name: push README to quay.io
# uses: christian-korneck/update-container-description-action@v1
# env:
# DOCKER_APIKEY: ${{ secrets.QUAY_TOKEN }}
# with:
# destination_container_repo: quay.io/next-gen-ui/mcp
# provider: quay
# short_description: 'Next Gen UI MCP Server'
# readme_file: 'libs/next_gen_ui_mcp/README-containers.md'
# Publish to test PyPI
publish_test:
name: "Publish to test PyPI"
9 changes: 9 additions & 0 deletions .github/workflows/publish.yaml
@@ -41,6 +41,15 @@ jobs:
with:
name: dist
path: dist/
- name: Push image to quay.io
id: push-to-quay
uses: redhat-actions/push-to-registry@v2
env:
VERSION: ${{github.ref_name}}
with:
tags: quay.io/next-gen-ui/mcp:latest quay.io/next-gen-ui/mcp:$VERSION
username: ${{ secrets.QUAY_USERNAME }}
password: ${{ secrets.QUAY_TOKEN }}
publish:
name: "Publish to PyPI"
needs:
8 changes: 8 additions & 0 deletions CONTRIBUTING.md
@@ -19,6 +19,8 @@ Install [Pants Build](https://www.pantsbuild.org/stable/docs/getting-started/ins

On Linux, you can run `./get-pants.sh` available in the repo root, as described/recommended in the Pants Installation docs.

Install [Podman](https://podman.io/) to be able to build images such as the `Next Gen UI MCP Server`.

### VSCode

Run Pants export to create a virtual env
@@ -88,6 +90,12 @@ pants run libs/next_gen_ui_llama_stack/agent_test.py

# Run formatter, linter, check
pants fmt lint check ::

# Build all packages, including Python and Docker images
pants package ::

# Build only Python packages
pants package --filter-target-type=python_distribution ::
```

### Dependency Management
5 changes: 3 additions & 2 deletions docs/guide/ai_apps_binding/index.md
@@ -5,13 +5,14 @@ Binding UI Agent core functionality into your AI application (AI assistant backe
## AI Framework bindings

* [Llama Stack](llamastack.md)
* [Llama Stack Embedded Server Inference](llamastack_embedded.md)
* [LangGraph](langgraph.md)
* [Embedded Llama Stack Server Inference](llamastack_embedded.md)
* [BeeAI](beeai.md) - Tech Preview

## AI Protocol bindings/servers

* MCP - WIP
* [MCP Library](mcp-library.md)
* [MCP Container](mcp-container.md)
* A2A - WIP
* [ACP](acp.md) - Tech Preview

3 changes: 3 additions & 0 deletions docs/guide/ai_apps_binding/mcp-container.md
@@ -0,0 +1,3 @@
{%
include-markdown "../../../libs/next_gen_ui_mcp/README-containers.md"
%}
3 changes: 3 additions & 0 deletions docs/guide/ai_apps_binding/mcp-library.md
@@ -0,0 +1,3 @@
{%
include-markdown "../../../libs/next_gen_ui_mcp/README.md"
%}
16 changes: 16 additions & 0 deletions libs/next_gen_ui_mcp/BUILD
@@ -69,3 +69,19 @@ python_distribution(
long_description_path="libs/next_gen_ui_mcp/README.md",
generate_setup=True,
)

docker_image(
name="docker",
source="Containerfile",
repository="next-gen-ui/mcp",
image_tags=[env("VERSION"), "latest", "dev"],
registries=[
"quay.io",
],
dependencies=[
":dist",
"libs/next_gen_ui_agent:dist",
"libs/next_gen_ui_llama_stack:dist",
"libs/next_gen_ui_rhds_renderer:dist",
],
)
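
As a hedged sketch, the MCP image alone can be built locally with Pants by addressing the `docker` target above (the target address follows from the `BUILD` file location; passing `VERSION` through the shell environment is an assumption about how `env("VERSION")` is resolved):

```sh
# Build only the MCP server image defined by the docker_image target above
VERSION=dev pants package libs/next_gen_ui_mcp:docker
```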
49 changes: 49 additions & 0 deletions libs/next_gen_ui_mcp/Containerfile
@@ -0,0 +1,49 @@
# Next Gen UI MCP Server

# Build locally
# pants package ::
#
# Run:
# podman run --rm -it -p 5000:5000 --env MCP_PORT="5000" --env NGUI_MODEL="llama3.2" --env NGUI_PROVIDER_API_BASE_URL=http://host.containers.internal:11434 --env NGUI_PROVIDER_API_KEY="ollama" quay.io/next-gen-ui/mcp
FROM registry.access.redhat.com/ubi9/python-312-minimal:9.6

# Set work directory
WORKDIR /opt/app-root/src

# Set environment variables for build
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1 \
PIP_NO_CACHE_DIR=1

# Copy Python Project Files (Container context must be the `python` directory)
COPY --chown=1001:root . /opt/app-root/src/

USER root

# Install dependencies
RUN pip install *.whl \
langchain_openai

# Allow non-root user to access everything in app-root
RUN chgrp -R root /opt/app-root/src && chmod -R g+rwx /opt/app-root/src

ENV MCP_TRANSPORT="streamable-http"
ENV MCP_HOST="0.0.0.0"
ENV MCP_PORT="5000"
ENV NGUI_COMPONENT_SYSTEM="json"
ENV NGUI_PROVIDER="langchain"
ENV NGUI_PROVIDER_API_KEY=""
ENV NGUI_PROVIDER_API_BASE_URL=""
ENV NGUI_MODEL="gpt-4o"
ENV NGUI_PROVIDER_LLAMA_URL=""

# Expose default port (change if needed)
EXPOSE $MCP_PORT

USER 1001

# Run the MCP
# TODO: Support natively env variables in the next_gen_ui_mcp module
CMD python -m next_gen_ui_mcp --host "$MCP_HOST" --port "$MCP_PORT" --transport "$MCP_TRANSPORT" --component-system "$NGUI_COMPONENT_SYSTEM" \
--provider "$NGUI_PROVIDER" --model "$NGUI_MODEL" --api-key "$NGUI_PROVIDER_API_KEY" --base-url "$NGUI_PROVIDER_API_BASE_URL" \
--llama-url "$NGUI_PROVIDER_LLAMA_URL"
125 changes: 125 additions & 0 deletions libs/next_gen_ui_mcp/README-containers.md
@@ -0,0 +1,125 @@
# Next Gen UI MCP Server Container

Next Gen UI MCP Server container image.

For more information visit [GitHub](https://github.com/RedHat-UX/next-gen-ui-agent).

```sh
podman pull quay.io/next-gen-ui/mcp
```

## Usage

```sh
podman run --rm -it -p 5100:5100 --env MCP_PORT="5100" \
--env NGUI_MODEL="llama3.2" --env NGUI_PROVIDER_API_BASE_URL=http://host.containers.internal:11434 --env NGUI_PROVIDER_API_KEY="ollama" \
quay.io/next-gen-ui/mcp
```

## Configuration

The MCP server container can be configured via environment variables. All configuration options available to the standalone server are supported.

### Environment Variables Reference

| Environment Variable | Default Value | Description |
| ---------------------------- | ----------------- | ------------------------------------------------------ |
| `MCP_TRANSPORT` | `streamable-http` | Transport protocol (`stdio`, `sse`, `streamable-http`) |
| `MCP_HOST` | `0.0.0.0` | Host to bind to (for HTTP transports) |
| `MCP_PORT` | `5000` | Port to bind to (for HTTP transports) |
| `NGUI_COMPONENT_SYSTEM` | `json` | Component system (`json`, `rhds`) |
| `NGUI_PROVIDER` | `langchain` | Inference provider (`mcp`, `llamastack`, `langchain`) |
| `NGUI_MODEL` | `gpt-4o` | Model name (required for non-MCP providers) |
| `NGUI_PROVIDER_API_BASE_URL` | - | Base URL for OpenAI-compatible API |
| `NGUI_PROVIDER_API_KEY` | - | API key for the LLM provider |
| `NGUI_PROVIDER_LLAMA_URL` | - | LlamaStack server URL (if `llamastack` is used) |

### Providers

The Next Gen UI MCP server supports three inference providers for generating UI components, selected via the `NGUI_PROVIDER` environment variable:

#### Provider **`mcp`**

Uses Model Context Protocol sampling to leverage the client's LLM capabilities. No additional configuration is required, as it uses the connected MCP client's model.
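
A minimal run sketch selecting the `mcp` provider (other settings keep the container defaults; this assumes the connected MCP client supports sampling):

```bash
podman run --rm -it -p 5000:5000 \
  --env NGUI_PROVIDER="mcp" \
  quay.io/next-gen-ui/mcp
```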

#### Provider **`langchain`**

Uses LangChain with OpenAI-compatible APIs.

Requires:

- `NGUI_MODEL`: Model name (e.g., `gpt-4o`, `llama3.2`)
- `NGUI_PROVIDER_API_KEY`: API key for the provider
- `NGUI_PROVIDER_API_BASE_URL` (optional): Custom base URL for OpenAI-compatible APIs like Ollama

Examples:

- OpenAI: `https://api.openai.com/v1` (default)
- Ollama: `http://host.containers.internal:11434/v1`

#### Provider **`llamastack`**

Uses LlamaStack server for inference.

Requires:

- `NGUI_MODEL`: Model name available on the LlamaStack server
- `NGUI_PROVIDER_LLAMA_URL`: URL of the LlamaStack server


### Usage Examples

#### Basic Usage with Ollama (Local LLM)
```bash
podman run --rm -it -p 5000:5000 \
--env MCP_PORT="5000" \
--env NGUI_MODEL="llama3.2" \
--env NGUI_PROVIDER_API_BASE_URL="http://host.containers.internal:11434/v1" \
--env NGUI_PROVIDER_API_KEY="ollama" \
quay.io/next-gen-ui/mcp
```

#### OpenAI Configuration
```bash
podman run --rm -it -p 5000:5000 \
--env NGUI_MODEL="gpt-4o" \
--env NGUI_PROVIDER_API_KEY="your-openai-api-key" \
quay.io/next-gen-ui/mcp
```

#### Using with Environment File
Create a `.env` file:
```bash
# .env file
MCP_PORT=5000
MCP_HOST=0.0.0.0
MCP_TRANSPORT=streamable-http
NGUI_COMPONENT_SYSTEM=json
NGUI_PROVIDER=langchain
NGUI_MODEL=gpt-4o
NGUI_PROVIDER_API_KEY=your-api-key-here
```

Run with environment file:
```bash
podman run --rm -it -p 5000:5000 --env-file .env quay.io/next-gen-ui/mcp
```

#### LlamaStack Provider
```bash
podman run --rm -it -p 5000:5000 \
--env NGUI_PROVIDER="llamastack" \
--env NGUI_MODEL="llama3.2-3b" \
--env NGUI_PROVIDER_LLAMA_URL="http://host.containers.internal:5001" \
quay.io/next-gen-ui/mcp
```

### Network Configuration

For local development connecting to services running on the host machine:

- Use `host.containers.internal` to access host services (works with Podman and Docker Desktop)
- For Linux with Podman, you may need to use `host.docker.internal` or the host's IP address (see the sketch below)
- Ensure the target services (like Ollama) are accessible from containers
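
If neither hostname resolves, one hedged workaround on Linux (an assumption, not project guidance) is host networking, which makes host services reachable on `localhost` and exposes the MCP server directly on the host's port 5000:

```bash
# Assumes Ollama is listening on localhost:11434 on the host
podman run --rm -it --network=host \
  --env NGUI_MODEL="llama3.2" \
  --env NGUI_PROVIDER_API_BASE_URL="http://localhost:11434/v1" \
  --env NGUI_PROVIDER_API_KEY="ollama" \
  quay.io/next-gen-ui/mcp
```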
9 changes: 7 additions & 2 deletions libs/next_gen_ui_mcp/README.md
@@ -1,4 +1,4 @@
# Next Gen UI MCP Server
# Next Gen UI MCP Server Library

This package wraps our NextGenUI agent in a Model Context Protocol (MCP) tool using the standard MCP SDK. Since MCP adoption is strong and there is an appetite to use this protocol for agentic AI as well, we wanted to deliver this way of consuming our agent too. The most common way of utilising MCP tools is to provide them to an LLM to choose and execute with certain parameters. That approach doesn't make sense for the NextGenUI agent: you want to call it at a specific moment, after gathering the data for the response, and you don't want the LLM to pass the prompt and JSON content through, as that may introduce unnecessary errors in the content. It's more natural and reliable to invoke this MCP tool directly with the parameters as part of your main application logic.

@@ -80,16 +80,21 @@ result = client.tool_runtime.invoke_tool(tool_name="generate_ui", kwargs=input_d
## Available MCP Tools

### `generate_ui`
The main tool that wraps the entire Next Gen UI Agent functionality. This single tool handles:
The main tool that wraps the entire Next Gen UI Agent functionality.

This single tool handles:

- Component selection based on user prompt and data
- Data transformation to match selected components
- Design system rendering to produce final UI

**Parameters:**

- `user_prompt` (str): User's prompt which we want to enrich with UI components
- `input_data` (List[Dict]): List of input data to render within the UI components

**Returns:**

- List of rendered UI components ready for display

## Available MCP Resources
2 changes: 1 addition & 1 deletion libs/next_gen_ui_mcp/__main__.py
@@ -70,7 +70,7 @@ def create_llamastack_inference(model: str, llama_url: str) -> InferenceBase:
except ImportError as e:
raise ImportError(
"LlamaStack dependencies not found. Install with: "
"pip install llama-stack-client>=0.1.9,<=0.2.15"
"pip install llama-stack-client==0.2.20"
) from e

try:
4 changes: 3 additions & 1 deletion mkdocs.yml
@@ -76,10 +76,12 @@ nav:
- Binding into AI application:
- guide/ai_apps_binding/index.md
- Llama Stack: guide/ai_apps_binding/llamastack.md
- Llama Stack Embedded Server: guide/ai_apps_binding/llamastack_embedded.md
- LangGraph: guide/ai_apps_binding/langgraph.md
- Embedded Llama Stack Server: guide/ai_apps_binding/llamastack_embedded.md
- BeeAI: guide/ai_apps_binding/beeai.md
- ACP: guide/ai_apps_binding/acp.md
- MCP Library: guide/ai_apps_binding/mcp-library.md
- MCP Container: guide/ai_apps_binding/mcp-container.md
- Python Library: guide/ai_apps_binding/pythonlib.md
- Binding into UI:
- guide/renderer/index.md
1 change: 1 addition & 0 deletions pants.toml
@@ -8,6 +8,7 @@ backend_packages.add = [
"pants.backend.python.lint.flake8",
"pants.backend.python.lint.isort",
"pants.backend.python.typecheck.mypy",
"pants.backend.docker",
]
pants_ignore = [
"requirements_dev.txt"