37 changes: 37 additions & 0 deletions .github/workflows/ci.yml
@@ -0,0 +1,37 @@
name: CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18, 20, 22]

    steps:
      - uses: actions/checkout@v4

      - uses: pnpm/action-setup@v4
        with:
          version: 9

      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: pnpm

      - run: pnpm install --frozen-lockfile

      - run: pnpm lint
        name: Type check

      - run: pnpm test
        name: Run tests

      - run: pnpm build
        name: Build
205 changes: 180 additions & 25 deletions README.md
@@ -8,44 +8,57 @@ AceTeam CLI - Run AI workflows locally from your terminal.
## Install

```bash
# From npm (once published)
npm install -g @aceteam/ace

# Or run without installing
npx @aceteam/ace

# Or build from source
git clone https://github.com/aceteam-ai/ace.git
cd ace
pnpm install && pnpm build
node dist/index.js # run directly
npm link # or install globally as `ace`
```

## Quick Start

```bash
# 1. Set up Python venv, install dependencies, create config
ace init

# 2. Browse available workflow templates
ace workflow list-templates

# 3. Create a workflow from a template
ace workflow create hello-llm -o my-workflow.json

# 4. Run it
ace workflow run my-workflow.json --input prompt="Explain AI in one sentence"
```

## How It Works

```
ace CLI (TypeScript)
├── ace init ──────────────> Detect Python 3.12+, create ~/.ace/venv,
│                            install aceteam-nodes, save config
├── ace workflow create ──> Pick a bundled template, customize params,
│                            write workflow JSON
└── ace workflow run ─────> Validate input, show real-time progress
        │
        ▼
python -m aceteam_nodes.cli
        │
        ▼
aceteam-nodes (Python)
├── litellm (100+ LLM providers)
├── httpx (API calls)
└── workflow-engine (DAG execution)
```

The TypeScript CLI handles file validation, Python detection, and output formatting. Workflow execution is delegated to the `aceteam-nodes` Python package via subprocess, which uses `litellm` for multi-provider LLM support (OpenAI, Anthropic, Google, and 100+ more).
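The delegation boundary can be sketched in TypeScript. This is a simplified illustration: the `run` subcommand and `--input` flag passed to `aceteam_nodes.cli` below are assumptions for the sketch, not the package's documented interface.

```typescript
import { spawn } from "node:child_process";

// Build the argv for `python -m aceteam_nodes.cli`.
// NOTE: the `run` subcommand and `--input` flag are illustrative;
// the real protocol lives in the aceteam-nodes package.
export function buildRunArgs(
  workflowFile: string,
  inputs: Record<string, string>
): string[] {
  const args = ["-m", "aceteam_nodes.cli", "run", workflowFile];
  for (const [key, value] of Object.entries(inputs)) {
    args.push("--input", `${key}=${value}`);
  }
  return args;
}

// Spawn the Python subprocess and collect stdout (the workflow result).
export function runWorkflow(
  pythonBin: string,
  workflowFile: string,
  inputs: Record<string, string>
): Promise<string> {
  return new Promise((resolve, reject) => {
    const child = spawn(pythonBin, buildRunArgs(workflowFile, inputs));
    let out = "";
    child.stdout.on("data", (chunk) => (out += chunk));
    child.on("error", reject);
    child.on("close", (code) =>
      code === 0 ? resolve(out) : reject(new Error(`python exited with code ${code}`))
    );
  });
}
```

Keeping the argv construction in a pure function makes the subprocess boundary easy to unit-test without a Python installation.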
@@ -54,26 +67,93 @@ The TypeScript CLI handles file validation, Python detection, and output formatting

- Node.js 18+
- Python 3.12+ (for workflow execution)
- An LLM provider — cloud API key **or** a local model server (see below)

## Commands

### `ace init`

Interactive setup that:
1. Detects Python 3.12+ (shows specific version error if too old)
2. Creates a managed virtual environment at `~/.ace/venv/`
3. Installs `aceteam-nodes` into the venv
4. Prompts for default model and saves `~/.ace/config.yaml`

```bash
$ ace init

AceTeam CLI Setup

1. Prerequisites
✓ Python 3.12.3 (/usr/bin/python3)

2. Virtual environment
✓ Created venv: /home/user/.ace/venv

3. Dependencies
✓ aceteam-nodes installed

4. Configuration
Default model [gpt-4o-mini]:

Setup complete:
✓ Python 3.12.3 (/home/user/.ace/venv/bin/python)
✓ aceteam-nodes installed
✓ Config: /home/user/.ace/config.yaml
✓ Model: gpt-4o-mini
```
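The version check in step 1 boils down to parsing `python3 --version` output and comparing it against the 3.12 floor. A minimal sketch (the helper names are illustrative, not the CLI's actual internals):

```typescript
// Parse `python3 --version` output such as "Python 3.12.3".
// Returns [major, minor], or null if the output is unrecognized.
export function parsePythonVersion(output: string): [number, number] | null {
  const m = /Python (\d+)\.(\d+)/.exec(output);
  return m ? [Number(m[1]), Number(m[2])] : null;
}

// Check the Python 3.12+ requirement described above.
export function meetsRequirement(version: [number, number]): boolean {
  const [major, minor] = version;
  return major > 3 || (major === 3 && minor >= 12);
}
```

Separating parsing from the comparison lets the setup flow report a specific "found 3.11, need 3.12+" style error rather than a generic failure.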

### `ace workflow list-templates`

List bundled workflow templates.

```bash
$ ace workflow list-templates
ID Name Category Inputs
────────────────────────────────────────────────────────────
hello-llm Hello LLM basics prompt
text-transform Text Transform basics text, instructions
llm-chain LLM Chain chains prompt
api-to-llm API to LLM chains url

# Filter by category
$ ace workflow list-templates --category basics
```

### `ace workflow create [template-id] [-o file]`

Create a workflow from a bundled template. Prompts for template selection if no ID is given, then lets you customize node parameters.

```bash
# Interactive: pick a template and customize
ace workflow create

# Direct: use a specific template
ace workflow create hello-llm -o my-workflow.json
```
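The generated file is plain JSON. The schema isn't reproduced in this README, so the following `my-workflow.json` is a hypothetical sketch of what a `hello-llm`-based workflow might contain — all field names here are assumptions, not the actual schema:

```json
{
  "name": "hello-llm",
  "nodes": [
    {
      "id": "llm-1",
      "type": "llm",
      "params": { "model": "gpt-4o-mini" },
      "inputs": { "prompt": "{{prompt}}" }
    }
  ],
  "outputs": { "response": "llm-1.output" }
}
```

Run `ace workflow validate <file>` to check a file against the real schema, and `ace workflow list-nodes` for the available node types.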

### `ace workflow run <file> [options]`

Run a workflow from a JSON file. Shows real-time progress as nodes execute.

```bash
ace workflow run workflow.json --input prompt="Hello"
```

Options:
- `-i, --input <key=value...>` - Input values
- `-v, --verbose` - Show raw stderr debug output
- `--config <path>` - Custom config file path
- `--remote` - Run on remote Fabric node instead of locally

Errors are automatically classified with suggested fixes:
```
✗ Missing module: aceteam_nodes
Run `ace init` to install dependencies

✗ Authentication failed
Set OPENAI_API_KEY or ANTHROPIC_API_KEY environment variable
```
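This kind of classification can be implemented as pattern-matching on the subprocess's stderr. A sketch — the patterns and fallback below are illustrative, not the CLI's actual rule set:

```typescript
interface ClassifiedError {
  summary: string;
  suggestion: string;
}

// Map known stderr patterns to actionable fixes; fall back to raw output.
// These two patterns mirror the examples above; the real CLI may match more.
export function classifyError(stderr: string): ClassifiedError {
  if (/No module named ['"]?aceteam_nodes/.test(stderr)) {
    return {
      summary: "Missing module: aceteam_nodes",
      suggestion: "Run `ace init` to install dependencies",
    };
  }
  if (/AuthenticationError|invalid api key/i.test(stderr)) {
    return {
      summary: "Authentication failed",
      suggestion: "Set OPENAI_API_KEY or ANTHROPIC_API_KEY environment variable",
    };
  }
  return { summary: "Workflow failed", suggestion: stderr.trim() };
}
```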

### `ace workflow validate <file>`

Expand All @@ -83,10 +163,73 @@ Validate a workflow JSON file against the schema.

### `ace workflow list-nodes`

List all available node types with descriptions.

### `ace fabric login`

Authenticate with the AceTeam Sovereign Compute Fabric for remote workflow execution.

### `ace fabric discover [--capability <tag>]`

Discover available Citadel nodes on the Fabric.

### `ace fabric status`

Show load metrics for connected Fabric nodes.

## Using Local LLMs (Ollama, vLLM, etc.)

Workflows use [litellm](https://docs.litellm.ai/) under the hood, which supports 100+ LLM providers — including local model servers. No API key needed for local models.

### Ollama

```bash
# 1. Start Ollama (https://ollama.com)
ollama serve
ollama pull llama3

# 2. Create a workflow using the Ollama model
ace workflow create hello-llm -o local-chat.json
# When prompted for "model", enter: ollama/llama3

# 3. Run it
ace workflow run local-chat.json --input prompt="Hello from local LLM"
```

### vLLM

```bash
# 1. Start vLLM server
vllm serve meta-llama/Llama-3-8b --port 8000

# 2. Set the base URL and create a workflow
export OPENAI_API_BASE=http://localhost:8000/v1
ace workflow create hello-llm -o vllm-chat.json
# When prompted for "model", enter: openai/meta-llama/Llama-3-8b

# 3. Run it
ace workflow run vllm-chat.json --input prompt="Hello from vLLM"
```

### Cloud APIs

```bash
export OPENAI_API_KEY=sk-... # OpenAI
export ANTHROPIC_API_KEY=sk-ant-... # Anthropic
export GEMINI_API_KEY=... # Google Gemini
```

The model name in your workflow JSON determines which provider is used. Examples:
- `gpt-4o-mini` — OpenAI
- `claude-3-haiku-20240307` — Anthropic
- `gemini/gemini-pro` — Google
- `ollama/llama3` — Ollama (local)
- `openai/model-name` + `OPENAI_API_BASE` — vLLM, LM Studio, or any OpenAI-compatible server

See [litellm provider docs](https://docs.litellm.ai/docs/providers) for the full list.
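Conceptually, the routing is prefix-based. A simplified sketch of the idea (litellm's real resolution logic covers far more providers and edge cases than this):

```typescript
// Illustrative prefix-based provider routing, mirroring the examples above.
// Not litellm's actual implementation.
export function resolveProvider(model: string): string {
  const prefixes: Record<string, string> = {
    "ollama/": "ollama",
    "gemini/": "google",
    "openai/": "openai-compatible", // honors OPENAI_API_BASE (vLLM, LM Studio, ...)
  };
  for (const [prefix, provider] of Object.entries(prefixes)) {
    if (model.startsWith(prefix)) return provider;
  }
  // Bare model names fall through to well-known defaults.
  if (model.startsWith("gpt-")) return "openai";
  if (model.startsWith("claude-")) return "anthropic";
  return "unknown";
}
```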

## Development

```bash
# Install dependencies
pnpm install

# Build
pnpm build

# Dev (watch mode)
pnpm dev

# Type check
pnpm lint

# Run tests
pnpm test

# Run tests in watch mode
pnpm test:watch

# Run tests with coverage
pnpm test:coverage

# Run integration tests only
pnpm test:integration
```

## Related
6 changes: 5 additions & 1 deletion package.json
@@ -10,7 +10,10 @@
    "build": "tsup",
    "dev": "tsup --watch",
    "lint": "tsc --noEmit",
    "test": "vitest run",
    "test:watch": "vitest",
    "test:coverage": "vitest run --coverage",
    "test:integration": "vitest run tests/integration"
  },
  "files": [
    "dist"
@@ -27,6 +30,7 @@
    "@types/which": "^3.0.0",
    "tsup": "^8.0.0",
    "typescript": "^5.4.0",
    "@vitest/coverage-v8": "^3.2.4",
    "vitest": "^3.2.4"
  },
  "license": "MIT",