diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
new file mode 100644
index 0000000..e9dcd0b
--- /dev/null
+++ b/.github/workflows/ci.yml
@@ -0,0 +1,37 @@
+name: CI
+
+on:
+  push:
+    branches: [main]
+  pull_request:
+    branches: [main]
+
+jobs:
+  test:
+    runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        node-version: [18, 20, 22]
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - uses: pnpm/action-setup@v4
+        with:
+          version: 9
+
+      - uses: actions/setup-node@v4
+        with:
+          node-version: ${{ matrix.node-version }}
+          cache: pnpm
+
+      - run: pnpm install --frozen-lockfile
+
+      - run: pnpm lint
+        name: Type check
+
+      - run: pnpm test
+        name: Run tests
+
+      - run: pnpm build
+        name: Build
diff --git a/README.md b/README.md
index 49933eb..60be9df 100644
--- a/README.md
+++ b/README.md
@@ -8,25 +8,34 @@ AceTeam CLI - Run AI workflows locally from your terminal.
 ## Install
 
 ```bash
+# From npm (once published)
 npm install -g @aceteam/ace
-# or
+
+# Or run without installing
 npx @aceteam/ace
+
+# Or build from source
+git clone https://github.com/aceteam-ai/ace.git
+cd ace
+pnpm install && pnpm build
+node dist/index.js   # run directly
+npm link             # or install globally as `ace`
 ```
 
 ## Quick Start
 
 ```bash
-# Set up config and check dependencies
+# 1. Set up Python venv, install dependencies, create config
 ace init
 
-# Run a workflow
-ace workflow run hello-llm.json --input prompt="Explain AI in one sentence"
+# 2. Browse available workflow templates
+ace workflow list-templates
 
-# Validate a workflow file
-ace workflow validate hello-llm.json
+# 3. Create a workflow from a template
+ace workflow create hello-llm -o my-workflow.json
 
-# List available node types
-ace workflow list-nodes
+# 4. Run it
+ace workflow run my-workflow.json --input prompt="Explain AI in one sentence"
 ```
 
 ## How It Works
 
@@ -34,18 +43,22 @@ ace workflow list-nodes
 ```
 ace CLI (TypeScript)
        │
-       ├── ace init ────────> Detect Python, install aceteam-nodes, create config
+       ├── ace init ──────────────> Detect Python 3.12+, create ~/.ace/venv,
+       │                            install aceteam-nodes, save config
        │
-       └── ace workflow run ─> Validate input
-              │
-              ▼
-       python -m aceteam_nodes.cli
-              │
-              ▼
-       aceteam-nodes (Python)
-         ├── litellm (100+ LLM providers)
-         ├── httpx (API calls)
-         └── workflow-engine (DAG execution)
+       ├── ace workflow create ──> Pick a bundled template, customize params,
+       │                            write workflow JSON
+       │
+       └── ace workflow run ─────> Validate input, show real-time progress
+              │
+              ▼
+       python -m aceteam_nodes.cli
+              │
+              ▼
+       aceteam-nodes (Python)
+         ├── litellm (100+ LLM providers)
+         ├── httpx (API calls)
+         └── workflow-engine (DAG execution)
 ```
 
 The TypeScript CLI handles file validation, Python detection, and output formatting. Workflow execution is delegated to the `aceteam-nodes` Python package via subprocess, which uses `litellm` for multi-provider LLM support (OpenAI, Anthropic, Google, and 100+ more).
 
@@ -54,26 +67,93 @@ The TypeScript CLI handles file validation, Python detection, and output formatt
 
 - Node.js 18+
 - Python 3.12+ (for workflow execution)
-- `aceteam-nodes` Python package (auto-installed on first run)
+- An LLM provider — cloud API key **or** a local model server (see below)
 
 ## Commands
 
 ### `ace init`
 
-Interactive setup: checks Python, installs `aceteam-nodes`, and creates `~/.ace/config.yaml`.
+Interactive setup that:
+1. Detects Python 3.12+ (shows specific version error if too old)
+2. Creates a managed virtual environment at `~/.ace/venv/`
+3. Installs `aceteam-nodes` into the venv
+4. Prompts for default model and saves `~/.ace/config.yaml`
+
+```bash
+$ ace init
+
+AceTeam CLI Setup
+
+1. Prerequisites
+✓ Python 3.12.3 (/usr/bin/python3)
+
+2. Virtual environment
+✓ Created venv: /home/user/.ace/venv
+
+3. Dependencies
+✓ aceteam-nodes installed
+
+4. Configuration
+Default model [gpt-4o-mini]:
+
+Setup complete:
+  ✓ Python 3.12.3 (/home/user/.ace/venv/bin/python)
+  ✓ aceteam-nodes installed
+  ✓ Config: /home/user/.ace/config.yaml
+  ✓ Model: gpt-4o-mini
+```
+
+### `ace workflow list-templates`
+
+List bundled workflow templates.
+
+```bash
+$ ace workflow list-templates
+ID              Name            Category  Inputs
+────────────────────────────────────────────────────────────
+hello-llm       Hello LLM       basics    prompt
+text-transform  Text Transform  basics    text, instructions
+llm-chain       LLM Chain       chains    prompt
+api-to-llm      API to LLM      chains    url
+
+# Filter by category
+$ ace workflow list-templates --category basics
+```
+
+### `ace workflow create [template-id] [-o file]`
+
+Create a workflow from a bundled template. Prompts for template selection if no ID given, then lets you customize node parameters.
+
+```bash
+# Interactive: pick a template and customize
+ace workflow create
+
+# Direct: use a specific template
+ace workflow create hello-llm -o my-workflow.json
+```
 
 ### `ace workflow run <file> [options]`
 
-Run a workflow from a JSON file.
+Run a workflow from a JSON file. Shows real-time progress as nodes execute.
 
 ```bash
-ace workflow run workflow.json --input prompt="Hello" --verbose
+ace workflow run workflow.json --input prompt="Hello"
 ```
 
 Options:
 - `-i, --input <key=value>` - Input values
-- `-v, --verbose` - Show progress messages
+- `-v, --verbose` - Show raw stderr debug output
 - `--config <path>` - Custom config file path
+- `--remote` - Run on remote Fabric node instead of locally
+
+Errors are automatically classified with suggested fixes:
+```
+✗ Missing module: aceteam_nodes
+  Run `ace init` to install dependencies
+
+✗ Authentication failed
+  Set OPENAI_API_KEY or ANTHROPIC_API_KEY environment variable
+```
 
 ### `ace workflow validate <file>`
 
 Validate a workflow JSON file against the schema.
@@ -83,10 +163,73 @@
 List all available node types with descriptions.
 
+### `ace fabric login`
+
+Authenticate with the AceTeam Sovereign Compute Fabric for remote workflow execution.
+
+### `ace fabric discover [--capability <name>]`
+
+Discover available Citadel nodes on the Fabric.
+
+### `ace fabric status`
+
+Show load metrics for connected Fabric nodes.
+
+## Using Local LLMs (Ollama, vLLM, etc.)
+
+Workflows use [litellm](https://docs.litellm.ai/) under the hood, which supports 100+ LLM providers — including local model servers. No API key needed for local models.
+
+### Ollama
+
+```bash
+# 1. Start Ollama (https://ollama.com)
+ollama serve
+ollama pull llama3
+
+# 2. Create a workflow using the Ollama model
+ace workflow create hello-llm -o local-chat.json
+# When prompted for "model", enter: ollama/llama3
+
+# 3. Run it
+ace workflow run local-chat.json --input prompt="Hello from local LLM"
+```
+
+### vLLM
+
+```bash
+# 1. Start vLLM server
+vllm serve meta-llama/Llama-3-8b --port 8000
+
+# 2. Set the base URL and create a workflow
+export OPENAI_API_BASE=http://localhost:8000/v1
+ace workflow create hello-llm -o vllm-chat.json
+# When prompted for "model", enter: openai/meta-llama/Llama-3-8b
+
+# 3. Run it
+ace workflow run vllm-chat.json --input prompt="Hello from vLLM"
+```
+
+### Cloud APIs
+
+```bash
+export OPENAI_API_KEY=sk-...          # OpenAI
+export ANTHROPIC_API_KEY=sk-ant-...   # Anthropic
+export GEMINI_API_KEY=...             # Google Gemini
+```
+
+The model name in your workflow JSON determines which provider is used. Examples:
+- `gpt-4o-mini` — OpenAI
+- `claude-3-haiku-20240307` — Anthropic
+- `gemini/gemini-pro` — Google
+- `ollama/llama3` — Ollama (local)
+- `openai/model-name` + `OPENAI_API_BASE` — vLLM, LM Studio, or any OpenAI-compatible server
+
+See [litellm provider docs](https://docs.litellm.ai/docs/providers) for the full list.
+
 ## Development
 
 ```bash
-# Setup
+# Install dependencies
 pnpm install
 
 # Build
@@ -97,6 +240,18 @@ pnpm dev
 
 # Type check
 pnpm lint
+
+# Run tests
+pnpm test
+
+# Run tests in watch mode
+pnpm test:watch
+
+# Run tests with coverage
+pnpm test:coverage
+
+# Run integration tests only
+pnpm test:integration
 ```
 
 ## Related
diff --git a/package.json b/package.json
index f65d260..e8a64d5 100644
--- a/package.json
+++ b/package.json
@@ -10,7 +10,10 @@
     "build": "tsup",
     "dev": "tsup --watch",
     "lint": "tsc --noEmit",
-    "test": "vitest run"
+    "test": "vitest run",
+    "test:watch": "vitest",
+    "test:coverage": "vitest run --coverage",
+    "test:integration": "vitest run tests/integration"
   },
   "files": [
     "dist"
@@ -27,6 +30,7 @@
     "@types/which": "^3.0.0",
     "tsup": "^8.0.0",
     "typescript": "^5.4.0",
+    "@vitest/coverage-v8": "^3.2.4",
     "vitest": "^3.2.4"
   },
   "license": "MIT",
diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml
index 17801b2..394a6ea 100644
--- a/pnpm-lock.yaml
+++ b/pnpm-lock.yaml
@@ -30,6 +30,9 @@ importers:
       '@types/which':
         specifier: ^3.0.0
        version: 3.0.4
+      '@vitest/coverage-v8':
+        specifier: ^3.2.4
+        version: 3.2.4(vitest@3.2.4(@types/node@22.19.7)(yaml@2.8.2))
       tsup:
         specifier: ^8.0.0
         version: 8.5.1(postcss@8.5.6)(typescript@5.9.3)(yaml@2.8.2)
@@ -42,6 +45,31 @@ packages:
 
+  '@ampproject/remapping@2.3.0':
+    resolution: {integrity: sha512-30iZtAPgz+LTIYoeivqYo853f02jBYSd5uGnGpkFV0M3xOt9aN73erkgYAmZU43x4VfqcnLxW9Kpg3R5LC4YYw==}
+    engines: {node: '>=6.0.0'}
+
+  '@babel/helper-string-parser@7.27.1':
+    resolution: {integrity: sha512-qMlSxKbpRlAridDExk92nSobyDdpPijUq2DW6oDnUqd0iOGxmQjyqhMIihI9+zv4LPyZdRje2cavWPbCbWm3eA==}
+    engines: {node: '>=6.9.0'}
+
+  '@babel/helper-validator-identifier@7.28.5':
+    resolution: {integrity: sha512-qSs4ifwzKJSV39ucNjsvc6WVHs6b7S03sOh2OcHF9UHfVPqWWALUsNUVzhSBiItjRZoLHx7nIarVjqKVusUZ1Q==}
+    engines: {node: '>=6.9.0'}
+
+  '@babel/parser@7.29.0':
+    resolution: {integrity: sha512-IyDgFV5GeDUVX4YdF/3CPULtVGSXXMLh1xVIgdCgxApktqnQV0r7/8Nqthg+8YLGaAtdyIlo2qIdZrbCv4+7ww==}
+    engines: {node: '>=6.0.0'}
+    hasBin: true
+
+  '@babel/types@7.29.0':
+    resolution: {integrity: sha512-LwdZHpScM4Qz8Xw2iKSzS+cfglZzJGvofQICy7W7v4caru4EaAmyUuO6BGrbyQ2mYV11W0U8j5mBhd14dd3B0A==}
+    engines: {node: '>=6.9.0'}
+
+  '@bcoe/v8-coverage@1.0.2':
+    resolution: {integrity: sha512-6zABk/ECA/QYSCQ1NGiVwwbQerUCZ+TQbp64Q3AgmfNvurHH0j8TtXa1qbShXA6qqkpAj4V5W8pP6mLe1mcMqA==}
+    engines: {node: '>=18'}
+
   '@esbuild/aix-ppc64@0.27.2':
     resolution: {integrity: sha512-GZMB+a0mOMZs4MpDbj8RJp4cw+w1WV5NYD6xzgvzUJ5Ek2jerwfO2eADyI6ExDSUED+1X8aMbegahsJi+8mgpw==}
     engines: {node: '>=18'}
     cpu: [ppc64]
     os: [aix]
@@ -198,6 +226,14 @@
     cpu: [x64]
     os: [win32]
 
+  '@isaacs/cliui@8.0.2':
+    resolution: {integrity: sha512-O8jcjabXaleOG9DQ0+ARXWZBTfnP4WNAqzuiJK7ll44AmxGKv/J2M4TPjxjY3znBCfvBXFzucm1twdyFybFqEA==}
+    engines: {node: '>=12'}
+
+  '@istanbuljs/schema@0.1.3':
+    resolution: {integrity: sha512-ZXRY4jNvVgSVQ8DL3LTcakaAtXwTVUxE81hslsyD2AtoXW/wVob10HkOJ1X/pAlcI7D+2YoZKg5do8G/w6RYgA==}
+    engines: {node: '>=8'}
+
   '@jridgewell/gen-mapping@0.3.13':
     resolution: {integrity: sha512-2kkt/7niJ6MgEPxF0bYdQ6etZaA+fQvDcLKckhy1yIQOzaoKjBBjSj63/aLVjYE3qhRt5dvM+uUyfCg6UKCBbA==}
@@ -211,6 +247,10 @@
   '@jridgewell/trace-mapping@0.3.31':
     resolution: {integrity: sha512-zzNR+SdQSDJzc8joaeP8QQoCQr8NuYx2dIIytl1QeBEZHJ9uW6hebsrYgbz8hJwUQao3TWCMtmfV8Nu1twOLAw==}
 
+  '@pkgjs/parseargs@0.11.0':
+    resolution: {integrity: sha512-+1VkjdD0QBLPodGrJUeqarH8VAIvQODIbwh9XpP5Syisf7YoQgsJKPNFoqqLQlu+VQ/tVSshMR6loPMn8U+dPg==}
+    engines: {node: '>=14'}
+
   '@rollup/rollup-android-arm-eabi@4.57.1':
     resolution: {integrity: sha512-A6ehUVSiSaaliTxai040ZpZ2zTevHYbvu/lDoeAteHI8QnaosIzm4qwtezfRg1jOYaUmnzLX1AOD6Z+UJjtifg==}
     cpu: [arm]
     os: [android]
@@ -351,6 +391,15 @@
   '@types/which@3.0.4':
     resolution: {integrity: sha512-liyfuo/106JdlgSchJzXEQCVArk0CvevqPote8F8HgWgJ3dRCcTHgJIsLDuee0kxk/mhbInzIZk3QWSZJ8R+2w==}
 
+  '@vitest/coverage-v8@3.2.4':
+    resolution: {integrity: sha512-EyF9SXU6kS5Ku/U82E259WSnvg6c8KTjppUncuNdm5QHpe17mwREHnjDzozC8x9MZ0xfBUFSaLkRv4TMA75ALQ==}
+    peerDependencies:
+      '@vitest/browser': 3.2.4
+      vitest: 3.2.4
+    peerDependenciesMeta:
+      '@vitest/browser':
+        optional: true
+
   '@vitest/expect@3.2.4':
     resolution: {integrity: sha512-Io0yyORnB6sikFlt8QW5K7slY4OjqNX9jmJQ02QDda8lyM6B5oNgVWoSoKPac8/kgnCUzuHQKrSLtu/uOqqrig==}
@@ -385,10 +434,22 @@
     engines: {node: '>=0.4.0'}
     hasBin: true
 
+  ansi-regex@5.0.1:
+    resolution: {integrity: sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==}
+    engines: {node: '>=8'}
+
   ansi-regex@6.2.2:
     resolution: {integrity: sha512-Bq3SmSpyFHaWjPk8If9yc6svM8c56dB5BAtW4Qbw5jHTwwXXcTLoRMkpDJp6VL0XzlWaCHTXrkFURMYmD0sLqg==}
     engines: {node: '>=12'}
 
+  ansi-styles@4.3.0:
+    resolution: {integrity: sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==}
+    engines: {node: '>=8'}
+
+  ansi-styles@6.2.3:
+    resolution: {integrity: sha512-4Dj6M28JB+oAH8kFkTLUo+a2jwOFkuqb3yucU0CANcRRUbxS0cP0nZYCGjcc3BNXwRIsUVmDGgzawme7zvJHvg==}
+    engines: {node: '>=12'}
+
   any-promise@1.3.0:
     resolution: {integrity: sha512-7UvmKalWRt1wgjL1RrGxoSJW/0QZFIegpeGvZG9kjp8vrRu55XTHbwnqq2GpXm9uLbcuhxm3IqX9OB4MZR1b2A==}
 
@@ -396,6 +457,15 @@
     resolution: {integrity: sha512-Izi8RQcffqCeNVgFigKli1ssklIbpHnCYc6AknXGYoB6grJqyeby7jv12JUQgmTAnIDnbck1uxksT4dzN3PWBA==}
     engines: {node: '>=12'}
 
+  ast-v8-to-istanbul@0.3.11:
+    resolution: {integrity: sha512-Qya9fkoofMjCBNVdWINMjB5KZvkYfaO9/anwkWnjxibpWUxo5iHl2sOdP7/uAqaRuUYuoo8rDwnbaaKVFxoUvw==}
+
+  balanced-match@1.0.2:
+    resolution: {integrity: sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==}
+
+  brace-expansion@2.0.2:
+    resolution: {integrity: sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ==}
+
   bundle-require@5.1.0:
     resolution: {integrity: sha512-3WrrOuZiyaaZPWiEt4G3+IffISVC9HYlWueJEBWED4ZH4aIAC2PnkdnuRrR94M+w6yGWn4AglWtJtBI8YqvgoA==}
     engines: {node: ^12.20.0 || ^14.13.1 || >=16.0.0}
@@ -430,6 +500,13 @@
     resolution: {integrity: sha512-ywqV+5MmyL4E7ybXgKys4DugZbX0FC6LnwrhjuykIjnK9k8OQacQ7axGKnjDXWNhns0xot3bZI5h55H8yo9cJg==}
     engines: {node: '>=6'}
 
+  color-convert@2.0.1:
+    resolution: {integrity: sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==}
+    engines: {node: '>=7.0.0'}
+
+  color-name@1.1.4:
+    resolution: {integrity: sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==}
+
   commander@12.1.0:
     resolution: {integrity: sha512-Vw8qHK3bZM9y/P10u3Vib8o/DdkvA2OtPtZvD871QKjy74Wj1WSKFILMPRPSdUSx5RFK1arlJzEtA4PkFgnbuA==}
     engines: {node: '>=18'}
@@ -445,6 +522,10 @@
     resolution: {integrity: sha512-5IKcdX0nnYavi6G7TtOhwkYzyjfJlatbjMjuLSfE2kYT5pMDOilZ4OvMhi637CcDICTmz3wARPoyhqyX1Y+XvA==}
     engines: {node: ^14.18.0 || >=16.10.0}
 
+  cross-spawn@7.0.6:
+    resolution: {integrity: sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA==}
+    engines: {node: '>= 8'}
+
   debug@4.4.3:
     resolution: {integrity: sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA==}
     engines: {node: '>=6.0'}
@@ -458,9 +539,18 @@
     resolution: {integrity: sha512-h5k/5U50IJJFpzfL6nO9jaaumfjO/f2NjK/oYB2Djzm4p9L+3T9qWpZqZ2hAbLPuuYq9wrU08WQyBTL5GbPk5Q==}
     engines: {node: '>=6'}
 
+  eastasianwidth@0.2.0:
+    resolution: {integrity: sha512-I88TYZWc9XiYHRQ4/3c5rjjfgkjhLyW2luGIheGERbNQ6OY7yTybanSpDXZa8y7VUP9YmDcYa+eyq4ca7iLqWA==}
+
   emoji-regex@10.6.0:
     resolution: {integrity: sha512-toUI84YS5YmxW219erniWD0CIVOo46xGKColeNQRgOzDorgBi1v4D71/OFzgD9GO2UGKIv1C3Sp8DAn0+j5w7A==}
 
+  emoji-regex@8.0.0:
+    resolution: {integrity: sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==}
+
+  emoji-regex@9.2.2:
+    resolution: {integrity: sha512-L18DaJsXSUk2+42pv8mLs5jJT2hqFkFE4j21wOmgbUqsZ2hL72NsUU785g9RXgo3s0ZNgVl42TiHp3ZtOv/Vyg==}
+
   es-module-lexer@1.7.0:
     resolution: {integrity: sha512-jEQoCwk8hyb2AZziIOLhDqpm5+2ww5uIE6lkO/6jcOCusfk6LhMHpXXfBLXTZ7Ydyt0j4VoUQv6uGNYbdW+kBA==}
 
@@ -488,6 +578,10 @@
   fix-dts-default-cjs-exports@1.0.1:
     resolution: {integrity: sha512-pVIECanWFC61Hzl2+oOCtoJ3F17kglZC/6N94eRWycFgBH35hHx0Li604ZIzhseh97mf2p0cv7vVrOZGoqhlEg==}
 
+  foreground-child@3.3.1:
+    resolution: {integrity: sha512-gIXjKqtFuWEgzFRJA9WCQeSJLZDjgJUOMCMzxtvFq/37KojM1BFGufqsCy0r4qSQmYLsZYMeyRqzIWOMup03sw==}
+    engines: {node: '>=14'}
+
   fsevents@2.3.3:
     resolution: {integrity: sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw==}
     engines: {node: ^8.16.0 || ^10.6.0 || >=11.0.0}
 
   get-east-asian-width@1.4.0:
     resolution: {integrity: sha512-QZjmEOC+IT1uk6Rx0sX22V6uHWVwbdbxf1faPqJ1QhLdGgsRGCZoyaQBm/piRdJy/D2um6hM1UP7ZEeQ4EkP+Q==}
     engines: {node: '>=18'}
 
+  glob@10.5.0:
+    resolution: {integrity: sha512-DfXN8DfhJ7NH3Oe7cFmu3NCu1wKbkReJ8TorzSAFbSKrlNaQSKfIzqYqVY8zlbs2NLBbWpRiU52GX2PbaBVNkg==}
+    deprecated: Old versions of glob are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me
+    hasBin: true
+
+  has-flag@4.0.0:
+    resolution: {integrity: sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==}
+    engines: {node: '>=8'}
+
+  html-escaper@2.0.2:
+    resolution: {integrity: sha512-H2iMtd0I4Mt5eYiapRdIDjp+XzelXQ0tFE4JS7YFwFevXXMmOp9myNrUvCg0D6ws8iqkRPBfKHgbwig1SmlLfg==}
+
+  is-fullwidth-code-point@3.0.0:
+    resolution: {integrity: sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg==}
+    engines: {node: '>=8'}
+
   is-interactive@2.0.0:
     resolution: {integrity: sha512-qP1vozQRI+BMOPcjFzrjXuQvdak2pHNUMZoeG2eRbiSqyvbEf/wQtEOTOX1guk6E3t36RkaqiSt8A/6YElNxLQ==}
     engines: {node: '>=12'}
@@ -509,14 +619,39 @@
     resolution: {integrity: sha512-mE00Gnza5EEB3Ds0HfMyllZzbBrmLOX3vfWoj9A9PEnTfratQ/BcaJOuMhnkhjXvb2+FkY3VuHqtAGpTPmglFQ==}
     engines: {node: '>=18'}
 
+  isexe@2.0.0:
+    resolution: {integrity: sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw==}
+
   isexe@3.1.1:
     resolution: {integrity: sha512-LpB/54B+/2J5hqQ7imZHfdU31OlgQqx7ZicVlkm9kzg9/w8GKLEcFfJl/t7DCEDueOyBAD6zCCwTO6Fzs0NoEQ==}
     engines: {node: '>=16'}
 
+  istanbul-lib-coverage@3.2.2:
+    resolution: {integrity: sha512-O8dpsF+r0WV/8MNRKfnmrtCWhuKjxrq2w+jpzBL5UZKTi2LeVWnWOmWRxFlesJONmc+wLAGvKQZEOanko0LFTg==}
+    engines: {node: '>=8'}
+
+  istanbul-lib-report@3.0.1:
+    resolution: {integrity: sha512-GCfE1mtsHGOELCU8e/Z7YWzpmybrx/+dSTfLrvY8qRmaY6zXTKWn6WQIjaAFw069icm6GVMNkgu0NzI4iPZUNw==}
+    engines: {node: '>=10'}
+
+  istanbul-lib-source-maps@5.0.6:
+    resolution: {integrity: sha512-yg2d+Em4KizZC5niWhQaIomgf5WlL4vOOjZ5xGCmF8SnPE/mDWWXgvRExdcpCgh9lLRRa1/fSYp2ymmbJ1pI+A==}
+    engines: {node: '>=10'}
+
+  istanbul-reports@3.2.0:
+    resolution: {integrity: sha512-HGYWWS/ehqTV3xN10i23tkPkpH46MLCIMFNCaaKNavAXTF1RkqxawEPtnjnGZ6XKSInBKkiOA5BKS+aZiY3AvA==}
+    engines: {node: '>=8'}
+
+  jackspeak@3.4.3:
+    resolution: {integrity: sha512-OGlZQpz2yfahA/Rd1Y8Cd9SIEsqvXkLVoSw/cgwhnhFMDbsQFeZYoJJ7bIZBS9BcamUW96asq/npPWugM+RQBw==}
+
   joycon@3.1.1:
     resolution: {integrity: sha512-34wB/Y7MW7bzjKRjUKTa46I2Z7eV62Rkhva+KkopW7Qvv/OSWBqvkSY7vusOPrNuZcUG3tApvdVgNB8POj3SPw==}
     engines: {node: '>=10'}
 
+  js-tokens@10.0.0:
+    resolution: {integrity: sha512-lM/UBzQmfJRo9ABXbPWemivdCW8V2G8FHaHdypQaIy523snUjog0W71ayWXTjiR+ixeMyVHN2XcpnTd/liPg/Q==}
+
   js-tokens@9.0.1:
     resolution: {integrity: sha512-mxa9E9ITFOt0ban3j6L5MpjwegGz6lBQmM1IJkWeBZGcMxto50+eWdjC/52xDbS2vy0k7vIMK0Fe2wfL9OQSpQ==}
 
@@ -538,13 +673,31 @@
   loupe@3.2.1:
     resolution: {integrity: sha512-CdzqowRJCeLU72bHvWqwRBBlLcMEtIvGrlvef74kMnV2AolS9Y8xUv1I0U/MNAWMhBlKIoyuEgoJ0t/bbwHbLQ==}
 
+  lru-cache@10.4.3:
+    resolution: {integrity: sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ==}
+
   magic-string@0.30.21:
     resolution: {integrity: sha512-vd2F4YUyEXKGcLHoq+TEyCjxueSeHnFxyyjNp80yg0XV4vUhnDer/lvvlqM/arB5bXQN5K2/3oinyCRyx8T2CQ==}
 
+  magicast@0.3.5:
+    resolution: {integrity: sha512-L0WhttDl+2BOsybvEOLK7fW3UA0OQ0IQ2d6Zl2x/a6vVRs3bAY0ECOSHHeL5jD+SbOpOCUEi0y1DgHEn9Qn1AQ==}
+
+  make-dir@4.0.0:
+    resolution: {integrity: sha512-hXdUTZYIVOt1Ex//jAQi+wTZZpUpwBj/0QsOzqegb3rGMMeJiSEu5xLHnYfBrRV4RH2+OCSOO95Is/7x1WJ4bw==}
+    engines: {node: '>=10'}
+
   mimic-function@5.0.1:
     resolution: {integrity: sha512-VP79XUPxV2CigYP3jWwAUFSku2aKqBH7uTAapFWCBqutsbmDo96KY5o8uh6U+/YSIn5OxJnXp73beVkpqMIGhA==}
     engines: {node: '>=18'}
 
+  minimatch@9.0.5:
+    resolution: {integrity: sha512-G6T0ZX48xgozx7587koeX9Ys2NYy6Gmv//P89sEte9V9whIapMNF4idKxnW2QtCcLiTWlb/wfCabAtAFWhhBow==}
+    engines: {node: '>=16 || 14 >=14.17'}
+
+  minipass@7.1.2:
+    resolution: {integrity: sha512-qOOzS1cBTWYF4BH8fVePDBOO9iptMnGUEZwNc/cMWnTV2nVLZ7VoNWEPHkYczZA0pdoA7dl6e7FL659nX9S2aw==}
+    engines: {node: '>=16 || 14 >=14.17'}
+
   mlly@1.8.0:
     resolution: {integrity: sha512-l8D9ODSRWLe2KHJSifWGwBqpTZXIXTeo8mlKjY+E2HAakaTeNpqAyBZ8GSqLzHgw4XmHmC8whvpjJNMbFZN7/g==}
 
@@ -571,6 +724,17 @@ packages:
     resolution: {integrity: sha512-weP+BZ8MVNnlCm8c0Qdc1WSWq4Qn7I+9CJGm7Qali6g44e/PUzbjNqJX5NJ9ljlNMosfJvg1fKEGILklK9cwnw==}
     engines: {node: '>=18'}
 
+  package-json-from-dist@1.0.1:
+    resolution: {integrity: sha512-UEZIS3/by4OC8vL3P2dTXRETpebLI2NiI5vIrjaD/5UtrkFX/tNbwjTSRAGC/+7CAo2pIcBaRgWmcBBHcsaCIw==}
+
+  path-key@3.1.1:
+    resolution: {integrity: sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q==}
+    engines: {node: '>=8'}
+
+  path-scurry@1.11.1:
+    resolution: {integrity: sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA==}
+    engines: {node: '>=16 || 14 >=14.18'}
+
   pathe@2.0.3:
     resolution: {integrity: sha512-WUjGcAqP1gQacoQe+OBJsFA7Ld4DyXuUIjZ5cc75cLHvJ7dtNsTugphxIADwspS+AraAUePCKrSVtPLFj/F88w==}
@@ -631,6 +795,19 @@
     engines: {node: '>=18.0.0', npm: '>=8.0.0'}
     hasBin: true
 
+  semver@7.7.4:
+    resolution: {integrity: sha512-vFKC2IEtQnVhpT78h1Yp8wzwrf8CM+MzKMHGJZfBtzhZNycRFnXsHk6E5TxIkkMsgNS7mdX3AGB7x2QM2di4lA==}
+    engines: {node: '>=10'}
+    hasBin: true
+
+  shebang-command@2.0.0:
+    resolution: {integrity: sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==}
+    engines: {node: '>=8'}
+
+  shebang-regex@3.0.0:
+    resolution: {integrity: sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A==}
+    engines: {node: '>=8'}
+
   siginfo@2.0.0:
     resolution: {integrity: sha512-ybx0WO1/8bSBLEWXZvEd7gMW3Sn3JFlW3TvX1nREbDLRNQNaeNN8WK0meBwPdAaOI7TtRRRJn/Es1zhrrCHu7g==}
@@ -656,10 +833,22 @@
     resolution: {integrity: sha512-UhDfHmA92YAlNnCfhmq0VeNL5bDbiZGg7sZ2IvPsXubGkiNa9EC+tUTsjBRsYUAz87btI6/1wf4XoVvQ3uRnmQ==}
     engines: {node: '>=18'}
 
+  string-width@4.2.3:
+    resolution: {integrity: sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==}
+    engines: {node: '>=8'}
+
+  string-width@5.1.2:
+    resolution: {integrity: sha512-HnLOCR3vjcY8beoNLtcjZ5/nxn2afmME6lhrDrebokqMap+XbeW8n9TXpPDOqdGK5qcI3oT0GKTW6wC7EMiVqA==}
+    engines: {node: '>=12'}
+
   string-width@7.2.0:
     resolution: {integrity: sha512-tsaTIkKW9b4N+AEj+SVA+WhJzV7/zMhcSu78mLKWSk7cXMOSHsBKFWUs0fWwq8QyK3MgJBQRX6Gbi4kYbdvGkQ==}
     engines: {node: '>=18'}
 
+  strip-ansi@6.0.1:
+    resolution: {integrity: sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==}
+    engines: {node: '>=8'}
+
   strip-ansi@7.1.2:
     resolution: {integrity: sha512-gmBGslpoQJtgnMAvOVqGZpEz9dyoKTCzy2nfz/n8aIFhN/jCE/rCmcxabB6jOOHV+0WNnylOxaxBQPSvcWklhA==}
     engines: {node: '>=12'}
@@ -672,6 +861,14 @@
     engines: {node: '>=16 || 14 >=14.17'}
     hasBin: true
 
+  supports-color@7.2.0:
+    resolution: {integrity: sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==}
+    engines: {node: '>=8'}
+
+  test-exclude@7.0.1:
+    resolution: {integrity: sha512-pFYqmTw68LXVjeWJMST4+borgQP2AyMNbg1BpZh9LbyhUeNkeaPF9gzfPGUAnSMV3qPYdWUwDIjjCLiSDOl7vg==}
+    engines: {node: '>=18'}
+
   thenify-all@1.6.0:
     resolution: {integrity: sha512-RNxQH/qI8/t3thXJDwcstUO4zeqo64+Uy/+sNVRBx4Xn2OX+OZ9oP+iJnNFqplFra2ZUVeKCSa2oVWi3T4uVmA==}
     engines: {node: '>=0.8'}
@@ -811,6 +1008,11 @@
     jsdom:
       optional: true
 
+  which@2.0.2:
+    resolution: {integrity: sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==}
+    engines: {node: '>= 8'}
+    hasBin: true
+
   which@4.0.0:
     resolution: {integrity: sha512-GlaYyEb07DPxYCKhKzplCWBJtvxZcZMrL+4UkrTSJHHPyZU4mYYTv3qaOe77H7EODLSSopAUFAc6W8U4yqvscg==}
     engines: {node: ^16.13.0 || >=18.0.0}
@@ -821,6 +1023,14 @@
     engines: {node: '>=8'}
     hasBin: true
 
+  wrap-ansi@7.0.0:
+    resolution: {integrity: sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==}
+    engines: {node: '>=10'}
+
+  wrap-ansi@8.1.0:
+    resolution: {integrity: sha512-si7QWI6zUMq56bESFvagtmzMdGOtoxfR+Sez11Mobfc7tm+VkUckk9bW2UeffTGVUbOksxmSw0AA2gs8g71NCQ==}
+    engines: {node: '>=12'}
+
   yaml@2.8.2:
     resolution: {integrity: sha512-mplynKqc1C2hTVYxd0PU2xQAc22TI1vShAYGksCCfxbn/dFwnHTNi1bvYsBTkhdUNtGIf5xNOg938rrSSYvS9A==}
     engines: {node: '>= 14.6'}
     hasBin: true
@@ -828,6 +1038,26 @@ snapshots:
 
+  '@ampproject/remapping@2.3.0':
+    dependencies:
+      '@jridgewell/gen-mapping': 0.3.13
+      '@jridgewell/trace-mapping': 0.3.31
+
+  '@babel/helper-string-parser@7.27.1': {}
+
+  '@babel/helper-validator-identifier@7.28.5': {}
+
+  '@babel/parser@7.29.0':
+    dependencies:
+      '@babel/types': 7.29.0
+
+  '@babel/types@7.29.0':
+    dependencies:
+      '@babel/helper-string-parser': 7.27.1
+      '@babel/helper-validator-identifier': 7.28.5
+
+  '@bcoe/v8-coverage@1.0.2': {}
+
   '@esbuild/aix-ppc64@0.27.2':
     optional: true
@@ -906,6 +1136,17 @@
   '@esbuild/win32-x64@0.27.2':
     optional: true
 
+  '@isaacs/cliui@8.0.2':
+    dependencies:
+      string-width: 5.1.2
+      string-width-cjs: string-width@4.2.3
+      strip-ansi: 7.1.2
+      strip-ansi-cjs: strip-ansi@6.0.1
+      wrap-ansi: 8.1.0
+      wrap-ansi-cjs: wrap-ansi@7.0.0
+
+  '@istanbuljs/schema@0.1.3': {}
+
   '@jridgewell/gen-mapping@0.3.13':
     dependencies:
       '@jridgewell/sourcemap-codec': 1.5.5
@@ -920,6 +1161,9 @@
       '@jridgewell/resolve-uri': 3.1.2
       '@jridgewell/sourcemap-codec': 1.5.5
 
+  '@pkgjs/parseargs@0.11.0':
+    optional: true
+
   '@rollup/rollup-android-arm-eabi@4.57.1':
     optional: true
@@ -1010,6 +1254,25 @@
   '@types/which@3.0.4': {}
 
+  '@vitest/coverage-v8@3.2.4(vitest@3.2.4(@types/node@22.19.7)(yaml@2.8.2))':
+    dependencies:
+      '@ampproject/remapping': 2.3.0
+      '@bcoe/v8-coverage': 1.0.2
+      ast-v8-to-istanbul: 0.3.11
+      debug: 4.4.3
+      istanbul-lib-coverage: 3.2.2
+      istanbul-lib-report: 3.0.1
+      istanbul-lib-source-maps: 5.0.6
+      istanbul-reports: 3.2.0
+      magic-string: 0.30.21
+      magicast: 0.3.5
+      std-env: 3.10.0
+      test-exclude: 7.0.1
+      tinyrainbow: 2.0.0
+      vitest: 3.2.4(@types/node@22.19.7)(yaml@2.8.2)
+    transitivePeerDependencies:
+      - supports-color
+
   '@vitest/expect@3.2.4':
     dependencies:
       '@types/chai': 5.2.3
@@ -1054,12 +1317,32 @@
   acorn@8.15.0: {}
 
+  ansi-regex@5.0.1: {}
+
   ansi-regex@6.2.2: {}
 
+  ansi-styles@4.3.0:
+    dependencies:
+      color-convert: 2.0.1
+
+  ansi-styles@6.2.3: {}
+
   any-promise@1.3.0: {}
 
   assertion-error@2.0.1: {}
 
+  ast-v8-to-istanbul@0.3.11:
+    dependencies:
+      '@jridgewell/trace-mapping': 0.3.31
+      estree-walker: 3.0.3
+      js-tokens: 10.0.0
+
+  balanced-match@1.0.2: {}
+
+  brace-expansion@2.0.2:
+    dependencies:
+      balanced-match: 1.0.2
+
   bundle-require@5.1.0(esbuild@0.27.2):
     dependencies:
       esbuild: 0.27.2
@@ -1089,6 +1372,12 @@
   cli-spinners@2.9.2: {}
 
+  color-convert@2.0.1:
+    dependencies:
+      color-name: 1.1.4
+
+  color-name@1.1.4: {}
+
   commander@12.1.0: {}
 
   commander@4.1.1: {}
@@ -1097,14 +1386,26 @@
   consola@3.4.2: {}
 
+  cross-spawn@7.0.6:
+    dependencies:
+      path-key: 3.1.1
+      shebang-command: 2.0.0
+      which: 2.0.2
+
   debug@4.4.3:
     dependencies:
       ms: 2.1.3
 
   deep-eql@5.0.2: {}
 
+  eastasianwidth@0.2.0: {}
+
   emoji-regex@10.6.0: {}
 
+  emoji-regex@8.0.0: {}
+
+  emoji-regex@9.2.2: {}
+
   es-module-lexer@1.7.0: {}
 
   esbuild@0.27.2:
@@ -1152,21 +1453,72 @@
       mlly: 1.8.0
       rollup: 4.57.1
 
+  foreground-child@3.3.1:
+    dependencies:
+      cross-spawn: 7.0.6
+      signal-exit: 4.1.0
+
   fsevents@2.3.3:
     optional: true
 
   get-east-asian-width@1.4.0: {}
 
+  glob@10.5.0:
+    dependencies:
+      foreground-child: 3.3.1
+      jackspeak: 3.4.3
+      minimatch: 9.0.5
+      minipass: 7.1.2
+      package-json-from-dist: 1.0.1
+      path-scurry: 1.11.1
+
+  has-flag@4.0.0: {}
+
+  html-escaper@2.0.2: {}
+
+  is-fullwidth-code-point@3.0.0: {}
+
   is-interactive@2.0.0: {}
 
   is-unicode-supported@1.3.0: {}
 
   is-unicode-supported@2.1.0: {}
 
+  isexe@2.0.0: {}
+
   isexe@3.1.1: {}
 
+  istanbul-lib-coverage@3.2.2: {}
+
+  istanbul-lib-report@3.0.1:
+    dependencies:
+      istanbul-lib-coverage: 3.2.2
+      make-dir: 4.0.0
+      supports-color: 7.2.0
+
+  istanbul-lib-source-maps@5.0.6:
+    dependencies:
+      '@jridgewell/trace-mapping': 0.3.31
+      debug: 4.4.3
+      istanbul-lib-coverage: 3.2.2
+    transitivePeerDependencies:
+      - supports-color
+
+  istanbul-reports@3.2.0:
+    dependencies:
+      html-escaper: 2.0.2
+      istanbul-lib-report: 3.0.1
+
+  jackspeak@3.4.3:
+    dependencies:
+      '@isaacs/cliui': 8.0.2
+    optionalDependencies:
+      '@pkgjs/parseargs': 0.11.0
+
   joycon@3.1.1: {}
 
+  js-tokens@10.0.0: {}
+
   js-tokens@9.0.1: {}
 
   lilconfig@3.1.3: {}
@@ -1182,12 +1534,30 @@
   loupe@3.2.1: {}
 
+  lru-cache@10.4.3: {}
+
   magic-string@0.30.21:
     dependencies:
       '@jridgewell/sourcemap-codec': 1.5.5
 
+  magicast@0.3.5:
+    dependencies:
+      '@babel/parser': 7.29.0
+      '@babel/types': 7.29.0
+      source-map-js: 1.2.1
+
+  make-dir@4.0.0:
+    dependencies:
+      semver: 7.7.4
+
   mimic-function@5.0.1: {}
 
+  minimatch@9.0.5:
+    dependencies:
+      brace-expansion: 2.0.2
+
+  minipass@7.1.2: {}
+
   mlly@1.8.0:
     dependencies:
       acorn: 8.15.0
@@ -1223,6 +1593,15 @@
       string-width: 7.2.0
       strip-ansi: 7.1.2
 
+  package-json-from-dist@1.0.1: {}
+
+  path-key@3.1.1: {}
+
+  path-scurry@1.11.1:
+    dependencies:
+      lru-cache: 10.4.3
+      minipass: 7.1.2
+
   pathe@2.0.3: {}
 
   pathval@2.0.1: {}
@@ -1292,6 +1671,14 @@
       '@rollup/rollup-win32-x64-msvc': 4.57.1
       fsevents: 2.3.3
 
+  semver@7.7.4: {}
+
+  shebang-command@2.0.0:
+    dependencies:
+      shebang-regex: 3.0.0
+
+  shebang-regex@3.0.0: {}
+
   siginfo@2.0.0: {}
 
   signal-exit@4.1.0: {}
@@ -1306,12 +1693,28 @@
   stdin-discarder@0.2.2: {}
 
+  string-width@4.2.3:
+    dependencies:
+      emoji-regex: 8.0.0
+      is-fullwidth-code-point: 3.0.0
+      strip-ansi: 6.0.1
+
+  string-width@5.1.2:
+    dependencies:
+      eastasianwidth: 0.2.0
+      emoji-regex: 9.2.2
+      strip-ansi: 7.1.2
+
   string-width@7.2.0:
     dependencies:
       emoji-regex: 10.6.0
      get-east-asian-width: 1.4.0
       strip-ansi: 7.1.2
 
+  strip-ansi@6.0.1:
+    dependencies:
+      ansi-regex: 5.0.1
+
   strip-ansi@7.1.2:
     dependencies:
       ansi-regex: 6.2.2
@@ -1330,6 +1733,16 @@
       tinyglobby: 0.2.15
       ts-interface-checker: 0.1.13
 
+  supports-color@7.2.0:
+    dependencies:
+      has-flag: 4.0.0
+
+  test-exclude@7.0.1:
+    dependencies:
+      '@istanbuljs/schema': 0.1.3
+      glob: 10.5.0
+      minimatch: 9.0.5
+
   thenify-all@1.6.0:
     dependencies:
       thenify: 3.3.1
@@ -1466,6 +1879,10 @@
       - tsx
       - yaml
 
+  which@2.0.2:
+    dependencies:
+      isexe: 2.0.0
+
   which@4.0.0:
     dependencies:
       isexe: 3.1.1
@@ -1475,4 +1892,16 @@
       siginfo: 2.0.0
       stackback: 0.0.2
 
+  wrap-ansi@7.0.0:
+    dependencies:
+      ansi-styles: 4.3.0
+      string-width: 4.2.3
+      strip-ansi: 6.0.1
+
+  wrap-ansi@8.1.0:
+    dependencies:
+      ansi-styles: 6.2.3
+      string-width: 5.1.2
+      strip-ansi: 7.1.2
+
   yaml@2.8.2: {}
diff --git a/src/commands/init.ts b/src/commands/init.ts
index 7fb67cb..1a3fcad 100644
--- a/src/commands/init.ts
+++ b/src/commands/init.ts
@@ -2,11 +2,24 @@ import { Command } from "commander";
 import { createInterface } from "node:readline/promises";
 import { stdin, stdout } from "node:process";
 import { existsSync } from "node:fs";
+import { homedir } from "node:os";
+import { join } from "node:path";
 import chalk from "chalk";
+import ora from "ora";
 import { getConfigPath, loadConfig, saveConfig } from "../utils/config.js";
-import { findPython, installAceteamNodes, isAceteamNodesInstalled } from "../utils/python.js";
+import {
+  findPython,
+  getPythonVersion,
+  createVenv,
+  getVenvPythonPath,
+  isVenvValid,
+  installAceteamNodes,
+  isAceteamNodesInstalled,
+} from "../utils/python.js";
 import * as output from "../utils/output.js";
 
+const DEFAULT_VENV_DIR = join(homedir(), ".ace", "venv");
+
 export const initCommand = new Command("init")
   .description("Initialize AceTeam CLI configuration")
   .action(async () => {
@@ -15,71 +28,132 @@ export const initCommand = new Command("init")
 
     console.log(chalk.bold("\nAceTeam CLI Setup\n"));
 
-    // Check Python
-    console.log("Checking Python installation...");
-    const pythonPath = await findPython();
-    if (!pythonPath) {
+    // Step 1: Prerequisites — detect Python
+    console.log(chalk.bold("1. Prerequisites"));
+    const systemPython = await findPython();
+
+    if (!systemPython) {
+      // Try to find any Python to give a better error
+      const candidates = ["python3", "python"];
+      for (const name of candidates) {
+        try {
+          const { execSync } = await import("node:child_process");
+          const version = execSync(`${name} --version`, {
+            encoding: "utf-8",
+          }).trim();
+          const match = version.match(/Python (\d+\.\d+)/);
+          if (match) {
+            output.error(
+              `Found ${version} at ${name}. Python 3.12+ required.`
+            );
+            rl.close();
+            process.exit(1);
+          }
+        } catch {
+          // Not found
+        }
+      }
+
       output.error(
-        "Python 3.12+ not found. Please install Python 3.12 or later."
+        "Python not found. Please install Python 3.12 or later."
       );
       rl.close();
       process.exit(1);
     }
-    output.success(`Python found: ${pythonPath}`);
 
-    // Check aceteam-nodes
-    console.log("Checking aceteam-nodes...");
-    if (isAceteamNodesInstalled(pythonPath)) {
+    const version = getPythonVersion(systemPython);
+    const versionStr = version
+      ? `${version.major}.${version.minor}.${version.patch}`
+      : "unknown";
+    output.success(`Python ${versionStr} (${systemPython})`);
+
+    // Step 2: Venv setup
+    console.log(chalk.bold("\n2. Virtual environment"));
+
+    const config = existsSync(configPath) ? loadConfig() : {};
+    const venvDir = config.venv_dir || DEFAULT_VENV_DIR;
+
+    if (isVenvValid(venvDir)) {
+      const venvPython = getVenvPythonPath(venvDir);
+      output.success(`Existing venv: ${venvDir}`);
+      config.venv_dir = venvDir;
+      config.python_path = venvPython;
+    } else {
+      const spinner = ora(`Creating venv at ${venvDir}...`).start();
+      try {
+        createVenv(systemPython, venvDir);
+        const venvPython = getVenvPythonPath(venvDir);
+        config.venv_dir = venvDir;
+        config.python_path = venvPython;
+        spinner.succeed(`Created venv: ${venvDir}`);
+      } catch (err) {
+        spinner.fail("Failed to create virtual environment");
+        output.error(String(err));
+        rl.close();
+        process.exit(1);
+      }
+    }
+
+    // Step 3: Install aceteam-nodes
+    console.log(chalk.bold("\n3. Dependencies"));
+
+    const venvPython = config.python_path!;
+
+    if (isAceteamNodesInstalled(venvPython)) {
       output.success("aceteam-nodes is installed");
     } else {
-      output.warn("aceteam-nodes is not installed");
-      const answer = await rl.question(
-        "Install aceteam-nodes now? (Y/n) "
-      );
-      if (answer.toLowerCase() !== "n") {
-        console.log("Installing aceteam-nodes...");
-        try {
-          installAceteamNodes(pythonPath);
-          output.success("aceteam-nodes installed");
-        } catch {
-          output.error(
-            "Failed to install aceteam-nodes. Try: pip install aceteam-nodes"
-          );
-        }
+      const spinner = ora("Installing aceteam-nodes...").start();
+      try {
+        installAceteamNodes(venvPython);
+        spinner.succeed("aceteam-nodes installed");
+      } catch {
+        spinner.fail("Failed to install aceteam-nodes");
+        output.error("Try manually: pip install aceteam-nodes");
+        rl.close();
+        process.exit(1);
+      }
       }
     }
 
-    // Config file
-    if (existsSync(configPath)) {
-      output.info(`Config file already exists: ${configPath}`);
-      const existing = loadConfig();
-      console.log(
-        ` Current model: ${existing.default_model || "(not set)"}`
-      );
+    // Step 4: Configure
+    console.log(chalk.bold("\n4. 
Configuration")); + + if (!config.default_model) { + const model = await rl.question(`Default model [gpt-4o-mini]: `); + config.default_model = model.trim() || "gpt-4o-mini"; } else { - console.log(`\nCreating config file: ${configPath}`); - const model = await rl.question( - `Default model [gpt-4o-mini]: ` - ); - const config = { - default_model: model.trim() || "gpt-4o-mini", - }; - saveConfig(config); - output.success(`Config saved to ${configPath}`); + output.info(`Default model: ${config.default_model}`); } - // API key reminder + saveConfig(config); + + // Step 5: Summary + console.log(chalk.bold("\nSetup complete:")); + console.log( + ` ${chalk.green("✓")} Python ${versionStr} (${config.python_path})` + ); + console.log(` ${chalk.green("✓")} aceteam-nodes installed`); + console.log(` ${chalk.green("✓")} Config: ${configPath}`); + console.log(` ${chalk.green("✓")} Model: ${config.default_model}`); + + // LLM provider reminder console.log( "\n" + - chalk.bold("API Keys:") + - "\n Set your API key as an environment variable:" + + chalk.bold("LLM Provider:") + + "\n Cloud — set an API key:" + "\n " + chalk.dim("export OPENAI_API_KEY=sk-...") + "\n " + chalk.dim("export ANTHROPIC_API_KEY=sk-ant-...") + + "\n" + + "\n Local — use Ollama or any OpenAI-compatible server:" + + "\n " + + chalk.dim("ollama serve && ollama pull llama3") + + "\n " + + chalk.dim("# then set model to ollama/llama3 in your workflow") + "\n" ); - output.success("Setup complete! 
Try: ace workflow run "); + console.log(chalk.dim("Try: ace workflow list-templates")); + rl.close(); }); diff --git a/src/commands/workflow.ts b/src/commands/workflow.ts index e6538e3..95dca8b 100644 --- a/src/commands/workflow.ts +++ b/src/commands/workflow.ts @@ -1,5 +1,7 @@ import { Command } from "commander"; -import { existsSync, readFileSync } from "node:fs"; +import { existsSync, readFileSync, writeFileSync } from "node:fs"; +import { createInterface } from "node:readline/promises"; +import { stdin, stdout } from "node:process"; import chalk from "chalk"; import ora from "ora"; import { @@ -9,12 +11,34 @@ import { runWorkflow, validateWorkflow, listNodes, + getVenvPythonPath, + isVenvValid, } from "../utils/python.js"; import { loadConfig } from "../utils/config.js"; import { FabricClient } from "../utils/fabric.js"; +import { classifyPythonError } from "../utils/errors.js"; +import { TEMPLATES, getTemplateById } from "../templates/index.js"; import * as output from "../utils/output.js"; async function ensurePython(): Promise { + const config = loadConfig(); + + // Check config python_path first (managed venv) + if (config.python_path && existsSync(config.python_path)) { + if (isAceteamNodesInstalled(config.python_path)) { + return config.python_path; + } + } + + // Check managed venv + if (config.venv_dir && isVenvValid(config.venv_dir)) { + const venvPython = getVenvPythonPath(config.venv_dir); + if (isAceteamNodesInstalled(venvPython)) { + return venvPython; + } + } + + // Fallback to PATH detection const pythonPath = await findPython(); if (!pythonPath) { output.error( @@ -110,7 +134,7 @@ workflowCommand .command("run ") .description("Run a workflow from a JSON file") .option("-i, --input ", "Input values", []) - .option("-v, --verbose", "Show progress messages") + .option("-v, --verbose", "Show raw stderr debug output") .option("--config ", "Config file path") .option("--remote", "Run on remote Fabric node instead of locally") .action( @@ -154,6 
+178,26 @@ workflowCommand const result = await runWorkflow(pythonPath, file, input, { verbose: options.verbose, config: options.config, + onProgress: (event) => { + switch (event.type) { + case "started": + spinner.text = `Running workflow (${event.totalNodes} nodes)...`; + break; + case "node_running": + if (event.totalNodes && event.currentNode) { + spinner.text = `Running node ${event.currentNode}/${event.totalNodes}: ${event.nodeName}...`; + } else { + spinner.text = `Running ${event.nodeName}...`; + } + break; + case "node_done": + // Keep spinner going, text will update on next event + break; + case "node_error": + spinner.text = `Error in ${event.nodeName}: ${event.message}`; + break; + } + }, }); if (result.success) { @@ -163,19 +207,26 @@ workflowCommand console.log(JSON.stringify(result.output, null, 2)); } else { spinner.fail("Workflow failed"); - if (result.errors) { - console.error( - chalk.red(JSON.stringify(result.errors, null, 2)) - ); - } - if (result.error) { - console.error(chalk.red(result.error)); + + const rawError = + result.error || + (result.errors ? 
JSON.stringify(result.errors) : "Unknown error"); + const classified = classifyPythonError(rawError); + + console.error(chalk.red(classified.message)); + if (classified.suggestion) { + console.error(chalk.dim(classified.suggestion)); } process.exit(1); } } catch (err) { spinner.fail("Workflow execution error"); - console.error(chalk.red(String(err))); + + const classified = classifyPythonError(String(err)); + console.error(chalk.red(classified.message)); + if (classified.suggestion) { + console.error(chalk.dim(classified.suggestion)); + } process.exit(1); } } @@ -251,3 +302,120 @@ workflowCommand nodes.map((n) => [n.type, n.display_name, n.description]) ); }); + +workflowCommand + .command("list-templates") + .description("List available workflow templates") + .option("--category <category>", "Filter by category") + .action((options: { category?: string }) => { + let templates = TEMPLATES; + + if (options.category) { + const cat = options.category.toLowerCase(); + templates = templates.filter((t) => t.category.toLowerCase() === cat); + } + + if (templates.length === 0) { + output.warn("No templates found"); + if (options.category) { + const categories = [...new Set(TEMPLATES.map((t) => t.category))]; + console.log(` Available categories: ${categories.join(", ")}`); + } + return; + } + + output.printTable( + ["ID", "Name", "Category", "Inputs"], + templates.map((t) => [ + t.id, + t.name, + t.category, + t.inputs.join(", "), + ]) + ); + }); + +workflowCommand + .command("create [template-id]") + .description("Create a workflow from a template") + .option("-o, --output <file>", "Output file path", "workflow.json") + .action(async (templateId: string | undefined, options: { output: string }) => { + const rl = createInterface({ input: stdin, output: stdout }); + + try { + // If no template ID, show list and prompt + if (!templateId) { + console.log(chalk.bold("\nAvailable templates:\n")); + TEMPLATES.forEach((t, i) => { + console.log(` ${chalk.cyan(`${i + 1})`)} ${t.name}
${chalk.dim(`- ${t.description}`)}`); + }); + console.log(); + + const answer = await rl.question("Select template (number): "); + const index = parseInt(answer, 10) - 1; + + if (isNaN(index) || index < 0 || index >= TEMPLATES.length) { + output.error("Invalid selection"); + return; + } + + templateId = TEMPLATES[index].id; + } + + const template = getTemplateById(templateId); + if (!template) { + output.error(`Template not found: ${templateId}`); + const ids = TEMPLATES.map((t) => t.id).join(", "); + console.log(` Available: ${ids}`); + return; + } + + // Load template workflow JSON + const workflow = structuredClone(template.workflow); + + // Prompt for node parameter customization + const nodes = workflow.nodes as Array<{ + id: string; + type: string; + params: Record<string, string>; + }>; + + if (nodes.length > 0) { + console.log(chalk.bold("\nCustomize node parameters (Enter to keep default):\n")); + + for (const node of nodes) { + if (node.params && Object.keys(node.params).length > 0) { + console.log(` ${chalk.cyan(node.type)} (${node.id}):`); + for (const [key, defaultVal] of Object.entries(node.params)) { + const answer = await rl.question( + ` ${key} [${defaultVal}]: ` + ); + if (answer.trim()) { + node.params[key] = answer.trim(); + } + } + } + } + } + + // Write output + const outputPath = options.output; + writeFileSync(outputPath, JSON.stringify(workflow, null, 2) + "\n", "utf-8"); + + output.success(`Created ${outputPath}`); + + // Build a helpful run command + const inputNames = (workflow.inputs as Array<{ name: string }>).map( + (i) => i.name + ); + const inputArgs = inputNames + .map((name) => `${name}='...'`) + .join(" --input "); + + console.log( + chalk.dim(`\nRun: ace workflow run ${outputPath} --input ${inputArgs}`) + ); + } finally { + rl.close(); + } + }); diff --git a/src/templates/api-to-llm.json b/src/templates/api-to-llm.json new file mode 100644 index 0000000..aa0093f --- /dev/null +++ b/src/templates/api-to-llm.json @@ -0,0 +1,64 @@ +{ + "name":
"API to LLM", + "description": "Fetch a URL then summarize the content with an LLM.", + "nodes": [ + { + "id": "fetch", + "type": "APICall", + "params": { + "method": "GET", + "headers": "{}" + }, + "position": { "x": 300, "y": 200 } + }, + { + "id": "summarize", + "type": "LLM", + "params": { + "model": "gpt-4o-mini", + "temperature": "0.5", + "max_tokens": "1024", + "system_prompt": "Summarize the following content concisely." + }, + "position": { "x": 600, "y": 200 } + } + ], + "edges": [ + { + "source_id": "fetch", + "source_key": "response_body", + "target_id": "summarize", + "target_key": "prompt" + } + ], + "input_edges": [ + { + "input_key": "url", + "target_id": "fetch", + "target_key": "url" + } + ], + "output_edges": [ + { + "source_id": "summarize", + "source_key": "response", + "output_key": "summary" + } + ], + "inputs": [ + { + "name": "url", + "type": "TEXT", + "display_name": "URL", + "description": "The URL to fetch and summarize" + } + ], + "outputs": [ + { + "name": "summary", + "type": "LONG_TEXT", + "display_name": "Summary", + "description": "LLM summary of the fetched content" + } + ] +} diff --git a/src/templates/hello-llm.json b/src/templates/hello-llm.json new file mode 100644 index 0000000..840af05 --- /dev/null +++ b/src/templates/hello-llm.json @@ -0,0 +1,47 @@ +{ + "name": "Hello LLM", + "description": "Send a prompt to an LLM and get a response.", + "nodes": [ + { + "id": "llm", + "type": "LLM", + "params": { + "model": "gpt-4o-mini", + "temperature": "0.7", + "max_tokens": "1024" + }, + "position": { "x": 400, "y": 200 } + } + ], + "edges": [], + "input_edges": [ + { + "input_key": "prompt", + "target_id": "llm", + "target_key": "prompt" + } + ], + "output_edges": [ + { + "source_id": "llm", + "source_key": "response", + "output_key": "response" + } + ], + "inputs": [ + { + "name": "prompt", + "type": "LONG_TEXT", + "display_name": "Prompt", + "description": "The prompt to send to the LLM" + } + ], + "outputs": [ + { + "name": 
"response", + "type": "LONG_TEXT", + "display_name": "Response", + "description": "The LLM response" + } + ] +} diff --git a/src/templates/index.ts b/src/templates/index.ts new file mode 100644 index 0000000..f1c280b --- /dev/null +++ b/src/templates/index.ts @@ -0,0 +1,42 @@ +import helloLlm from "./hello-llm.json" with { type: "json" }; +import textTransform from "./text-transform.json" with { type: "json" }; +import llmChain from "./llm-chain.json" with { type: "json" }; +import apiToLlm from "./api-to-llm.json" with { type: "json" }; + +export interface TemplateMetadata { + id: string; + name: string; + description: string; + category: string; + inputs: string[]; + workflow: Record<string, unknown>; +} + +function defineTemplate( + id: string, + category: string, + workflow: Record<string, unknown> +): TemplateMetadata { + const inputs = (workflow.inputs as Array<{ name: string }>).map( + (i) => i.name + ); + return { + id, + name: workflow.name as string, + description: workflow.description as string, + category, + inputs, + workflow, + }; +} + +export const TEMPLATES: TemplateMetadata[] = [ + defineTemplate("hello-llm", "basics", helloLlm), + defineTemplate("text-transform", "basics", textTransform), + defineTemplate("llm-chain", "chains", llmChain), + defineTemplate("api-to-llm", "chains", apiToLlm), +]; + +export function getTemplateById(id: string): TemplateMetadata | undefined { + return TEMPLATES.find((t) => t.id === id); +} diff --git a/src/templates/llm-chain.json b/src/templates/llm-chain.json new file mode 100644 index 0000000..1c0a382 --- /dev/null +++ b/src/templates/llm-chain.json @@ -0,0 +1,66 @@ +{ + "name": "LLM Chain", + "description": "Two-step LLM pipeline: draft then refine.", + "nodes": [ + { + "id": "draft", + "type": "LLM", + "params": { + "model": "gpt-4o-mini", + "temperature": "0.8", + "max_tokens": "1024", + "system_prompt": "You are a helpful assistant. Write a first draft based on the prompt."
+ }, + "position": { "x": 300, "y": 200 } + }, + { + "id": "refine", + "type": "LLM", + "params": { + "model": "gpt-4o-mini", + "temperature": "0.3", + "max_tokens": "1024", + "system_prompt": "You are an editor. Refine and improve the following draft. Make it clearer and more concise." + }, + "position": { "x": 600, "y": 200 } + } + ], + "edges": [ + { + "source_id": "draft", + "source_key": "response", + "target_id": "refine", + "target_key": "prompt" + } + ], + "input_edges": [ + { + "input_key": "prompt", + "target_id": "draft", + "target_key": "prompt" + } + ], + "output_edges": [ + { + "source_id": "refine", + "source_key": "response", + "output_key": "response" + } + ], + "inputs": [ + { + "name": "prompt", + "type": "LONG_TEXT", + "display_name": "Prompt", + "description": "The initial prompt for the draft" + } + ], + "outputs": [ + { + "name": "response", + "type": "LONG_TEXT", + "display_name": "Refined Response", + "description": "The refined LLM response" + } + ] +} diff --git a/src/templates/text-transform.json b/src/templates/text-transform.json new file mode 100644 index 0000000..640c77f --- /dev/null +++ b/src/templates/text-transform.json @@ -0,0 +1,69 @@ +{ + "name": "Text Transform", + "description": "Transform text using custom instructions.", + "nodes": [ + { + "id": "input", + "type": "TextInput", + "params": {}, + "position": { "x": 200, "y": 200 } + }, + { + "id": "transform", + "type": "DataTransform", + "params": { + "instructions": "Transform the text as requested" + }, + "position": { "x": 500, "y": 200 } + } + ], + "edges": [ + { + "source_id": "input", + "source_key": "text", + "target_id": "transform", + "target_key": "input" + } + ], + "input_edges": [ + { + "input_key": "text", + "target_id": "input", + "target_key": "text" + }, + { + "input_key": "instructions", + "target_id": "transform", + "target_key": "instructions" + } + ], + "output_edges": [ + { + "source_id": "transform", + "source_key": "output", + "output_key": "result" + 
} + ], + "inputs": [ + { + "name": "text", + "type": "LONG_TEXT", + "display_name": "Input Text", + "description": "The text to transform" + }, + { + "name": "instructions", + "type": "TEXT", + "display_name": "Instructions", + "description": "Transformation instructions" + } + ], + "outputs": [ + { + "name": "result", + "type": "LONG_TEXT", + "display_name": "Result", + "description": "The transformed text" + } + ] +} diff --git a/src/utils/config.ts b/src/utils/config.ts index 854fa83..f0661f5 100644 --- a/src/utils/config.ts +++ b/src/utils/config.ts @@ -10,6 +10,8 @@ export interface AceConfig { default_model?: string; fabric_url?: string; fabric_api_key?: string; + python_path?: string; + venv_dir?: string; } export function getConfigPath(): string { diff --git a/src/utils/errors.ts b/src/utils/errors.ts new file mode 100644 index 0000000..3f66b00 --- /dev/null +++ b/src/utils/errors.ts @@ -0,0 +1,144 @@ +export interface ClassifiedError { + message: string; + suggestion?: string; +} + +/** + * Classify raw Python stderr/error output into human-readable messages. + */ +export function classifyPythonError(raw: string): ClassifiedError { + // ModuleNotFoundError — missing aceteam-nodes or dependencies + if (raw.includes("ModuleNotFoundError")) { + const moduleMatch = raw.match(/ModuleNotFoundError: No module named '([^']+)'/); + const moduleName = moduleMatch?.[1] ?? 
"unknown"; + if (moduleName.includes("aceteam_nodes")) { + return { + message: `Python module "aceteam_nodes" is not installed.`, + suggestion: "Run `ace init` to install dependencies.", + }; + } + return { + message: `Missing Python module: ${moduleName}`, + suggestion: "Run `ace init` to reinstall dependencies.", + }; + } + + // Authentication errors — missing or invalid API key + if ( + raw.includes("AuthenticationError") || + raw.includes("api_key") || + raw.includes("API key") || + raw.includes("Incorrect API key") + ) { + return { + message: "API authentication failed.", + suggestion: + "Set your API key: export OPENAI_API_KEY=sk-... or export ANTHROPIC_API_KEY=sk-ant-...", + }; + } + + // Pydantic validation errors + if (raw.includes("ValidationError") && raw.includes("validation error")) { + const fieldErrors = extractPydanticFields(raw); + if (fieldErrors.length > 0) { + return { + message: `Validation failed:\n${fieldErrors.map((f) => ` - ${f}`).join("\n")}`, + }; + } + return { + message: "Input validation failed. Check your workflow inputs.", + }; + } + + // Connection errors + if ( + raw.includes("ConnectionError") || + raw.includes("ConnectError") || + raw.includes("ECONNREFUSED") || + raw.includes("httpx.ConnectError") + ) { + return { + message: "Network connection failed.", + suggestion: "Check your internet connection and try again.", + }; + } + + // File not found + if ( + raw.includes("FileNotFoundError") || + raw.includes("WorkflowFileNotFoundError") + ) { + const pathMatch = raw.match(/(?:No such file or directory|not found)[:\s]*'?([^'\n]+)'?/); + return { + message: pathMatch + ? 
`File not found: ${pathMatch[1].trim()}` + : "Workflow file not found.", + suggestion: "Verify the file path and try again.", + }; + } + + // Timeout errors + if (raw.includes("TimeoutError") || raw.includes("timed out")) { + return { + message: "Operation timed out.", + suggestion: "Try again or check if the model endpoint is responding.", + }; + } + + // Rate limiting + if (raw.includes("RateLimitError") || raw.includes("rate_limit") || raw.includes("429")) { + return { + message: "Rate limited by the API provider.", + suggestion: "Wait a moment and try again.", + }; + } + + // Default: strip Python traceback, show last meaningful line + return { + message: extractLastMeaningfulLine(raw), + }; +} + +/** + * Extract field-level messages from Pydantic ValidationError output. + */ +function extractPydanticFields(raw: string): string[] { + const fields: string[] = []; + // Pydantic v2 format: " field_name\n Error message [type=..., ...]" + const lines = raw.split("\n"); + for (let i = 0; i < lines.length; i++) { + const line = lines[i]; + if (line.match(/^\s{2}\S/) && i + 1 < lines.length) { + const fieldName = line.trim(); + const errorLine = lines[i + 1]?.trim(); + if (errorLine && !errorLine.startsWith("For further")) { + fields.push(`${fieldName}: ${errorLine.replace(/\s*\[type=.*\]/, "")}`); + } + } + } + return fields; +} + +/** + * Strip Python traceback and return the last meaningful error line. 
+ */ +function extractLastMeaningfulLine(raw: string): string { + const lines = raw.trim().split("\n"); + + // Walk backwards to find the last non-traceback line + for (let i = lines.length - 1; i >= 0; i--) { + const line = lines[i].trim(); + if ( + line && + !line.startsWith("Traceback") && + !line.startsWith("File ") && + !line.startsWith("^") && + !line.startsWith("~~~") && + !line.startsWith("During handling") + ) { + return line; + } + } + + return raw.trim().slice(0, 200); +} diff --git a/src/utils/fabric.ts b/src/utils/fabric.ts index 56ac51f..2e8e057 100644 --- a/src/utils/fabric.ts +++ b/src/utils/fabric.ts @@ -1,33 +1,39 @@ +import { withRetry, type RetryOptions } from "./retry.js"; + export class FabricClient { private url: string; private apiKey: string; + private retryOptions: RetryOptions; - constructor(url: string, apiKey: string) { + constructor(url: string, apiKey: string, retryOptions: RetryOptions = {}) { this.url = url.replace(/\/+$/, ""); this.apiKey = apiKey; + this.retryOptions = retryOptions; } private async request( path: string, options: RequestInit = {} ): Promise { - const response = await fetch(`${this.url}${path}`, { - ...options, - headers: { - Authorization: `Bearer ${this.apiKey}`, - "Content-Type": "application/json", - ...options.headers, - }, - }); + return withRetry(async () => { + const response = await fetch(`${this.url}${path}`, { + ...options, + headers: { + Authorization: `Bearer ${this.apiKey}`, + "Content-Type": "application/json", + ...options.headers, + }, + }); - if (!response.ok) { - const body = await response.text(); - throw new Error( - `Fabric API error (${response.status}): ${body || response.statusText}` - ); - } + if (!response.ok) { + const body = await response.text(); + throw new Error( + `Fabric API error (${response.status}): ${body || response.statusText}` + ); + } - return response.json(); + return response.json(); + }, this.retryOptions); } async discover(capability?: string): Promise { diff --git 
a/src/utils/python.ts b/src/utils/python.ts index c61b3c0..1079516 100644 --- a/src/utils/python.ts +++ b/src/utils/python.ts @@ -1,4 +1,6 @@ import { execSync, spawn } from "node:child_process"; +import { existsSync } from "node:fs"; +import { join } from "node:path"; import which from "which"; /** @@ -10,17 +12,9 @@ export async function findPython(): Promise<string | null> { for (const name of candidates) { try { const resolved = await which(name); - const version = execSync(`${resolved} --version`, { - encoding: "utf-8", - }).trim(); - // Parse "Python 3.X.Y" - const match = version.match(/Python (\d+)\.(\d+)/); - if (match) { - const major = parseInt(match[1], 10); - const minor = parseInt(match[2], 10); - if (major === 3 && minor >= 12) { - return resolved; - } + const version = getPythonVersion(resolved); + if (version && version.major === 3 && version.minor >= 12) { + return resolved; } } catch { // Not found or can't execute @@ -30,6 +24,59 @@ export async function findPython(): Promise<string | null> { return null; } +export interface PythonVersion { + major: number; + minor: number; + patch: number; +} + +/** + * Get the Python version from a given executable path. + */ +export function getPythonVersion(pythonPath: string): PythonVersion | null { + try { + const output = execSync(`${pythonPath} --version`, { + encoding: "utf-8", + }).trim(); + const match = output.match(/Python (\d+)\.(\d+)\.(\d+)/); + if (match) { + return { + major: parseInt(match[1], 10), + minor: parseInt(match[2], 10), + patch: parseInt(match[3], 10), + }; + } + } catch { + // Can't execute + } + return null; +} + +/** + * Create a Python virtual environment. + */ +export function createVenv(pythonPath: string, venvDir: string): void { + execSync(`${pythonPath} -m venv ${venvDir}`, { stdio: "pipe" }); +} + +/** + * Get the Python executable path inside a virtual environment.
+ */ +export function getVenvPythonPath(venvDir: string): string { + if (process.platform === "win32") { + return join(venvDir, "Scripts", "python.exe"); + } + return join(venvDir, "bin", "python"); +} + +/** + * Check if a virtual environment exists and is valid. + */ +export function isVenvValid(venvDir: string): boolean { + const pythonPath = getVenvPythonPath(venvDir); + return existsSync(pythonPath); +} + /** * Check if aceteam-nodes is installed and importable. */ @@ -60,15 +107,74 @@ export interface RunResult { error?: string; } +export interface ProgressEvent { + type: "started" | "node_running" | "node_done" | "node_error"; + totalNodes?: number; + currentNode?: number; + nodeName?: string; + message?: string; +} + +/** + * Parse a stderr line into a structured progress event, if applicable. + */ +export function parseProgressLine(line: string): ProgressEvent | null { + // "Workflow started (N nodes)" + const startMatch = line.match(/Workflow started\s*\((\d+)\s*nodes?\)/i); + if (startMatch) { + return { + type: "started", + totalNodes: parseInt(startMatch[1], 10), + }; + } + + // "[NodeType] running..." + const runningMatch = line.match(/\[([^\]]+)\]\s*running/i); + if (runningMatch) { + return { + type: "node_running", + nodeName: runningMatch[1], + }; + } + + // "[NodeType] done" + const doneMatch = line.match(/\[([^\]]+)\]\s*done/i); + if (doneMatch) { + return { + type: "node_done", + nodeName: doneMatch[1], + }; + } + + // "[NodeType] error: message" + const errorMatch = line.match(/\[([^\]]+)\]\s*error:\s*(.*)/i); + if (errorMatch) { + return { + type: "node_error", + nodeName: errorMatch[1], + message: errorMatch[2], + }; + } + + return null; +} + +export interface RunOptions { + verbose?: boolean; + config?: string; + onProgress?: (event: ProgressEvent) => void; +} + /** * Run a workflow via Python subprocess. * Streams stderr (progress) and collects stdout (JSON result). + * Always passes --verbose to Python so progress events are available. 
*/ export function runWorkflow( pythonPath: string, filePath: string, input: Record<string, string>, - options: { verbose?: boolean; config?: string } = {} + options: RunOptions = {} ): Promise<RunResult> { return new Promise((resolve, reject) => { const args = [ @@ -78,11 +184,9 @@ filePath, "--input", JSON.stringify(input), + "--verbose", ]; - if (options.verbose) { - args.push("--verbose"); - } if (options.config) { args.push("--config", options.config); } @@ -93,6 +197,8 @@ let stdout = ""; let stderr = ""; + let completedNodes = 0; + let totalNodes = 0; proc.stdout.on("data", (data: Buffer) => { stdout += data.toString(); @@ -101,7 +207,32 @@ proc.stderr.on("data", (data: Buffer) => { const text = data.toString(); stderr += text; - // Stream progress messages to terminal + + // Parse progress events from each line + const lines = text.split("\n"); + for (const line of lines) { + const trimmed = line.trim(); + if (!trimmed) continue; + + const event = parseProgressLine(trimmed); + if (event) { + if (event.type === "started" && event.totalNodes) { + totalNodes = event.totalNodes; + } + if (event.type === "node_done") { + completedNodes++; + event.currentNode = completedNodes; + event.totalNodes = totalNodes; + } + if (event.type === "node_running") { + event.currentNode = completedNodes + 1; + event.totalNodes = totalNodes; + } + options.onProgress?.(event); + } + } + + // Stream raw stderr in verbose mode if (options.verbose) { process.stderr.write(text); } diff --git a/src/utils/retry.ts b/src/utils/retry.ts new file mode 100644 index 0000000..fa27294 --- /dev/null +++ b/src/utils/retry.ts @@ -0,0 +1,80 @@ +export interface RetryOptions { + maxRetries?: number; + baseDelayMs?: number; + maxDelayMs?: number; + /** Predicate to decide whether to retry on a given error. Defaults to retrying network/transient errors.
*/ + shouldRetry?: (error: unknown) => boolean; +} + +const RETRYABLE_STATUS_CODES = new Set([429, 502, 503, 504]); + +/** + * Default predicate: retry on network errors and transient HTTP status codes. + */ +function isRetryableError(error: unknown): boolean { + if (error instanceof TypeError && error.message.includes("fetch")) { + return true; // Network error (fetch failed) + } + + const message = error instanceof Error ? error.message : String(error); + + // Network errors + if ( + message.includes("ECONNREFUSED") || + message.includes("ECONNRESET") || + message.includes("ETIMEDOUT") || + message.includes("EAI_AGAIN") || + message.includes("fetch failed") + ) { + return true; + } + + // Retryable HTTP status codes from Fabric API errors + for (const code of RETRYABLE_STATUS_CODES) { + if (message.includes(`(${code})`)) { + return true; + } + } + + return false; +} + +/** + * Execute a function with exponential backoff retry logic. + */ +export async function withRetry<T>( + fn: () => Promise<T>, + options: RetryOptions = {} +): Promise<T> { + const { + maxRetries = 3, + baseDelayMs = 1000, + maxDelayMs = 10000, + shouldRetry = isRetryableError, + } = options; + + let lastError: unknown; + + for (let attempt = 0; attempt <= maxRetries; attempt++) { + try { + return await fn(); + } catch (error) { + lastError = error; + + if (attempt >= maxRetries || !shouldRetry(error)) { + throw error; + } + + // Exponential backoff with jitter + const delay = Math.min(baseDelayMs * 2 ** attempt, maxDelayMs); + const jitter = delay * 0.1 * Math.random(); + await sleep(delay + jitter); + } + } + + throw lastError; +} + +function sleep(ms: number): Promise<void> { + return new Promise((resolve) => setTimeout(resolve, ms)); +} diff --git a/tests/helpers/mocks.ts b/tests/helpers/mocks.ts new file mode 100644 index 0000000..fb76cb8 --- /dev/null +++ b/tests/helpers/mocks.ts @@ -0,0 +1,82 @@ +import { vi } from "vitest"; +import type { ChildProcess } from "node:child_process"; +import {
EventEmitter } from "node:events"; +import { Readable } from "node:stream"; + +/** + * Create a mock child process that emits events. + */ +export function createMockProcess(options?: { + stdout?: string; + stderr?: string; + exitCode?: number; +}): ChildProcess { + const proc = new EventEmitter() as ChildProcess; + + const mockStdout = new Readable({ read() {} }); + const mockStderr = new Readable({ read() {} }); + + proc.stdout = mockStdout as ChildProcess["stdout"]; + proc.stderr = mockStderr as ChildProcess["stderr"]; + proc.stdin = null; + proc.pid = 12345; + proc.killed = false; + proc.connected = false; + proc.exitCode = null; + proc.signalCode = null; + proc.spawnargs = []; + proc.spawnfile = ""; + proc.kill = vi.fn(); + proc.send = vi.fn(); + proc.disconnect = vi.fn(); + proc.unref = vi.fn(); + proc.ref = vi.fn(); + proc[Symbol.dispose] = vi.fn(); + + // Schedule data/close events + setTimeout(() => { + if (options?.stdout) { + mockStdout.push(options.stdout); + } + mockStdout.push(null); + + if (options?.stderr) { + mockStderr.push(options.stderr); + } + mockStderr.push(null); + + proc.emit("close", options?.exitCode ?? 0); + }, 0); + + return proc; +} + +/** + * Create a mock fetch response. + */ +export function createMockResponse(options: { + ok?: boolean; + status?: number; + statusText?: string; + body?: unknown; + text?: string; +}): Response { + return { + ok: options.ok ?? true, + status: options.status ?? 200, + statusText: options.statusText ?? "OK", + json: () => Promise.resolve(options.body ?? {}), + text: () => Promise.resolve(options.text ?? JSON.stringify(options.body ?? 
{})), + headers: new Headers(), + redirected: false, + type: "basic", + url: "", + clone: vi.fn(), + body: null, + bodyUsed: false, + arrayBuffer: vi.fn(), + blob: vi.fn(), + formData: vi.fn(), + bytes: vi.fn(), + } as unknown as Response; +} diff --git a/tests/integration/init-lifecycle.test.ts b/tests/integration/init-lifecycle.test.ts new file mode 100644 index 0000000..cf1e0ce --- /dev/null +++ b/tests/integration/init-lifecycle.test.ts @@ -0,0 +1,151 @@ +import { describe, it, expect, vi, beforeEach } from "vitest"; + +vi.mock("node:fs", () => ({ + existsSync: vi.fn(), + readFileSync: vi.fn(), + writeFileSync: vi.fn(), + mkdirSync: vi.fn(), +})); + +vi.mock("node:os", () => ({ + homedir: vi.fn(() => "/mock-home"), +})); + +vi.mock("node:child_process", () => ({ + execSync: vi.fn(), + spawn: vi.fn(), +})); + +vi.mock("which", () => ({ + default: vi.fn(), +})); + +import { existsSync } from "node:fs"; +import { execSync } from "node:child_process"; +import which from "which"; +import { + findPython, + getPythonVersion, + getVenvPythonPath, + isVenvValid, + isAceteamNodesInstalled, +} from "../../src/utils/python.js"; +import { loadConfig, saveConfig, getConfigPath } from "../../src/utils/config.js"; + +const mockExistsSync = vi.mocked(existsSync); +const mockExecSync = vi.mocked(execSync); +const mockWhich = vi.mocked(which); + +beforeEach(() => { + vi.clearAllMocks(); +}); + +describe("init lifecycle", () => { + describe("Python detection", () => { + it("finds Python 3.12+ via which", async () => { + mockWhich.mockResolvedValue("/usr/bin/python3" as never); + mockExecSync.mockReturnValue("Python 3.12.3\n"); + + const result = await findPython(); + expect(result).toBe("/usr/bin/python3"); + }); + + it("rejects Python < 3.12", async () => { + mockWhich.mockResolvedValue("/usr/bin/python3" as never); + mockExecSync.mockReturnValue("Python 3.10.12\n"); + + const result = await findPython(); + expect(result).toBeNull(); + }); + + it("returns null when no Python 
found", async () => { + mockWhich.mockRejectedValue(new Error("not found")); + + const result = await findPython(); + expect(result).toBeNull(); + }); + }); + + describe("getPythonVersion", () => { + it("parses Python version string", () => { + mockExecSync.mockReturnValue("Python 3.12.3\n"); + const version = getPythonVersion("/usr/bin/python3"); + expect(version).toEqual({ major: 3, minor: 12, patch: 3 }); + }); + + it("returns null for invalid output", () => { + mockExecSync.mockReturnValue("Not Python\n"); + const version = getPythonVersion("/usr/bin/python3"); + expect(version).toBeNull(); + }); + + it("returns null when exec fails", () => { + mockExecSync.mockImplementation(() => { + throw new Error("not found"); + }); + const version = getPythonVersion("/usr/bin/python3"); + expect(version).toBeNull(); + }); + }); + + describe("venv management", () => { + it("getVenvPythonPath returns correct path on Unix", () => { + const path = getVenvPythonPath("/home/user/.ace/venv"); + // On the test platform (linux), should use bin/python + if (process.platform === "win32") { + expect(path).toContain("Scripts"); + } else { + expect(path).toBe("/home/user/.ace/venv/bin/python"); + } + }); + + it("isVenvValid checks for python executable", () => { + mockExistsSync.mockReturnValue(true); + expect(isVenvValid("/home/user/.ace/venv")).toBe(true); + + mockExistsSync.mockReturnValue(false); + expect(isVenvValid("/home/user/.ace/venv")).toBe(false); + }); + }); + + describe("aceteam-nodes detection", () => { + it("returns true when import succeeds", () => { + mockExecSync.mockReturnValue(""); + expect(isAceteamNodesInstalled("/usr/bin/python3")).toBe(true); + }); + + it("returns false when import fails", () => { + mockExecSync.mockImplementation(() => { + throw new Error("ModuleNotFoundError"); + }); + expect(isAceteamNodesInstalled("/usr/bin/python3")).toBe(false); + }); + }); + + describe("config lifecycle", () => { + it("config path is under home directory", () => { + const 
path = getConfigPath(); + expect(path).toBe("/mock-home/.ace/config.yaml"); + }); + + it("loadConfig returns empty object when no file", () => { + mockExistsSync.mockReturnValue(false); + const config = loadConfig(); + expect(config).toEqual({}); + }); + + it("saveConfig preserves python_path and venv_dir", async () => { + mockExistsSync.mockReturnValue(true); + const config = { + default_model: "gpt-4o-mini", + python_path: "/home/user/.ace/venv/bin/python", + venv_dir: "/home/user/.ace/venv", + }; + saveConfig(config); + + // Verify writeFileSync was called + const { writeFileSync: ws } = await import("node:fs"); + expect(vi.mocked(ws)).toHaveBeenCalled(); + }); + }); +}); diff --git a/tests/integration/workflow-lifecycle.test.ts b/tests/integration/workflow-lifecycle.test.ts new file mode 100644 index 0000000..2f8d8ca --- /dev/null +++ b/tests/integration/workflow-lifecycle.test.ts @@ -0,0 +1,131 @@ +import { describe, it, expect, vi, beforeEach } from "vitest"; +import { existsSync, readFileSync, writeFileSync } from "node:fs"; +import { TEMPLATES, getTemplateById } from "../../src/templates/index.js"; + +describe("workflow lifecycle", () => { + describe("list-templates", () => { + it("has at least 4 bundled templates", () => { + expect(TEMPLATES.length).toBeGreaterThanOrEqual(4); + }); + + it("each template has required metadata", () => { + for (const template of TEMPLATES) { + expect(template.id).toBeTruthy(); + expect(template.name).toBeTruthy(); + expect(template.description).toBeTruthy(); + expect(template.category).toBeTruthy(); + expect(Array.isArray(template.inputs)).toBe(true); + expect(template.inputs.length).toBeGreaterThan(0); + } + }); + + it("each template has valid workflow structure", () => { + for (const template of TEMPLATES) { + const wf = template.workflow; + expect(wf).toHaveProperty("nodes"); + expect(wf).toHaveProperty("inputs"); + expect(wf).toHaveProperty("outputs"); + expect(wf).toHaveProperty("input_edges"); + 
expect(wf).toHaveProperty("output_edges");
+
+        const nodes = wf.nodes as Array<{ id: string; type: string }>;
+        expect(nodes.length).toBeGreaterThan(0);
+        for (const node of nodes) {
+          expect(node.id).toBeTruthy();
+          expect(node.type).toBeTruthy();
+        }
+      }
+    });
+
+    it("template IDs are unique", () => {
+      const ids = TEMPLATES.map((t) => t.id);
+      expect(new Set(ids).size).toBe(ids.length);
+    });
+  });
+
+  describe("getTemplateById", () => {
+    it("finds existing template", () => {
+      const template = getTemplateById("hello-llm");
+      expect(template).toBeDefined();
+      expect(template!.name).toBe("Hello LLM");
+    });
+
+    it("returns undefined for missing template", () => {
+      expect(getTemplateById("nonexistent")).toBeUndefined();
+    });
+  });
+
+  describe("create workflow from template", () => {
+    it("produces valid JSON from hello-llm template", () => {
+      const template = getTemplateById("hello-llm")!;
+      const workflow = structuredClone(template.workflow);
+
+      // Simulate customization
+      const nodes = workflow.nodes as Array<{
+        params: Record<string, unknown>;
+      }>;
+      nodes[0].params.model = "claude-3-haiku-20240307";
+
+      const json = JSON.stringify(workflow, null, 2);
+      const parsed = JSON.parse(json);
+
+      expect(parsed.nodes[0].params.model).toBe("claude-3-haiku-20240307");
+      expect(parsed.inputs).toBeDefined();
+      expect(parsed.outputs).toBeDefined();
+    });
+
+    it("produces valid JSON from llm-chain template", () => {
+      const template = getTemplateById("llm-chain")!;
+      const workflow = structuredClone(template.workflow);
+
+      const json = JSON.stringify(workflow, null, 2);
+      const parsed = JSON.parse(json);
+
+      expect(parsed.nodes).toHaveLength(2);
+      expect(parsed.edges).toHaveLength(1);
+      // Verify edge connects draft to refine
+      expect(parsed.edges[0].source_id).toBe("draft");
+      expect(parsed.edges[0].target_id).toBe("refine");
+    });
+
+    it("each template can be serialized and deserialized", () => {
+      for (const template of TEMPLATES) {
+        const cloned =
structuredClone(template.workflow); + const json = JSON.stringify(cloned); + const parsed = JSON.parse(json); + + expect(parsed.nodes).toEqual(cloned.nodes); + expect(parsed.inputs).toEqual(cloned.inputs); + expect(parsed.outputs).toEqual(cloned.outputs); + } + }); + }); + + describe("validate workflow structure", () => { + it("rejects workflow without nodes", () => { + const wf = { inputs: [], outputs: [] }; + expect(wf).not.toHaveProperty("nodes"); + }); + + it("rejects workflow without inputs", () => { + const wf = { nodes: [], outputs: [] }; + expect(wf).not.toHaveProperty("inputs"); + }); + + it("rejects workflow without outputs", () => { + const wf = { nodes: [], inputs: [] }; + expect(wf).not.toHaveProperty("outputs"); + }); + + it("accepts valid workflow structure", () => { + const template = getTemplateById("hello-llm")!; + const wf = template.workflow; + + expect(wf).toHaveProperty("nodes"); + expect(wf).toHaveProperty("inputs"); + expect(wf).toHaveProperty("outputs"); + expect(wf).toHaveProperty("input_edges"); + expect(wf).toHaveProperty("output_edges"); + }); + }); +}); diff --git a/tests/utils/errors.test.ts b/tests/utils/errors.test.ts new file mode 100644 index 0000000..5e6c10a --- /dev/null +++ b/tests/utils/errors.test.ts @@ -0,0 +1,98 @@ +import { describe, it, expect } from "vitest"; +import { classifyPythonError } from "../../src/utils/errors.js"; + +describe("classifyPythonError", () => { + it("classifies ModuleNotFoundError for aceteam_nodes", () => { + const raw = `Traceback (most recent call last): + File "/usr/lib/python3.12/runpy.py", line 198, in _run_module_as_main + return _run_code(code, main_globals, None, +ModuleNotFoundError: No module named 'aceteam_nodes'`; + + const result = classifyPythonError(raw); + expect(result.message).toContain("aceteam_nodes"); + expect(result.suggestion).toContain("ace init"); + }); + + it("classifies ModuleNotFoundError for other modules", () => { + const raw = `ModuleNotFoundError: No module named 
'torch'`; + const result = classifyPythonError(raw); + expect(result.message).toContain("torch"); + expect(result.suggestion).toContain("ace init"); + }); + + it("classifies AuthenticationError", () => { + const raw = `litellm.AuthenticationError: Incorrect API key provided`; + const result = classifyPythonError(raw); + expect(result.message).toContain("authentication"); + expect(result.suggestion).toContain("OPENAI_API_KEY"); + }); + + it("classifies missing API key", () => { + const raw = `Error: api_key must be set either as an environment variable or passed as an argument`; + const result = classifyPythonError(raw); + expect(result.message).toContain("authentication"); + }); + + it("classifies Pydantic ValidationError", () => { + const raw = `pydantic.ValidationError: 2 validation errors for WorkflowInput + prompt + Field required [type=missing, ...] + temperature + Input should be a valid number [type=float_type, ...]`; + + const result = classifyPythonError(raw); + expect(result.message).toContain("Validation failed"); + expect(result.message).toContain("prompt"); + }); + + it("classifies ConnectionError", () => { + const raw = `ConnectionError: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded`; + const result = classifyPythonError(raw); + expect(result.message).toContain("connection"); + expect(result.suggestion).toContain("internet"); + }); + + it("classifies httpx.ConnectError", () => { + const raw = `httpx.ConnectError: [Errno 111] Connection refused`; + const result = classifyPythonError(raw); + expect(result.message).toContain("connection"); + }); + + it("classifies FileNotFoundError", () => { + const raw = `FileNotFoundError: [Errno 2] No such file or directory: 'workflow.json'`; + const result = classifyPythonError(raw); + expect(result.message).toContain("not found"); + expect(result.suggestion).toContain("file path"); + }); + + it("classifies TimeoutError", () => { + const raw = `TimeoutError: Operation timed out after 30 
seconds`; + const result = classifyPythonError(raw); + expect(result.message).toContain("timed out"); + }); + + it("classifies RateLimitError", () => { + const raw = `RateLimitError: You have exceeded your rate limit`; + const result = classifyPythonError(raw); + expect(result.message).toContain("Rate limited"); + expect(result.suggestion).toContain("Wait"); + }); + + it("strips traceback for unknown errors", () => { + const raw = `Traceback (most recent call last): + File "something.py", line 42, in main + do_stuff() + File "other.py", line 10, in do_stuff + raise ValueError("bad input") +ValueError: bad input`; + + const result = classifyPythonError(raw); + expect(result.message).toBe("ValueError: bad input"); + expect(result.message).not.toContain("Traceback"); + }); + + it("handles empty input", () => { + const result = classifyPythonError(""); + expect(result.message).toBeDefined(); + }); +}); diff --git a/tests/utils/retry.test.ts b/tests/utils/retry.test.ts new file mode 100644 index 0000000..5ab1206 --- /dev/null +++ b/tests/utils/retry.test.ts @@ -0,0 +1,117 @@ +import { describe, it, expect, vi, beforeEach, afterEach } from "vitest"; +import { withRetry } from "../../src/utils/retry.js"; + +beforeEach(() => { + vi.useFakeTimers(); +}); + +afterEach(() => { + vi.useRealTimers(); +}); + +describe("withRetry", () => { + it("returns result on first success", async () => { + const fn = vi.fn().mockResolvedValue("ok"); + const result = await withRetry(fn); + expect(result).toBe("ok"); + expect(fn).toHaveBeenCalledTimes(1); + }); + + it("retries on network error and succeeds", async () => { + const fn = vi.fn() + .mockRejectedValueOnce(new Error("ECONNREFUSED")) + .mockResolvedValue("ok"); + + const promise = withRetry(fn, { maxRetries: 3, baseDelayMs: 100 }); + + // Advance past the first retry delay + await vi.advanceTimersByTimeAsync(200); + const result = await promise; + + expect(result).toBe("ok"); + expect(fn).toHaveBeenCalledTimes(2); + }); + + 
it("retries on 429 status code errors", async () => { + const fn = vi.fn() + .mockRejectedValueOnce(new Error("Fabric API error (429): Too Many Requests")) + .mockResolvedValue("ok"); + + const promise = withRetry(fn, { maxRetries: 3, baseDelayMs: 100 }); + await vi.advanceTimersByTimeAsync(200); + const result = await promise; + + expect(result).toBe("ok"); + expect(fn).toHaveBeenCalledTimes(2); + }); + + it("retries on 502/503/504 status codes", async () => { + const fn = vi.fn() + .mockRejectedValueOnce(new Error("Fabric API error (502): Bad Gateway")) + .mockRejectedValueOnce(new Error("Fabric API error (503): Service Unavailable")) + .mockResolvedValue("ok"); + + const promise = withRetry(fn, { maxRetries: 3, baseDelayMs: 100 }); + await vi.advanceTimersByTimeAsync(500); + const result = await promise; + + expect(result).toBe("ok"); + expect(fn).toHaveBeenCalledTimes(3); + }); + + it("throws after max retries exhausted", async () => { + vi.useRealTimers(); // Use real timers to avoid unhandled rejection timing issues + + const fn = vi.fn().mockImplementation(() => Promise.reject(new Error("ECONNREFUSED"))); + + await expect( + withRetry(fn, { maxRetries: 2, baseDelayMs: 10, maxDelayMs: 20 }) + ).rejects.toThrow("ECONNREFUSED"); + expect(fn).toHaveBeenCalledTimes(3); // initial + 2 retries + + vi.useFakeTimers(); // Restore for afterEach + }); + + it("does not retry on non-retryable errors", async () => { + const fn = vi.fn().mockRejectedValue(new Error("Fabric API error (403): Forbidden")); + + await expect( + withRetry(fn, { maxRetries: 3, baseDelayMs: 100 }) + ).rejects.toThrow("403"); + expect(fn).toHaveBeenCalledTimes(1); + }); + + it("respects custom shouldRetry predicate", async () => { + const fn = vi.fn() + .mockRejectedValueOnce(new Error("custom-retryable")) + .mockResolvedValue("ok"); + + const promise = withRetry(fn, { + maxRetries: 3, + baseDelayMs: 100, + shouldRetry: (err) => + err instanceof Error && err.message.includes("custom-retryable"), + 
}); + + await vi.advanceTimersByTimeAsync(200); + const result = await promise; + + expect(result).toBe("ok"); + expect(fn).toHaveBeenCalledTimes(2); + }); + + it("applies exponential backoff with delay growth", async () => { + const fn = vi.fn() + .mockRejectedValueOnce(new Error("ECONNREFUSED")) + .mockRejectedValueOnce(new Error("ECONNREFUSED")) + .mockResolvedValue("ok"); + + // baseDelay=100: attempt 0 delay = 100ms, attempt 1 delay = 200ms (+ jitter) + const promise = withRetry(fn, { maxRetries: 3, baseDelayMs: 100, maxDelayMs: 10000 }); + await vi.advanceTimersByTimeAsync(500); + const result = await promise; + + expect(result).toBe("ok"); + expect(fn).toHaveBeenCalledTimes(3); + }); +}); diff --git a/tsconfig.json b/tsconfig.json index e38ae70..148790b 100644 --- a/tsconfig.json +++ b/tsconfig.json @@ -10,7 +10,8 @@ "outDir": "dist", "rootDir": "src", "declaration": true, - "sourceMap": true + "sourceMap": true, + "resolveJsonModule": true }, "include": ["src/**/*"] } diff --git a/vitest.config.ts b/vitest.config.ts new file mode 100644 index 0000000..fc7eb34 --- /dev/null +++ b/vitest.config.ts @@ -0,0 +1,13 @@ +import { defineConfig } from "vitest/config"; + +export default defineConfig({ + test: { + globals: true, + testTimeout: 10000, + coverage: { + provider: "v8", + include: ["src/**/*.ts"], + exclude: ["src/index.ts"], + }, + }, +});