38 changes: 33 additions & 5 deletions .dockerignore
@@ -1,5 +1,33 @@
-.vscode
-node_modules
-bots/*/
-!bots/*
-keys.json
+# Runtime bot data — large and should never be in the image
+bots/*/logs/
+bots/*/histories/
+bots/*/action-code/
+bots/*/ensemble_log.json
+
+# Git history
+.git/
+
+# Local secrets / keys
+keys.json
+.env
+.env.*
+
+# Node dev artifacts
+node_modules/
+
+# Editor / OS
+.vscode/
+*.DS_Store
+Thumbs.db
+
+# Tasks output
+tasks/**/__pycache__/
+tasks/**/*.pyc
+
+# AWS deploy scripts (not needed in container)
+aws/
+
+# Docs not needed at runtime
+docs/
+*.md
+!README.md
14 changes: 14 additions & 0 deletions .env.example
@@ -0,0 +1,14 @@
# Minecraft Server Configuration
RCON_PASSWORD=your_rcon_password_here

# LiteLLM Configuration (required for docker-compose validation)
LITELLM_MASTER_KEY=your_litellm_key_here

# EC2 Deployment Configuration
EC2_PUBLIC_IP=your_ec2_ip_here

# Discord Bot (optional - leave blank if not using)
DISCORD_BOT_TOKEN=
DISCORD_ADMIN_IDS=
BOT_DM_CHANNEL=
BACKUP_CHAT_CHANNEL=
28 changes: 28 additions & 0 deletions .github/workflows/ci.yml
@@ -0,0 +1,28 @@
name: CI

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]

jobs:
  lint:
    name: ESLint
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm install --no-audit --no-fund
        env:
          HUSKY: '0'

      - name: Run ESLint (0-warning tolerance)
        run: npm run lint
40 changes: 40 additions & 0 deletions .github/workflows/deploy.yml
@@ -0,0 +1,40 @@
name: Deploy to EC2

on:
  push:
    branches: [main]

concurrency:
  group: deploy-ec2
  cancel-in-progress: true

jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    environment: production

    steps:
      - uses: actions/checkout@v4

      - name: Write SSH key
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.EC2_SSH_KEY }}" > ~/.ssh/mindcraft-ec2.pem
          chmod 600 ~/.ssh/mindcraft-ec2.pem
          ssh-keyscan -H "${{ secrets.EC2_HOST }}" >> ~/.ssh/known_hosts

      - name: Deploy on EC2
        env:
          EC2_HOST: ${{ secrets.EC2_HOST }}
          EC2_USER: ${{ secrets.EC2_USER }}
        run: |
          ssh -i ~/.ssh/mindcraft-ec2.pem \
            -o StrictHostKeyChecking=no \
            -o ConnectTimeout=30 \
            "${EC2_USER}@${EC2_HOST}" \
            'cd /app && git fetch origin main && git reset --hard origin/main && bash aws/ec2-go.sh'

      - name: Clean up SSH key
        if: always()
        run: rm -f ~/.ssh/mindcraft-ec2.pem
24 changes: 24 additions & 0 deletions .gitignore
@@ -28,3 +28,27 @@ tasks/construction_tasks/train/**
server_data*
**/.DS_Store
src/mindcraft-py/__pycache__/
.dockerignore

# Environment variables and secrets
.env
.env.local
.env.*.local
*.pem
*.key
*.p12
*.pfx
secrets.json

# Claude Code
.claude/

# Minecraft server data
minecraft-data/

# Binary assets (hosted on GitHub Releases)
tasks/construction_tasks/*.pdf

# AWS runtime config
aws/config.env
docker-compose.override.yml
4 changes: 4 additions & 0 deletions .husky/pre-commit
@@ -0,0 +1,4 @@
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

npx lint-staged
111 changes: 111 additions & 0 deletions CLAUDE.md
@@ -0,0 +1,111 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

Mindcraft is an AI-powered Minecraft bot framework (research fork v0.1.3) where multiple LLMs play Minecraft autonomously. This fork adds a **Hybrid Research Rig** with:
- **CloudGrok** — cloud ensemble bot: 4 panel models (Gemini + Grok) voted by a heuristic arbiter + optional LLM judge; always-on on EC2
- **DragonSlayer** — local GPU bot (currently active): `sweaterdog/andy-4:q8_0` via Ollama on RTX 3090; autonomous Ender Dragon speedrun with RC29 persistent state
- **LocalAndy** — local GPU bot (dormant): `sweaterdog/andy-4` via Ollama; research/exploration profile
- All bots connect to one persistent Minecraft server on AWS EC2 (Paper 1.21.11) with ChromaDB-backed memory

This is an **ES module** project (`"type": "module"` in package.json). Use `import`/`export`, not `require`.

## Commands

```bash
npm install # Install deps (runs patch-package postinstall automatically)
npm start # Start bots: node main.js
npm run lint # ESLint with 0-warning tolerance (enforced pre-commit via husky)
npm test # No-op (no tests configured)

# Run DragonSlayer (current active local bot — connects to EC2 server)
node main.js --profiles ./profiles/dragon-slayer.json

# Run a specific bot profile
node main.js --profiles ./profiles/ensemble.json

# Run with task automation
node main.js --task_path tasks/basic/single_agent.json --task_id gather_oak_logs

# EC2 deployment (run on EC2 instance)
bash aws/ec2-go.sh # Quick restart (code pull + rebuild)
bash aws/ec2-go.sh --full # Full redeploy (+ secrets from SSM)
bash aws/ec2-go.sh --secrets # Refresh API keys from AWS SSM only

# Windows launcher
.\start.ps1 both # Start both bots
.\start.ps1 local -Detach # Start local bot detached
.\start.ps1 stop # Stop all
```

## Architecture

### Entry Point & Agent Lifecycle
`main.js` → parses profiles from `settings.js` → spawns one `Agent` per profile via `src/process/agent_process.js` → each agent connects to Minecraft via mineflayer.

### Core Agent Loop (`src/agent/`)
- **`agent.js`** — Main `Agent` class: event handling, conversation loop (`promptConvo()`), mode management
- **`action_manager.js`** — Validates and executes `!commands` (e.g., `!collectBlocks`, `!goToPlayer`)
- **`conversation.js`** — Chat message routing; inter-bot messaging protocol
- **`coder.js`** — JavaScript code execution in an SES sandbox
- **`modes.js`** — Behavioral modes: survival, cowardice, hunting, etc.
- **`history.js`** / **`memory_bank.js`** / **`learnings.js`** — Persistent memory across sessions
- **`library/skills.js`** — All in-game action implementations (~89k LOC)
- **`library/world.js`** — World navigation and block/entity queries

### Ensemble Decision Pipeline (`src/ensemble/`)
The `EnsembleModel` class in `controller.js` runs a four-stage decision pipeline on every LLM call:
1. **Panel** (`panel.js`) — Queries all 4 panel models in parallel
2. **Arbiter** (`arbiter.js`) — Scores responses heuristically (length, completeness, action quality, latency). If top 2 scores are within 0.08 margin, escalates to Judge.
3. **Judge** (`judge.js`) — LLM-as-Judge (Gemini Flash) picks the best response; 10s timeout
4. **ChromaDB Memory** (`feedback.js`) — Embeds recent context, retrieves similar past decisions (similarity > 0.6), injects as `[PAST EXPERIENCE]` before panel queries, then logs outcome for future retrieval

Every ensemble decision is written to `bots/{BotName}/ensemble_log.json`.
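The arbiter's escalation rule described above can be sketched roughly as follows (illustrative only; the names and data shapes are assumptions, not the actual `src/ensemble/arbiter.js` code):

```javascript
// Illustrative sketch of the arbiter's escalation rule (assumed shapes;
// see src/ensemble/arbiter.js for the real scoring).
const JUDGE_MARGIN = 0.08; // escalate to the Judge when top-2 scores are this close

function pickWinner(scored) {
  // scored: [{ model, score }] with higher scores better
  const ranked = [...scored].sort((a, b) => b.score - a.score);
  const [best, runnerUp] = ranked;
  const needsJudge =
    runnerUp !== undefined && best.score - runnerUp.score < JUDGE_MARGIN;
  return { winner: best.model, needsJudge };
}
```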

### Model Abstraction (`src/models/`)
- **`_model_map.js`** — Dynamically discovers all provider modules
- **`prompter.js`** — Unified prompt builder: injects `$MEMORY`, `$INVENTORY`, `$STATS`, `$EXAMPLES` into system prompt
- **`{provider}.js`** — 23 provider implementations (gpt, gemini, grok, claude, ollama, etc.)

Model routing: a string like `"gemini-2.5-pro"` is auto-matched to its provider; `"openrouter/google/gemini-2.5-pro"` uses explicit routing; profile can also pass an object `{ api, model, url, params }`.
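The three routing forms can be sketched like this (an illustrative stand-in, not the actual `_model_map.js` logic; the provider list is a stub):

```javascript
// Illustrative stand-in for profile model routing (not the actual
// _model_map.js implementation; the provider list is a stub).
const EXPLICIT_PROVIDERS = ['openrouter', 'ollama'];

function resolveModel(spec) {
  // Object form from a profile: { api, model, url, params } is used as-is.
  if (typeof spec === 'object') return spec;
  // Explicit routing, e.g. "openrouter/google/gemini-2.5-pro".
  const prefix = spec.split('/')[0];
  if (spec.includes('/') && EXPLICIT_PROVIDERS.includes(prefix)) {
    return { api: prefix, model: spec.slice(prefix.length + 1) };
  }
  // Bare model name: auto-match a provider from the string itself.
  if (spec.startsWith('gemini')) return { api: 'gemini', model: spec };
  return { api: 'unknown', model: spec };
}
```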

### Configuration
- **`settings.js`** — Global defaults. Override any key via `SETTINGS_JSON` env var (prototype-pollution protected). API keys in `.env` take priority over `keys.json`.
- **`profiles/*.json`** — Per-bot personality, model selection, system prompts, and per-profile `blocked_actions`. Profiles inherit from `profiles/defaults/{base_profile}.json`.
- **`src/utils/keys.js`** — Loads API keys; env vars always override `keys.json`.

### Web UI & Multi-Agent
- **`src/mindcraft/mindserver.js`** — WebSocket server on port 8080; hosts HUD overlay and bot registry
- **`src/mindcraft/public/`** — Frontend HUD with per-bot runtime, goal, and command log
- Multiple bots share one MindServer. Inter-bot messaging uses `!startConversation()` protocol and an alias system (`/msg gk` → `Grok_En`).

### Security Guards (do not remove)
- `src/utils/message_validator.js` — Injection detection and char sanitization on all chat input
- `src/utils/rate_limiter.js` — Per-user rate limiting
- `settings.js` `deepSanitize()` — Prototype pollution guard on `SETTINGS_JSON`
- `discord-bot.js` — Path traversal guard on profile loading; command injection detection
- `allow_insecure_coding: false` by default (controls `!newAction` code execution)
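The `deepSanitize()` guard mentioned above might look roughly like this (an assumed sketch, not the code in `settings.js`):

```javascript
// Assumed sketch of a prototype-pollution guard like the one settings.js
// applies to SETTINGS_JSON (not the repo's actual implementation).
const BANNED_KEYS = new Set(['__proto__', 'constructor', 'prototype']);

function deepSanitize(value) {
  if (Array.isArray(value)) return value.map(deepSanitize);
  if (value === null || typeof value !== 'object') return value;
  const clean = {};
  for (const key of Object.keys(value)) {
    if (BANNED_KEYS.has(key)) continue; // drop keys that can poison prototypes
    clean[key] = deepSanitize(value[key]);
  }
  return clean;
}
```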

## Key Configuration Notes

- **Node.js**: v18+ required; v20 LTS recommended; v24+ may cause issues
- **Minecraft version**: Set `minecraft_version` in `settings.js` (default `"auto"` for up to v1.21.6)
- **EC2 server**: set `host` in `settings.js` to your EC2 public IP; `port: 19565` (non-default external port, internal 25565)
- **Docker host**: `"host": "minecraft-server"` is the Docker service name; change to `"localhost"` for non-Docker runs
- **Vision**: Requires `LIBGL_ALWAYS_SOFTWARE=1` and Xvfb (only works in Docker); prismarine-viewer canvas bindings broken on Windows
- **Active local profile**: `profiles/dragon-slayer.json` — DragonSlayer bot with `sweaterdog/andy-4:q8_0` via Ollama
- **Ensemble profile**: `profiles/ensemble.json` — CloudGrok config with 4-panel voting (runs on EC2)
- **Whitelist**: `ENFORCE_WHITELIST=TRUE`; `whitelist.json` mounted into container with pre-built offline UUIDs. **Do not** use the `WHITELIST` env var — it queries Playerdb and crashes for offline-mode bot names.
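The "pre-built offline UUIDs" follow the standard Minecraft offline-mode convention: a version-3 (MD5, name-based) UUID over `OfflinePlayer:<name>`. A sketch of that derivation (this is the general convention, not code from this repo):

```javascript
import crypto from 'node:crypto';

// Standard Minecraft offline-mode UUID derivation, equivalent to Java's
// UUID.nameUUIDFromBytes over "OfflinePlayer:<name>" (convention, not repo code).
function offlineUUID(name) {
  const hash = crypto.createHash('md5')
    .update('OfflinePlayer:' + name, 'utf8').digest();
  hash[6] = (hash[6] & 0x0f) | 0x30; // version 3 (name-based, MD5)
  hash[8] = (hash[8] & 0x3f) | 0x80; // RFC 4122 variant
  const hex = hash.toString('hex');
  return [hex.slice(0, 8), hex.slice(8, 12), hex.slice(12, 16),
          hex.slice(16, 20), hex.slice(20)].join('-');
}
```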

## Deployment Topologies

| Mode | Compose File | Notes |
|------|-------------|-------|
| Local dev | `docker-compose.yml` | Ollama on host via `host.docker.internal:11434` |
| EC2 production | `docker-compose.aws.yml` | Includes LiteLLM proxy (:4000), ChromaDB, Tailscale sidecar |
| Local bot → EC2 server | `settings.js` | set `host` to EC2 public IP, `port: 19565`; bot on Windows, server on EC2 |

AWS secrets managed via SSM Parameter Store; `aws/ec2-go.sh --secrets` pulls and writes them.
12 changes: 11 additions & 1 deletion Dockerfile
@@ -27,9 +27,19 @@ RUN apt-get update && \

 WORKDIR /app
 
-COPY package*.json .
+# Copy package files and patches for better caching
+COPY package*.json ./
+COPY patches/ ./patches/
 RUN npm install
 
+# Copy source code
 COPY . .
 
+# Run tests during build
+RUN npm test
+
+# Drop root privileges — node:slim includes a non-root 'node' user
+RUN chown -R node:node /app
+USER node
+
 CMD ["npm", "start"]
8 changes: 6 additions & 2 deletions FAQ.md
@@ -1,4 +1,5 @@
 # Common Issues
+
 - `Error: connect ECONNREFUSED`: Minecraft refused to connect with mindcraft program. Most likely due to:
   - you have not opened your game to LAN in game settings
   - your LAN port is incorrect, make sure the one you enter in game is the same as specified in `settings.js`
@@ -11,12 +12,14 @@
 - **`npm install` fails with Python or C++ build errors**: This typically happens when building native modules like `gl`. Common solutions:
   - **Python not found** (macOS/Linux): If you see `python: command not found`, create a symlink: `sudo ln -s $(which python3) /usr/local/bin/python`
   - **C++20 errors or Node version issues**: If you see `"C++20 or later required"` errors, you're likely using Node v24 or newer. The `gl` package requires Node LTS (v18 or v20). Switch versions using:
+
   ```bash
   nvm install 20
   nvm use 20
   rm -rf node_modules package-lock.json
   npm install
   ```
+
   - **Skip optional packages**: If you don't need the vision feature (disabled by default), you can skip the problematic `gl` package: `npm install --no-optional`
 
 - `My brain disconnected, try again`: Something is wrong with the LLM api. You may have the wrong API key, exceeded your rate limits, or other. Check the program outputs for more details.
@@ -28,9 +31,10 @@
 - `Why I added the api key but still prompted that the key can't be found?`
   - Possible reason 1: Did not modify keys.example.json to keys.json.
   - Possible reason 2: If you use vscode to edit, you need to `ctrl+s` to save the file for the changes to take effect.
-  - Possible reason 3: Not setting the code path correctly in setting.js, use andy.js by default. 
+  - Possible reason 3: Not setting the code path correctly in setting.js, use andy.js by default.
 
-# Common Questions
+## Common Questions
+
 - Mod Support? Mindcraft only supports client-side mods like optifine and sodium, though they can be tricky to set up. Mods that change minecraft game mechanics are not supported.
 
 - Texture Packs? Apparently these cause issues and refuse to connect. Not sure why