
Agent Dashboard - System Design and Technical Reference

Architectural overview and technical reference for the Agent Dashboard system, covering design goals, high-level architecture, data flow, server and client components, database design, WebSocket protocol, hook integration, MCP extension layer, state management, security considerations, performance characteristics, deployment modes, and technology choices.




System Overview

Agent Dashboard is a local-first monitoring platform for Claude Code sessions. It captures agent lifecycle events via Claude Code's native hook system, persists them in SQLite, and presents them through a React dashboard with real-time WebSocket updates.

C4Context
    title System Context Diagram

    Person(user, "Developer", "Uses Claude Code CLI")
    System(claude, "Claude Code", "AI coding assistant with hook system")
    System(dashboard, "Agent Dashboard", "Monitoring platform")
    SystemDb(sqlite, "SQLite", "Persistent storage")

    Rel(user, claude, "Interacts with")
    Rel(claude, dashboard, "Sends hook events via stdin + HTTP")
    Rel(user, dashboard, "Views in browser")
    Rel(dashboard, sqlite, "Reads/writes")

Design goals:

  • Zero-config operation -- auto-discovers sessions from hook events
  • Never block Claude Code -- hooks fail silently with timeouts
  • Instant feedback -- WebSocket push, no polling
  • Portable -- SQLite, no external services, runs on any OS with Node.js 18+

High-Level Architecture

graph TB
    subgraph "Claude Code Process"
        CC[Claude Code CLI]
        H0[SessionStart Hook]
        H1[PreToolUse Hook]
        H2[PostToolUse Hook]
        H3[Stop Hook]
        H4[SubagentStop Hook]
        H5[Notification Hook]
        H6[SessionEnd Hook]
        CC --> H0 & H1 & H2 & H3 & H4 & H5 & H6
    end

    subgraph "Hook Layer"
        HH["hook-handler.js<br/>(stdin → HTTP)"]
        H0 & H1 & H2 & H3 & H4 & H5 & H6 -->|stdin JSON| HH
    end

    subgraph "Server Process (port 4820)"
        direction TB
        EX[Express Server]
        HR[Hook Router]
        SR[Session Router]
        AR[Agent Router]
        ER[Event Router]
        STR[Stats Router]
        DB[(SQLite<br/>WAL mode)]
        WSS[WebSocket Server]

        EX --> HR & SR & AR & ER & STR
        HR -->|transaction| DB
        SR & AR & ER & STR --> DB
        HR -->|broadcast| WSS
        SR & AR -->|broadcast| WSS
    end

    subgraph "Client (Browser)"
        direction TB
        VITE[Vite Dev Server<br/>or Static Files]
        APP[React App]
        WS_CLIENT[WebSocket Client]
        EB[Event Bus]
        PAGES[Pages:<br/>Dashboard / Kanban /<br/>Sessions / Activity /<br/>Workflows]

        VITE --> APP
        APP --> WS_CLIENT
        WS_CLIENT --> EB
        EB --> PAGES
        PAGES -->|fetch| EX
    end

    HH -->|"POST /api/hooks/event"| HR
    WSS -->|push messages| WS_CLIENT

    style CC fill:#6366f1,stroke:#818cf8,color:#fff
    style DB fill:#003B57,stroke:#005f8a,color:#fff
    style WSS fill:#10b981,stroke:#34d399,color:#fff
    style EB fill:#f59e0b,stroke:#fbbf24,color:#000

Data Flow

Event Ingestion Pipeline

sequenceDiagram
    participant CC as Claude Code
    participant HH as hook-handler.js
    participant API as POST /api/hooks/event
    participant TX as SQLite Transaction
    participant WS as WebSocket.broadcast()
    participant UI as React Client

    CC->>HH: stdin: {"session_id":"abc","tool_name":"Bash",...}
    Note over HH: Reads stdin, parses JSON,<br/>wraps with hook_type

    HH->>API: POST {"hook_type":"PreToolUse","data":{...}}
    Note over API: Validates hook_type + data

    API->>TX: BEGIN TRANSACTION
    TX->>TX: ensureSession(session_id)
    Note over TX: Creates session + main agent<br/>if first contact

    TX->>TX: Process by hook_type
    Note over TX: SessionStart → create/reactivate session,<br/>abandon stale sessions (5+ min idle)<br/>PreToolUse → set agent working<br/>PostToolUse → clear current_tool<br/>Stop → main agent idle (non-tool turns too)<br/>SubagentStop → mark subagent done<br/>SessionEnd → mark all completed<br/>Every event → detect compaction from JSONL,<br/>create Compaction agent + event if new

    TX->>TX: insertEvent(...)
    TX->>TX: COMMIT

    API->>WS: broadcast("agent_updated", agent)
    API->>WS: broadcast("new_event", event)

    WS->>UI: {"type":"agent_updated","data":{...}}
    UI->>UI: eventBus.publish(msg)
    UI->>UI: Page re-renders with new data
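The per-hook agent transitions in the pipeline above can be sketched as a pure function. This is a hypothetical in-memory reduction; the real handler applies these updates inside the SQLite transaction and also inserts the event row and broadcasts over WebSocket.

```javascript
// Hypothetical reduction of the agent state machine (in-memory only;
// the real route persists and broadcasts as part of the transaction).
function applyHook(agent, hookType, data = {}) {
  const next = { ...agent };
  const now = new Date().toISOString();
  switch (hookType) {
    case "PreToolUse":      // agent picked up a tool
      next.status = "working";
      next.current_tool = data.tool_name ?? null;
      break;
    case "PostToolUse":     // tool finished; agent may keep working
      next.current_tool = null;
      break;
    case "Stop":            // main agent turn ended (tool or non-tool)
      next.status = "idle";
      next.current_tool = null;
      break;
    case "SubagentStop":    // subagent is done
      next.status = "completed";
      next.ended_at = now;
      break;
    case "SessionEnd":      // everything in the session completes
      next.status = "completed";
      next.ended_at = now;
      break;
  }
  return next;
}
```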

Client Data Loading Pattern

sequenceDiagram
    participant Page as React Page
    participant API as api.ts
    participant Server as Express
    participant EB as eventBus
    participant WS as WebSocket

    Note over Page: Component mounts
    Page->>API: load() via useEffect
    API->>Server: GET /api/sessions (or agents, events, stats)
    Server-->>API: JSON response
    API-->>Page: setState(data)

    Note over Page: Subscribes to live updates
    Page->>EB: eventBus.subscribe(handler)

    loop Real-time updates
        WS->>EB: eventBus.publish(msg)
        EB->>Page: handler(msg)
        Page->>Page: Reload or optimistic update
    end

    Note over Page: Component unmounts
    Page->>EB: unsubscribe()

Server Architecture

Module Dependency Graph

graph TD
    INDEX[server/index.js<br/>Express app + HTTP server]
    DB[server/db.js<br/>SQLite + prepared statements<br/>better-sqlite3 → node:sqlite fallback]
    WS[server/websocket.js<br/>WS server + broadcast]
    HOOKS[routes/hooks.js<br/>Hook event processing]
    TC[lib/transcript-cache.js<br/>JSONL cache + incremental reads]
    SESSIONS[routes/sessions.js<br/>Session CRUD]
    AGENTS[routes/agents.js<br/>Agent CRUD]
    EVENTS[routes/events.js<br/>Event listing]
    STATS[routes/stats.js<br/>Aggregate queries]
    PRICING[routes/pricing.js<br/>Cost calculation + pricing CRUD]
    SETTINGS[routes/settings.js<br/>System info + data management]
    WORKFLOWS[routes/workflows.js<br/>Workflow visualizations]

    INDEX --> DB
    INDEX --> WS
    INDEX --> HOOKS & SESSIONS & AGENTS & EVENTS & STATS & PRICING & SETTINGS & WORKFLOWS

    HOOKS --> DB & WS & TC
    SETTINGS --> DB & TC
    INDEX --> TC
    SESSIONS --> DB & WS
    AGENTS --> DB & WS
    EVENTS --> DB
    STATS --> DB & WS
    PRICING --> DB
    WORKFLOWS --> DB

    style INDEX fill:#6366f1,stroke:#818cf8,color:#fff
    style DB fill:#003B57,stroke:#005f8a,color:#fff
    style WS fill:#10b981,stroke:#34d399,color:#fff

Server Components

Module Responsibility
server/index.js Express app setup, middleware, route mounting, static file serving in production, HTTP server creation. Runs a periodic maintenance sweep every 2 min (abandons stale sessions with transcript cache eviction, scans active sessions for new compaction entries via shared transcript cache). Triggers legacy session import (with active-session detection for recently-modified JSONL files) and compaction backfill on startup
server/db.js SQLite connection with WAL mode, schema migration (CREATE TABLE IF NOT EXISTS + ALTER TABLE for column additions), all prepared statements as a reusable stmts object. Tries better-sqlite3 first, falls back to node:sqlite via compat-sqlite.js. Migrations use literal defaults for ALTER TABLE since SQLite does not support expressions like strftime() in column defaults added via ALTER TABLE
server/compat-sqlite.js Compatibility wrapper that gives Node.js built-in node:sqlite (DatabaseSync) the same API as better-sqlite3 — pragma, transaction, prepare. Used as an automatic fallback when the native module is unavailable (node:sqlite requires Node 22+)
server/websocket.js WebSocket server on /ws path, 30s heartbeat with ping/pong dead connection detection, typed broadcast function
routes/hooks.js Core event processing inside a SQLite transaction. Auto-creates sessions/agents. Handles 7 hook types (SessionStart through SessionEnd) plus synthetic Compaction events. Manages agent state machine, session reactivation on resume (including Stop/SubagentStop reactivation for imported completed/abandoned sessions), orphaned session cleanup (5+ min idle). Uses a shared TranscriptCache instance (server/lib/transcript-cache.js) for token extraction — stat-based caching with incremental byte-offset reads avoids re-reading entire JSONL files on every event. Detects compaction via isCompactSummary in JSONL transcripts and creates compaction agents + events (deduplicated by uuid). Token baselines (baseline_* columns) preserve pre-compaction totals so no usage is lost. Cache entries are evicted on SessionEnd
routes/sessions.js Standard CRUD with pagination. GET includes agent count via LEFT JOIN. POST is idempotent on session ID
routes/agents.js CRUD with status/session_id filtering. PATCH broadcasts agent_updated
routes/events.js Read-only event listing with session_id filter and pagination
routes/stats.js Single aggregate query returning total/active counts + status distributions
routes/analytics.js Extended analytics — token totals, tool usage counts, daily event/session trends, agent type distribution
routes/pricing.js Model pricing CRUD (list/upsert/delete), per-session and global cost calculation with pattern-based model matching
routes/settings.js System info (DB size, hook status, server uptime, transcript cache stats), data export as JSON, session cleanup (abandon stale, purge old), clear all data, reset pricing, reinstall hooks
routes/workflows.js Aggregate workflow visualization data (agent orchestration graphs, tool transition flows, collaboration networks, workflow pattern detection, model delegation, error propagation, concurrency timelines, session complexity metrics, compaction impact). Per-session drill-in endpoint with agent tree, tool timeline, and event details
lib/transcript-cache.js Stat-based JSONL transcript cache with incremental byte-offset reads. Shared between hooks.js (token extraction on every event) and the periodic compaction scanner (index.js). Uses (path, mtime, size) cache key — unchanged files return cached results instantly, grown files only parse new bytes, shrunk files (compaction) trigger full re-read. LRU eviction caps at 200 entries. Entries evicted on SessionEnd and abandoned session cleanup

Request Processing

flowchart LR
    REQ[Incoming<br/>Request] --> CORS[CORS<br/>Middleware]
    CORS --> JSON[JSON Body<br/>Parser<br/>1MB limit]
    JSON --> ROUTER{Route<br/>Match}
    ROUTER -->|/api/hooks| HOOKS[hooks.js]
    ROUTER -->|/api/sessions| SESSIONS[sessions.js]
    ROUTER -->|/api/agents| AGENTS[agents.js]
    ROUTER -->|/api/events| EVENTS[events.js]
    ROUTER -->|/api/stats| STATS[stats.js]
    ROUTER -->|/api/pricing| PRICING[pricing.js]
    ROUTER -->|/api/settings| SETTINGS[settings.js]
    ROUTER -->|/api/workflows| WORKFLOWS[workflows.js]
    ROUTER -->|/api/health| HEALTH[Health Check]
    ROUTER -->|"* (prod)"| STATIC[Static Files<br/>client/dist]

    HOOKS --> DB[(SQLite)]
    SESSIONS --> DB
    AGENTS --> DB
    EVENTS --> DB
    STATS --> DB
    PRICING --> DB
    SETTINGS --> DB
    WORKFLOWS --> DB

    HOOKS --> WS[WebSocket<br/>Broadcast]
    SESSIONS --> WS
    AGENTS --> WS

Client Architecture

Component Tree

graph TD
    APP["App.tsx<br/>Router + WebSocket"]
    LAYOUT["Layout.tsx<br/>Sidebar + Outlet"]
    SIDEBAR["Sidebar.tsx<br/>Nav + Connection Status"]
    DASH["Dashboard.tsx"]
    KANBAN["KanbanBoard.tsx"]
    SESS["Sessions.tsx"]
    DETAIL["SessionDetail.tsx"]
    ACTIVITY["ActivityFeed.tsx"]
    SETTINGS_P["Settings.tsx"]

    ANALYTICS_P["Analytics.tsx"]
    WORKFLOWS_P["Workflows.tsx"]
    NOTFOUND["NotFound.tsx"]

    APP --> LAYOUT
    LAYOUT --> SIDEBAR
    LAYOUT --> DASH & KANBAN & SESS & DETAIL & ACTIVITY & ANALYTICS_P & WORKFLOWS_P & SETTINGS_P & NOTFOUND

    DASH --> SC1["StatCard x6<br/>(sessions/agents/subagents/<br/>events today/total events/cost)<br/>3-column grid"]
    DASH --> AC1["AgentCard[]<br/>with collapsible subagent hierarchy"]
    DASH --> EV1["Event rows"]

    KANBAN --> COL["Column x5<br/>(idle/connected/<br/>working/completed/error)"]
    COL --> AC2["AgentCard[]"]

    SESS --> TABLE["Session Table<br/>with filters"]
    DETAIL --> AC3["AgentCard hierarchy<br/>parent → children tree"]
    DETAIL --> TL["Event Timeline"]
    ACTIVITY --> FEED["Streaming<br/>Event List"]
    WORKFLOWS_P --> WFC["12 D3.js components<br/>(workflows/ directory)"]

    style APP fill:#6366f1,stroke:#818cf8,color:#fff
    style LAYOUT fill:#1a1a28,stroke:#2a2a3d,color:#e4e4ed

Client Module Graph

graph TD
    MAIN["main.tsx<br/>React entry"]
    APP["App.tsx<br/>Router + WS + Notifications"]
    EB["eventBus.ts<br/>Pub/sub + connection state"]
    WS["useWebSocket.ts<br/>Auto-reconnect hook"]
    NOTIF["useNotifications.ts<br/>Browser notification triggers"]
    API["api.ts<br/>Typed fetch client"]
    TYPES["types.ts<br/>Interfaces + configs"]
    FMT["format.ts<br/>Date/time utilities"]

    MAIN --> APP
    APP --> WS
    APP --> EB
    APP --> NOTIF
    NOTIF --> EB
    WS --> TYPES
    EB --> TYPES

    subgraph Pages
        D[Dashboard]
        K[KanbanBoard]
        S[Sessions]
        SD[SessionDetail]
        AF[ActivityFeed]
        AN[Analytics]
        WF[Workflows]
        SET[Settings]
        NF[NotFound]
    end

    APP --> D & K & S & SD & AF & AN & WF
    D & K & S & SD & AF & AN & WF --> API
    D & K & S & SD & AF & AN & WF --> EB
    D & K & S & SD & AF & AN & WF --> FMT
    SET --> API
    SET --> EB
    SET --> FMT
    API --> TYPES

    subgraph Components
        L[Layout]
        SB[Sidebar]
        AGC[AgentCard]
        STC[StatCard]
        STB[StatusBadge]
        ES[EmptyState]
    end

    D --> STC & AGC & STB
    K --> AGC
    S --> STB & ES
    SD --> AGC & STB
    AF --> STB & ES
    APP --> L
    L --> SB

    style TYPES fill:#3178C6,stroke:#5a9fd4,color:#fff
    style EB fill:#f59e0b,stroke:#fbbf24,color:#000
    style API fill:#10b981,stroke:#34d399,color:#fff

Routing

graph LR
    ROOT["/ (index)"] --> DASH[Dashboard]
    KANBAN_R["/kanban"] --> KANBAN[KanbanBoard]
    SESS_R["/sessions"] --> SESS[Sessions]
    DETAIL_R["/sessions/:id"] --> DETAIL[SessionDetail]
    ACT_R["/activity"] --> ACT[ActivityFeed]
    AN_R["/analytics"] --> AN[Analytics]
    WF_R["/workflows"] --> WF[Workflows]
    SET_R["/settings"] --> SET[Settings]
    NF_R["/*"] --> NF[NotFound]

    ALL["All routes"] --> LAYOUT["Layout wrapper<br/>(Sidebar + Outlet)"]
Route Page Data Sources
/ Dashboard GET /api/stats, GET /api/agents, GET /api/events, GET /api/agents?session_id={sid} (subagent hierarchy)
/kanban KanbanBoard GET /api/agents?status={each} per-status (no limit)
/sessions Sessions GET /api/sessions
/sessions/:id SessionDetail GET /api/sessions/:id (includes agents + events)
/activity ActivityFeed GET /api/events?limit=100
/analytics Analytics GET /api/analytics/tokens, GET /api/analytics/tools, GET /api/analytics/trends, GET /api/analytics/agents
/workflows Workflows GET /api/workflows, GET /api/workflows/session/:id + WebSocket auto-refresh (3s debounce)
/settings Settings GET /api/settings/info, GET /api/pricing, GET /api/pricing/cost + localStorage for notification prefs
/* NotFound None (static 404 page)

Workflows Page Architecture

The Workflows page (/workflows) is the most visualization-heavy page, composed of 12 child components in client/src/components/workflows/. All D3.js rendering is done client-side using data from two API endpoints.

graph TD
    WF["Workflows.tsx<br/>Page orchestrator"]:::root
    API_AGG["GET /api/workflows<br/>Aggregate data"]
    API_DI["GET /api/workflows/session/:id<br/>Session drill-in"]
    WS_D["WebSocket auto-refresh<br/>(3s debounce)"]

    WF --> API_AGG
    WF --> API_DI
    WS_D --> WF

    WF --> S1["WorkflowStats<br/>Summary cards"]
    WF --> S2["OrchestrationDAG<br/>Horizontal DAG —<br/>Sessions → Main → Subagents → Outcomes"]
    WF --> S3["ToolExecutionFlow<br/>d3-sankey tool transitions"]
    WF --> S4["AgentCollaborationNetwork<br/>Force-directed pipeline graph"]
    WF --> S5["SubagentEffectiveness<br/>SVG success rings + sparklines"]
    WF --> S6["WorkflowPatterns<br/>Auto-detected sequences"]
    WF --> S7["ModelDelegationFlow<br/>Model → agent routing"]
    WF --> S8["ErrorPropagationMap<br/>Error clustering by depth"]
    WF --> S9["ConcurrencyTimeline<br/>Swim-lane parallel execution"]
    WF --> S10["SessionComplexityScatter<br/>D3 bubble chart"]
    WF --> S11["CompactionImpact<br/>Token compression analysis"]
    WF --> S12["SessionDrillIn<br/>Searchable session explorer<br/>(3 tabs: tree / timeline / events)"]

    classDef root fill:#6366f1,stroke:#818cf8,color:#fff
Component Visualization D3 Feature
OrchestrationDAG Horizontal DAG of aggregate spawning patterns Custom DAG layout, capped at top 7 subagent types with overflow node
ToolExecutionFlow Tool-to-tool transition Sankey diagram d3-sankey
AgentCollaborationNetwork Agent pipeline graph with directed edges d3-force with arrowheads and frequency labels
SubagentEffectiveness Scorecard grid with success rate rings SVG arc rendering, sparklines (max 3 per row)
WorkflowPatterns Common orchestration sequences Pattern detection from event data
ModelDelegationFlow Model routing through agent hierarchies Hierarchical layout
ErrorPropagationMap Error clustering by hierarchy depth Depth-based grouping
ConcurrencyTimeline Swim-lane parallel agent execution Time-scaled horizontal bars
SessionComplexityScatter Duration vs agents vs tokens D3 bubble/scatter chart
CompactionImpact Token compression events and recovery Before/after comparison
SessionDrillIn Per-session agent tree, tool timeline, events Searchable dropdown with pagination, 3 tabs

  • Cross-filtering -- clicking nodes in the OrchestrationDAG filters the data shown in other sections.
  • JSON export -- all workflow data can be exported as JSON from the page header.


Database Design

Entity Relationship Diagram

erDiagram
    sessions ||--o{ agents : has
    sessions ||--o{ events : has
    sessions ||--o{ token_usage : tracks
    agents ||--o{ events : generates
    agents ||--o{ agents : spawns

    sessions {
        TEXT id PK "UUID"
        TEXT name "Human-readable label"
        TEXT status "active|completed|error|abandoned"
        TEXT cwd "Working directory"
        TEXT model "Claude model ID"
        TEXT started_at "ISO 8601"
        TEXT ended_at "ISO 8601 or NULL"
        TEXT metadata "JSON blob"
    }

    agents {
        TEXT id PK "UUID or session_id-main"
        TEXT session_id FK "References sessions.id"
        TEXT name "Main Agent — {session name} or subagent description"
        TEXT type "main|subagent"
        TEXT subagent_type "Explore|general-purpose|etc"
        TEXT status "idle|connected|working|completed|error"
        TEXT task "Current task description"
        TEXT current_tool "Active tool name or NULL"
        TEXT started_at "ISO 8601"
        TEXT ended_at "ISO 8601 or NULL"
        TEXT parent_agent_id FK "References agents.id"
        TEXT metadata "JSON blob"
    }

    events {
        INTEGER id PK "Auto-increment"
        TEXT session_id FK "References sessions.id"
        TEXT agent_id FK "References agents.id"
        TEXT event_type "PreToolUse|PostToolUse|Stop|etc"
        TEXT tool_name "Tool that triggered the event"
        TEXT summary "Human-readable summary"
        TEXT data "Full event JSON"
        TEXT created_at "ISO 8601"
    }

    token_usage {
        TEXT session_id PK "FK to sessions + part of composite PK"
        TEXT model PK "Model identifier + part of composite PK"
        INTEGER input_tokens "Current JSONL total"
        INTEGER output_tokens "Current JSONL total"
        INTEGER cache_read_tokens "Current JSONL total"
        INTEGER cache_write_tokens "Current JSONL total"
        INTEGER baseline_input "Accumulated pre-compaction tokens"
        INTEGER baseline_output "Accumulated pre-compaction tokens"
        INTEGER baseline_cache_read "Accumulated pre-compaction tokens"
        INTEGER baseline_cache_write "Accumulated pre-compaction tokens"
    }

    model_pricing {
        TEXT model_pattern PK "SQL LIKE pattern e.g. claude-opus-4-6%"
        TEXT display_name "Human-readable name"
        REAL input_per_mtok "Cost per million input tokens"
        REAL output_per_mtok "Cost per million output tokens"
        REAL cache_read_per_mtok "Cost per million cache read tokens"
        REAL cache_write_per_mtok "Cost per million cache write tokens"
        TEXT updated_at "ISO 8601"
    }
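The pattern-based cost calculation can be sketched in a few lines. This is a hypothetical in-memory equivalent (the real routes do this in SQL), assuming effective usage per token_usage row is baseline + current, since the baseline_* columns accumulate pre-compaction totals.

```javascript
// Hypothetical helpers for pattern-based cost calculation.
// model_pricing rows match model IDs with SQL LIKE patterns (% and _).
function likeToRegex(pattern) {
  const esc = pattern.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"); // escape regex metachars
  return new RegExp("^" + esc.replace(/%/g, ".*").replace(/_/g, ".") + "$", "i");
}

function costForRow(row, pricing) {
  const rule = pricing.find((p) => likeToRegex(p.model_pattern).test(row.model));
  if (!rule) return 0;
  // Effective usage = current JSONL total + accumulated pre-compaction baseline.
  const eff = (cur, base) => (cur ?? 0) + (base ?? 0);
  return (
    (eff(row.input_tokens, row.baseline_input) * rule.input_per_mtok +
      eff(row.output_tokens, row.baseline_output) * rule.output_per_mtok +
      eff(row.cache_read_tokens, row.baseline_cache_read) * rule.cache_read_per_mtok +
      eff(row.cache_write_tokens, row.baseline_cache_write) * rule.cache_write_per_mtok) /
    1e6 // rates are per million tokens
  );
}
```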

Indexes

Index Table Column(s) Purpose
idx_agents_session agents session_id Fast agent lookup by session
idx_agents_status agents status Kanban board column queries
idx_events_session events session_id Session detail event list
idx_events_type events event_type Filter events by type
idx_events_created events created_at DESC Activity feed ordering
idx_sessions_status sessions status Status filter on sessions page
idx_sessions_started sessions started_at DESC Default sort order

SQLite Configuration

Pragma Value Rationale
journal_mode WAL Concurrent reads during writes, better performance for read-heavy workload
foreign_keys ON Referential integrity enforcement
busy_timeout 5000 Wait up to 5s for write lock instead of failing immediately

Prepared Statements

All queries use prepared statements (db.prepare()) for:

  • Security -- parameterized queries prevent SQL injection
  • Performance -- compiled once, executed many times
  • Reliability -- syntax errors caught at startup, not runtime

Notable prepared statements include findStaleSessions (used on SessionStart to find active sessions with no activity for a configurable number of minutes), touchSession (bumps updated_at on every event), and reactivateSession / reactivateAgent (used when a previously completed or abandoned session receives new work; Stop and SubagentStop also reactivate such sessions, which handles sessions imported before the server started).
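The staleness check itself is simple; a hypothetical in-memory equivalent of the findStaleSessions query, using the updated_at timestamp that touchSession maintains:

```javascript
// Hypothetical equivalent of findStaleSessions: active sessions whose
// last activity is older than the idle threshold (default 5 minutes).
function findStaleSessions(sessions, now = Date.now(), idleMinutes = 5) {
  const cutoff = now - idleMinutes * 60_000;
  return sessions.filter(
    (s) => s.status === "active" && Date.parse(s.updated_at) < cutoff
  );
}
```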


WebSocket Protocol

Connection

  • Path: /ws
  • Protocol: Standard WebSocket (RFC 6455)
  • Heartbeat: Server sends ping every 30 seconds; clients that don't pong are terminated
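The heartbeat follows the standard ws ping/pong liveness pattern; a minimal sketch of one sweep (hypothetical helper, not the real module):

```javascript
// One heartbeat sweep: a client that never answered the previous ping is
// terminated; otherwise it is pinged again and must pong before the next sweep.
function sweep(clients) {
  for (const ws of clients) {
    if (ws.isAlive === false) { ws.terminate(); continue; }
    ws.isAlive = false;   // cleared now; the pong handler sets it back to true
    ws.ping();
  }
}
// Wiring (sketch): setInterval(() => sweep(wss.clients), 30_000);
// and per connection: ws.on("pong", () => { ws.isAlive = true; });
```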

Message Format

All messages are JSON objects matching this envelope (shown in TypeScript notation):

{
  type: "session_created" | "session_updated" | "agent_created" | "agent_updated" | "new_event";
  data: Session | Agent | DashboardEvent;
  timestamp: string; // ISO 8601
}
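A sketch of the server-side broadcast that produces this envelope (hypothetical; the real function lives in server/websocket.js):

```javascript
// Serialize once, send to every OPEN client, skip closing/closed sockets.
const OPEN = 1; // WebSocket readyState constant
function broadcast(clients, type, data) {
  const msg = JSON.stringify({ type, data, timestamp: new Date().toISOString() });
  let sent = 0;
  for (const ws of clients) {
    if (ws.readyState === OPEN) { ws.send(msg); sent++; }
  }
  return sent;
}
```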

Message Flow

graph TD
    subgraph "Server Events"
        A[Hook event processed]
        B[Session created/updated via API]
        C[Agent created/updated via API]
    end

    subgraph "Broadcast"
        BC["broadcast(type, data)<br/>Serializes to JSON,<br/>sends to all OPEN clients"]
    end

    subgraph "Client Handling"
        WS["useWebSocket hook<br/>Auto-reconnect on close"]
        EB["eventBus.publish(msg)"]
        SUB1["Dashboard subscriber"]
        SUB2["Kanban subscriber"]
        SUB3["Sessions subscriber"]
        SUB4["SessionDetail subscriber"]
        SUB5["ActivityFeed subscriber"]
        SUB6["Workflows subscriber<br/>(3s debounce)"]
    end

    A & B & C --> BC
    BC --> WS
    WS --> EB
    EB --> SUB1 & SUB2 & SUB3 & SUB4 & SUB5 & SUB6

    style BC fill:#10b981,stroke:#34d399,color:#fff
    style EB fill:#f59e0b,stroke:#fbbf24,color:#000

Client Reconnection

The useWebSocket hook implements automatic reconnection:

stateDiagram-v2
    [*] --> Connecting: Component mounts
    Connecting --> Connected: onopen
    Connected --> Closed: onclose
    Connected --> Closed: onerror → close
    Closed --> Connecting: setTimeout(2000ms)
    Connected --> [*]: Component unmounts
    Closed --> [*]: Component unmounts
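Stripped of React specifics, the reconnect loop can be sketched with an injected socket factory. This is a hypothetical framework-free version; the real hook also publishes connection state through the event bus.

```javascript
// Hypothetical stand-alone version of the reconnect state machine above.
class ReconnectingSocket {
  constructor(factory, retryMs = 2000) {
    this.factory = factory;
    this.retryMs = retryMs;
    this.disposed = false;
    this.connect();
  }
  connect() {
    this.state = "connecting";
    const sock = (this.sock = this.factory());
    sock.onopen = () => { this.state = "connected"; };
    sock.onerror = () => sock.close();            // onerror funnels into onclose
    sock.onclose = () => {
      if (this.disposed) return;                  // unmounted: stop retrying
      this.state = "closed";
      this.timer = setTimeout(() => this.connect(), this.retryMs);
    };
  }
  dispose() {                                     // component unmount
    this.disposed = true;
    clearTimeout(this.timer);
    this.sock.close();
  }
}
```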

Hook Integration

Hook Handler Design

scripts/hook-handler.js is designed to be a minimal, fail-safe forwarder:

flowchart TD
    START[Claude Code fires hook] --> STDIN[Read stdin to EOF]
    STDIN --> PARSE{Parse JSON?}
    PARSE -->|Success| POST["POST to 127.0.0.1:4820<br/>/api/hooks/event"]
    PARSE -->|Failure| WRAP["Wrap raw input as<br/>#123;raw: ...#125;"]
    WRAP --> POST
    POST --> RESP{Response?}
    RESP -->|200| EXIT0[exit = 0]
    RESP -->|Error| EXIT0_ERR[exit = 0]
    RESP -->|Timeout 3s| DESTROY[Destroy request]
    DESTROY --> EXIT0_TO[exit = 0]

    SAFETY[Safety net: setTimeout 5s] --> EXIT0_SAFETY[exit = 0]

    style EXIT0 fill:#10b981,stroke:#34d399,color:#fff
    style EXIT0_ERR fill:#10b981,stroke:#34d399,color:#fff
    style EXIT0_TO fill:#10b981,stroke:#34d399,color:#fff
    style EXIT0_SAFETY fill:#10b981,stroke:#34d399,color:#fff

Key design decisions:

  • Always exits 0 -- never blocks Claude Code regardless of server state
  • 3-second HTTP timeout + 5-second process safety net
  • Uses Node.js http module directly -- no dependencies
  • Reads CLAUDE_DASHBOARD_PORT env var for port override
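The parse-or-wrap step can be sketched as follows (hypothetical helper name; the real script reads stdin to EOF and then POSTs via node:http with the 3-second timeout described above):

```javascript
// Build the POST body for /api/hooks/event. Malformed stdin is still
// forwarded, wrapped as { raw: <input> }, so no event is silently dropped.
function buildHookPayload(hookType, stdinText) {
  let data;
  try { data = JSON.parse(stdinText); }
  catch { data = { raw: stdinText }; }
  return { hook_type: hookType, data };
}
```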

Hook Installation

scripts/install-hooks.js modifies ~/.claude/settings.json:

flowchart TD
    START[Run install-hooks.js] --> READ{~/.claude/settings.json<br/>exists?}
    READ -->|Yes| PARSE[Parse JSON]
    READ -->|No| EMPTY[Start with empty object]
    PARSE --> CHECK
    EMPTY --> CHECK

    CHECK[Ensure hooks section exists]
    CHECK --> LOOP["For each hook type:<br/>SessionStart, PreToolUse, PostToolUse,<br/>Stop, SubagentStop, Notification, SessionEnd"]

    LOOP --> EXISTS{Our hook<br/>already installed?}
    EXISTS -->|Yes| UPDATE[Update command path]
    EXISTS -->|No| APPEND[Append to array]
    UPDATE --> NEXT
    APPEND --> NEXT

    NEXT{More hook types?}
    NEXT -->|Yes| LOOP
    NEXT -->|No| WRITE[Write settings.json]
    WRITE --> DONE[Print summary]

Preserves existing hooks -- only adds or updates entries containing hook-handler.js.
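The idempotent merge can be sketched as below, assuming entries follow the Claude Code settings shape of { hooks: { &lt;Type&gt;: [{ hooks: [{ type, command }] }] } }; this is a hypothetical simplification of install-hooks.js.

```javascript
// Merge our hook command into every hook type, preserving existing entries.
// Re-running updates the command path instead of adding duplicates.
const HOOK_TYPES = [
  "SessionStart", "PreToolUse", "PostToolUse",
  "Stop", "SubagentStop", "Notification", "SessionEnd",
];

function mergeHooks(settings, command) {
  settings.hooks ??= {};
  for (const type of HOOK_TYPES) {
    const entries = (settings.hooks[type] ??= []);
    const ours = entries
      .flatMap((e) => e.hooks ?? [])
      .find((h) => typeof h.command === "string" && h.command.includes("hook-handler.js"));
    if (ours) ours.command = command;                              // update path in place
    else entries.push({ hooks: [{ type: "command", command }] });  // append, keep others
  }
  return settings;
}
```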


Agent Extension Layer

The repository includes a dual extension strategy:

  • Claude Code-native extensions (CLAUDE.md, .claude/rules, .claude/skills)
  • Codex-native extensions (AGENTS.md, .codex/rules, .codex/agents, .codex/skills)
graph TD
    USER["Developer"] --> CLAUDE["Claude Code"]
    USER --> CODEX["Codex"]

    CLAUDE --> C_MEM["CLAUDE.md"]
    CLAUDE --> C_RULES[".claude/rules/*"]
    CLAUDE --> C_SKILLS[".claude/skills/*"]

    CODEX --> X_MEM["AGENTS.md"]
    CODEX --> X_RULES[".codex/rules/*.rules"]
    CODEX --> X_AGENTS[".codex/agents/*.toml"]
    CODEX --> X_SKILLS[".codex/skills/*"]

Claude Code extension scope

  • CLAUDE.md defines always-on project working agreements.
  • .claude/rules/ adds path-scoped guidance by file area.
  • .claude/skills/ provides reusable workflows:
    • onboarding
    • feature shipping
    • MCP operations
    • live issue debugging
  • .claude/agents/ provides specialized review workers:
    • backend reviewer
    • frontend reviewer
    • MCP reviewer

Codex extension scope

  • AGENTS.md provides project-wide default behavior.
  • .codex/rules/default.rules controls external execution decisions.
  • .codex/agents/ provides custom subagent templates.
  • .codex/skills/ provides reusable task workflows.

MCP Integration

The repository includes an enterprise-grade local MCP server in mcp/ that exposes dashboard functionality as tools for MCP hosts such as Claude Code and Claude Desktop. It supports three transport modes: stdio (for MCP host child-process integration), HTTP+SSE (for remote/networked clients), and an interactive REPL (for operator debugging).

MCP Transport Selection

flowchart TD
    START["MCP Server Start"] --> ARG{"CLI arg or env?"}
    ARG -->|"--transport=stdio\nor default"| STDIO["stdio transport\nJSON-RPC over stdin/stdout"]
    ARG -->|"--transport=http\nor --http"| HTTP["HTTP + SSE transport\nExpress on :8819"]
    ARG -->|"--transport=repl\nor --repl"| REPL["Interactive REPL\nreadline with tab completion"]

    STDIO --> HOST["MCP Host\n(Claude Code / Desktop)"]
    HTTP --> ENDPOINTS["Endpoints:\n/mcp (Streamable HTTP)\n/sse (Legacy SSE)\n/messages (Legacy POST)\n/health (status)"]
    REPL --> CLI["Operator Terminal\ncolored output, JSON highlighting\ntool invocation, domain browsing"]

    style STDIO fill:#6366f1,stroke:#818cf8,color:#fff
    style HTTP fill:#f59e0b,stroke:#fbbf24,color:#000
    style REPL fill:#a855f7,stroke:#c084fc,color:#fff

MCP Runtime Topology

graph LR
    HOST["MCP Host<br/>(Claude Code / Claude Desktop)"]
    HTTP_CLIENT["Remote MCP Client"]
    OPERATOR["Operator CLI"]

    MCP_STDIO["MCP Server<br/>stdio"]
    MCP_HTTP["MCP Server<br/>HTTP+SSE :8819"]
    MCP_REPL["MCP Server<br/>REPL"]

    API["Dashboard API<br/>http://127.0.0.1:4820/api/*"]
    DB["SQLite"]

    HOST -->|"stdin/stdout"| MCP_STDIO
    HTTP_CLIENT -->|"POST /mcp · GET /sse"| MCP_HTTP
    OPERATOR -->|"interactive CLI"| MCP_REPL

    MCP_STDIO -->|"validated HTTP"| API
    MCP_HTTP -->|"validated HTTP"| API
    MCP_REPL -->|"validated HTTP"| API
    API --> DB

    style HOST fill:#6366f1,stroke:#818cf8,color:#fff
    style HTTP_CLIENT fill:#f59e0b,stroke:#fbbf24,color:#000
    style OPERATOR fill:#a855f7,stroke:#c084fc,color:#fff
    style MCP_STDIO fill:#0f766e,stroke:#14b8a6,color:#fff
    style MCP_HTTP fill:#0f766e,stroke:#14b8a6,color:#fff
    style MCP_REPL fill:#0f766e,stroke:#14b8a6,color:#fff
    style API fill:#339933,stroke:#5cb85c,color:#fff
    style DB fill:#003B57,stroke:#005f8a,color:#fff

MCP Module Architecture

graph TD
    ENTRY["src/index.ts<br/>(transport router)"]
    SERVER["src/server.ts"]
    CONFIG["config/app-config.ts"]
    CLIENT["clients/dashboard-api-client.ts"]
    CORE["core/*<br/>logger, tool-registry, tool-result"]
    POLICY["policy/tool-guards.ts"]
    TOOLS["tools/index.ts"]
    DOMAINS["tools/domains/*<br/>observability, sessions, agents,<br/>events, pricing, maintenance"]

    T_HTTP["transports/http-server.ts<br/>Express SSE + Streamable HTTP"]
    T_REPL["transports/repl.ts<br/>readline + tab completion"]
    T_COLL["transports/tool-collector.ts<br/>handler collection for REPL"]
    UI["ui/*<br/>banner, colors, formatter"]

    ENTRY --> CONFIG
    ENTRY --> SERVER
    ENTRY --> T_HTTP
    ENTRY --> T_REPL
    ENTRY --> T_COLL
    SERVER --> TOOLS
    TOOLS --> DOMAINS
    DOMAINS --> CLIENT
    DOMAINS --> POLICY
    DOMAINS --> CORE
    T_HTTP --> UI
    T_REPL --> UI

MCP Safety Model

  • API target is restricted to loopback hosts only (127.0.0.1, localhost, ::1)
  • Tool inputs are schema-validated with zod before execution
  • Mutating tools require MCP_DASHBOARD_ALLOW_MUTATIONS=true
  • Destructive tools additionally require MCP_DASHBOARD_ALLOW_DESTRUCTIVE=true and explicit confirmation token
  • Logging is written to stderr only so stdio protocol traffic is never corrupted
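The layered checks can be sketched as a single guard function. This is a hypothetical helper; the real validation lives in policy/tool-guards.ts plus per-tool zod schemas.

```javascript
// Layered guard: loopback-only target, then opt-in env flags for mutating
// and destructive tools, with a required confirmation for the latter.
const LOOPBACK_HOSTS = new Set(["127.0.0.1", "localhost", "::1"]);

function assertToolAllowed({ host, mutating = false, destructive = false, confirm = null }, env = process.env) {
  if (!LOOPBACK_HOSTS.has(host))
    throw new Error("dashboard API target must be a loopback host");
  if (mutating && env.MCP_DASHBOARD_ALLOW_MUTATIONS !== "true")
    throw new Error("mutations disabled: set MCP_DASHBOARD_ALLOW_MUTATIONS=true");
  if (destructive && (env.MCP_DASHBOARD_ALLOW_DESTRUCTIVE !== "true" || !confirm))
    throw new Error("destructive tools need MCP_DASHBOARD_ALLOW_DESTRUCTIVE=true and a confirmation token");
}
```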

MCP Tool Domains

  • Observability: health/stats/analytics/system/export/snapshot
  • Sessions: list/get/create/update
  • Agents: list/get/create/update
  • Events: list + hook event ingestion
  • Pricing: rule CRUD + total/per-session cost
  • Maintenance: cleanup/reimport/reinstall-hooks/clear-data (guarded)

State Management

Client-Side Architecture

The client uses a deliberately simple state management approach:

graph TD
    subgraph "Data Sources"
        REST["REST API<br/>(initial load + refresh)"]
        WSM["WebSocket Messages<br/>(real-time updates)"]
        LS["localStorage<br/>(notification prefs)"]
    end

    subgraph "Distribution"
        EB["eventBus<br/>(Set-based pub/sub)"]
    end

    subgraph "App-Level Hooks"
        NOTIF_H["useNotifications<br/>reads prefs, fires<br/>browser notifications"]
    end

    subgraph "Page State"
        US1["useState<br/>Dashboard"]
        US2["useState<br/>KanbanBoard"]
        US3["useState<br/>Sessions"]
        US4["useState<br/>SessionDetail"]
        US5["useState<br/>ActivityFeed"]
        US6["useState<br/>Analytics"]
        US8["useState<br/>Workflows"]
        US7["useState<br/>Settings"]
    end

    REST --> US1 & US2 & US3 & US4 & US5 & US6 & US8 & US7
    WSM --> EB
    EB --> US1 & US2 & US3 & US4 & US5 & US6 & US8 & US7
    EB --> NOTIF_H
    LS --> NOTIF_H
    LS --> US7

Why no Redux / Zustand / Context:

  • Each page owns its data and lifecycle
  • No cross-page state sharing needed (notification prefs use localStorage as the shared store)
  • WebSocket events trigger reload or append, not complex state merging
  • Simpler mental model, fewer abstraction layers, easier to debug

Event Bus

The eventBus is a Set-based pub/sub with subscribe() returning an unsubscribe function. It also tracks WebSocket connection state, exposing connected (boolean getter), setConnected(value), and onConnection(handler) so any component can subscribe to connection status changes.

// Subscribe to messages in useEffect, unsubscribe on cleanup
useEffect(() => {
  return eventBus.subscribe((msg) => {
    if (msg.type === "agent_updated") load();
  });
}, [load]);

// Read connection state reactively (e.g. with useSyncExternalStore)
const wsConnected = useSyncExternalStore(eventBus.onConnection, () => eventBus.connected);

This pattern ensures:

  • No memory leaks (cleanup on unmount)
  • No stale closures (subscribe with latest callback ref)
  • Only active pages receive messages
  • Connection state is available to any component without prop drilling
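A Set-based pub/sub of this shape fits in a few dozen lines. The sketch below is a minimal reconstruction of the described behavior — `subscribe()` returning an unsubscribe function, plus `connected`, `setConnected()`, and `onConnection()` — not the actual client source; field names are assumptions.

```typescript
// Minimal sketch of the eventBus described above (assumed shape).
type Handler<T> = (value: T) => void;

function createEventBus<T>() {
  const subscribers = new Set<Handler<T>>();
  const connHandlers = new Set<Handler<boolean>>();
  let isConnected = false;

  return {
    // subscribe() returns an unsubscribe function, ideal for useEffect cleanup
    subscribe(fn: Handler<T>): () => void {
      subscribers.add(fn);
      return () => {
        subscribers.delete(fn);
      };
    },
    emit(msg: T): void {
      for (const fn of subscribers) fn(msg);
    },
    // Connection-state tracking for useSyncExternalStore consumers
    get connected(): boolean {
      return isConnected;
    },
    setConnected(value: boolean): void {
      isConnected = value;
      for (const fn of connHandlers) fn(value);
    },
    onConnection(fn: Handler<boolean>): () => void {
      connHandlers.add(fn);
      return () => {
        connHandlers.delete(fn);
      };
    },
  };
}
```

Because subscribers live in a `Set`, double-subscribing the same handler is a no-op and unsubscribe is O(1).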

Browser Notification System

The dashboard implements native browser notifications using the Web Notifications API, allowing users to receive alerts when they're not actively viewing the dashboard tab.

Notification Architecture

graph TD
    subgraph "Server Side"
        WS_SRV["WebSocket Server<br/>broadcasts events"]
    end

    subgraph "Client Side"
        WS_CLI["useWebSocket hook<br/>receives messages"]
        EB["eventBus<br/>distributes messages"]
        NOTIF["useNotifications hook<br/>evaluates notification rules"]
        PREFS["localStorage<br/>(notification preferences)"]
        API_N["Web Notifications API<br/>(browser native)"]
    end

    WS_SRV -->|push| WS_CLI
    WS_CLI --> EB
    EB --> NOTIF
    NOTIF -->|reads| PREFS
    NOTIF -->|fires| API_N

    style NOTIF fill:#f59e0b,stroke:#fbbf24,color:#000
    style API_N fill:#10b981,stroke:#34d399,color:#fff
    style PREFS fill:#6366f1,stroke:#818cf8,color:#fff

Notification Flow

flowchart TD
    MSG["WebSocket message received"] --> CHECK_ENABLED{"Notifications<br/>enabled?"}
    CHECK_ENABLED -->|No| SKIP[Skip]
    CHECK_ENABLED -->|Yes| CHECK_TYPE{"Message type?"}

    CHECK_TYPE -->|session_created| CHECK_NEW{"onNewSession<br/>enabled?"}
    CHECK_TYPE -->|session_updated| CHECK_STATUS{"Session status?"}
    CHECK_TYPE -->|agent_created| CHECK_SUB{"Subagent?<br/>onSubagentSpawn?"}
    CHECK_TYPE -->|new_event| CHECK_EVENT{"event_type?"}

    CHECK_STATUS -->|error| CHECK_ERROR{"onSessionError?"}

    CHECK_EVENT -->|Stop| CHECK_STOP{"onSessionComplete?"}
    CHECK_EVENT -->|SessionEnd| CHECK_END{"onSessionComplete?"}
    CHECK_EVENT -->|Notification| FIRE

    CHECK_NEW -->|Yes| FIRE["new Notification(title, body)"]
    CHECK_STOP -->|Yes| FIRE_STOP["notify: Claude Finished Responding"]
    CHECK_END -->|Yes| FIRE_END["notify: Session Completed"]
    CHECK_ERROR -->|Yes| FIRE
    CHECK_SUB -->|Yes| FIRE

    style FIRE fill:#10b981,stroke:#34d399,color:#fff
    style SKIP fill:#1a1a28,stroke:#2a2a3d,color:#e4e4ed

Preference Storage

Notification preferences are stored in localStorage as a JSON object:

interface NotifPrefs {
  enabled: boolean;          // Master toggle
  onNewSession: boolean;     // New session created
  onSessionError: boolean;   // Session ended with error
  onSessionComplete: boolean; // Session completed successfully
  onSubagentSpawn: boolean;  // Background subagent spawned
}

Key: agent-monitor-notifications

The Settings page provides a UI for toggling each preference, managing browser permission state (granted/denied/prompt), and sending test notifications.
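The rule evaluation in the flow above can be expressed as a pure function from preferences plus a bus message to a notification title (or nothing). This is a sketch under assumptions — the `BusMessage` field names (`event_type`, `status`, `isSubagent`) and the returned titles are illustrative, not the exact `useNotifications` implementation.

```typescript
// Sketch of the notification decision flow (assumed message shape).
interface NotifPrefs {
  enabled: boolean;
  onNewSession: boolean;
  onSessionError: boolean;
  onSessionComplete: boolean;
  onSubagentSpawn: boolean;
}

interface BusMessage {
  type: string;
  event_type?: string; // for new_event messages
  status?: string;     // for session_updated messages
  isSubagent?: boolean; // for agent_created messages — assumed field name
}

// Returns a notification title, or null when nothing should fire.
function evaluateNotification(prefs: NotifPrefs, msg: BusMessage): string | null {
  if (!prefs.enabled) return null; // master toggle short-circuits everything
  switch (msg.type) {
    case "session_created":
      return prefs.onNewSession ? "New Session" : null;
    case "session_updated":
      return msg.status === "error" && prefs.onSessionError ? "Session Error" : null;
    case "agent_created":
      return msg.isSubagent && prefs.onSubagentSpawn ? "Subagent Spawned" : null;
    case "new_event":
      if (msg.event_type === "Stop" && prefs.onSessionComplete)
        return "Claude Finished Responding";
      if (msg.event_type === "SessionEnd" && prefs.onSessionComplete)
        return "Session Completed";
      if (msg.event_type === "Notification") return "Notification";
      return null;
    default:
      return null;
  }
}
```

Keeping the decision logic pure makes it trivially unit-testable; only the final `new Notification(title, body)` call touches the browser API.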

Permission States

| Browser Permission | UI Indicator | Behavior |
| --- | --- | --- |
| granted | Green shield | Notifications fire immediately |
| denied | Red shield | Notifications silently suppressed by browser |
| default | Amber shield | Enabling triggers Notification.requestPermission() |

Security Considerations

| Area | Approach |
| --- | --- |
| SQL injection | All queries use prepared statements with parameterized values |
| Request size | Express JSON body parser limited to 1 MB |
| Input validation | Required fields checked before database operations; CHECK constraints on status enums |
| Hook safety | Hook handler always exits 0; 5 s max lifetime; targets 127.0.0.1, never external hosts |
| CORS | Enabled for development; in production, same-origin (Express serves the client) |
| No auth | Intentional -- this is a local development tool. The server binds to 0.0.0.0 only to allow LAN access; restrict with DASHBOARD_PORT or firewall rules if needed |
| No secrets | No API keys, tokens, or credentials stored or transmitted |
| Dependency surface | Minimal: 5 runtime server deps, 6 runtime client deps (including d3 and d3-sankey for the Workflows visualizations) |
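The input-validation row can be illustrated with a small validator that mirrors a SQLite CHECK constraint in application code. This is a hedged sketch — the field names (`session_id`, `status`) and the allowed status values are assumptions for illustration, not the server's actual schema.

```typescript
// Illustrative request validator: required-field checks plus a status
// enum mirroring an assumed CHECK constraint. Field names and statuses
// are hypothetical.
const ALLOWED_STATUSES = new Set(["active", "completed", "error"]);

function validateSessionPayload(body: Record<string, unknown>): string[] {
  const errors: string[] = [];
  if (typeof body.session_id !== "string" || body.session_id.length === 0) {
    errors.push("session_id is required");
  }
  if (body.status !== undefined && !ALLOWED_STATUSES.has(String(body.status))) {
    errors.push(`status must be one of: ${[...ALLOWED_STATUSES].join(", ")}`);
  }
  return errors; // empty array means the payload is acceptable
}
```

Validating before the database write gives callers a readable 400 response instead of a raw constraint-violation error.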

Performance Characteristics

| Metric | Value | Notes |
| --- | --- | --- |
| Server startup | < 200 ms | SQLite opens instantly; schema migration is idempotent |
| Hook latency | < 5 ms (cache hit), < 50 ms (miss) | TranscriptCache: stat-check only on cache hit; incremental byte-offset read on file growth; full read only on first contact or compaction |
| Client bundle | 200 KB JS, 17 KB CSS | Gzipped: ~63 KB JS, ~4 KB CSS |
| WebSocket latency | < 5 ms | Local loopback, JSON serialization only |
| SQLite write throughput | ~50,000 inserts/sec | WAL mode on SSD; far exceeds hook event rate |
| Max events before slowdown | ~1M rows | SQLite handles this easily; pagination prevents full-table scans |
| Memory usage | ~30 MB server, ~15 MB client | SQLite in-process, no ORM overhead. TranscriptCache adds ~1 KB per active session (LRU-capped at 200 entries) |

SQLite WAL Mode Benefits

graph LR
    subgraph "Without WAL"
        W1[Writer] -->|blocks| R1[Reader]
    end

    subgraph "With WAL"
        W2[Writer] --- R2[Reader]
        Note["Concurrent reads<br/>during writes"]
    end

    style Note fill:#10b981,stroke:#34d399,color:#fff

Deployment Modes

Development

graph LR
    subgraph "Terminal"
        DEV["npm run dev<br/>(concurrently)"]
    end

    DEV --> SERVER["node --watch server/index.js<br/>Port 4820<br/>Auto-restart on changes"]
    DEV --> VITE["vite dev server<br/>Port 5173<br/>HMR, proxies /api + /ws to 4820"]
    BROWSER["Browser"] --> VITE
    VITE -->|proxy| SERVER

    style VITE fill:#646CFF,stroke:#818cf8,color:#fff
    style SERVER fill:#339933,stroke:#5cb85c,color:#fff

Production

graph LR
    BUILD["npm run build<br/>(vite build in client/)"] --> DIST["client/dist/<br/>Static files"]
    START["npm start"] --> SERVER["node server/index.js<br/>Port 4820"]
    SERVER -->|serves| DIST
    BROWSER["Browser"] --> SERVER

    style SERVER fill:#339933,stroke:#5cb85c,color:#fff
    style DIST fill:#646CFF,stroke:#818cf8,color:#fff
| Aspect | Development | Production |
| --- | --- | --- |
| Processes | 2 (Express + Vite) | 1 (Express) |
| Client | Vite HMR on :5173 | Static files from client/dist |
| API proxy | Vite proxies /api + /ws to :4820 | Same origin, no proxy needed |
| File watching | node --watch + Vite HMR | None |
| Source maps | Inline | External files |
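The dev-mode proxy row corresponds to a Vite server config along these lines. This is a hedged sketch of what such a `client/vite.config.ts` typically looks like, not the project's actual file — ports and paths follow the document, everything else is assumed.

```typescript
// client/vite.config.ts — illustrative sketch of the dev proxy
// described above; the real config may differ in detail.
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],
  server: {
    port: 5173,
    proxy: {
      // REST calls forwarded to the Express server
      "/api": "http://localhost:4820",
      // WebSocket upgrade requests need ws: true
      "/ws": { target: "ws://localhost:4820", ws: true },
    },
  },
});
```

In production this file is irrelevant: Express serves `client/dist` directly, so client and API share an origin and no proxy exists.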

MCP Sidecar (Optional)

The MCP server runs as a sidecar alongside the dashboard, connecting to the same API. It supports three transport modes:

graph LR
    subgraph "MCP Transports"
        M_STDIO["stdio\nnpm run mcp:start"]
        M_HTTP["HTTP+SSE\nnpm run mcp:start:http\n:8819"]
        M_REPL["REPL\nnpm run mcp:start:repl"]
    end

    HOST["MCP Host"] -->|"stdin/stdout"| M_STDIO
    RC["Remote Client"] -->|"POST /mcp · GET /sse"| M_HTTP
    OP["Operator"] -->|"interactive CLI"| M_REPL

    M_STDIO --> API["Dashboard API<br/>:4820"]
    M_HTTP --> API
    M_REPL --> API

    style M_STDIO fill:#0f766e,stroke:#14b8a6,color:#fff
    style M_HTTP fill:#0f766e,stroke:#14b8a6,color:#fff
    style M_REPL fill:#0f766e,stroke:#14b8a6,color:#fff
| Command | Purpose |
| --- | --- |
| npm run mcp:install | Install MCP package dependencies |
| npm run mcp:build | Compile MCP server to mcp/build/ |
| npm run mcp:start | Start MCP server (stdio, for MCP hosts) |
| npm run mcp:start:http | Start MCP HTTP+SSE server on port 8819 |
| npm run mcp:start:repl | Start interactive MCP REPL |
| npm run mcp:dev | Run MCP server in dev mode (stdio, tsx) |
| npm run mcp:dev:http | Run MCP HTTP server in dev mode (tsx) |
| npm run mcp:dev:repl | Run MCP REPL in dev mode (tsx) |
| npm run mcp:typecheck | Type-check MCP source |
| npm run mcp:docker:build | Build MCP container image with Docker |
| npm run mcp:podman:build | Build MCP container image with Podman |

Container (Docker / Podman)

A multi-stage Dockerfile builds the client and server into a single production image. Both Docker and Podman are fully supported — the image is OCI-compliant.

graph LR
    subgraph "Multi-Stage Build"
        S1["Stage 1: server-deps\nnode:22-alpine\nnpm ci --omit=dev"]
        S2["Stage 2: client-build\nnode:22-alpine\nnpm ci + vite build"]
        S3["Stage 3: runtime\nnode:22-alpine\nCopies node_modules + client/dist"]
        S1 --> S3
        S2 --> S3
    end

    subgraph "Container Runtime"
        VOL1["~/.claude (ro)\nlegacy session import"]
        VOL2["agent-monitor-data\nSQLite persistence"]
        S3 -->|"EXPOSE 4820"| SRV["node server/index.js\nport 4820"]
        VOL1 --> SRV
        VOL2 --> SRV
    end

    style S3 fill:#339933,stroke:#5cb85c,color:#fff
    style SRV fill:#6366f1,stroke:#818cf8,color:#fff

Usage:

# Docker Compose
docker compose up -d --build

# Podman Compose
CLAUDE_HOME="$HOME/.claude" podman compose up -d --build

# Plain Docker / Podman (equivalent)
docker build -t agent-monitor .
docker run -d -p 4820:4820 \
  -v "$HOME/.claude:/root/.claude:ro" \
  -v agent-monitor-data:/app/data \
  agent-monitor

Note

Claude Code hooks run on the host, not inside the container. The containerized server still receives hook events via HTTP on localhost:4820 — run npm run install-hooks on the host after the container is up.

Cloud Deployment

For production cloud deployments, the deployments/ directory provides enterprise-grade infrastructure supporting four cloud providers and multiple deployment strategies.

graph TB
  subgraph "Deployment Pipeline"
    direction LR
    CI["CI Pipeline<br/>Build · Test · Scan"] --> DEPLOY["Deployment<br/>Helm · Kustomize · Terraform"]
    DEPLOY --> VERIFY["Verification<br/>Health Check · Smoke Tests"]
    VERIFY -->|Fail| ROLLBACK["Rollback<br/>Instant Revert"]
  end

  subgraph "Infrastructure"
    direction TB
    subgraph "Compute"
      BLUE["Blue Slot<br/>Current Version"]
      GREEN["Green Slot<br/>New Version"]
    end
    LB["Load Balancer<br/>TLS 1.3 · WebSocket<br/>Weighted Routing"]
    PV["Persistent Storage<br/>Encrypted NFS"]
    MON["Monitoring<br/>Prometheus · Grafana<br/>13 Alert Rules"]
    OTEL["OTel Collector<br/>Coralogix"]
  end

  LB -->|"Active"| BLUE
  LB -.->|"Standby"| GREEN
  BLUE & GREEN --> PV
  MON -->|"Scrape"| BLUE & GREEN
  BLUE & GREEN -->|"logs + metrics + traces"| OTEL

  style BLUE fill:#2563eb,color:#fff
  style GREEN fill:#16a34a,color:#fff
  style LB fill:#7c3aed,color:#fff
  style CI fill:#2088ff,color:#fff
  style OTEL fill:#4f46e5,color:#fff
| Capability | Details |
| --- | --- |
| Cloud Providers | AWS (ECS Fargate + ALB), GCP (Cloud Run + GCLB), Azure (ACI + App Gateway), OCI (OKE + LBaaS) |
| Deployment Methods | Helm chart, Kustomize overlays, Terraform modules |
| Release Strategies | Rolling update, blue-green (instant switchover), canary (automated analysis) |
| Environments | Dev, staging, production with per-environment configuration |
| CI/CD | GitHub Actions and GitLab CI pipelines with Trivy security scanning |
| Observability | Prometheus scraping, 13 alert rules, Grafana dashboard (16 panels), Alertmanager routing, Coralogix full-stack observability (logs, metrics, traces, SLO tracking) via OpenTelemetry Collector |
| Operations | Scripts for deploy, rollback, blue-green switch, database backup/restore, teardown |
| Security | Restricted PSS, network policies, TLS enforcement, OIDC auth, no long-lived credentials |
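The weighted-routing idea behind canary releases can be illustrated with a deterministic bucketing function: hash a stable request key (for example a session id) into 0–99 and compare against the canary weight, so a given client always lands on the same slot. This is purely a conceptual sketch — real traffic splitting happens in the load balancer, not in application code.

```typescript
// Conceptual sketch of weighted blue-green/canary routing.
// The hash and the key choice are illustrative assumptions.
function routeSlot(requestKey: string, canaryWeightPct: number): "blue" | "green" {
  // Cheap deterministic string hash into the range 0–99
  let bucket = 0;
  for (const ch of requestKey) {
    bucket = (bucket * 31 + ch.charCodeAt(0)) % 100;
  }
  return bucket < canaryWeightPct ? "green" : "blue";
}
```

Raising the weight from 0 to 100 gradually shifts traffic from the blue slot to the green slot while keeping each client's slot assignment stable.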

Note

📘 Full guide: See DEPLOYMENT.md for step-by-step deployment instructions, and deployments/README.md for the infrastructure technical reference.


Statusline Utility

The statusline/ directory contains a standalone CLI statusline for Claude Code, separate from the web dashboard. It renders a color-coded bar at the bottom of the Claude Code terminal.

Data Flow

sequenceDiagram
    participant CC as Claude Code
    participant SH as statusline-command.sh
    participant PY as statusline.py
    participant GIT as git CLI

    CC->>SH: stdin (JSON payload)
    SH->>PY: Pipes stdin through
    PY->>PY: Parse JSON (model, cwd, context_window)
    PY->>GIT: git symbolic-ref --short HEAD
    GIT-->>PY: Branch name
    PY->>PY: Build ANSI-colored segments
    PY-->>CC: stdout (formatted statusline)

Segments

| Segment | Source | Color Logic |
| --- | --- | --- |
| Model | data.model.display_name | Always cyan |
| User | $USERNAME / $USER env var | Always green |
| Working Dir | data.workspace.current_dir | Always yellow, ~ prefix for home |
| Git Branch | git symbolic-ref --short HEAD | Always magenta, hidden outside git repos |
| Context Bar | data.context_window.used_percentage | Green < 50%, Yellow 50–79%, Red >= 80% |
| Token Counts | data.context_window.current_usage | Always dim; input, output, and cache-read counts |
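The Context Bar thresholds can be captured in a tiny function. The sketch below is in TypeScript for consistency with the other examples in this document; the actual implementation is Python (statusline.py), and the function name is an assumption.

```typescript
// Context-bar color thresholds from the table above (illustrative;
// the real logic lives in statusline.py).
function contextBarColor(usedPct: number): "green" | "yellow" | "red" {
  if (usedPct >= 80) return "red";    // context nearly exhausted
  if (usedPct >= 50) return "yellow"; // getting full
  return "green";                     // plenty of headroom
}
```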

Integration

The statusline is configured in ~/.claude/settings.json via the statusLine key:

{
  "statusLine": {
    "type": "command",
    "command": "bash \"/path/to/.claude/statusline-command.sh\""
  }
}

Claude Code invokes this command on each update, piping a JSON payload to stdin. The script reads the JSON, extracts fields, runs git for branch info, and prints ANSI-formatted output to stdout.

Design decisions:

  • Python 3.6+ -- available on virtually all systems, handles ANSI and JSON natively
  • No dependencies -- uses only stdlib (sys, json, os, subprocess)
  • Shell wrapper -- statusline-command.sh sets PYTHONUTF8=1 for Windows Unicode support and resolves the absolute path to the Python script
  • Fail-safe -- exits silently on empty input or JSON parse errors, never blocks Claude Code

Technology Choices

| Technology | Why This Over Alternatives |
| --- | --- |
| SQLite (via better-sqlite3 or built-in node:sqlite) | Zero-config, embedded, no server process. WAL mode gives concurrent reads. The synchronous API is simpler than async alternatives for this use case. Falls back to the Node.js built-in node:sqlite when better-sqlite3 cannot be compiled |
| Express | Battle-tested, minimal, well understood. Fastify would be overkill at this scale; the raw http module would be underkill |
| ws | The fastest, most lightweight WebSocket library for Node. No Socket.IO overhead needed since we only push JSON messages |
| React 18 | Stable, widely known, strong TypeScript support. No need for Server Components given this is a client-rendered SPA |
| Vite | Fast builds, native ESM, excellent dev experience. Proxy config handles the dev-server split cleanly |
| Tailwind CSS | Utility-first approach keeps styles colocated with markup. No CSS module boilerplate. Custom theme config for the dark UI |
| React Router 6 | Standard routing for React SPAs. Layout routes with <Outlet> give clean shell composition |
| Lucide React | Tree-shakeable icon library; only imports what's used (~20 icons) |
| TypeScript Strict | Catches null/undefined bugs at compile time. noUncheckedIndexedAccess prevents array-bounds issues |

Build & Run Targets

A root Makefile mirrors every npm script for developers who prefer make. Run make help for the full list.

make setup          Install all dependencies (root + client + MCP)
make dev            Start server + client in watch mode
make build          Build the React client for production
make start          Start the production server
make test           Run all tests (server + client)
make format         Format all files with Prettier
make mcp-build      Compile MCP TypeScript → JavaScript
make mcp-typecheck  Type-check MCP source without emitting
make docker-up      Start via docker-compose
make docker-down    Stop docker-compose stack

See Makefile for the complete set of 30 targets covering setup, dev, testing, formatting, MCP, data management, Codex extensions, and Docker/Podman workflows.