| name | sidecar |
|---|---|
| description | Spawn conversations with other LLMs (Gemini, GPT, ChatGPT, Codex, o3, DeepSeek, Qwen, Grok, Mistral, etc.) and fold results back into your context. TRIGGER when: user asks to talk to, chat with, use, call, or spawn another LLM or model; user mentions Gemini, GPT, ChatGPT, Codex, o3, DeepSeek, Claude (as a sidecar target), Qwen, Grok, Mistral, or any non-current model by name; user asks to get a second opinion from another model; user wants parallel exploration with a different model; user says "sidecar", "fork", or "fold". CRITICAL RULES: (1) ALWAYS launch sidecar CLI commands with Bash tool's run_in_background: true. Never run sidecar start/resume/continue in the foreground. (2) The fold summary returns on stdout when the user clicks Fold in the GUI or the headless agent finishes. Use TaskOutput to read it when the background task completes. (3) Use --prompt for the start command (NOT --briefing). --briefing is only for subagent spawn. (4) NEVER use o3 or o3-pro unless the user explicitly asks for it by name. These models are extremely expensive ($10-60+ per request). If the user asks for o3, warn them about the cost before proceeding. Default to gemini for most tasks. (5) When the user asks to query MULTIPLE LLMs simultaneously (e.g., "ask Gemini AND ChatGPT", "compare Gemini vs GPT"), ALWAYS use --no-ui (headless) for all of them unless the user explicitly requests interactive. Opening multiple Electron windows at once is disruptive. Launch them all in parallel with run_in_background: true. |
Spawn parallel conversations with different LLMs (Gemini, GPT, ChatGPT, Codex, o3, etc.) and fold results back into your context.
```shell
npm install -g claude-sidecar
```
Verify installation:
```shell
sidecar --version
```
Requirements:
- Node.js 18+
- API credentials (see Setup below)
- Electron (installed automatically as an optional dependency)
On install, an MCP server is auto-registered for Claude Cowork and Claude Desktop. If you're in an MCP-enabled environment, you can use sidecar_start, sidecar_status, sidecar_read, and other MCP tools directly instead of CLI commands. Call sidecar_guide for detailed usage instructions.
Sidecar uses the OpenCode SDK to communicate with LLM providers. You need to configure API credentials for your chosen provider(s).
OpenRouter provides unified access to many models (Gemini, GPT-4, Claude, o3, etc.) with a single API key.
Step 1: Get an OpenRouter API key
- Sign up at https://openrouter.ai
- Go to Keys → Create Key
- Copy your key (starts with `sk-or-v1-...`)
Step 2: Configure credentials
Create the auth file:
```shell
mkdir -p ~/.local/share/opencode
cat > ~/.local/share/opencode/auth.json << 'EOF'
{
  "openrouter": {
    "apiKey": "sk-or-v1-YOUR_KEY_HERE"
  }
}
EOF
```
Step 3: Verify setup
```shell
sidecar start --model gemini --prompt "Say hello" --no-ui
```
Model names with OpenRouter:
When using OpenRouter, prefix the model with `openrouter/`:
```shell
sidecar start --model gemini --prompt "..."
sidecar start --model gpt --prompt "..."
sidecar start --model claude --prompt "..."
```
Direct API keys: Use this if you have API keys directly from Google, OpenAI, or Anthropic.
For Google Gemini:
```shell
export GOOGLE_GENERATIVE_AI_API_KEY=your-google-api-key
```
For OpenAI:
```shell
export OPENAI_API_KEY=your-openai-api-key
```
For Anthropic:
```shell
export ANTHROPIC_API_KEY=your-anthropic-api-key
```
Add these to your shell profile (~/.bashrc, ~/.zshrc) for persistence.
zsh users: ~/.zshrc is only sourced by interactive shells. If you use sidecar from Claude Code, CI, or scripts, either:
- Run `sidecar setup` to store keys in sidecar's config (recommended)
- Move your exports to `~/.zshenv` (sourced by all zsh shell types)
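For instance, a minimal `~/.zshenv` (key values illustrative) that makes the exports visible to non-interactive shells:

```shell
# ~/.zshenv is sourced by every zsh invocation, including non-interactive ones
export GOOGLE_GENERATIVE_AI_API_KEY=your-google-api-key
export OPENAI_API_KEY=your-openai-api-key
```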
Model names with direct API keys:
When using direct API keys, use the `provider/model` format WITHOUT the `openrouter/` prefix:
```shell
sidecar start --model google/<model-name> --prompt "..."
sidecar start --model openai/<model-name> --prompt "..."
sidecar start --model anthropic/<model-name> --prompt "..."
```

| Provider Access | Model Name Format | Example |
|---|---|---|
| OpenRouter | `openrouter/provider/model` | `openrouter/google/<model-name>` |
| Direct Google API | `google/model` | `google/<model-name>` |
| Direct OpenAI API | `openai/model` | `openai/<model-name>` |
| Direct Anthropic API | `anthropic/model` | `anthropic/<model-name>` |

Important: The model name format tells the SDK which authentication to use:
- `openrouter/...` → uses the OpenRouter API key from auth.json
- `google/...` → uses the `GOOGLE_GENERATIVE_AI_API_KEY` environment variable
- `openai/...` → uses the `OPENAI_API_KEY` environment variable
- `anthropic/...` → uses the `ANTHROPIC_API_KEY` environment variable
DO spawn a sidecar when:
- Task benefits from a different model's strengths (Gemini's large context, o3's reasoning)
- Deep exploration that would pollute your main context
- User explicitly requests a different model
- Parallel investigation while you continue other work
- Research/comparison tasks that generate verbose output
DON'T spawn a sidecar when:
- Simple task you can handle directly
- User wants to stay in the current conversation
- Task requires your specific context that's hard to transfer
Chat mode (default) — no `--agent` flag needed. Reads are auto-approved; writes and bash commands require user permission in the Electron UI:
```shell
# Default — good for questions, analysis, and guided work
sidecar start --model gemini --prompt "Analyze the auth flow and suggest improvements"
```
Plan mode — fully read-only, no file modifications possible:
```shell
# Strict read-only for deep analysis
sidecar start --model gemini --prompt "Review the codebase architecture" --agent Plan
```
Build mode — full tool access, all operations auto-approved:
```shell
# Only when offloading development tasks
sidecar start --model gemini --prompt "Implement the login feature" --agent Build
```
When to use each mode:
| Mode | Use When |
|---|---|
| Chat (default) | Questions, analysis, guided exploration — you control what gets written |
| Plan | Comprehensive read-only analysis where no changes should happen |
| Build | Offloading implementation tasks where full autonomy is desired |
```shell
sidecar start \
  --model <provider/model> \
  --prompt "<task description>" \
  --session-id <your-session-id>
```
Required:
- `--model`: The model to use (see Models below)
- `--prompt`: Detailed task description you generate

Recommended:
- `--session-id`: Your Claude Code session ID for accurate context passing
Optional:
- `--no-ui`: Run autonomously without GUI (for bulk tasks)
- `--no-context`: Skip parent conversation history. Use when the briefing is self-contained and includes all necessary file paths, code snippets, and criteria. Default: context is included.
- `--timeout <min>`: Headless timeout in minutes (default: 15)
- `--context-turns <N>`: Max conversation turns to include (default: 50)
- `--context-since <duration>`: Time filter for context (e.g., `2h`, `30m`, `1d`). Overrides `--context-turns`.
- `--context-max-tokens <N>`: Max context size in tokens (default: 80000)
- `--thinking <level>`: Model thinking/reasoning effort level:
  - `none`: No extended thinking
  - `minimal`: Minimal thinking
  - `low`: Low thinking effort
  - `medium`: Medium thinking effort (default)
  - `high`: High thinking effort
  - `xhigh`: Extra high thinking effort

  Note: If the model doesn't support the specified level, it is automatically adjusted.
- `--summary-length <length>`: Summary verbosity: `brief` (concise), `normal` (standard, default), `verbose` (detailed)
- `--mcp <spec>`: Add an MCP server for enhanced tool access. Formats:
  - `name=url`: Remote MCP server (e.g., `--mcp "db=postgresql://localhost:5432/mydb"`)
  - `name=command`: Local MCP server (spawns a process)
- `--mcp-config <path>`: Path to an opencode.json file with MCP server configuration. Alternative to `--mcp` for complex setups.
- `--client <type>`: Client entry point (`code-local`, `code-web`, `cowork`). Affects system prompt personality. Default: `code-local`.
- `--agent <agent>`: Agent mode (controls tool permissions). If omitted, defaults to Chat.

Primary agents (for `sidecar start`):
- `Chat` (default): Reads auto-approved; writes/bash require user permission
- `Plan`: Read-only mode; no file modifications possible
- `Build`: Full tool access; all operations auto-approved

Subagents (for `sidecar subagent spawn`):
- `Explore`: Read-only subagent, for codebase exploration
- `General`: Full-access subagent, for research requiring file writes

Custom agents: custom agents defined in `~/.config/opencode/agents/` or `.opencode/agents/` are passed through directly.
The CLI validates all inputs before launching the sidecar. Invalid inputs fail immediately with clear error messages - no Electron window will open.
Required inputs (will error if invalid):

| Input | Validation | Error Message |
|---|---|---|
| `--model` | Must be present, format: `provider/model` | `Error: --model is required` or `Error: --model must be in format provider/model` |
| `--prompt` | Must be present and non-empty | `Error: --prompt is required` or `Error: --prompt cannot be empty or whitespace-only` |
| `--cwd` | If provided, directory must exist | `Error: --cwd path does not exist: <path>` |
| `--session-id` | If an explicit ID is provided (not `current`), it must exist | `Error: --session-id '<id>' not found. Use 'sidecar list' to see available sessions or omit --session-id for most recent.` |
| `--agent` | If provided, must be non-empty | `Error: --agent cannot be empty` |
| `--timeout` | Must be a positive number | `Error: --timeout must be a positive number` |
| `--context-turns` | Must be a positive number | `Error: --context-turns must be a positive number` |
| `--context-since` | Must match format: `30m`, `2h`, `1d` | `Error: --context-since must be in format: 30m, 2h, or 1d` |
| API Key | Must be set for the model's provider | `Error: <KEY_NAME> environment variable is required for <Provider> models` |
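Before shelling out, a caller can pre-validate a `--context-since` value against the same `30m`/`2h`/`1d` shape the CLI enforces. A minimal sketch (the helper name is ours, not part of sidecar):

```shell
# Accepts <digits><m|h|d>, e.g. 30m, 2h, 1d; rejects anything else
is_valid_since() {
  case "$1" in
    ""|*[!0-9mhd]*) return 1 ;;   # reject empty strings and stray characters
  esac
  # must be digits followed by exactly one unit suffix
  printf '%s' "$1" | grep -Eq '^[0-9]+[mhd]$'
}

is_valid_since 2h && echo "2h is valid"
```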
API Key Requirements by Provider:

| Model Prefix | Required Env Var | Example |
|---|---|---|
| `openrouter/...` | `OPENROUTER_API_KEY` | `export OPENROUTER_API_KEY=sk-or-...` |
| `google/...` | `GOOGLE_GENERATIVE_AI_API_KEY` | `export GOOGLE_GENERATIVE_AI_API_KEY=...` |
| `openai/...` | `OPENAI_API_KEY` | `export OPENAI_API_KEY=sk-...` |
| `anthropic/...` | `ANTHROPIC_API_KEY` | `export ANTHROPIC_API_KEY=sk-ant-...` |
| `deepseek/...` | `DEEPSEEK_API_KEY` | `export DEEPSEEK_API_KEY=...` |
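As a quick preflight, the table above can be mirrored in a small helper that reports which variable a given model string needs (a sketch; the function is ours, not a sidecar command, and the model id in the usage line is a placeholder):

```shell
# Map a model string's prefix to the env var its provider requires
required_key() {
  case "$1" in
    openrouter/*) echo OPENROUTER_API_KEY ;;
    google/*)     echo GOOGLE_GENERATIVE_AI_API_KEY ;;
    openai/*)     echo OPENAI_API_KEY ;;
    anthropic/*)  echo ANTHROPIC_API_KEY ;;
    deepseek/*)   echo DEEPSEEK_API_KEY ;;
    *)            echo "unknown provider prefix" >&2; return 1 ;;
  esac
}

key="$(required_key "openrouter/google/some-model")"  # placeholder model id
[ -n "$(printenv "$key")" ] || echo "missing $key"
```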
Handling validation errors:
If you receive a validation error, fix the input and retry:
```shell
# Error: --session-id 'abc123' not found
# Fix: Use 'current' or omit --session-id
sidecar start --model gemini --prompt "Task" --session-id current

# Error: --agent cannot be empty
# Fix: Use a valid OpenCode agent
sidecar start --model gemini --prompt "Task" --agent Build

# Error: --prompt cannot be empty
# Fix: Provide a non-empty briefing
sidecar start --model gemini --prompt "Detailed task description"

# Error: OPENROUTER_API_KEY environment variable is required
# Fix: Set the API key for your provider
export OPENROUTER_API_KEY=sk-or-your-key
sidecar start --model gemini --prompt "Task"
```
List existing sidecar sessions:
```shell
sidecar list
sidecar list --status complete
sidecar list --all   # All projects
sidecar list --json  # Output as JSON
```
Optional:
- `--status <filter>`: Filter by status (`running`, `complete`)
- `--all`: Show sessions from all projects
- `--json`: Output as JSON (for programmatic use)
- `--cwd <path>`: Project directory (default: current directory)
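If you script against `sidecar list --json`, task IDs can be pulled out without extra tooling. This sketch assumes the JSON contains `"id": "..."` fields; verify the actual schema against your sidecar version before relying on it:

```shell
# Extract values of "id" fields from JSON on stdin (schema assumed, not guaranteed)
extract_ids() {
  grep -o '"id"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/.*"\([^"]*\)"$/\1/'
}
# usage: sidecar list --json | extract_ids
```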
```shell
sidecar resume <task_id>
```
Reopens a previous session with full conversation history. The sidecar continues in the same OpenCode session — all previous messages and tool state are preserved.
Use resume when: You want to pick up exactly where you left off (e.g., re-examine findings, ask follow-up questions in the same conversation).
Optional:
- `--no-ui`: Continue the session in autonomous mode
- `--timeout <minutes>`: Timeout for headless mode (default: 15)
- `--cwd <path>`: Project directory (default: current directory)
```shell
sidecar continue <task_id> --prompt "<new task>"
```
Starts a new sidecar session that inherits the old session's conversation as context. The previous session's messages become read-only background context for the new task.
Use continue when: You want to build on previous findings with a new task or different model (e.g., "Now implement the fix from the previous analysis").
Required:
- `--prompt`: New task description for the continuation

Optional:
- `--model <model>`: Override model (defaults to the original session's model)
- `--context-turns <N>`: Max turns from the previous session to include (default: 50)
- `--context-max-tokens <N>`: Max tokens for context (default: 80000)
- `--no-ui`: Run in autonomous mode
- `--timeout <minutes>`: Timeout for headless mode (default: 15)
- `--cwd <path>`: Project directory (default: current directory)
```shell
sidecar read <task_id>                 # Show summary
sidecar read <task_id> --conversation  # Show full conversation
sidecar read <task_id> --metadata      # Show session metadata
```
Optional:
- `--summary`: Show summary (default if no option specified)
- `--conversation`: Show full conversation history
- `--metadata`: Show session metadata (model, agent, timestamps, etc.)
- `--cwd <path>`: Project directory (default: current directory)
🚧 Planned Feature: Subagent commands are documented for future reference but are not yet implemented in the current CLI. Running these commands will result in "Unknown command: subagent" errors. This section describes the planned API for when the feature is released.
Spawn and manage subagents within a sidecar session. Subagents run in parallel with the main session.
```shell
sidecar subagent spawn \
  --parent <sidecar-task-id> \
  --agent <General|Explore> \
  --prompt "<task description>"
```
Required:
- `--parent`: The task ID of the parent sidecar session
- `--agent`: Subagent type: `General` (full access) or `Explore` (read-only)
- `--prompt`: Task description for the subagent
Example:
```shell
sidecar subagent spawn --parent abc123 --agent Explore --prompt "Find all API endpoints in src/"
sidecar subagent spawn --parent abc123 --agent General --prompt "Research authentication patterns"
```
```shell
sidecar subagent list --parent <sidecar-task-id>
sidecar subagent list --parent abc123 --status running
sidecar subagent list --parent abc123 --status completed
```
```shell
sidecar subagent read <subagent-id>                 # Show summary
sidecar subagent read <subagent-id> --conversation  # Show full conversation
```
Use short aliases (run `sidecar guide` to see all available aliases and their current model IDs):
- `--model gemini` - Google Gemini (fast, large context)
- `--model opus` - Claude Opus (deep analysis)
- `--model gpt` - OpenAI GPT
- `--model deepseek` - DeepSeek
- Omit `--model` entirely to use your configured default

Full model strings also work: `--model openrouter/provider/model-id`
Note: Model names change frequently as providers release new versions. To verify current model names:
```shell
# List available OpenRouter models
curl https://openrouter.ai/api/v1/models | jq '.data[].id' | grep -i gemini

# Or check the OpenRouter website
# https://openrouter.ai/models
```
Always verify model names before using them in production scripts.
For reliable context passing, provide your session ID:
```shell
sidecar start --session-id "a1b2c3d4-..." --model ... --prompt ...
```
How to find your session ID:
Your conversations are stored in:
```
~/.claude/projects/[encoded-project-path]/[session-id].jsonl
```
The encoded path replaces `/`, `\`, and `_` with `-`. For example:
- Project: `/Users/john/myproject`
- Encoded: `-Users-john-myproject`
- Full path: `~/.claude/projects/-Users-john-myproject/`

List session files to find yours:
```shell
ls -lt ~/.claude/projects/-Users-john-myproject/*.jsonl | head -5
```
The most recently modified file is likely your current session. Extract the UUID from the filename.
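The lookup above can be wrapped in two small helpers: one that encodes a project path the way `~/.claude/projects` does, and one that picks the newest session file. A sketch under those assumptions (the function names are ours):

```shell
# Encode a project path: '/', '\', and '_' all become '-'
encode_project_path() {
  printf '%s' "$1" | tr '/\\_' '-'
}

# Newest session UUID in a given project directory (filename minus .jsonl)
latest_session_id() {
  basename "$(ls -t "$1"/*.jsonl 2>/dev/null | head -1)" .jsonl
}

# Example (adjust to your project):
# dir=~/.claude/projects/"$(encode_project_path "$PWD")"
# latest_session_id "$dir"
```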
Session ID behavior:
- Omit `--session-id` or use `--session-id current`: uses the most recently modified session file (less reliable if multiple sessions are active)
- Explicit session ID (`--session-id abc123-def456`): must exist, or the command fails immediately with `Error: --session-id 'abc123-def456' not found`

If you get a session-not-found error:
- List available sessions: `sidecar list`
- Use one of the listed session IDs, OR
- Omit `--session-id` to use the most recent session
You create the briefing—it should be a comprehensive handoff document:
```markdown
## Task Briefing

**Objective:** [One-line goal]

**Background:** [What led to this task, relevant context]

**What's been tried:** [Previous attempts, if any]

**Files of interest:**
- path/to/relevant/file.ts
- path/to/another/file.ts

**Success criteria:** [How to know when done]

**Constraints:** [Time limits, scope limits, things to avoid]
```
Example:
```shell
sidecar start \
  --model gemini-pro \
  --session-id "abc123-def456" \
  --prompt "## Task Briefing
**Objective:** Debug the intermittent 401 errors on mobile
**Background:** Users report sporadic auth failures. Server logs show
token refresh race conditions. I suspect TokenManager.ts.
**Files of interest:**
- src/auth/TokenManager.ts (main suspect)
- src/api/client.ts (where tokens are used)
- logs/auth-errors-2025-01-25.txt
**Success criteria:** Identify root cause and propose fix
**Constraints:** Focus on auth flow only, don't refactor unrelated code"
```
By default, sidecar includes your parent conversation history (up to 80k tokens). Skip this with `--no-context` when the task is self-contained.
Include context (the default) when:
- Task references something from the current conversation ("that bug", "the approach you suggested")
- Fact checking, second opinions, or code review of recent work
- "Does this look right?" or validation requests
- Continuing a debugging thread

Skip context (`--no-context`) when:
- Greenfield tasks with explicit file paths and instructions
- General knowledge or research questions
- Tasks fully described in the briefing (files, criteria, constraints all specified)
- Independent analysis unrelated to the current conversation
```shell
sidecar start \
  --model gemini \
  --no-context \
  --prompt "## Task Briefing
**Objective:** Add retry logic with exponential backoff to the HTTP client
**Files to read:**
- src/api/client.ts (current implementation)
- src/utils/retry.ts (existing retry utility, if any)
**Success criteria:**
- Retries up to 3 times on 5xx errors
- Exponential backoff: 1s, 2s, 4s
- No retry on 4xx errors
- Add unit tests
**Constraints:** Don't modify the public API surface"
```
Sidecar uses OpenCode's agent framework with three primary modes and two subagent types:
| Agent | Reads | Writes/Edits | Bash | Default |
|---|---|---|---|---|
| Chat | auto | asks permission | asks permission | Yes |
| Plan | auto | denied | denied | No |
| Build | auto | auto | auto | No |
Conversational mode — reads are auto-approved, writes and bash commands prompt for user permission in the UI. This is the default when no --agent flag is provided.
```shell
# These are equivalent — Chat is the default
sidecar start --model gemini --prompt "Analyze the auth flow"
sidecar start --model gemini --prompt "Analyze the auth flow" --agent Chat
```
Use Chat agent when:
- Asking questions or requesting analysis
- You want to review and approve any file changes
- Interactive exploration where the model might suggest edits
- Any task where you want human-in-the-loop control over writes
Strict read-only mode — file modifications are completely blocked:
```shell
sidecar start --model gemini --prompt "Review the codebase architecture" --agent Plan
```
Use Plan agent when:
- Comprehensive analysis where no changes should happen
- Code review and security audits
- Architecture exploration
Full autonomous access — all operations auto-approved:
```shell
sidecar start --model gemini --prompt "Implement the login feature" --agent Build
```
Use Build agent when:
- Offloading development tasks to the sidecar
- User explicitly requests implementation ("implement", "fix", "write", "create")
- Headless batch operations (test generation, linting, etc.)
These agents are spawned from within a sidecar session using sidecar subagent spawn:
Full-access subagent for research and parallel tasks:
- Same capabilities as Build agent
- Used for spawning parallel work within a session
```shell
sidecar subagent spawn --parent abc123 --agent General --prompt "Research auth patterns"
```
Read-only subagent for codebase exploration:
- Optimized for searching and understanding code
- Read-only access (no writes, no bash)
```shell
sidecar subagent spawn --parent abc123 --agent Explore --prompt "Find all API endpoints"
```
Important: When using `sidecar start`, use Chat, Plan, or Build. When using `sidecar subagent spawn`, use General or Explore.
- Opens a GUI window
- User can converse with the sidecar
- Model Picker: Click the model name in the input area to switch models mid-conversation
- Click FOLD when done to generate summary
- Summary returns to your context via stdout
Use for: Debugging, exploration, architectural discussions
Mid-Conversation Model Switching: In interactive mode, you can change models without restarting:
- Click the model dropdown (shows current model name)
- Select a different model from the categorized list
- A system message confirms the switch
- Subsequent messages use the new model
This is useful when you want to:
- Start fast with Flash, then switch to Pro for complex analysis
- Try a different model's perspective on a problem
- Use reasoning models (o3-mini) for specific parts of the task
- Runs autonomously, no GUI
- Agent works until done or timeout
- Summary returns automatically
- Default agent is `build` — the `chat` agent requires interactive UI and will stall in headless mode
- Always use headless when spawning multiple sidecars at once (see Multi-LLM rule below)
Multi-LLM Rule: When the user asks to query two or more LLMs simultaneously (e.g., "ask Gemini and ChatGPT", "compare three models"), use --no-ui for all of them. Launch them in parallel with run_in_background: true. Only switch to interactive if the user explicitly asks.
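When driving this from a plain shell rather than the Bash tool, the same fan-out can be sketched with job control. This helper is illustrative only, not a sidecar feature; the commented usage assumes your API keys are already configured:

```shell
# Run each argument as a command in the background, then wait for all of them
fanout() {
  for cmd in "$@"; do
    bash -c "$cmd" &
  done
  wait
}

# Illustrative: two headless sidecars in parallel, outputs captured per model
# fanout 'sidecar start --model gemini --prompt "..." --no-ui > gemini.out 2>&1' \
#        'sidecar start --model gpt    --prompt "..." --no-ui > gpt.out 2>&1'
```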
Agent Headless Compatibility:
| Agent | Headless Safe | Notes |
|---|---|---|
| `build` | Yes | Default for `--no-ui` — full autonomous access |
| `plan` | Yes | Read-only analysis |
| `explore` | Yes | Read-only codebase exploration |
| `general` | Yes | Full-access subagent |
| `chat` | No | Blocked — requires interactive mode for write permissions |

```shell
# Error: chat + headless
sidecar start --model gemini --prompt "..." --agent chat --no-ui
# → Error: --agent chat requires interactive mode (remove --no-ui or use --agent build)
```
Use for: Bulk tasks, test generation, documentation, linting
```shell
sidecar start \
  --model gemini \
  --prompt "Generate unit tests for src/utils/. Use Jest." \
  --no-ui \
  --timeout 20
```
ALWAYS run sidecar commands in the background. Use the Bash tool's `run_in_background: true` parameter for every `sidecar start`, `sidecar resume`, and `sidecar continue` invocation. This ensures:
- No timeout ceiling — tasks can run for the full 15+ minutes
- You can continue working while the sidecar runs
- You'll be automatically notified when it completes
Example invocation pattern:
```
Bash tool:
  command: "sidecar start --model gemini --prompt '...' --no-ui"
  run_in_background: true
```
After launching, tell the user:
"Sidecar is running in the background. I'll share the results when it completes."
When the background task completes, you will be automatically notified. Use the TaskOutput tool with the task ID to read the sidecar's summary output, then present it to the user. Do NOT poll or sleep — the notification arrives automatically.
Interactive mode note: When running without --no-ui, the Electron GUI opens in a separate window. Backgrounding frees the terminal but does not prevent the user from interacting with the GUI window.
Important: Warn users about potential file conflicts:
"I recommend committing your current changes before the sidecar completes, in case there are file conflicts."
The summary includes:
```markdown
## Sidecar Results: [Title]

📍 **Context Age:** [How stale the context might be]
⚠️ **FILE CONFLICT WARNING** [If files were modified externally]

**Task:** [What was requested]
**Findings:** [Key discoveries]
**Attempted Approaches:** [What was tried but didn't work]
**Recommendations:** [Suggested actions]
**Code Changes:** [Specific code with paths]
**Files Modified:** [List of files]
**Assumptions Made:** [Things to verify]
**Open Questions:** [Remaining uncertainties]
```
Act on the summary:
- Implement recommended fixes
- Verify assumptions listed
- Don't repeat failed approaches
- Review files with conflict warnings carefully
When sidecar emits [SIDECAR_CONFIG_UPDATE] on stderr, the user's model aliases have changed (via sidecar setup). The message contains a hash comment and a markdown alias table.
Action required:
- Find the "Model Aliases" section in the project's CLAUDE.md
- Replace that section's content with the block provided in the message
- If no "Model Aliases" section exists, add one after the title line
- Do not remove the `<!-- sidecar-config-hash: ... -->` comment — it tracks config changes and prevents duplicate notifications
The update is purely informational (alias mappings) and safe to apply without user confirmation.
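If you capture the sidecar's stderr to a file, detecting the notice is a simple grep. A hedged sketch (the helper name and log filename are ours; only the `[SIDECAR_CONFIG_UPDATE]` tag comes from the docs above):

```shell
# After capturing stderr, e.g.: sidecar start ... 2> sidecar-stderr.log
check_config_update() {
  [ -f "$1" ] || return 0
  if grep -q '\[SIDECAR_CONFIG_UPDATE\]' "$1"; then
    echo "model aliases changed: update the Model Aliases section in CLAUDE.md"
  fi
}

check_config_update sidecar-stderr.log
```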
Before spawning a new sidecar, check if relevant work exists:
```shell
sidecar list
```
If a relevant sidecar exists:
- Read its findings: `sidecar read <id>`
- Reopen it: `sidecar resume <id>`
- Build on it: `sidecar continue <id> --prompt "..."`
Ask the user if you're unsure whether to resume or start fresh.
```shell
# Default Chat mode — can read freely, asks before writing
sidecar start \
  --model gpt \
  --session-id "$(basename "$(ls -t ~/.claude/projects/-Users-john-myproject/*.jsonl | head -1)" .jsonl)" \
  --prompt "## Debug Memory Leak
**Objective:** Find the source of memory growth in the worker process
**Background:** Memory usage grows 50MB/hour. Heap snapshots show retained
closures but I can't identify the source.
**Files of interest:**
- src/workers/processor.ts
- src/cache/lru.ts
**Success criteria:** Identify the leak and propose fix"
```
```shell
# Build mode is appropriate here because the user explicitly requested file creation
sidecar start \
  --model gemini \
  --agent Build \
  --prompt "Generate comprehensive Jest tests for all exported functions
in src/utils/. Include edge cases. Write to tests/utils/." \
  --no-ui \
  --timeout 15
```
```shell
# Plan mode for strict read-only review
sidecar start \
  --model gemini \
  --agent Plan \
  --prompt "Review the authentication flow for security issues.
Focus on: token handling, session management, CSRF protection.
Analyze and report findings."
```
```shell
# First, start a sidecar (defaults to Chat mode)
sidecar start --model gemini --prompt "Debug auth issues"
# Output: Started sidecar with task ID: abc123

# Spawn an Explore subagent for codebase search
sidecar subagent spawn \
  --parent abc123 \
  --agent Explore \
  --prompt "Find all database queries and list which files they're in"

# Spawn a General subagent for parallel research
sidecar subagent spawn \
  --parent abc123 \
  --agent General \
  --prompt "Research best practices for JWT token refresh"

# Check subagent status
sidecar subagent list --parent abc123
```
```shell
# First, check what exists
sidecar list

# Read what was found
sidecar read abc123

# Continue with a follow-up task
sidecar continue abc123 \
  --model gpt \
  --prompt "Implement the fix recommended in the previous session.
The mutex approach looks correct. Add tests."
```
API keys in `~/.zshrc` are not available in non-interactive shells. Resolution order: `process.env` > `~/.config/sidecar/.env` > `~/.local/share/opencode/auth.json` (first wins). Fix:
- Run `sidecar setup` (stores keys in `~/.config/sidecar/.env`)
- Or move exports to `~/.zshenv`
- Or add credentials to `~/.local/share/opencode/auth.json`
Your project path encoding may not match. Check:
```shell
ls ~/.claude/projects/
```
Find the correct encoded path for your project. Remember that `/`, `\`, and `_` are all converted to `-`.
You have multiple Claude Code windows. Pass `--session-id` explicitly:
```shell
ls -lt ~/.claude/projects/[your-path]/*.jsonl | head -3
# Pick the correct session UUID
```
Check that API credentials are configured:
For OpenRouter:
```shell
cat ~/.local/share/opencode/auth.json
# Should contain: {"openrouter": {"apiKey": "sk-or-v1-..."}}
```
For direct API keys:
```shell
echo $GOOGLE_GENERATIVE_AI_API_KEY  # For Google models
echo $OPENAI_API_KEY                # For OpenAI models
echo $ANTHROPIC_API_KEY             # For Anthropic models
```
- Verify you're using the correct model name format:
  - OpenRouter models: `openrouter/provider/model`
  - Direct API models: `provider/model`
- Check that your API key is valid and has credits
- For OpenRouter, ensure auth.json exists:
  ```shell
  cat ~/.local/share/opencode/auth.json
  ```
- Increase timeout: `--timeout 30`
- Enable debug logging: `LOG_LEVEL=debug sidecar start ...`
Debug output may be leaking to stdout. Check for console.log statements if you've modified the sidecar code. All logging should go to stderr via the structured logger.
"Error: --prompt cannot be empty or whitespace-only"
The briefing must contain actual content:
```shell
# Wrong
sidecar start --model gemini --prompt ""
sidecar start --model gemini --prompt "   "

# Right
sidecar start --model gemini --prompt "Debug the auth issue in TokenManager.ts"
```
"Error: --session-id '<id>' not found"
The explicit session ID doesn't exist. Either:
- Use `sidecar list` to find valid session IDs
- Omit `--session-id` to use the most recent session
- Use `--session-id current` for automatic resolution

```shell
# Find valid sessions
sidecar list

# Use the most recent session
sidecar start --model gemini --prompt "Task"
```
"Error: --cwd path does not exist"
The specified project directory doesn't exist:
```shell
# Wrong
sidecar start --model ... --prompt "..." --cwd /nonexistent/path

# Right - use the current directory
sidecar start --model ... --prompt "..." --cwd .

# Right - use a full path
sidecar start --model ... --prompt "..." --cwd /Users/john/myproject
```
"Error: --agent cannot be empty"
The agent name cannot be empty. Use an OpenCode native agent or a custom agent:
```shell
# Wrong - empty agent
sidecar start --model ... --prompt "..." --agent ""

# Right - use an OpenCode native agent
sidecar start --model ... --prompt "..." --agent Explore

# Right - use a custom agent (defined in ~/.config/opencode/agents/)
sidecar start --model ... --prompt "..." --agent MyCustomAgent
```
"Error: <KEY_NAME> environment variable is required for <Provider> models"
The API key for the model's provider is not set:
```shell
# For OpenRouter models (openrouter/...)
export OPENROUTER_API_KEY=sk-or-your-key

# For Google models (google/...)
export GOOGLE_GENERATIVE_AI_API_KEY=your-google-key

# For OpenAI models (openai/...)
export OPENAI_API_KEY=sk-your-openai-key

# For Anthropic models (anthropic/...)
export ANTHROPIC_API_KEY=sk-ant-your-key

# For DeepSeek models (deepseek/...)
export DEEPSEEK_API_KEY=your-deepseek-key

# Then retry
sidecar start --model gemini --prompt "Task"
```
Quick start:
1. Install sidecar: `npm install -g claude-sidecar`
2. Configure API access (choose one):
   - OpenRouter: create `~/.local/share/opencode/auth.json` with your key
   - Direct API: set the environment variable (`GOOGLE_GENERATIVE_AI_API_KEY`, etc.)
3. Test sidecar: `sidecar start --model <your-model> --prompt "Hello" --no-ui`