feat: Auto Claude's MCP contribution. RDR lets Claude and Auto-Claude finish your tasks in your absence; use MCP to create batches of tasks (#1855)
Conversation
Phase 4: Crash Detection
- Modified agent-process.ts exit handler to detect crashes
- Crash = non-zero exit code + signal !== 'SIGTERM'
- Writes .restart-requested marker on crash
- Triggers buildAndRestart after 5s delay if autoRestartOnFailure enabled

Phase 5: MCP Tool Integration
- Added trigger_auto_restart MCP tool to mcp-server/index.ts
- Tool accepts reason (prompt_loop|crash|manual|error) and optional buildCommand
- Checks if feature is enabled before triggering
- Saves running task state before restart
- Returns success/error status to caller

With Phases 1-6 complete, Auto-Claude can now:
- Detect prompt loops via Claude Code hook
- Detect crashes via process exit monitoring
- Rebuild and restart automatically
- Resume running tasks after restart
- Rate limit restarts (3/hour, 5min cooldown)
- Be controlled via MCP for manual restart
- Run unattended overnight with auto-shutdown integration

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
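The crash test and restart budget described above can be sketched as follows. This is an illustrative Python model of the TypeScript logic (the class and function names are invented for this sketch), not the actual agent-process.ts implementation.

```python
import time
from collections import deque

def is_crash(exit_code, signal):
    """Crash = non-zero exit code AND not a deliberate SIGTERM shutdown."""
    return exit_code != 0 and signal != "SIGTERM"

class RestartRateLimiter:
    """Sliding-window restart budget: 3 restarts/hour, 5-minute cooldown."""

    def __init__(self, max_per_hour=3, cooldown_seconds=300):
        self.max_per_hour = max_per_hour
        self.cooldown_seconds = cooldown_seconds
        self.restart_times = deque()

    def allow_restart(self, now=None):
        now = time.time() if now is None else now
        # Drop restarts older than one hour from the sliding window.
        while self.restart_times and now - self.restart_times[0] > 3600:
            self.restart_times.popleft()
        if self.restart_times and now - self.restart_times[-1] < self.cooldown_seconds:
            return False  # still in cooldown after the most recent restart
        if len(self.restart_times) >= self.max_per_hour:
            return False  # hourly budget exhausted
        self.restart_times.append(now)
        return True
```

With these two checks, a crashed process only triggers buildAndRestart while the budget allows it; further crashes fall through to manual recovery.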
Build error fix:
- Changed restart-handlers.ts to receive agentManager as parameter
- Updated registerRestartHandlers() and checkAndHandleRestart() signatures
- Modified calls in index.ts and ipc-handlers/index.ts to pass agentManager
- Removed saveRestartState calls from agent-process.ts and mcp-server/index.ts
(they don't have access to agentManager, task state will be saved on startup)
Why this was needed:
- restart-handlers.ts tried to import { agentManager } but it's not exported
- AgentManager is instantiated as local var in main/index.ts
- Other IPC handlers receive agentManager as parameter (standard pattern)
Build now succeeds ✅
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
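The dependency-injection pattern this fix adopts (passing agentManager into handler registration instead of importing a module-level singleton) can be sketched like this. Illustrative Python mirroring the TypeScript names; not the real code.

```python
class AgentManager:
    """Stand-in for the manager instantiated locally in main/index.ts."""

    def __init__(self):
        self.running_tasks = []

    def save_restart_state(self):
        return {"tasks": list(self.running_tasks)}

def register_restart_handlers(agent_manager):
    """Handlers close over the injected manager rather than importing it."""
    def check_and_handle_restart():
        return agent_manager.save_restart_state()
    return {"check_and_handle_restart": check_and_handle_restart}

# Wiring at the entry point (the equivalent of main/index.ts):
manager = AgentManager()
manager.running_tasks.append("task-073")
handlers = register_restart_handlers(manager)
state = handlers["check_and_handle_restart"]()
```

Because only the entry point owns the instance, every consumer receives it explicitly, which is the "standard pattern" the commit refers to.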
Changes:
- Added CV Project path (C:\Users\topem\Desktop\CV Project) to CLAUDE.md
- Documented RDR→MCP integration for automatic task recovery
- Added workflow for Claude Code to auto-invoke /auto-claude-mcp skill
- Specified Priority 1 (auto-resume) and Priority 3 (JSON fix) workflows
- Added project path resolution guide (CV Project, Auto-Claude Mod)

RDR Integration: when Claude Code receives RDR notifications, it should automatically:
1. Invoke /auto-claude-mcp skill
2. Determine project path from context or ask user
3. Apply Priority 1 recovery (resume incomplete tasks)
4. Apply Priority 3 recovery (fix JSON errors)
5. Confirm recovery to user

Fixed 9 tasks in CV Project:
- Priority 1 (resumed): 071-marko, 073-qwik, 077-shadow-component-libs, 079-alpine-htmx-knockout, 080-svelte-aurelia, 081-ats-major
- Priority 3 (JSON fixed): 082-ats-other, 083-rte-major, 084-rte-other

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
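The Priority 1 / Priority 3 triage described above boils down to one check per task. Here is a minimal Python sketch; categorize_task and its return labels are invented for illustration and are not the skill's actual API.

```python
import json

def categorize_task(plan_text):
    """Triage a task from the raw text of its implementation_plan.json."""
    try:
        plan = json.loads(plan_text)
    except json.JSONDecodeError:
        return "priority_3_json_fix"   # malformed plan: repair JSON first
    if plan.get("status") != "done":
        return "priority_1_resume"     # valid plan, unfinished: resume it
    return "ok"                        # nothing to recover
```

A batch pass over the spec directories with this function would yield the two recovery buckets the commit lists (6 resumed, 3 JSON-fixed).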
Replace numeric window handles with title pattern matching to prevent "Invalid window handle" errors. Windows are now re-enumerated just before sending messages, ensuring fresh handles are always used.

Changes:
- window-manager.ts: Accept titlePattern instead of handle
- rdr-handlers.ts: Pass titlePattern to window manager
- task-api.ts: Update API type definitions
- KanbanBoard.tsx: Extract title from selected window

Fixes race condition where window handles become stale between enumeration (30s+ due to batching/idle checks) and usage.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
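The core of the fix, matching by title pattern against a freshly enumerated window list instead of caching a numeric handle, can be sketched as below. This is a Python stand-in for the Win32 enumeration; function and variable names are illustrative.

```python
import re

def find_window_by_title(windows, title_pattern):
    """Return a fresh handle whose title matches the pattern, or None.

    `windows` is a list of (handle, title) pairs produced by enumerating
    windows *at send time*, so the handle cannot be stale.
    """
    regex = re.compile(title_pattern)
    for handle, title in windows:
        if regex.search(title):
            return handle
    return None

# At send time, enumeration has just happened, so the handle is fresh:
windows = [(0x1A2B, "Claude Code - project"), (0x3C4D, "Notepad")]
handle = find_window_by_title(windows, r"Claude Code")
```

The title pattern is the stable identity; the numeric handle is recomputed on every send, closing the 30s+ staleness window the commit describes.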
…R detection
PROBLEM:
- Frontend RDR checks for task.exitReason === 'error' or 'prompt_loop'
- Backend NEVER wrote this field to implementation_plan.json
- Result: RDR could never detect errors or prompt loops
ROOT CAUSE:
- run_agent_session() returned status="error" in memory only
- No code anywhere wrote {"exitReason": "error"} to JSON file
- Frontend reads JSON file → finds NO exitReason → RDR skips task
FIX:
1. Added _save_exit_reason() helper to write exitReason to JSON
2. Call it when status == "error" (session crashes)
3. Call it when attempt_count >= 3 (prompt loop detection)
4. Updated both coder.py and planner.py
FILES CHANGED:
- apps/backend/agents/coder.py: Write exitReason on errors and prompt loops
- apps/backend/agents/planner.py: Write exitReason on planning failures
BENEFITS:
- RDR can now detect tasks with exitReason: "error"
- RDR can now detect tasks with exitReason: "prompt_loop" (3+ failed attempts)
- Frontend detection logic (lines 1045-1048 in rdr-handlers.ts) now functional
- Auto-recovery system works as designed
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
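A minimal sketch of the two sides of this contract, writing exitReason into the plan file and the frontend-style detection check, assuming the field names from the commit message above (this is not the project's actual helper):

```python
import json
import tempfile
from pathlib import Path

def save_exit_reason(spec_dir, exit_reason):
    """Persist exitReason into implementation_plan.json so RDR can see it."""
    plan_file = spec_dir / "implementation_plan.json"
    plan = json.loads(plan_file.read_text(encoding="utf-8"))
    plan["exitReason"] = exit_reason
    plan_file.write_text(json.dumps(plan, indent=2), encoding="utf-8")

def needs_rdr(plan):
    # Mirrors the frontend check: error or prompt_loop triggers recovery.
    return plan.get("exitReason") in ("error", "prompt_loop")

# Demo on a throwaway spec directory:
spec_dir = Path(tempfile.mkdtemp())
(spec_dir / "implementation_plan.json").write_text('{"status": "failed"}', encoding="utf-8")
save_exit_reason(spec_dir, "prompt_loop")
plan = json.loads((spec_dir / "implementation_plan.json").read_text(encoding="utf-8"))
```

Before the fix, the write side simply did not exist, so needs_rdr-style checks could never fire.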
PROBLEM:
RDR was detecting all 19 tasks as having "Empty plan (0 phases)" despite
them having phases. This caused false positives where every task was
flagged as needing intervention.
ROOT CAUSE:
Data structure mismatch between ProjectStore and RDR detection logic:
- RDR checks task.phases (array) in needsIntervention()
- ProjectStore returned phaseCount (number) for logging only
- ProjectStore didn't include phases array in task object
- Result: !task.phases was true for ALL tasks → false positives
CONSOLE EVIDENCE:
[ProjectStore] Loaded plan for 073-qwik: { phaseCount: 5, subtaskCount: 21 }
[RDR] ✅ Task 073-qwik needs intervention: Empty plan (0 phases)
FIX APPLIED:
1. Added phases, planStatus to TaskInfo interface (rdr-handlers.ts)
2. Added phases, planStatus to Task interface (task.ts)
3. Updated ProjectStore to include phases: plan?.phases in task object
4. Updated ProjectStore to include planStatus: plan?.planStatus
RDR now receives full task data including:
- phases: Array<Phase> - for empty plan detection
- planStatus: string - for plan approval detection
- exitReason: string - for error/prompt_loop detection (already working)
VERIFICATION:
- Frontend build passes with no TypeScript errors
- Task object now matches RDR expectations
- False positives should be eliminated
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
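The shape mismatch is easy to reproduce in miniature. The sketch below (illustrative Python, field names taken from the commit) shows why a task object carrying only phaseCount always trips the empty-plan check:

```python
def needs_intervention(task):
    """Return a reason string if the task looks stuck, else None."""
    # The RDR check inspects the phases *array*, not a count.
    if not task.get("phases"):
        return "Empty plan (0 phases)"
    return None

# Before the fix: ProjectStore attached only phaseCount -> false positive.
before_fix = {"id": "073-qwik", "phaseCount": 5}
# After the fix: the phases array itself is attached to the task object.
after_fix = {
    "id": "073-qwik",
    "phaseCount": 5,
    "phases": [{"title": f"phase-{i}"} for i in range(5)],
}
```

Any truthiness check on a missing key behaves this way, which is why all 19 tasks were flagged regardless of their real phase counts.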
Previously, RDR would interrupt Claude Code during active work sessions (plan mode, coding, etc.) because the output monitor sometimes showed 'IDLE' state even when Claude was actively working.

Changes:
- Add minimum idle time check (30 seconds) in checkClaudeCodeBusy()
- Require Claude to be idle for 30s before considering it truly idle
- This prevents interrupting during active work while still allowing RDR to trigger after genuine idle periods

The timer-based batching (30s collection window) and busy checks are already working correctly. This fix just makes the idle detection more conservative to prevent false positives.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Previously, RDR required 30 seconds of idle time even when the IDLE state came from aged-out JSONL files (no recent transcripts found). This blocked RDR from sending notifications after interrupted tool use or when sessions genuinely ended.

Changes:
- Skip minimum idle wait when recentOutputFiles === 0 (aged-out session)
- Preserve 30-second wait for recently completed active work sessions
- This allows RDR to send immediately for abandoned sessions while still preventing interruptions during active work

Fixes the issue where interrupted tool use blocked RDR indefinitely.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
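The combined idle heuristic from this commit and the previous one can be summarized in one function. Illustrative Python; the real logic lives in checkClaudeCodeBusy() in TypeScript.

```python
MIN_IDLE_SECONDS = 30

def claude_is_busy(state, idle_seconds, recent_output_files):
    """True if RDR must NOT send yet.

    - Any non-IDLE state blocks.
    - IDLE with zero recent transcript files means an aged-out/abandoned
      session: skip the minimum-idle wait and allow sending immediately.
    - Otherwise require 30s of continuous idle before sending.
    """
    if state != "IDLE":
        return True
    if recent_output_files == 0:
        return False  # aged-out session: no need to wait out the timer
    return idle_seconds < MIN_IDLE_SECONDS
```

This keeps the conservative wait for sessions that just finished real work, while letting abandoned sessions through without delay.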
Previously, RDR blocked notifications when Claude was AT_PROMPT (waiting for user input), treating it the same as PROCESSING (active work). This was incorrect: AT_PROMPT means Claude is idle and waiting for input, so an RDR notification is just another form of input and should be allowed.

Changes:
- Remove AT_PROMPT from busy check (only block on PROCESSING)
- Add log message when AT_PROMPT state is OK for RDR
- RDR now sends notifications when Claude is IDLE or AT_PROMPT

This allows RDR to work immediately when Claude asks questions or waits for user input, instead of being blocked indefinitely.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
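With this change the state check reduces to a small blocklist: only PROCESSING stops RDR. An illustrative Python sketch of the TypeScript check:

```python
# Only active work blocks an RDR notification; IDLE and AT_PROMPT both
# accept input, so a notification is allowed in either state.
BLOCKING_STATES = {"PROCESSING"}

def rdr_blocked(state):
    return state in BLOCKING_STATES
```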
…s, PR review

Resolves 3 conflicts:
- agent-process.ts: keep both PID helpers (ours) + debugLog (upstream)
- subprocess-runner.ts: take upstream's getEffectiveSourcePath() path resolver
- rate-limit-detector.ts: keep both parseResetTime (ours) + debugLog (upstream)
Caution: Review failed. Failed to post review comments.

📝 Walkthrough

Adds a large feature set: a filesystem-backed local MCP server, full Recover‑Debug‑Resend (RDR) system, crash/watchdog tooling and auto-restart, rate‑limit wait/resume, Hugging Face integration, extensive frontend IPC/UI changes (Kanban RDR, auto‑shutdown), platform/worktree/process utilities, and documentation.

Changes
Sequence Diagram(s)

sequenceDiagram
    participant UI as Renderer (Kanban / UI)
    participant Main as Electron Main
    participant MCP as Local MCP Server
    participant FS as Filesystem (specs / signal files)
    participant Claude as Claude Code / Agent
    UI->>Main: trigger RDR (manual ping / auto-trigger)
    Main->>MCP: request batch processing / process_rdr_batch
    MCP->>FS: read implementation_plan.json, task_metadata, logs
    MCP->>MCP: categorize tasks & build batch prompt
    MCP->>FS: write rdr-pending signal file
    MCP->>Claude: notify via filesystem signal or platform sender
    Claude->>FS: process batch, write responses / update plans
    Claude->>Main: output-monitor signals state changes (PROCESSING → IDLE)
    Main->>UI: emit TASK_STATUS_CHANGED and RDR results
    UI->>Main: request recover_stuck_task or submit_task_fix_request
    Main->>MCP: invoke recovery tool (recover_stuck_task / submit_task_fix_request)
    MCP->>FS: update plans / write QA_FIX_REQUEST.md
    MCP->>Main: return operation result
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks | ✅ Passed checks (4 passed)
🎉 Thanks for your first PR!
A maintainer will review it soon. Please make sure:
- Your branch is synced with develop
- CI checks pass
- You've followed our contribution guide
Welcome to the Auto Claude community!
Codecov Report: ❌ Patch coverage is
CodeQL found more than 20 potential problems in the proposed changes. Check the Files changed tab for more details.
Summary of Changes

Hello @topemalheiro, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances Auto-Claude's autonomy and resilience by introducing a robust MCP server for programmatic control, an advanced RDR system for automated task recovery, and various automation features like auto-shutdown and crash recovery. It also expands integration capabilities with HuggingFace and refines core task management and Git operations for improved stability and user experience. The changes aim to enable unattended, overnight batch runs and more intelligent handling of task failures.

Highlights
Changelog
Code Review
This is an impressive and substantial pull request that introduces a powerful set of features for task automation, recovery, and management through the MCP server and RDR system. The changes are extensive and demonstrate a deep understanding of the architecture, with thoughtful additions for stability, platform-specific workarounds, and user experience. My review focuses on a few areas to enhance maintainability and consistency. Overall, this is a fantastic contribution.
    "feature": "My Task",
    "description": "Task description",
    "status": "start_requested",
    "start_requested_at": "2026-01-29T05:00:00Z",
The example timestamp uses a hardcoded future date (2026-01-29T05:00:00Z). While this is for documentation, it could be confusing or accidentally copied. Consider using a placeholder like <YYYY-MM-DDTHH:MM:SSZ> or a more realistic example date to improve clarity and prevent potential issues.
-    "start_requested_at": "2026-01-29T05:00:00Z",
+    "start_requested_at": "<YYYY-MM-DDTHH:MM:SSZ>",
    [Auto-Claude Crash Recovery] ⚠️ APP RESTARTED AFTER CRASH

    **Crash Details:**
    - **Time:** 2026-02-03 14:32:45
The example timestamp uses a hardcoded future date (2026-02-03 14:32:45). To avoid confusion and prevent this from being copied verbatim into any configurations or tests, it would be better to use a placeholder like <YYYY-MM-DD HH:MM:SS>.
-    - **Time:** 2026-02-03 14:32:45
+    - **Time:** <YYYY-MM-DD HH:MM:SS>
.gitignore (Outdated)

    /shared_docs
    logs/security/
    Agents.md
    nul
    "created_at": "2026-01-31T00:00:00Z",
    "updated_at": "2026-01-31T00:00:00Z",
The example timestamps use hardcoded future dates (2026-01-31T00:00:00Z). For clarity and to prevent accidental copy-pasting of invalid dates, it's better to use placeholders like <YYYY-MM-DDTHH:MM:SSZ>.
-    "created_at": "2026-01-31T00:00:00Z",
-    "updated_at": "2026-01-31T00:00:00Z",
+    "created_at": "<YYYY-MM-DDTHH:MM:SSZ>",
+    "updated_at": "<YYYY-MM-DDTHH:MM:SSZ>",
    const result = await mainWindow.webContents.executeJavaScript(`
      (async () => {
        try {
          // Use the existing sendRdrToWindow API
          const result = await window.electronAPI.sendRdrToWindow('Claude Code', ${JSON.stringify(message)});
          return result;
        } catch (error) {
          return { success: false, error: error.message };
        }
      })()
    `);
The mechanism for sending the crash notification involves the main process executing JavaScript in the renderer process, which then makes an IPC call back to the main process. This round-trip is convoluted and can be simplified.
Since checkAndNotifyCrash is in the main process and has access to mainWindow, it can more directly trigger the notification logic. Consider refactoring this to have checkAndNotifyCrash call the underlying message-sending utility (e.g., sendRdrMessage) directly, removing the need for executeJavaScript and the IPC round-trip. This would improve code clarity and reduce complexity.
Actionable comments posted: 147
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
apps/backend/implementation_plan/plan.py (1)
52-78: ⚠️ Potential issue | 🟠 Major

exitReason set by _save_exit_reason will be silently dropped when ImplementationPlan.save() is called.

_save_exit_reason (in coder.py and planner.py) writes exitReason directly into the JSON file via raw dict manipulation. However, ImplementationPlan has no exitReason field, so any subsequent ImplementationPlan.load() → .save() round-trip (e.g., line 188 in planner.py: await plan.async_save(plan_file)) will serialize via to_dict() and silently discard exitReason. This breaks the RDR detection contract, as exitReason is critical for the RDR auto-recovery system to determine which tasks need intervention.

Add exitReason as a field on ImplementationPlan (with serialization in to_dict/from_dict), or ensure no code path round-trips the plan through the dataclass after _save_exit_reason has written it.

apps/frontend/src/main/utils/worktree-cleanup.ts (1)
132-154: ⚠️ Potential issue | 🟡 Minor

Replace direct process.platform check with the isWindows() utility function.

Line 133 uses process.platform === 'win32' directly, which violates the coding guideline requiring platform abstraction imports. Add the import at the top:

+import { isWindows } from '../platform';

Then replace:

-    if (process.platform === 'win32') {
+    if (isWindows()) {
720-767: 🛠️ Refactor suggestion | 🟠 MajorConsolidate
SDKRateLimitInfointerface to single source inapps/frontend/src/shared/types/terminal.ts.Two identical definitions of
SDKRateLimitInfoexist:
apps/frontend/src/main/rate-limit-detector.ts:720(local duplicate)apps/frontend/src/shared/types/terminal.ts:101(canonical location)Imports are inconsistent: the renderer (
SDKRateLimitModal.tsx), preload API, and IPC handlers all import fromshared/types, while some main-folder files import fromrate-limit-detector.ts. This creates risk of divergence during maintenance. Remove the duplicate fromrate-limit-detector.tsand ensure all imports reference the canonical definition inshared/types.apps/frontend/src/preload/api/agent-api.ts (1)
1-12: 🧹 Nitpick | 🔵 Trivial

Update the module doc comment to include HuggingFace.

The file header lists the combined API modules but omits HuggingFace Integration, which is now part of the aggregated API.

📝 Suggested doc update

 * This file serves as the main entry point for agent APIs, combining:
 * - Roadmap operations
 * - Ideation operations
 * - Insights operations
 * - Changelog operations
 * - Linear integration
 * - GitHub integration
+ * - HuggingFace integration
 * - Shell operations

apps/frontend/src/renderer/components/GitHubSetupModal.tsx (1)
248-311: ⚠️ Potential issue | 🟠 Major

Localize newly added hardcoded strings in provider selection and Hugging Face flows.

The new provider-select (lines 407–452) and huggingface-auth (lines 477–487) steps, along with validation error messages (lines 248–311), contain hardcoded user-facing text that must be added to i18n translation keys. Add these to the githubSetup namespace in both en/dialogs.json and fr/dialogs.json:

- Provider selection title, description, and provider labels
- Hugging Face connection title and description
- Validation error messages

Example keys to add

"githubSetup": {
  "providerSelectTitle": "Choose Provider",
  "providerSelectDescription": "Select where you want to store your project. You can use GitHub for code repositories or Hugging Face for ML models.",
  "providers": {
    "github": "GitHub",
    "huggingFace": "Hugging Face"
  },
  "huggingFaceTitle": "Connect to Hugging Face",
  "huggingFaceDescription": "Authenticate with Hugging Face to manage your ML models."
}

Then use t('dialogs:githubSetup.providerSelectTitle') in the component.
    *.lnk

    # ===========================
    # Personal / accidental files
    # ===========================
    *.bat
    !Auto-Claude-MCP.example.bat
    *.vbs
    .mcp.json
    CHANGES-RDR-ARCHIVE-FIXES.md
    npm_install_output.txt
    scripts/image/
    nul
🧹 Nitpick | 🔵 Trivial
Duplicate nul entry.
nul appears both at Line 22 and Line 192. One can be removed.
🤖 Prompt for AI Agents
In @.gitignore around lines 10 - 22, Remove the duplicate "nul" entry from the
.gitignore so only a single "nul" line remains; locate the two occurrences of
the "nul" pattern in the file and delete one of them (leaving the other
unchanged) to avoid redundant entries.
apps/backend/agents/coder.py (Outdated)

    elif status == "error":
        # Write exitReason to implementation_plan.json so RDR can detect it
        _save_exit_reason(spec_dir, "error")
Generic "error" exit reason on all non-classified errors.
This is fine as a catch-all, but note that the "error" reason is also written on concurrency errors that later recover via retry (lines 999+). Since _save_exit_reason is called at Line 990 before the concurrency-error branch at Line 999 is entered, every transient concurrency error will write exitReason="error" to the plan — even though the agent is about to retry. If RDR polls the plan during the retry window, it may trigger unnecessary recovery actions.
Consider moving the _save_exit_reason call into the else branch at Line 1215 (non-concurrency, non-rate-limit, non-auth errors) so transient retryable errors don't trigger RDR prematurely.
Proposed fix
     elif status == "error":
-        # Write exitReason to implementation_plan.json so RDR can detect it
-        _save_exit_reason(spec_dir, "error")
-
         emit_phase(ExecutionPhase.FAILED, "Session encountered an error")
         # Check if this is a tool concurrency error (400)
@@ -1215,6 +1212,9 @@
         else:
             # Other errors - use standard retry logic
+            _save_exit_reason(spec_dir, "error")
+
             print_status("Session encountered an error", "error")

🤖 Prompt for AI Agents
In `@apps/backend/agents/coder.py` around lines 988 - 990, The current logic calls
_save_exit_reason(spec_dir, "error") as soon as status == "error", which causes
transient retryable errors (e.g., concurrency, rate-limit, auth) to be recorded
as a final "error" before the retry branches run; move the _save_exit_reason
call out of the early status == "error" branch and instead call it only in the
final non-retryable error handling branch (the else branch that handles
non-concurrency, non-rate-limit, non-auth failures) so that transient errors
that trigger the concurrency/rate-limit/auth retry logic do not write
exitReason="error" prematurely. Ensure you keep references to status, spec_dir
and _save_exit_reason and do not change retry flow or error detection logic—only
relocate the write so it executes exclusively for true terminal errors.
    def _save_exit_reason(spec_dir: Path, exit_reason: str) -> None:
        """
        Write exitReason to implementation_plan.json so RDR can detect it.

        Args:
            spec_dir: Spec directory containing implementation_plan.json
            exit_reason: The reason for exit ("error", "stuckRetry_loop", etc.)
        """
        try:
            plan = load_implementation_plan(spec_dir)
            if plan:
                plan["exitReason"] = exit_reason
                plan["updated_at"] = datetime.now(timezone.utc).isoformat()

                plan_file = spec_dir / "implementation_plan.json"
                with open(plan_file, "w", encoding="utf-8") as f:
                    json.dump(plan, f, indent=2)

                logger.info(f"Set exitReason={exit_reason} in implementation_plan.json")
        except Exception as e:
            logger.warning(f"Failed to write exitReason to plan: {e}")
🛠️ Refactor suggestion | 🟠 Major
Duplicated function — extract _save_exit_reason to a shared module.
This function is identical to the one in coder.py (lines 101-124). Extract it into agents/utils.py (which is already imported here) to avoid maintaining two copies.
Additionally, this uses a plain open() + json.dump() write, while the rest of the codebase uses write_json_atomic for implementation_plan.json (see plan.py Line 132). A crash or interrupt during the write could corrupt the plan file. Use write_json_atomic here for consistency and safety.
Proposed shared helper in agents/utils.py
+# In agents/utils.py
+from core.file_utils import write_json_atomic
+
+def save_exit_reason(spec_dir: Path, exit_reason: str) -> None:
+    """Write exitReason to implementation_plan.json so RDR can detect it."""
+    try:
+        plan = load_implementation_plan(spec_dir)
+        if plan:
+            plan["exitReason"] = exit_reason
+            plan["updated_at"] = datetime.now(timezone.utc).isoformat()
+            plan_file = spec_dir / "implementation_plan.json"
+            write_json_atomic(plan_file, plan, indent=2, ensure_ascii=False)
+            logger.info(f"Set exitReason={exit_reason} in implementation_plan.json")
+    except Exception as e:
+        logger.warning(f"Failed to write exitReason to plan: {e}")

🤖 Prompt for AI Agents
In `@apps/backend/agents/planner.py` around lines 43 - 63, Extract the duplicated
_save_exit_reason implementation (currently in planner.py and coder.py) into
agents/utils.py as a shared helper named _save_exit_reason (or save_exit_reason)
and have both planner.py and coder.py import it from agents.utils; in the new
helper call load_implementation_plan to read the plan, set "exitReason" and
"updated_at", and write the file using the project’s write_json_atomic utility
(instead of open/json.dump) to atomically persist implementation_plan.json;
update planner.py and coder.py to remove their local copies and import the
shared function, and ensure any references to load_implementation_plan and
write_json_atomic are imported in agents.utils.py.
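The review above recommends write_json_atomic over a plain open() + json.dump(). A generic version of such a helper (not the project's actual core.file_utils implementation) writes to a temp file in the same directory and then swaps it in with os.replace(), which is atomic for same-filesystem renames on both POSIX and Windows:

```python
import json
import os
import tempfile
from pathlib import Path

def write_json_atomic(path, data, indent=2):
    """Write JSON via a same-directory temp file, then an atomic rename."""
    fd, tmp = tempfile.mkstemp(dir=str(Path(path).parent), suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(data, f, indent=indent, ensure_ascii=False)
        os.replace(tmp, path)  # a crash before this line leaves the old file intact
    except BaseException:
        os.unlink(tmp)
        raise

# Demo: corruption-proof update of a plan file.
spec_dir = Path(tempfile.mkdtemp())
plan_file = spec_dir / "implementation_plan.json"
plan_file.write_text('{"old": true}', encoding="utf-8")
write_json_atomic(plan_file, {"exitReason": "error"})
result = json.loads(plan_file.read_text(encoding="utf-8"))
```

The key design point is that readers (like RDR polling the plan) only ever see the old complete file or the new complete file, never a half-written one.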
    # On Windows, skip the SDK's version check to avoid a potential anyio.open_process() failure.
    # The version check spawns a subprocess with different params than the main connect() call,
    # and silently catches errors — but it can interfere with the main process on Windows.
    if is_windows():
        # Validated against claude-agent-sdk v1.x — becomes a no-op if SDK removes this env var
        sdk_env["CLAUDE_AGENT_SDK_SKIP_VERSION_CHECK"] = "1"
🧩 Analysis chain
🏁 Script executed:
# Check project dependencies for SDK version
fd -t f "(requirements|setup|pyproject|poetry|pipfile)" -e txt -e toml -e cfg | head -20Repository: AndyMik90/Auto-Claude
Length of output: 150
🏁 Script executed:
# Search for the env var in the codebase
rg "CLAUDE_AGENT_SDK_SKIP_VERSION_CHECK" --type pyRepository: AndyMik90/Auto-Claude
Length of output: 153
🏁 Script executed:
# Check if there are any SDK version references in the codebase
rg "claude.agent.sdk|claude-agent-sdk" -t py --type-add 'config:(toml|txt|cfg|lock)' -g "requirements*" -g "pyproject.toml" -g "setup.py"Repository: AndyMik90/Auto-Claude
Length of output: 179
🏁 Script executed:
# Look for version specifications
cat pyproject.toml 2>/dev/null || cat requirements.txt 2>/dev/null || cat setup.py 2>/dev/null | head -50Repository: AndyMik90/Auto-Claude
Length of output: 47
🏁 Script executed:
sed -n '508,513p' apps/backend/core/client.pyRepository: AndyMik90/Auto-Claude
Length of output: 527
🏁 Script executed:
# Search for any SDK documentation or version-related comments in the file
rg "v0\.|v1\.|claude-agent-sdk" apps/backend/core/client.py -A 2 -B 2Repository: AndyMik90/Auto-Claude
Length of output: 334
🏁 Script executed:
# Check imports to see how the SDK is being used
head -50 apps/backend/core/client.py | rg -i "import|from"Repository: AndyMik90/Auto-Claude
Length of output: 416
🏁 Script executed:
# Check if there are any comments or documentation about SDK versions elsewhere
rg "0\.1\.|claude-agent-sdk" apps/backend/ -i --type py | head -20Repository: AndyMik90/Auto-Claude
Length of output: 904
🌐 Web query:
claude-agent-sdk 0.1.33 CLAUDE_AGENT_SDK_SKIP_VERSION_CHECK environment variable documentation
💡 Result:
In claude-agent-sdk 0.1.33 (Python) you can disable the Claude Code CLI minimum-version compatibility check by setting this environment variable before starting your program:
CLAUDE_AGENT_SDK_SKIP_VERSION_CHECK=true (any “truthy” value is typically used, e.g. true/1)
This env var was added specifically to “skip the minimum version validation when the SDK connects to Claude Code.” [1] (It’s documented as “add CLAUDE_AGENT_SDK_SKIP_VERSION_CHECK env var” in the project’s release notes.) [2] A simple usage example is: export CLAUDE_AGENT_SDK_SKIP_VERSION_CHECK=true. [3]
Sources: [1] [2] [3]
Update the version reference in the comment to match the actual SDK version in use.
Line 512 states "Validated against claude-agent-sdk v1.x" but the project requires claude-agent-sdk>=0.1.33. The environment variable CLAUDE_AGENT_SDK_SKIP_VERSION_CHECK is recognized in v0.1.33, so the code works correctly—just update the comment to reference the actual tested version (v0.1.33) rather than an incorrect v1.x reference.
🤖 Prompt for AI Agents
In `@apps/backend/core/client.py` around lines 508 - 513, Update the misleading
comment that says "Validated against claude-agent-sdk v1.x" to reference the
actual tested SDK version (v0.1.33) so it accurately reflects validation; edit
the comment near the is_windows() block that sets
sdk_env["CLAUDE_AGENT_SDK_SKIP_VERSION_CHECK"] = "1" to state it was validated
against claude-agent-sdk v0.1.33 (or the project's required version) instead of
v1.x.
    # Windows: Avoid WinError 206 (command line too long).
    # Windows CreateProcessW has a 32,767 character limit for the entire command line.
    # Large system prompts (e.g. with CLAUDE.md) and inline MCP config can exceed this.
    # Fix: write oversized system prompt to a file, pass a short reference instead.
    # Also write MCP config to a file so --mcp-config uses a file path, not inline JSON.
    _WIN_CMD_SAFE_LIMIT = 28000  # Leave headroom below 32,767

    if is_windows() and len(base_prompt) > _WIN_CMD_SAFE_LIMIT:
        prompt_cache_file = spec_dir / "system_prompt_cache.md"
        prompt_cache_file.write_text(base_prompt, encoding="utf-8")
        logger.info(
            f"[WinCmdLen] System prompt too long ({len(base_prompt)} chars), "
            f"wrote to {prompt_cache_file}"
        )
        print(
            f" - System prompt: externalized to file ({len(base_prompt)} chars > {_WIN_CMD_SAFE_LIMIT} limit)"
        )
        base_prompt = (
            f"CRITICAL: Your complete system instructions are in this file:\n"
            f"  {prompt_cache_file.resolve()}\n\n"
            f"You MUST read this file with the Read tool IMMEDIATELY before doing anything else.\n"
            f"Your working directory is: {project_dir.resolve()}\n"
            f"Follow ALL instructions in that file. Do not skip this step."
        )
Missing error handling and potential concurrency issue in prompt externalization.

Two concerns:

1. No error handling on write: prompt_cache_file.write_text() (Line 848) can raise OSError (permissions, disk full). This would crash create_client() without a clear diagnostic. Wrap in try/except and fall back to the (possibly too-long) inline prompt with a warning.

2. Fixed filename risks overwrite: system_prompt_cache.md is a static name. If two concurrent agent sessions share the same spec_dir, one will overwrite the other's prompt file mid-session. Consider using a unique suffix (e.g., PID or UUID).
Proposed fix
-    if is_windows() and len(base_prompt) > _WIN_CMD_SAFE_LIMIT:
-        prompt_cache_file = spec_dir / "system_prompt_cache.md"
-        prompt_cache_file.write_text(base_prompt, encoding="utf-8")
+    if is_windows() and len(base_prompt) > _WIN_CMD_SAFE_LIMIT:
+        import uuid
+        prompt_cache_file = spec_dir / f"system_prompt_cache_{uuid.uuid4().hex[:8]}.md"
+        try:
+            prompt_cache_file.write_text(base_prompt, encoding="utf-8")
+        except OSError as e:
+            logger.warning(
+                f"[WinCmdLen] Failed to externalize system prompt: {e}. "
+                f"Proceeding with inline prompt (may exceed command-line limit)."
+            )
+            prompt_cache_file = None
+ prompt_cache_file = NoneThen guard the prompt replacement on prompt_cache_file is not None.
🤖 Prompt for AI Agents
In `@apps/backend/core/client.py` around lines 839 - 862, The
Windows-prompt-externalization lacks error handling and uses a fixed filename
which can cause overwrites; wrap the prompt write in a try/except around
prompt_cache_file.write_text(...) (catch OSError/Exception), log processLogger/
logger warning with the exception, and if writing fails fall back to leaving
base_prompt inline (do not replace it) so create_client continues; also generate
a unique filename for prompt_cache_file (e.g., include os.getpid() or
uuid.uuid4() in the name or use tempfile.NamedTemporaryFile in spec_dir) so
concurrent sessions won't clobber each other, and only perform the base_prompt
replacement (the CRITICAL message that points to prompt_cache_file.resolve())
when the write succeeded (guard on a successful write flag).
```ts
onTaskListRefresh: (callback: (projectId: string) => void) => () => void;
onTaskAutoStart: (callback: (projectId: string, taskId: string) => void) => () => void;
onTaskStatusChanged: (callback: (data: {
  projectId: string;
  taskId: string;
  specId: string;
  oldStatus: TaskStatus;
  newStatus: TaskStatus;
}) => void) => () => void;
onTaskRegressionDetected: (callback: (data: {
  projectId: string;
  specId: string;
  oldStatus: string;
  newStatus: string;
  timestamp: string;
}) => void) => () => void;
```
**`onTaskAutoRefresh` is missing from the `TaskAPI` interface.**
The implementation at lines 422–436 defines `onTaskAutoRefresh`, but this method is not declared in the `TaskAPI` interface. Consumers typing against `TaskAPI` (i.e., the renderer via `window.electronAPI`) won't see it, and under strict excess-property checks this would be a compile error.
Proposed fix — add the declaration to the interface (after line 125):
```diff
   onTaskRegressionDetected: (callback: (data: {
     projectId: string;
     specId: string;
     oldStatus: string;
     newStatus: string;
     timestamp: string;
   }) => void) => () => void;
+  onTaskAutoRefresh: (callback: (data: {
+    reason: string;
+    projectId: string;
+    specId: string;
+  }) => void) => () => void;
```
🤖 Prompt for AI Agents
In `@apps/frontend/src/preload/api/task-api.ts` around lines 110 - 125, The
TaskAPI interface is missing the onTaskAutoRefresh declaration that the
implementation provides; add an onTaskAutoRefresh entry to the TaskAPI interface
with the exact callback signature used by the implementation (match the
implementation in onTaskAutoRefresh at lines ~422–436), e.g., add a method named
onTaskAutoRefresh with the same parameter types and return type as the other
listeners (so consumers of TaskAPI / window.electronAPI see it and TypeScript
types align with the implementation).
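For reference, the subscribe/unsubscribe contract these `on*` methods share can be sketched without Electron. This is a hypothetical illustration only — `FakeIpc` stands in for `ipcRenderer`, and the payload type mirrors the proposed `onTaskAutoRefresh` declaration:

```typescript
// Hypothetical sketch — FakeIpc replaces ipcRenderer; only the
// register/unsubscribe contract is illustrated, not the real preload code.
type TaskAutoRefreshData = { reason: string; projectId: string; specId: string };
type Listener = (data: TaskAutoRefreshData) => void;

class FakeIpc {
  private listeners = new Set<Listener>();
  on(fn: Listener): void { this.listeners.add(fn); }
  removeListener(fn: Listener): void { this.listeners.delete(fn); }
  emit(data: TaskAutoRefreshData): void { this.listeners.forEach((fn) => fn(data)); }
}

// Mirrors the interface shape: register a callback, get back an unsubscribe fn.
function onTaskAutoRefresh(ipc: FakeIpc, callback: Listener): () => void {
  const handler: Listener = (data) => callback(data);
  ipc.on(handler);
  return () => ipc.removeListener(handler);
}

const ipc = new FakeIpc();
const seen: string[] = [];
const unsubscribe = onTaskAutoRefresh(ipc, (d) => seen.push(d.specId));
ipc.emit({ reason: "regression", projectId: "p1", specId: "071-marko" });
unsubscribe();
ipc.emit({ reason: "regression", projectId: "p1", specId: "073-qwik" });
console.log(seen); // only the first event arrives, since we unsubscribed
```

Declaring `onTaskAutoRefresh` in the interface with exactly this `(callback) => () => void` shape keeps the renderer's unsubscribe pattern consistent with the other listeners.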
```ts
// Immediate RDR ping - writes signal file now (no 30s timer)
pingRdrImmediate: (projectId: string, tasks: Task[]): Promise<IPCResult<{ taskCount: number; signalPath: string }>> =>
  ipcRenderer.invoke(IPC_CHANNELS.PING_RDR_IMMEDIATE, projectId, tasks),
```
🧹 Nitpick | 🔵 Trivial
**`pingRdrImmediate` sends full `Task[]` objects over IPC — prefer task IDs.**
Every other method in this file passes task IDs (`string[]`) and lets the main process look up the data. Serializing full `Task` objects across the IPC bridge is heavier and risks stale data if the main-process state has diverged. Consider aligning with the rest of the API by accepting `taskIds: string[]` instead.
🤖 Prompt for AI Agents
In `@apps/frontend/src/preload/api/task-api.ts` around lines 501 - 503, Change
pingRdrImmediate to accept taskIds: string[] instead of full Task[] to match
other API methods and avoid serializing large/stale objects; update the function
signature in apps/frontend/src/preload/api/task-api.ts (pingRdrImmediate) to
(projectId: string, taskIds: string[]) and call
ipcRenderer.invoke(IPC_CHANNELS.PING_RDR_IMMEDIATE, projectId, taskIds) so the
main process can look up tasks by ID (ensure IPC_CHANNELS.PING_RDR_IMMEDIATE
handling in the main process expects string[] and update any callers to pass
task IDs).
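The main-process side of the ID-based variant the comment suggests could look roughly like this. A hedged sketch only — the `taskStore` map and the `Task` shape here are illustrative stand-ins, not the project's actual store API:

```typescript
// Hypothetical sketch of main-process ID resolution for pingRdrImmediate.
type Task = { id: string; title: string; status: string };

const taskStore = new Map<string, Task>([
  ["t1", { id: "t1", title: "Fix JSON", status: "in_progress" }],
  ["t2", { id: "t2", title: "Resume spec", status: "human_review" }],
]);

// Resolve IDs to fresh Task objects; unknown IDs are skipped rather than
// shipped across IPC as stale renderer-side copies.
function resolveTasks(taskIds: string[]): Task[] {
  return taskIds
    .map((id) => taskStore.get(id))
    .filter((t): t is Task => t !== undefined);
}

console.log(resolveTasks(["t1", "missing", "t2"]).map((t) => t.id)); // ["t1", "t2"]
```

Looking the tasks up at invoke time guarantees the RDR signal file is built from the main process's current state, not whatever snapshot the renderer happened to hold.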
```ts
// Auto Shutdown (Global - monitors ALL projects)
getAutoShutdownStatus: () =>
  ipcRenderer.invoke(IPC_CHANNELS.GET_AUTO_SHUTDOWN_STATUS),

setAutoShutdown: (enabled: boolean) =>
  ipcRenderer.invoke(IPC_CHANNELS.SET_AUTO_SHUTDOWN, enabled),

cancelAutoShutdown: () =>
  ipcRenderer.invoke(IPC_CHANNELS.CANCEL_AUTO_SHUTDOWN),
```
🛠️ Refactor suggestion | 🟠 Major
**Missing explicit return-type annotations on auto-shutdown methods.**
Every other `ipcRenderer.invoke` call in this file includes an explicit return-type annotation (e.g., `: Promise<IPCResult<boolean>>`). These three methods omit them, relying on inference from `ipcRenderer.invoke` (which returns `Promise<any>`). This silently bypasses type checking on the response shape.
Proposed fix:

```diff
-  getAutoShutdownStatus: () =>
+  getAutoShutdownStatus: (): Promise<IPCResult<AutoShutdownStatus>> =>
     ipcRenderer.invoke(IPC_CHANNELS.GET_AUTO_SHUTDOWN_STATUS),
-  setAutoShutdown: (enabled: boolean) =>
+  setAutoShutdown: (enabled: boolean): Promise<IPCResult<AutoShutdownStatus>> =>
     ipcRenderer.invoke(IPC_CHANNELS.SET_AUTO_SHUTDOWN, enabled),
-  cancelAutoShutdown: () =>
+  cancelAutoShutdown: (): Promise<IPCResult<void>> =>
     ipcRenderer.invoke(IPC_CHANNELS.CANCEL_AUTO_SHUTDOWN),
```
ipcRenderer.invoke(IPC_CHANNELS.CANCEL_AUTO_SHUTDOWN),📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // Auto Shutdown (Global - monitors ALL projects) | |
| getAutoShutdownStatus: () => | |
| ipcRenderer.invoke(IPC_CHANNELS.GET_AUTO_SHUTDOWN_STATUS), | |
| setAutoShutdown: (enabled: boolean) => | |
| ipcRenderer.invoke(IPC_CHANNELS.SET_AUTO_SHUTDOWN, enabled), | |
| cancelAutoShutdown: () => | |
| ipcRenderer.invoke(IPC_CHANNELS.CANCEL_AUTO_SHUTDOWN), | |
| // Auto Shutdown (Global - monitors ALL projects) | |
| getAutoShutdownStatus: (): Promise<IPCResult<AutoShutdownStatus>> => | |
| ipcRenderer.invoke(IPC_CHANNELS.GET_AUTO_SHUTDOWN_STATUS), | |
| setAutoShutdown: (enabled: boolean): Promise<IPCResult<AutoShutdownStatus>> => | |
| ipcRenderer.invoke(IPC_CHANNELS.SET_AUTO_SHUTDOWN, enabled), | |
| cancelAutoShutdown: (): Promise<IPCResult<void>> => | |
| ipcRenderer.invoke(IPC_CHANNELS.CANCEL_AUTO_SHUTDOWN), |
🤖 Prompt for AI Agents
In `@apps/frontend/src/preload/api/task-api.ts` around lines 524 - 532, Add
explicit return-type annotations to the three auto-shutdown methods so their
responses are type-checked instead of inferred as any: update
getAutoShutdownStatus, setAutoShutdown, and cancelAutoShutdown to declare their
return types (e.g., Promise<IPCResult<boolean>> to match other
ipcRenderer.invoke usages in this file) so the IPC result shape is enforced.
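One way to prevent the annotations from being omitted in the first place is a small typed wrapper around `invoke`. This is a sketch under stated assumptions — `fakeInvoke` stands in for `ipcRenderer.invoke`, and `IPCResult` is assumed to be a `{ success, data }` envelope:

```typescript
// Hypothetical sketch — fakeInvoke replaces ipcRenderer.invoke, and the
// IPCResult envelope shape is an assumption, not the project's real type.
type IPCResult<T> = { success: boolean; data?: T; error?: string };

const fakeInvoke = async (_channel: string, ..._args: unknown[]): Promise<unknown> =>
  ({ success: true, data: { enabled: true } });

// Generic wrapper: callers must name T, so responses are never inferred as any.
async function typedInvoke<T>(channel: string, ...args: unknown[]): Promise<IPCResult<T>> {
  return (await fakeInvoke(channel, ...args)) as IPCResult<T>;
}

type AutoShutdownStatus = { enabled: boolean };

async function main(): Promise<void> {
  const status = await typedInvoke<AutoShutdownStatus>("GET_AUTO_SHUTDOWN_STATUS");
  console.log(status.data?.enabled); // type-checked as boolean | undefined
}
main();
```

With such a helper, forgetting the type parameter is a visible gap at the call site rather than a silent `Promise<any>`.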
```ts
// Debug Events
onDebugEvent: (
  callback: (data: { type: string; taskId?: string; agentKilled?: boolean; timestamp: string; [key: string]: unknown }) => void
): (() => void) => {
  const handler = (
    _event: Electron.IpcRendererEvent,
    data: { type: string; taskId?: string; agentKilled?: boolean; timestamp: string }
  ): void => {
    callback(data);
  };
  ipcRenderer.on(IPC_CHANNELS.DEBUG_EVENT, handler);
  return () => {
    ipcRenderer.removeListener(IPC_CHANNELS.DEBUG_EVENT, handler);
  };
}
```
**Type mismatch: `onDebugEvent` handler drops the index signature.**
The `TaskAPI` interface (line 156) declares the callback data with `[key: string]: unknown` to allow arbitrary extra fields, but the handler type on line 540 omits it. While JavaScript will pass extra properties through at runtime, the types should match for consistency and so that the handler's internal type doesn't silently narrow the data.

Proposed fix:

```diff
   const handler = (
     _event: Electron.IpcRendererEvent,
-    data: { type: string; taskId?: string; agentKilled?: boolean; timestamp: string }
+    data: { type: string; taskId?: string; agentKilled?: boolean; timestamp: string; [key: string]: unknown }
   ): void => {
```
): void => {🤖 Prompt for AI Agents
In `@apps/frontend/src/preload/api/task-api.ts` around lines 534 - 548, The
onDebugEvent IPC handler narrows the incoming data type by omitting the index
signature present on the TaskAPI callback type; update the handler parameter
type in onDebugEvent so it matches the declared callback shape (include [key:
string]: unknown) — i.e., change the handler's data parameter type to { type:
string; taskId?: string; agentKilled?: boolean; timestamp: string; [key:
string]: unknown } so the handler and the callback have identical types.
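The effect of keeping the index signature can be seen in a small standalone sketch (hypothetical field names; `exitCode` is illustrative, not a documented event property):

```typescript
// Hypothetical sketch of why the handler type should keep the index signature:
// extra fields stay reachable (typed as unknown) instead of being dropped
// from the type when the handler narrows the payload.
type DebugEventData = {
  type: string;
  taskId?: string;
  timestamp: string;
  [key: string]: unknown; // allows arbitrary extra fields through the bridge
};

function describe(data: DebugEventData): string {
  const extra = data["exitCode"]; // would be a type error without the index signature
  return `${data.type}@${data.timestamp} extra=${String(extra)}`;
}

console.log(describe({ type: "crash", timestamp: "t0", exitCode: 1 }));
// crash@t0 extra=1
```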
```markdown
# Merge Testing Checklist: Main -> Develop

Use this checklist to verify all features work after resolving the 33 merge conflicts.

## Main checklist

- [x?] RDR priority escalation works (P1 -> P3 after 3 attempts)
- [x] Auto-Recover — recover a single task or a batch of tasks
- [x] Auto-Continue works in batch or single-task tool; [x] working for In Progress
- [x] Queue does not advance when a task regresses from In Progress to Planning, and [x?] sends RDR when RDR is ON — needs testing. [ ] Tasks that are user-stopped into Human Review may need the single-task RDR toggle turned off
- [x?] RDR detection of task regression works
- [x] All incomplete tasks in Human Review and those regressed to Backlog are pinged in RDR — test with the method from the previous entry
- [...?] Auto-Resume on Rate Limit Reset

## Build & Launch

- [x] `npm install` in `apps/frontend` -- no errors
- [x] `npm run build` -- TypeScript compiles with no errors
- [x] `npm run dev` -- app launches without crashes
- [x] No console errors on startup

---

## Critical -- Our Mod Features (conflict-affected)

### RDR System

- [x] Start 2+ tasks on CV Project
- [x] Wait for a task to get stuck -> RDR detects it and sends recovery message
- [x] RDR does NOT flag actively running tasks (backlog false-positive fix)
- [x?] RDR priority escalation works (P1 -> P3 after 3 attempts)
- **Files:** `rdr-handlers.ts`, `KanbanBoard.tsx`, `ipc-handlers/index.ts`

### Auto-Shutdown

- [x] Enable auto-shutdown in settings
- [x] Start tasks -> auto-shutdown detects when all reach human_review/done
- [x] Shutdown monitor spawns correctly (no terminal popup on Windows)
- **Files:** `auto-shutdown-handlers.ts`, `index.ts`, `shutdown-monitor.ts`

### MCP Server and tools

- [x] Claude Code connects to Auto-Claude MCP server
- [x] `list_tasks` returns correct task list
- [x] `create_task` creates a task (appears on Kanban within 2-3s)
- [x] `process_rdr_batch` restarts stuck tasks
- [x] `recover_stuck_task` removes yellow outline and restarts
- [x] Auto-Continue works in batch or single-task tool; [x] working for In Progress
- [x...?] Queue does not advance when a task regresses from In Progress to Planning, and RDR is sent when RDR is ON
- [x?] RDR detection of task regression works
- [x] All incomplete tasks in Human Review and those regressed to Backlog are pinged in RDR — test with the method from the previous entry
- **Files:** `mcp-server/index.ts`, `project-store.ts`

### Task Crash Recovery

- [x] Kill a task agent process manually
- [x] Crash is detected (exit code != 0)
- [???] Auto-restart triggers if enabled
- [x] Crash info persisted to `implementation_plan.json`
- **Files:** `agent-process.ts`, `agent-events-handlers.ts`

### File Watcher

- [x] Create a new spec directory -> UI auto-refreshes within 2-3s
- [x] Modify `implementation_plan.json` -> board updates automatically
- [x] `start_requested` status triggers agent start
- **Files:** `file-watcher.ts`, `project-store.ts`

### Exit Reason Persistence

- [x] Run a task to completion -> `exitReason: "success"` in plan
- [x] Task crashes -> `exitReason: "error"` saved
- **Files:** `coder.py`, `planner.py`

---

## High -- Upstream Features (must not break)

### Queue Management

- [x] Open queue settings modal from Kanban
- [x] Set queue capacity limits
- [x] Queue enforces capacity (no more than N concurrent tasks)
- **Files:** `KanbanBoard.tsx`

### State Machine Transitions

- [x] Task lifecycle: backlog -> planning -> in_progress -> ai_review -> human_review
- [x] No stuck transitions or missing state updates
- **Files:** `agent-events-handlers.ts`, `execution-handlers.ts`

---

## Notes

_Issues found and fixed during testing:_

- **WinError 206** (fixed `e41b64e7`): System prompt too long for Windows CreateProcessW limit. Externalized to `system_prompt_cache.md`.
- **MCP config crash** (fixed `efa37e6f`): `json.dump()` on SDK Server instance. Removed caching.
- **writeFileSync missing** (fixed `efa37e6f`): Import dropped in merge.
- **shouldSkipStuckCheck missing** (fixed `b292cee4`): Function dropped in merge.
- **XState MARK_DONE** (fixed `914698af`): Only accepted from 3 states; added to all non-terminal states.
- **MCP phase name mapping** (fixed `5a3e01e0`): `specCreation` not mapped to `spec` in `utils.ts`.
- **OutputMonitor false idle** (fixed `5a3e01e0`): Added `hasUnresolvedToolUse()` dynamic check.
- **processQueue merge regression** (fixed `d458073e`): Andy's fix lost in merge; restored and added a settings-change trigger.
```
Several checklist items remain unverified.
Lines 8, 11-12, 14, 31, 47-48, and 55 have uncertain markers ([x?], [...?], [???]). If this file ships as part of the PR, consider either resolving these items or clearly marking them as known gaps so they aren't forgotten post-merge.
🤖 Prompt for AI Agents
In `@MERGE-TESTING-CHECKLIST.md` around lines 1 - 98, The checklist contains
ambiguous completion markers ([x?], [...?], [???]) under sections like "Main
checklist", "RDR System", "MCP Server and tools", and "Task Crash Recovery";
update the checklist to remove ambiguity by either (a) verifying each uncertain
item and replacing the marker with a definitive state "[x]" or "[-]" plus a
short note (e.g., "needs follow-up: <who/when>") or (b) explicitly label
remaining items under a "Known Gaps" subsection with a brief rationale and
owner, ensuring all instances of "[x?]", "[...?]", and "[???]" are resolved or
documented so they won't be forgotten post-merge.
I have read the CLA Document and I hereby sign the CLA
- `env-utils.ts`: Read both `env.PATH` and `env.Path` to preserve the original system path on Windows (spread makes `process.env` case-sensitive)
- `agent-process.ts`: Remove redundant late `Path` fix (now handled at source)
- `coder.py`: Move `_save_exit_reason("error")` after concurrency/rate-limit/auth checks so retryable errors don't trigger RDR during retry windows

… comment)
- `client.py`: split long `logger.info` lines to pass ruff format check
- `client.py`: correct SDK version comment from v1.x to v0.1.33
- `.gitignore`: remove duplicate `nul` entry (line 192)
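The `env.PATH`/`env.Path` point from the `env-utils.ts` commit can be illustrated in isolation: on Windows, `process.env` itself is case-insensitive, but spreading it into a plain object loses that, so a lookup should try both spellings. A hedged sketch, not the actual `env-utils.ts` code:

```typescript
// Hypothetical sketch of the case-insensitive PATH lookup described above.
// A plain object copy of process.env is case-sensitive, so check both keys.
function getPathVar(env: Record<string, string | undefined>): string {
  return env.PATH ?? env.Path ?? env.path ?? "";
}

console.log(getPathVar({ Path: "C:\\Windows\\System32" })); // C:\Windows\System32
console.log(getPathVar({ PATH: "/usr/bin" })); // /usr/bin
```

Reading both spellings at the source means downstream code (like `agent-process.ts`) no longer needs its own late `Path` fix.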
Base Branch
- `develop` branch (required for all feature/fix PRs)
- `main` (hotfix only - maintainers)

Description
Adds an MCP server (15 tools) that lets Claude Code orchestrate Auto-Claude tasks programmatically and/or unattended. Includes RDR (Recover, Debug, Resend) — a 6-priority automatic recovery system that detects stuck/failed tasks and sends recovery prompts to a master LLM via MCP. Enables unattended overnight batch runs with auto-shutdown when all tasks are complete.
Key additions:
Related Issue
N/A — Feature contribution from fork
Type of Change
Area
Commit Message Format
`feat: MCP server, RDR auto-recovery & batch task orchestration`

AI Disclosure
Tool(s) used: Claude Code (Opus)
Testing level:
- Untested -- AI output not yet verified
- Lightly tested -- ran the app / spot-checked key paths
- Fully tested -- all tests pass, manually verified behavior
I understand what this PR does and how the underlying code works
Checklist
- PR targets the `develop` branch

Platform Testing Checklist
- Uses the `platform/` module instead of direct `process.platform` checks (uses `findExecutable()` or platform abstractions)

CI/Testing Requirements
Screenshots
N/A — Backend/infrastructure changes
Feature Toggle
Breaking Changes
Breaking: No
Summary by CodeRabbit
New Features
Improvements
Bug Fixes
Documentation