fix: watch worktree path for implementation_plan.json changes (#1805)#1842
Conversation
The FileWatcher was always watching the main project's spec directory for implementation_plan.json changes. When tasks run in a worktree, the backend writes the plan file to the worktree directory instead, so the watcher never detected changes and subtask progress was never sent to the UI.

Changes:
- Add getSpecDirForWatcher() helper that checks the worktree path first
- Update all 3 file watcher setup locations (TASK_START, TASK_UPDATE_STATUS auto-start, TASK_RECOVER_STUCK auto-restart) to use worktree-aware paths
- Add re-watch logic in the execution-progress handler: when a worktree appears after task start, automatically switch the watcher to the worktree path
- Add a worktree fallback in the exit handler for reading final plan state
- Add getWatchedSpecDir() method to FileWatcher for path comparison

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough

Adds guarded watcher creation and a public accessor to FileWatcher; updates task/agent IPC handlers and watcher resolution to prefer worktree spec directories, read from (or fall back to) worktree implementation plans, and reconfigure watchers to worktree spec dirs when appropriate.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant TaskManager
    participant IPCHandler
    participant FileWatcher
    participant FS as Worktree/Project FS
    TaskManager->>IPCHandler: task exit / persist progress
    IPCHandler->>FileWatcher: getWatchedSpecDir(taskId)
    FileWatcher-->>IPCHandler: watchedDir or null
    alt watchedDir present and in-memory plan absent
        IPCHandler->>FS: compute worktreeSpecDir & read implementation_plan.json
        FS-->>IPCHandler: implementation_plan.json (found / not found)
        alt plan found in worktree
            IPCHandler->>FileWatcher: compare watchedDir vs worktreeSpecDir
            alt different
                IPCHandler->>FileWatcher: rewatch(taskId, worktreeSpecDir)
                FileWatcher-->>IPCHandler: success / error
            end
        end
    end
    IPCHandler->>FS: persist phase/progress to chosen spec dir
    FS-->>IPCHandler: write ack
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ Passed checks (6 passed)
Summary of Changes

Hello @AndyMik90, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request addresses a critical issue where subtasks were not being generated or updated correctly in the UI because the file watcher failed to track changes in worktree spec directories.
Code Review
This pull request effectively resolves the issue of the file watcher not tracking implementation_plan.json changes in worktrees. The core of the fix involves introducing getSpecDirForWatcher to correctly determine whether to watch the main project path or the worktree path, and applying this at all file watcher setup locations. The addition of re-watching logic in agent-events-handlers.ts for tasks where the worktree is created post-start is a thoughtful touch that handles an important edge case. The fallback mechanism in the exit handler to read the final plan from the worktree is also a solid defensive addition.
The code is well-structured and the changes are logical. I have a couple of suggestions regarding variable naming consistency in execution-handlers.ts to improve long-term maintainability. Overall, this is a great fix.
```diff
 const specsBaseDirForRecovery = getSpecsDir(project.autoBuildPath);
 const specDirForWatcher = getSpecDirForWatcher(project.path, specsBaseDirForRecovery, task.specId);
 fileWatcher.watch(taskId, specDirForWatcher);

 // Check if spec.md exists to determine whether to run spec creation or task execution
-const specFilePath = path.join(specDirForWatcher, AUTO_BUILD_PATHS.SPEC_FILE);
+// Check main project path for spec file (spec is created before worktree)
+const mainSpecDirForRecovery = path.join(project.path, specsBaseDirForRecovery, task.specId);
+const specFilePath = path.join(mainSpecDirForRecovery, AUTO_BUILD_PATHS.SPEC_FILE);
```
For consistency with the TASK_START and TASK_UPDATE_STATUS handlers, it would be clearer to standardize the variable names used here. Using specsBaseDir, watchSpecDir, and mainSpecDir would improve maintainability across the file.
```diff
-const specsBaseDirForRecovery = getSpecsDir(project.autoBuildPath);
-const specDirForWatcher = getSpecDirForWatcher(project.path, specsBaseDirForRecovery, task.specId);
-fileWatcher.watch(taskId, specDirForWatcher);
-// Check if spec.md exists to determine whether to run spec creation or task execution
-// Check main project path for spec file (spec is created before worktree)
-const mainSpecDirForRecovery = path.join(project.path, specsBaseDirForRecovery, task.specId);
-const specFilePath = path.join(mainSpecDirForRecovery, AUTO_BUILD_PATHS.SPEC_FILE);
+const specsBaseDir = getSpecsDir(project.autoBuildPath);
+const watchSpecDir = getSpecDirForWatcher(project.path, specsBaseDir, task.specId);
+fileWatcher.watch(taskId, watchSpecDir);
+// Check if spec.md exists to determine whether to run spec creation or task execution
+// Check main project path for spec file (spec is created before worktree)
+const mainSpecDir = path.join(project.path, specsBaseDir, task.specId);
+const specFilePath = path.join(mainSpecDir, AUTO_BUILD_PATHS.SPEC_FILE);
```
```diff
 const taskDescription = task.description || task.title;
 console.warn(`[Recovery] Starting spec creation for: ${task.specId}`);
-agentManager.startSpecCreation(taskId, project.path, taskDescription, specDirForWatcher, task.metadata, baseBranchForRecovery, project.id);
+agentManager.startSpecCreation(taskId, project.path, taskDescription, mainSpecDirForRecovery, task.metadata, baseBranchForRecovery, project.id);
```
To align with the suggested variable name standardization in the previous block, please update mainSpecDirForRecovery to mainSpecDir here as well.
```diff
-agentManager.startSpecCreation(taskId, project.path, taskDescription, mainSpecDirForRecovery, task.metadata, baseBranchForRecovery, project.id);
+agentManager.startSpecCreation(taskId, project.path, taskDescription, mainSpecDir, task.metadata, baseBranchForRecovery, project.id);
```
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@apps/frontend/src/main/ipc-handlers/agent-events-handlers.ts`:
- Around line 238-247: The call to fileWatcher.watch(taskId, worktreeSpecDir) is
asynchronous and may reject; update the re-watch block (the logic using
fileWatcher.getWatchedSpecDir, currentWatchDir, worktreeSpecDir,
existsSync(worktreePlanPath), and taskId) to properly handle the returned
Promise instead of fire-and-forget — either await the call inside an async
function or attach a .catch that logs errors (e.g., using console.error or
process logger) so any rejection from internal unwatch()/watch() is caught and
does not create an unhandled promise rejection.
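The prompt above asks for a `.catch` on the re-watch call; a minimal sketch of that pattern follows, with `FileWatcherLike` as a stand-in interface rather than the project's actual `FileWatcher` type:

```typescript
// Stand-in for the real FileWatcher: only the async watch() signature matters here.
interface FileWatcherLike {
  watch(taskId: string, specDir: string): Promise<void>;
}

// Never leave an async watch() call fire-and-forget: attach a .catch that logs,
// so a rejection from internal unwatch()/watch() cannot become an unhandled
// promise rejection.
function rewatchSafely(fileWatcher: FileWatcherLike, taskId: string, worktreeSpecDir: string): void {
  fileWatcher.watch(taskId, worktreeSpecDir).catch((err) => {
    console.error(`[agent-events-handlers] Failed to re-watch for ${taskId}:`, err);
  });
}
```

Awaiting inside an async handler would serialize the update as well; the `.catch` form fits call sites that are not async.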
AndyMik90 left a comment
✅ Auto Claude Review - APPROVED
Status: Ready to Merge
Summary: Merge Verdict: ✅ READY TO MERGE
✅ Ready to merge - All checks passing, no blocking issues found.
No blocking issues found
Risk Assessment
| Factor | Level | Notes |
|---|---|---|
| Complexity | Low | Based on lines changed |
| Security Impact | None | Based on security findings |
| Scope Coherence | Good | Based on structural review |
Generated by Auto Claude PR Review
This automated review found no blocking issues. The PR can be safely merged.
Generated by Auto Claude
…dling (#1805) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@apps/frontend/src/main/ipc-handlers/agent-events-handlers.ts`:
- Around line 238-249: The FileWatcher.watch() call here can race because
watch() is async and updates watcher state after awaiting unwatch(), so change
the call in agent-events-handlers (where fileWatcher.watch(taskId,
worktreeSpecDir) is invoked) to await the promise to serialize updates (i.e.,
await fileWatcher.watch(taskId, worktreeSpecDir).catch(...)); alternatively, if
you prefer fixing internally, modify FileWatcher.watch() so it sets
this.watchers.set(taskId, pendingMarker) synchronously before awaiting
this.unwatch(taskId) to ensure the in-memory state prevents overlapping calls;
reference FileWatcher.watch(), FileWatcher.unwatch(),
fileWatcher.getWatchedSpecDir(taskId) and the current call site in
agent-events-handlers.ts.
In `@apps/frontend/src/main/ipc-handlers/task/execution-handlers.ts`:
- Around line 221-224: fileWatcher.watch(...) is being called fire-and-forget in
execution-handlers (e.g., the call where specsBaseDir is derived via getSpecsDir
and watchSpecDir via getSpecDirForWatcher) which can cause unhandled promise
rejections; update each fileWatcher.watch call (the one using taskId and
watchSpecDir, and the other two calls around the same areas) to append a
.catch(...) that logs the error (use the same logging pattern used in
agent-events-handlers.ts) so any rejection is handled and reported. Ensure you
reference the existing taskId/watchSpecDir variables and the
fileWatcher.watch(...) call sites when adding the .catch.
- Around line 1163-1171: The inner const specsBaseDir declared inside the
autoRestart block duplicates and shadows the outer specsBaseDir computed earlier
via getSpecsDir(project.autoBuildPath); remove the inner declaration and reuse
the outer specsBaseDir (used together with getSpecDirForWatcher, watchSpecDir,
mainSpecDir, and specFilePath) so the code no longer redeclares specsBaseDir and
relies on the single previously computed value.
…rror handling

- Add pendingWatches guard in FileWatcher.watch() to prevent overlapping async calls from creating duplicate watchers (CodeRabbit critical finding)
- Add .catch() to all three fire-and-forget fileWatcher.watch() calls in execution-handlers.ts to prevent unhandled promise rejections
- Remove shadowed specsBaseDir re-declaration in autoRestart block, reusing the outer variable from the same TASK_RECOVER_STUCK handler scope
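A sketch of how such a pendingWatches guard might look, with chokidar stubbed behind a `createWatcher` factory so only the guard logic is visible; this is illustrative, not the PR's exact code:

```typescript
// Sketch of the guard described in the commit message. The real class wraps
// chokidar watchers; here the watcher is anything with an async close().
class FileWatcherSketch {
  private watchers = new Map<string, { close(): Promise<void> }>();
  private pendingWatches = new Set<string>();

  async watch(taskId: string, createWatcher: () => { close(): Promise<void> }): Promise<void> {
    // Guard: a second watch() for the same task while one is in flight is a no-op.
    // Caveat: keyed by taskId only, so a re-watch with a different directory is
    // also skipped while a call is pending.
    if (this.pendingWatches.has(taskId)) return;
    this.pendingWatches.add(taskId);
    try {
      // Close any existing watcher before storing the new one.
      const existing = this.watchers.get(taskId);
      if (existing) {
        await existing.close();
        this.watchers.delete(taskId);
      }
      this.watchers.set(taskId, createWatcher());
    } finally {
      this.pendingWatches.delete(taskId);
    }
  }

  count(): number {
    return this.watchers.size;
  }
}
```

Without the guard, two overlapping calls could each await `close()` and then each store a watcher, leaking one of them.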
Review Finding Analysis

Findings Addressed (commit e7875dd)
- CodeRabbit - Critical: Race condition in FileWatcher.watch() (agent-events-handlers.ts:238-249)
- CodeRabbit - Trivial: Missing .catch() on fire-and-forget fileWatcher.watch() calls (execution-handlers.ts)
- CodeRabbit - Trivial: Shadowed specsBaseDir re-declaration in the autoRestart block

Findings Already Resolved
- Gemini - Variable naming consistency (execution-handlers.ts:1164-1182)
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@apps/frontend/src/main/file-watcher.ts`:
- Around line 32-38: The watch/unwatch race can interleave while awaiting
existing.watcher.close(); update unwatch(taskId) to check a pendingWatches set
and return early if a watch(taskId) is in-flight (e.g., if
(this.pendingWatches.has(taskId)) return;), and ensure watch(taskId) adds taskId
to this.pendingWatches at start and removes it after the async close/create flow
completes so cleanup is coordinated; reference the methods watch(), unwatch(),
the this.watchers map, existing.watcher.close(), and the pendingWatches set when
making the change.
- Around line 42-46: The current check in the file-watcher that calls
this.emit('error', taskId, `Plan file not found: ${planPath}`) when
existsSync(planPath) is false should be changed to silently return instead of
emitting an error to avoid UI noise during normal watch() retry flows; locate
the block that references existsSync and planPath and remove the emit('error',
...) call so the function simply returns when the plan file is missing
(optionally keep a non-error debug/log statement if you need visibility).
In `@apps/frontend/src/main/ipc-handlers/task/execution-handlers.ts`:
- Around line 1174-1176: The inner block re-declares const mainSpecDir shadowing
the outer mainSpecDir; remove the inner re-declaration inside the autoRestart
block and reuse the outer mainSpecDir (which is computed from project.path,
specsBaseDir and task.specId) when constructing specFilePath (using
AUTO_BUILD_PATHS.SPEC_FILE). Ensure you only remove the duplicated const
mainSpecDir = ... line and keep the path.join(mainSpecDir,
AUTO_BUILD_PATHS.SPEC_FILE) usage so no other changes are needed.
```ts
try {
  // Close any existing watcher for this task
  const existing = this.watchers.get(taskId);
  if (existing) {
    await existing.watcher.close();
    this.watchers.delete(taskId);
  }
```
🧹 Nitpick | 🔵 Trivial
Minor: unwatch() and watch() can interleave during close().
If unwatch(taskId) is called while watch(taskId) is awaiting existing.watcher.close() on line 36, unwatch will also attempt to close and delete the same watcher. The double-close on a chokidar FSWatcher is benign in practice, but unwatch completing first means line 37 here (this.watchers.delete) operates on an already-empty slot, and the new watcher is then stored without issue. No functional bug, but consider checking pendingWatches inside unwatch() to coordinate:
```ts
// in unwatch():
if (this.pendingWatches.has(taskId)) {
  // A watch() is in flight — it will handle cleanup
  return;
}
```

This is low-priority since the current code is safe in Node's single-threaded model (the only real yield point is the close() await).
🤖 Prompt for AI Agents
In `@apps/frontend/src/main/file-watcher.ts` around lines 32 - 38, The
watch/unwatch race can interleave while awaiting existing.watcher.close();
update unwatch(taskId) to check a pendingWatches set and return early if a
watch(taskId) is in-flight (e.g., if (this.pendingWatches.has(taskId)) return;),
and ensure watch(taskId) adds taskId to this.pendingWatches at start and removes
it after the async close/create flow completes so cleanup is coordinated;
reference the methods watch(), unwatch(), the this.watchers map,
existing.watcher.close(), and the pendingWatches set when making the change.
```ts
// Check if plan file exists
if (!existsSync(planPath)) {
  this.emit('error', taskId, `Plan file not found: ${planPath}`);
  return;
}
```
Emitting 'error' for a missing plan file creates unnecessary UI noise during normal retry flow.
The error listener in agent-events-handlers.ts (line 309) sends all 'error' events to the renderer via IPC_CHANNELS.TASK_ERROR, surfacing them in the UI. During normal operation—when the worktree and implementation_plan.json are created after the initial watch() call—this emits alarming error messages even though callers already handle retries silently.
Remove the error emission for missing files and return silently:
Proposed fix:

```diff
 // Check if plan file exists
 if (!existsSync(planPath)) {
-  this.emit('error', taskId, `Plan file not found: ${planPath}`);
+  // Plan file may not exist yet (e.g. worktree still being created).
+  // Callers handle re-watching, so a silent return avoids noisy error events.
   return;
 }
```

📝 Committable suggestion
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
// Check if plan file exists
if (!existsSync(planPath)) {
  // Plan file may not exist yet (e.g. worktree still being created).
  // Callers handle re-watching, so a silent return avoids noisy error events.
  return;
}
```
🤖 Prompt for AI Agents
In `@apps/frontend/src/main/file-watcher.ts` around lines 42 - 46, The current
check in the file-watcher that calls this.emit('error', taskId, `Plan file not
found: ${planPath}`) when existsSync(planPath) is false should be changed to
silently return instead of emitting an error to avoid UI noise during normal
watch() retry flows; locate the block that references existsSync and planPath
and remove the emit('error', ...) call so the function simply returns when the
plan file is missing (optionally keep a non-error debug/log statement if you
need visibility).
```ts
// Check main project path for spec file (spec is created before worktree)
const mainSpecDir = path.join(project.path, specsBaseDir, task.specId);
const specFilePath = path.join(mainSpecDir, AUTO_BUILD_PATHS.SPEC_FILE);
```
🧹 Nitpick | 🔵 Trivial
Shadowed mainSpecDir re-declaration — reuse the outer variable.
Line 1175 re-declares const mainSpecDir inside the autoRestart block, shadowing the identical declaration at Line 935. Both compute path.join(project.path, specsBaseDir, task.specId) from the same inputs. This is the same pattern that was already cleaned up for specsBaseDir.
♻️ Remove the inner re-declaration:

```diff
 // Check if spec.md exists to determine whether to run spec creation or task execution
 // Check main project path for spec file (spec is created before worktree)
-const mainSpecDir = path.join(project.path, specsBaseDir, task.specId);
 const specFilePath = path.join(mainSpecDir, AUTO_BUILD_PATHS.SPEC_FILE);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // Check main project path for spec file (spec is created before worktree) | |
| const mainSpecDir = path.join(project.path, specsBaseDir, task.specId); | |
| const specFilePath = path.join(mainSpecDir, AUTO_BUILD_PATHS.SPEC_FILE); | |
| // Check if spec.md exists to determine whether to run spec creation or task execution | |
| // Check main project path for spec file (spec is created before worktree) | |
| const specFilePath = path.join(mainSpecDir, AUTO_BUILD_PATHS.SPEC_FILE); |
🤖 Prompt for AI Agents
In `@apps/frontend/src/main/ipc-handlers/task/execution-handlers.ts` around lines
1174 - 1176, The inner block re-declares const mainSpecDir shadowing the outer
mainSpecDir; remove the inner re-declaration inside the autoRestart block and
reuse the outer mainSpecDir (which is computed from project.path, specsBaseDir
and task.specId) when constructing specFilePath (using
AUTO_BUILD_PATHS.SPEC_FILE). Ensure you only remove the duplicated const
mainSpecDir = ... line and keep the path.join(mainSpecDir,
AUTO_BUILD_PATHS.SPEC_FILE) usage so no other changes are needed.
AndyMik90 left a comment
🤖 Auto Claude PR Review
🟠 Follow-up Review: Needs Revision
🟠 Needs revision - 3 blocking issue(s) require fixes.
Resolution Status
- ✅ Resolved: 0 previous findings addressed
- ❌ Unresolved: 0 previous findings remain
- 🆕 New Issues: 4 new findings in recent changes
Finding Validation
- 🔍 Dismissed as False Positives: 0 findings were re-investigated and found to be incorrect
- ✓ Confirmed Valid: 4 findings verified as genuine issues
- 👤 Needs Human Review: 0 findings require manual verification
🚨 Blocking Issues
- quality: pendingWatches guard silently drops re-watch calls with different specDir
- quality: pendingWatches does not coordinate with unwatch(), allowing watcher leak on quick task exit
- quality: unwatch() promise not handled in non-async exit handler — inconsistent with watch() .catch() pattern
Verdict
All 20 CI checks are passing. No previous findings to verify. The 2 commits address CodeRabbit's critical race condition finding by adding a pendingWatches guard and .catch() handlers to watch() calls — good progress. However, 3 new MEDIUM severity issues were identified and confirmed valid by finding-validator: (1) NEW-001: The pendingWatches guard uses Set keyed only by taskId, silently dropping legitimate re-watch calls that carry a different specDir — this undermines the worktree re-watch logic in agent-events-handlers.ts:246. (2) NEW-002: unwatch() has no awareness of pendingWatches, so if an agent exits quickly during watch() setup, the newly created watcher leaks. This is ironic since the pendingWatches mechanism was specifically introduced to fix race conditions. (3) NEW-003: fileWatcher.unwatch(taskId) at agent-events-handlers.ts:128 lacks .catch(), inconsistent with the .catch() pattern applied to all watch() calls in this same PR. Additionally, 1 LOW severity quality issue (CMT-001: shadowed mainSpecDir). All findings are easily fixable. The race condition fixes are a step in the right direction but need refinement to fully address the concurrency edge cases.
Review Process
Agents invoked: new-code-reviewer, comment-analyzer, finding-validator
This is an AI-generated follow-up review using parallel specialist analysis with finding validation.
Findings (4 selected of 4 total)
🟡 [NEW-001] [MEDIUM] pendingWatches guard silently drops re-watch calls with different specDir
📁 apps/frontend/src/main/file-watcher.ts:27
The pendingWatches guard uses a Set keyed only by taskId, so it cannot distinguish between a redundant duplicate call (same specDir) and a legitimate re-watch call (different specDir from agent-events-handlers.ts:246). If the re-watch fires while initial watch() is still pending (e.g., awaiting watcher.close()), the re-watch is silently discarded, leaving the watcher on the wrong path. Fix: Use a Map<string, string> storing taskId -> specDir, and only skip when specDir matches.
Suggested fix:
Change pendingWatches from Set<string> to Map<string, string> storing taskId -> specDir. Only return early if the pending specDir matches the requested specDir. When a different specDir is requested, allow the call to proceed (it will close the old watcher in the try block).
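The Map-based guard suggested above could be sketched as follows; `PendingWatchGuard` is a hypothetical helper used for illustration, not code from the PR:

```typescript
// Key pending watches by taskId -> specDir so a re-watch with a *different*
// directory is not silently dropped, while exact duplicates still short-circuit.
class PendingWatchGuard {
  private pending = new Map<string, string>();

  // Returns true if the caller should proceed with the watch.
  begin(taskId: string, specDir: string): boolean {
    const inFlight = this.pending.get(taskId);
    if (inFlight === specDir) return false; // redundant duplicate: skip
    this.pending.set(taskId, specDir);      // new task or different dir: proceed
    return true;
  }

  end(taskId: string): void {
    this.pending.delete(taskId);
  }
}
```

With this shape, the worktree re-watch (same taskId, new directory) proceeds and closes the old watcher, instead of being discarded by a Set keyed on taskId alone.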
🟡 [NEW-002] [MEDIUM] pendingWatches does not coordinate with unwatch(), allowing watcher leak on quick task exit
📁 apps/frontend/src/main/file-watcher.ts:27
unwatch() has no awareness of pendingWatches. If unwatch() runs during watch()'s await (agent exits quickly after start): unwatch() closes old watcher and deletes from map, then watch() resumes creating a NEW watcher that is stored but never cleaned up. The leaked file watcher remains open until app shutdown (unwatchAll) or next watch() for the same taskId. Fix: Add a cancellation mechanism so unwatch() can signal watch() to abort after its await points.
Suggested fix:
Add a cancelledWatches Set. In unwatch(), add taskId to cancelledWatches. In watch(), after each await point, check if cancelledWatches.has(taskId) and return early before creating the new watcher. Clean up cancelledWatches in both methods' finally blocks.
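A sketch of the suggested cancellation handshake, with watcher creation and close stubbed out; the `CancellableWatcher` name and stubs are illustrative only:

```typescript
// unwatch() marks the task cancelled so a watch() that is mid-await aborts
// before storing a new watcher, preventing the leak described above.
class CancellableWatcher {
  private watchers = new Map<string, { close(): Promise<void> }>();
  private cancelled = new Set<string>();

  async watch(taskId: string, create: () => { close(): Promise<void> }): Promise<void> {
    this.cancelled.delete(taskId);
    const existing = this.watchers.get(taskId);
    if (existing) {
      await existing.close(); // yield point: unwatch() may run here
      this.watchers.delete(taskId);
    }
    // Re-check after every await so a concurrent unwatch() wins.
    if (this.cancelled.has(taskId)) {
      this.cancelled.delete(taskId);
      return;
    }
    this.watchers.set(taskId, create());
  }

  async unwatch(taskId: string): Promise<void> {
    this.cancelled.add(taskId);
    const existing = this.watchers.get(taskId);
    if (existing) {
      await existing.close();
      this.watchers.delete(taskId);
    }
  }

  isWatching(taskId: string): boolean {
    return this.watchers.has(taskId);
  }
}
```

The key invariant: after `unwatch()` resolves, no watcher for that taskId survives, even if a `watch()` was in flight when it was called.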
🟡 [NEW-003] [MEDIUM] unwatch() promise not handled in non-async exit handler — inconsistent with watch() .catch() pattern
📁 apps/frontend/src/main/ipc-handlers/agent-events-handlers.ts:128
fileWatcher.unwatch(taskId) at line 128 is called in a non-async callback without .catch(). Since unwatch() is async and internally awaits watcher.close(), a rejection produces an unhandled promise rejection. This is inconsistent with the .catch() pattern this PR applied to all watch() calls (agent-events-handlers.ts:246, execution-handlers.ts:224, 729, 1169).
Suggested fix:
Add .catch() handler: fileWatcher.unwatch(taskId).catch((err) => { console.error(`[agent-events-handlers] Failed to unwatch for ${taskId}:`, err); });
🔵 [CMT-001] [LOW] Shadowed mainSpecDir re-declaration in autoRestart block
📁 apps/frontend/src/main/ipc-handlers/task/execution-handlers.ts:1175
const mainSpecDir is declared at line 935 in the outer try block and again at line 1175 inside the nested autoRestart block. Both compute path.join(project.path, specsBaseDir, task.specId) from the same inputs. The inner declaration is redundant and shadows the outer variable which is already in scope.
Suggested fix:
Remove the redundant const mainSpecDir declaration at line 1175 and use the outer variable directly.
This review was generated by Auto Claude.
Summary
Fixes #1805 - Subtasks not generated/updated correctly since v2.7.6-beta.3
Root Cause

The `FileWatcher` was always watching the main project's spec directory for `implementation_plan.json` changes. When tasks run in a worktree (the default since v2.7.6-beta.3), the backend writes the plan file to the worktree's spec directory instead. The `FileWatcher` never detected changes, so `TASK_PROGRESS` IPC events were never sent to the renderer, resulting in "0 subtasks" in the UI.

Changes

- `file-watcher.ts`: Added `getWatchedSpecDir()` method to check what path is currently being watched
- `execution-handlers.ts`: Added `getSpecDirForWatcher()` helper that checks the worktree path first and falls back to the main project path. Updated all 3 file watcher setup locations: the `TASK_START` handler, `TASK_UPDATE_STATUS` auto-start, and `TASK_RECOVER_STUCK` auto-restart
- `agent-events-handlers.ts`: in the `execution-progress` handler, when a worktree appears after task start (task started before the worktree was created), the watcher automatically switches to the worktree path; in the `exit` handler, a worktree fallback reads the final plan state when the watcher was on the wrong path

Edge Cases Handled
Closes #1805
Summary by CodeRabbit
New Features
Bug Fixes