
fix: watch worktree path for implementation_plan.json changes (#1805)#1842

Open
AndyMik90 wants to merge 3 commits intodevelopfrom
fix/1805-subtasks-not-updating

Conversation


@AndyMik90 AndyMik90 commented Feb 15, 2026

Summary

Fixes #1805 - Subtasks not generated/updated correctly since v2.7.6-beta.3

Root Cause

The FileWatcher was always watching the main project's spec directory for implementation_plan.json changes. When tasks run in a worktree (the default since v2.7.6-beta.3), the backend writes the plan file to the worktree's spec directory instead. The FileWatcher never detected changes, so TASK_PROGRESS IPC events were never sent to the renderer, resulting in "0 subtasks" in the UI.

Changes

  • file-watcher.ts: Added getWatchedSpecDir() method to check what path is currently being watched
  • execution-handlers.ts: Added getSpecDirForWatcher() helper that checks worktree path first, falls back to main project path. Updated all 3 file watcher setup locations:
    1. TASK_START handler
    2. TASK_UPDATE_STATUS auto-start
    3. TASK_RECOVER_STUCK auto-restart
  • agent-events-handlers.ts:
    • Added re-watch logic in execution-progress handler: when a worktree appears after task start (task started before worktree was created), automatically switches the watcher to the worktree path
    • Added worktree fallback in exit handler for reading final plan state when the watcher was on the wrong path
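The worktree-first path resolution described above can be sketched in TypeScript as follows. This is a hedged sketch, not the PR's actual code: the .worktrees location and the getWorktreePath helper are illustrative assumptions.

```typescript
import * as path from "path";
import { existsSync } from "fs";

// Hypothetical stand-in for the app's worktree lookup; the real lookup in
// execution-handlers.ts may differ in name and signature.
function getWorktreePath(projectPath: string, specId: string): string | null {
  const candidate = path.join(projectPath, ".worktrees", specId);
  return existsSync(candidate) ? candidate : null;
}

// Prefer the worktree's spec directory when a worktree exists; otherwise
// fall back to the main project's spec directory (the original behavior).
function getSpecDirForWatcher(projectPath: string, specsBaseDir: string, specId: string): string {
  const worktreePath = getWorktreePath(projectPath, specId);
  if (worktreePath !== null) {
    return path.join(worktreePath, specsBaseDir, specId);
  }
  return path.join(projectPath, specsBaseDir, specId);
}
```

Because the fallback returns the main project path whenever no worktree is found, non-worktree tasks keep their previous behavior.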

Edge Cases Handled

  • Worktree exists at task start: Watches worktree path directly
  • Worktree created after task start: Re-watch logic in execution-progress handler switches to worktree when it appears
  • Worktree deleted after completion: Exit handler cleanly unwatches; chokidar handles missing files gracefully
  • Non-worktree tasks: Falls back to main project path (behavior unchanged)
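The second edge case (worktree created after task start) reduces to a small decision plus an awaited re-watch. The following is a sketch under assumptions, not the PR's code: shouldRewatch and onExecutionProgress are hypothetical names, and the watcher interface is trimmed to the two methods mentioned above.

```typescript
import * as path from "path";
import { existsSync } from "fs";

// Pure decision: switch the watcher only when the worktree's plan file
// exists and the watcher is still pointed somewhere else.
function shouldRewatch(currentWatchDir: string | null, worktreeSpecDir: string, planExists: boolean): boolean {
  return planExists && currentWatchDir !== worktreeSpecDir;
}

interface WatcherLike {
  getWatchedSpecDir(taskId: string): string | null;
  watch(taskId: string, specDir: string): Promise<void>;
}

// Illustrative call site inside an execution-progress handler.
async function onExecutionProgress(fileWatcher: WatcherLike, taskId: string, worktreeSpecDir: string): Promise<void> {
  const planExists = existsSync(path.join(worktreeSpecDir, "implementation_plan.json"));
  if (shouldRewatch(fileWatcher.getWatchedSpecDir(taskId), worktreeSpecDir, planExists)) {
    // Handle the promise so a failed unwatch/watch cycle is logged instead
    // of becoming an unhandled rejection.
    await fileWatcher.watch(taskId, worktreeSpecDir).catch((err) => {
      console.error(`[FileWatcher] re-watch failed for ${taskId}:`, err);
    });
  }
}
```

Keeping the decision pure makes the three edge cases above easy to verify independently of any filesystem state.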

Closes #1805

Summary by CodeRabbit

  • New Features

    • Added a way to query which spec directory a watcher is observing.
    • Watchers now emit an initial progress update when set up.
  • Bug Fixes

    • Improved final plan recovery with a reliable fallback when the primary source is missing.
    • Better handling of spec directories across worktrees to ensure correct watching and progress persistence.
    • Automatically reconfigure watchers when a spec directory shifts, and prevent overlapping watcher operations to reduce spurious errors.

The FileWatcher was always watching the main project's spec directory for
implementation_plan.json changes. When tasks run in a worktree, the backend
writes the plan file to the worktree directory instead, so the watcher never
detected changes and subtask progress was never sent to the UI.

Changes:
- Add getSpecDirForWatcher() helper that checks worktree path first
- Update all 3 file watcher setup locations (TASK_START, TASK_UPDATE_STATUS
  auto-start, TASK_RECOVER_STUCK auto-restart) to use worktree-aware paths
- Add re-watch logic in execution-progress handler: when a worktree appears
  after task start, automatically switch the watcher to the worktree path
- Add worktree fallback in exit handler for reading final plan state
- Add getWatchedSpecDir() method to FileWatcher for path comparison

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@github-actions github-actions bot added area/frontend This is frontend only bug Something isn't working size/S Small (10-99 lines) labels Feb 15, 2026

coderabbitai bot commented Feb 15, 2026

📝 Walkthrough

Adds guarded watcher creation and a public accessor to FileWatcher; updates task/agent IPC handlers and watcher resolution to prefer worktree spec directories, to fall back to worktree implementation plans when the in-memory plan is missing, and to reconfigure watchers onto worktree spec dirs when appropriate.

Changes

Cohort / File(s): Summary

File Watcher Enhancement
apps/frontend/src/main/file-watcher.ts
  Add pendingWatches guarding to avoid overlapping watch() calls, ensure safe watcher restart and stable plan reads, emit progress/error events, and expose getWatchedSpecDir(taskId: string): string | null.

Agent events & plan persistence
apps/frontend/src/main/ipc-handlers/agent-events-handlers.ts
  When the final plan is missing, attempt to read implementation_plan.json from the task worktree; compute and use worktreeSpecDir for persisting progress; if a valid worktree plan exists but the watcher still watches the main spec dir, log and attempt to re-watch the worktree spec dir.

Worktree-aware watcher resolution (task execution)
apps/frontend/src/main/ipc-handlers/task/execution-handlers.ts
  Introduce getSpecDirForWatcher(projectPath, specsBaseDir, specId) and switch watcher initialization across start/restart/recovery to prefer worktree paths while preserving main-project checks (e.g., spec.md) against the main project path; ensure spec creation uses the main project path when appropriate.

Sequence Diagram(s)

sequenceDiagram
    participant TaskManager
    participant IPCHandler
    participant FileWatcher
    participant FS as Worktree/Project FS

    TaskManager->>IPCHandler: task exit / persist progress
    IPCHandler->>FileWatcher: getWatchedSpecDir(taskId)
    FileWatcher-->>IPCHandler: watchedDir or null
    alt watchedDir present and in-memory plan absent
        IPCHandler->>FS: compute worktreeSpecDir & read implementation_plan.json
        FS-->>IPCHandler: implementation_plan.json (found / not found)
        alt plan found in worktree
            IPCHandler->>FileWatcher: compare watchedDir vs worktreeSpecDir
            alt different
                IPCHandler->>FileWatcher: rewatch(taskId, worktreeSpecDir)
                FileWatcher-->>IPCHandler: success / error
            end
        end
    end
    IPCHandler->>FS: persist phase/progress to chosen spec dir
    FS-->>IPCHandler: write ack

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Possibly related PRs

Suggested labels

  • size/M

Suggested reviewers

  • MikeeBuilds
  • AlexMadera

Poem

🐰 I nibble paths where plans may hide,

I hop from main to worktree side.
If memory sleeps and files are faint,
I peek the worktree — no need to paint.
Watchers swap, and progress wakes — hooray!

🚥 Pre-merge checks: ✅ 6 passed

  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title accurately and specifically describes the main fix: watching the worktree path for implementation_plan.json changes, which directly addresses the root cause of issue #1805.
  • Linked Issues Check: ✅ Passed. The changes comprehensively address issue #1805's requirements: the file watcher now watches worktree paths when present, execution handlers use worktree-aware paths, and re-watch logic handles dynamic worktree creation, fixing the root cause of missing task progress updates.
  • Out of Scope Changes Check: ✅ Passed. All changes are directly scoped to fixing issue #1805: improving file watcher path resolution, implementing worktree-aware watching, and adding fallback logic for plan retrieval; no unrelated modifications detected.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which is sufficient (required threshold: 80.00%).
  • Merge Conflict Detection: ✅ Passed. No merge conflicts detected when merging into develop.



@gemini-code-assist

Summary of Changes

Hello @AndyMik90, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical issue where subtasks were not being generated or updated correctly in the UI due to the file watcher failing to track changes in implementation_plan.json within worktree environments. The core problem stemmed from the watcher always monitoring the main project's spec directory, while tasks running in worktrees write their plan files to the worktree's spec directory. The changes ensure the file watcher correctly identifies and monitors the implementation_plan.json file, whether it resides in the main project or a worktree, and dynamically adapts to worktree creation or deletion, thereby resolving the '0 subtasks' bug.

Highlights

  • File Watcher Enhancement: Introduced a new method getWatchedSpecDir() in file-watcher.ts to retrieve the currently watched spec directory for a given task, enabling dynamic path verification.
  • Worktree Path Prioritization: Implemented a getSpecDirForWatcher() helper in execution-handlers.ts that prioritizes watching the worktree's spec directory for implementation_plan.json changes, falling back to the main project path if no worktree is present. This helper is now used in all three file watcher setup locations: TASK_START, TASK_UPDATE_STATUS auto-start, and TASK_RECOVER_STUCK auto-restart.
  • Dynamic Re-watching and Fallback: Added re-watch logic in agent-events-handlers.ts to automatically switch the file watcher to the worktree path if a worktree appears after a task has started. Additionally, an exit handler fallback was added to read the final plan state directly from the worktree if the watcher was initially on the wrong path.
Changelog
  • apps/frontend/src/main/file-watcher.ts
    • Added getWatchedSpecDir method to retrieve the currently watched spec directory for a task.
  • apps/frontend/src/main/ipc-handlers/agent-events-handlers.ts
    • Modified the exit handler to attempt reading the final plan state from the worktree path if the initial read from the watcher's path fails.
    • Implemented re-watch logic in the execution-progress handler to switch the file watcher to the worktree path if it becomes available after task start.
  • apps/frontend/src/main/ipc-handlers/task/execution-handlers.ts
    • Introduced getSpecDirForWatcher helper function to determine the correct spec directory to watch, prioritizing worktree paths.
    • Updated TASK_START handler to use getSpecDirForWatcher for file watcher setup and clarified spec.md check path.
    • Updated TASK_UPDATE_STATUS auto-start logic to use getSpecDirForWatcher for file watcher setup.
    • Updated TASK_RECOVER_STUCK auto-restart logic to use getSpecDirForWatcher for file watcher setup and adjusted spec.md check path.
    • Modified startSpecCreation call in recovery logic to use the main project's spec directory for spec creation.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request effectively resolves the issue of the file watcher not tracking implementation_plan.json changes in worktrees. The core of the fix involves introducing getSpecDirForWatcher to correctly determine whether to watch the main project path or the worktree path, and applying this at all file watcher setup locations. The addition of re-watching logic in agent-events-handlers.ts for tasks where the worktree is created post-start is a thoughtful touch that handles an important edge case. The fallback mechanism in the exit handler to read the final plan from the worktree is also a solid defensive addition.

The code is well-structured and the changes are logical. I have a couple of suggestions regarding variable naming consistency in execution-handlers.ts to improve long-term maintainability. Overall, this is a great fix.

Comment on lines 1164 to 1171
  const specsBaseDirForRecovery = getSpecsDir(project.autoBuildPath);
  const specDirForWatcher = getSpecDirForWatcher(project.path, specsBaseDirForRecovery, task.specId);
  fileWatcher.watch(taskId, specDirForWatcher);

  // Check if spec.md exists to determine whether to run spec creation or task execution
- const specFilePath = path.join(specDirForWatcher, AUTO_BUILD_PATHS.SPEC_FILE);
+ // Check main project path for spec file (spec is created before worktree)
+ const mainSpecDirForRecovery = path.join(project.path, specsBaseDirForRecovery, task.specId);
+ const specFilePath = path.join(mainSpecDirForRecovery, AUTO_BUILD_PATHS.SPEC_FILE);


medium

For consistency with the TASK_START and TASK_UPDATE_STATUS handlers, it would be clearer to standardize the variable names used here. Using specsBaseDir, watchSpecDir, and mainSpecDir would improve maintainability across the file.

Suggested change

-const specsBaseDirForRecovery = getSpecsDir(project.autoBuildPath);
-const specDirForWatcher = getSpecDirForWatcher(project.path, specsBaseDirForRecovery, task.specId);
-fileWatcher.watch(taskId, specDirForWatcher);
-// Check if spec.md exists to determine whether to run spec creation or task execution
-const specFilePath = path.join(specDirForWatcher, AUTO_BUILD_PATHS.SPEC_FILE);
-// Check main project path for spec file (spec is created before worktree)
-const mainSpecDirForRecovery = path.join(project.path, specsBaseDirForRecovery, task.specId);
-const specFilePath = path.join(mainSpecDirForRecovery, AUTO_BUILD_PATHS.SPEC_FILE);
+const specsBaseDir = getSpecsDir(project.autoBuildPath);
+const watchSpecDir = getSpecDirForWatcher(project.path, specsBaseDir, task.specId);
+fileWatcher.watch(taskId, watchSpecDir);
+// Check if spec.md exists to determine whether to run spec creation or task execution
+// Check main project path for spec file (spec is created before worktree)
+const mainSpecDir = path.join(project.path, specsBaseDir, task.specId);
+const specFilePath = path.join(mainSpecDir, AUTO_BUILD_PATHS.SPEC_FILE);

 const taskDescription = task.description || task.title;
 console.warn(`[Recovery] Starting spec creation for: ${task.specId}`);
-agentManager.startSpecCreation(taskId, project.path, taskDescription, specDirForWatcher, task.metadata, baseBranchForRecovery, project.id);
+agentManager.startSpecCreation(taskId, project.path, taskDescription, mainSpecDirForRecovery, task.metadata, baseBranchForRecovery, project.id);


medium

To align with the suggested variable name standardization in the previous block, please update mainSpecDirForRecovery to mainSpecDir here as well.

Suggested change

-agentManager.startSpecCreation(taskId, project.path, taskDescription, mainSpecDirForRecovery, task.metadata, baseBranchForRecovery, project.id);
+agentManager.startSpecCreation(taskId, project.path, taskDescription, mainSpecDir, task.metadata, baseBranchForRecovery, project.id);


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@apps/frontend/src/main/ipc-handlers/agent-events-handlers.ts`:
- Around line 238-247: The call to fileWatcher.watch(taskId, worktreeSpecDir) is
asynchronous and may reject; update the re-watch block (the logic using
fileWatcher.getWatchedSpecDir, currentWatchDir, worktreeSpecDir,
existsSync(worktreePlanPath), and taskId) to properly handle the returned
Promise instead of fire-and-forget — either await the call inside an async
function or attach a .catch that logs errors (e.g., using console.error or
process logger) so any rejection from internal unwatch()/watch() is caught and
does not create an unhandled promise rejection.

@AndyMik90 AndyMik90 self-assigned this Feb 15, 2026
Owner Author

@AndyMik90 AndyMik90 left a comment


✅ Auto Claude Review - APPROVED

Status: Ready to Merge

Summary: Merge Verdict: ✅ READY TO MERGE

✅ Ready to merge - All checks passing, no blocking issues found.

No blocking issues found

Risk Assessment

  • Complexity: Low (based on lines changed)
  • Security Impact: None (based on security findings)
  • Scope Coherence: Good (based on structural review)

Generated by Auto Claude PR Review


This automated review found no blocking issues. The PR can be safely merged.

Generated by Auto Claude

…dling (#1805)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@apps/frontend/src/main/ipc-handlers/agent-events-handlers.ts`:
- Around line 238-249: The FileWatcher.watch() call here can race because
watch() is async and updates watcher state after awaiting unwatch(), so change
the call in agent-events-handlers (where fileWatcher.watch(taskId,
worktreeSpecDir) is invoked) to await the promise to serialize updates (i.e.,
await fileWatcher.watch(taskId, worktreeSpecDir).catch(...)); alternatively, if
you prefer fixing internally, modify FileWatcher.watch() so it sets
this.watchers.set(taskId, pendingMarker) synchronously before awaiting
this.unwatch(taskId) to ensure the in-memory state prevents overlapping calls;
reference FileWatcher.watch(), FileWatcher.unwatch(),
fileWatcher.getWatchedSpecDir(taskId) and the current call site in
agent-events-handlers.ts.

In `@apps/frontend/src/main/ipc-handlers/task/execution-handlers.ts`:
- Around line 221-224: fileWatcher.watch(...) is being called fire-and-forget in
execution-handlers (e.g., the call where specsBaseDir is derived via getSpecsDir
and watchSpecDir via getSpecDirForWatcher) which can cause unhandled promise
rejections; update each fileWatcher.watch call (the one using taskId and
watchSpecDir, and the other two calls around the same areas) to append a
.catch(...) that logs the error (use the same logging pattern used in
agent-events-handlers.ts) so any rejection is handled and reported. Ensure you
reference the existing taskId/watchSpecDir variables and the
fileWatcher.watch(...) call sites when adding the .catch.
- Around line 1163-1171: The inner const specsBaseDir declared inside the
autoRestart block duplicates and shadows the outer specsBaseDir computed earlier
via getSpecsDir(project.autoBuildPath); remove the inner declaration and reuse
the outer specsBaseDir (used together with getSpecDirForWatcher, watchSpecDir,
mainSpecDir, and specFilePath) so the code no longer redeclares specsBaseDir and
relies on the single previously computed value.

…rror handling

- Add pendingWatches guard in FileWatcher.watch() to prevent overlapping async
  calls from creating duplicate watchers (CodeRabbit critical finding)
- Add .catch() to all three fire-and-forget fileWatcher.watch() calls in
  execution-handlers.ts to prevent unhandled promise rejections
- Remove shadowed specsBaseDir re-declaration in autoRestart block, reusing
  the outer variable from the same TASK_RECOVER_STUCK handler scope
@AndyMik90
Owner Author

Review Finding Analysis

Findings Addressed (commit e7875dd)

CodeRabbit - Critical: Race condition in FileWatcher.watch() (agent-events-handlers.ts:238-249)

  • Status: Fixed - Added pendingWatches guard in FileWatcher.watch() to prevent overlapping async calls from creating duplicate watchers. The synchronous Set check prevents concurrent watch() calls for the same taskId from entering the async section simultaneously.

CodeRabbit - Trivial: Missing .catch() on fileWatcher.watch() calls (execution-handlers.ts:224, 727, 1166)

  • Status: Fixed - Added .catch() with descriptive error logging to all three fire-and-forget fileWatcher.watch() calls, matching the pattern already used in agent-events-handlers.ts.

CodeRabbit - Trivial: Shadowed specsBaseDir re-declaration (execution-handlers.ts:1164)

  • Status: Fixed - Removed the inner const specsBaseDir declaration inside the autoRestart block; it now reuses the outer specsBaseDir from line 930 in the same TASK_RECOVER_STUCK handler scope.
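A simplified sketch of the pendingWatches guard described above; GuardedFileWatcher and its create parameter are illustrative assumptions (the real FileWatcher builds its chokidar watcher internally).

```typescript
// Illustrative sketch only, not the repository's FileWatcher implementation.
type FSWatcherLike = { close(): Promise<void> };

class GuardedFileWatcher {
  private watchers = new Map<string, { watcher: FSWatcherLike; specDir: string }>();
  private pendingWatches = new Set<string>();

  getWatchedSpecDir(taskId: string): string | null {
    return this.watchers.get(taskId)?.specDir ?? null;
  }

  async watch(taskId: string, specDir: string, create: () => FSWatcherLike): Promise<void> {
    // Synchronous check-and-set: a concurrent watch() for the same task bails
    // out before any await, so overlapping calls cannot create duplicates.
    if (this.pendingWatches.has(taskId)) return;
    this.pendingWatches.add(taskId);
    try {
      const existing = this.watchers.get(taskId);
      if (existing) {
        await existing.watcher.close(); // the yield point the guard protects
        this.watchers.delete(taskId);
      }
      this.watchers.set(taskId, { watcher: create(), specDir });
    } finally {
      this.pendingWatches.delete(taskId);
    }
  }
}
```

The synchronous has/add check before the first await is what closes the race: a concurrent watch() for the same taskId returns before any duplicate watcher can be created.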

Findings Already Resolved

Gemini - Variable naming consistency (execution-handlers.ts:1164-1182)

  • Status: Already fixed in commit 78491d2. The variables were already renamed from specsBaseDirForRecovery/mainSpecDirForRecovery to specsBaseDir/watchSpecDir/mainSpecDir before this review round.

@github-actions github-actions bot added size/M Medium (100-499 lines) and removed size/S Small (10-99 lines) labels Feb 16, 2026

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@apps/frontend/src/main/file-watcher.ts`:
- Around line 32-38: The watch/unwatch race can interleave while awaiting
existing.watcher.close(); update unwatch(taskId) to check a pendingWatches set
and return early if a watch(taskId) is in-flight (e.g., if
(this.pendingWatches.has(taskId)) return;), and ensure watch(taskId) adds taskId
to this.pendingWatches at start and removes it after the async close/create flow
completes so cleanup is coordinated; reference the methods watch(), unwatch(),
the this.watchers map, existing.watcher.close(), and the pendingWatches set when
making the change.
- Around line 42-46: The current check in the file-watcher that calls
this.emit('error', taskId, `Plan file not found: ${planPath}`) when
existsSync(planPath) is false should be changed to silently return instead of
emitting an error to avoid UI noise during normal watch() retry flows; locate
the block that references existsSync and planPath and remove the emit('error',
...) call so the function simply returns when the plan file is missing
(optionally keep a non-error debug/log statement if you need visibility).

In `@apps/frontend/src/main/ipc-handlers/task/execution-handlers.ts`:
- Around line 1174-1176: The inner block re-declares const mainSpecDir shadowing
the outer mainSpecDir; remove the inner re-declaration inside the autoRestart
block and reuse the outer mainSpecDir (which is computed from project.path,
specsBaseDir and task.specId) when constructing specFilePath (using
AUTO_BUILD_PATHS.SPEC_FILE). Ensure you only remove the duplicated const
mainSpecDir = ... line and keep the path.join(mainSpecDir,
AUTO_BUILD_PATHS.SPEC_FILE) usage so no other changes are needed.

Comment on lines +32 to 38
try {
// Close any existing watcher for this task
const existing = this.watchers.get(taskId);
if (existing) {
await existing.watcher.close();
this.watchers.delete(taskId);
}


🧹 Nitpick | 🔵 Trivial

Minor: unwatch() and watch() can interleave during close().

If unwatch(taskId) is called while watch(taskId) is awaiting existing.watcher.close() on line 36, unwatch will also attempt to close and delete the same watcher. The double-close on a chokidar FSWatcher is benign in practice, but unwatch completing first means line 37 here (this.watchers.delete) operates on an already-empty slot, and the new watcher is then stored without issue. No functional bug, but consider checking pendingWatches inside unwatch() to coordinate:

// in unwatch():
if (this.pendingWatches.has(taskId)) {
  // A watch() is in flight — it will handle cleanup
  return;
}

This is low-priority since the current code is safe in Node's single-threaded model (the only real yield point is the close() await).


Comment on lines +42 to +46
// Check if plan file exists
if (!existsSync(planPath)) {
this.emit('error', taskId, `Plan file not found: ${planPath}`);
return;
}


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, locate the file-watcher.ts file
find . -name "file-watcher.ts" -type f

Repository: AndyMik90/Auto-Claude

Length of output: 105


🏁 Script executed:

#!/bin/bash
# Read the file-watcher.ts to understand the context
cd apps/frontend/src/main
wc -l file-watcher.ts
head -80 file-watcher.ts

Repository: AndyMik90/Auto-Claude

Length of output: 2543


🏁 Script executed:

#!/bin/bash
# Search for error event listeners on fileWatcher instances
rg -n "fileWatcher\.on\(\s*['\"]error['\"]" --type=ts -C5
rg -n "\.on\(\s*['\"]error['\"]" --type=ts -C5 | head -100

Repository: AndyMik90/Auto-Claude

Length of output: 8240


🏁 Script executed:

#!/bin/bash
# Search for where FileWatcher is instantiated and used
rg -n "new FileWatcher|new file-watcher|FileWatcher" --type=ts | head -20

Repository: AndyMik90/Auto-Claude

Length of output: 2447


🏁 Script executed:

#!/bin/bash
# Search for 'not-ready' or 'warn' event patterns to see if already used
rg -n "emit\(['\"]warn|emit\(['\"]not-ready|emit\(['\"]error" --type=ts apps/frontend/src/main/

Repository: AndyMik90/Auto-Claude

Length of output: 4681


Emitting 'error' for a missing plan file creates unnecessary UI noise during normal retry flow.

The error listener in agent-events-handlers.ts (line 309) sends all 'error' events to the renderer via IPC_CHANNELS.TASK_ERROR, surfacing them in the UI. During normal operation—when the worktree and implementation_plan.json are created after the initial watch() call—this emits alarming error messages even though callers already handle retries silently.

Remove the error emission for missing files and return silently:

Proposed fix
       // Check if plan file exists
       if (!existsSync(planPath)) {
-        this.emit('error', taskId, `Plan file not found: ${planPath}`);
+        // Plan file may not exist yet (e.g. worktree still being created).
+        // Callers handle re-watching, so a silent return avoids noisy error events.
         return;
       }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

 // Check if plan file exists
 if (!existsSync(planPath)) {
-  this.emit('error', taskId, `Plan file not found: ${planPath}`);
+  // Plan file may not exist yet (e.g. worktree still being created).
+  // Callers handle re-watching, so a silent return avoids noisy error events.
   return;
 }

Comment on lines +1174 to +1176
// Check main project path for spec file (spec is created before worktree)
const mainSpecDir = path.join(project.path, specsBaseDir, task.specId);
const specFilePath = path.join(mainSpecDir, AUTO_BUILD_PATHS.SPEC_FILE);


🧹 Nitpick | 🔵 Trivial

Shadowed mainSpecDir re-declaration — reuse the outer variable.

Line 1175 re-declares const mainSpecDir inside the autoRestart block, shadowing the identical declaration at Line 935. Both compute path.join(project.path, specsBaseDir, task.specId) from the same inputs. This is the same pattern that was already cleaned up for specsBaseDir.

♻️ Remove the inner re-declaration
            // Check if spec.md exists to determine whether to run spec creation or task execution
            // Check main project path for spec file (spec is created before worktree)
-            const mainSpecDir = path.join(project.path, specsBaseDir, task.specId);
             const specFilePath = path.join(mainSpecDir, AUTO_BUILD_PATHS.SPEC_FILE);
📝 Committable suggestion


Suggested change

-// Check main project path for spec file (spec is created before worktree)
-const mainSpecDir = path.join(project.path, specsBaseDir, task.specId);
-const specFilePath = path.join(mainSpecDir, AUTO_BUILD_PATHS.SPEC_FILE);
+// Check if spec.md exists to determine whether to run spec creation or task execution
+// Check main project path for spec file (spec is created before worktree)
+const specFilePath = path.join(mainSpecDir, AUTO_BUILD_PATHS.SPEC_FILE);
🤖 Prompt for AI Agents
In `@apps/frontend/src/main/ipc-handlers/task/execution-handlers.ts` around lines
1174-1176: the inner block re-declares const mainSpecDir, shadowing the outer
mainSpecDir; remove the inner re-declaration inside the autoRestart block and
reuse the outer mainSpecDir (computed from project.path, specsBaseDir, and
task.specId) when constructing specFilePath with AUTO_BUILD_PATHS.SPEC_FILE.
Only remove the duplicated const mainSpecDir = ... line and keep the
path.join(mainSpecDir, AUTO_BUILD_PATHS.SPEC_FILE) usage; no other changes are
needed.

@AndyMik90 (Owner, Author) left a comment

🤖 Auto Claude PR Review

🟠 Follow-up Review: Needs Revision

🟠 Needs revision - 3 blocking issue(s) require fixes.

Resolution Status

  • Resolved: 0 previous findings addressed
  • Unresolved: 0 previous findings remain
  • 🆕 New Issues: 4 new findings in recent changes

Finding Validation

  • 🔍 Dismissed as False Positives: 0 findings were re-investigated and found to be incorrect
  • Confirmed Valid: 4 findings verified as genuine issues
  • 👤 Needs Human Review: 0 findings require manual verification

🚨 Blocking Issues

  • quality: pendingWatches guard silently drops re-watch calls with different specDir
  • quality: pendingWatches does not coordinate with unwatch(), allowing watcher leak on quick task exit
  • quality: unwatch() promise not handled in non-async exit handler — inconsistent with watch() .catch() pattern

Verdict

All 20 CI checks are passing. No previous findings to verify. The 2 commits address CodeRabbit's critical race condition finding by adding a pendingWatches guard and .catch() handlers to watch() calls, which is good progress. However, 3 new MEDIUM severity issues were identified and confirmed valid by finding-validator:

  1. NEW-001: The pendingWatches guard uses a Set keyed only by taskId, silently dropping legitimate re-watch calls that carry a different specDir; this undermines the worktree re-watch logic in agent-events-handlers.ts:246.
  2. NEW-002: unwatch() has no awareness of pendingWatches, so if an agent exits quickly during watch() setup, the newly created watcher leaks. This is ironic, since the pendingWatches mechanism was specifically introduced to fix race conditions.
  3. NEW-003: fileWatcher.unwatch(taskId) at agent-events-handlers.ts:128 lacks .catch(), inconsistent with the .catch() pattern applied to all watch() calls in this same PR.

Additionally, there is 1 LOW severity quality issue (CMT-001: shadowed mainSpecDir). All findings are easily fixable. The race condition fixes are a step in the right direction but need refinement to fully cover the concurrency edge cases.

Review Process

Agents invoked: new-code-reviewer, comment-analyzer, finding-validator


This is an AI-generated follow-up review using parallel specialist analysis with finding validation.

Findings (4 selected of 4 total)

🟡 [NEW-001] [MEDIUM] pendingWatches guard silently drops re-watch calls with different specDir

📁 apps/frontend/src/main/file-watcher.ts:27

The pendingWatches guard uses a Set keyed only by taskId, so it cannot distinguish between a redundant duplicate call (same specDir) and a legitimate re-watch call (different specDir from agent-events-handlers.ts:246). If the re-watch fires while initial watch() is still pending (e.g., awaiting watcher.close()), the re-watch is silently discarded, leaving the watcher on the wrong path. Fix: Use a Map<string, string> storing taskId -> specDir, and only skip when specDir matches.

Suggested fix:

Change pendingWatches from Set<string> to Map<string, string> storing taskId -> specDir. Only return early if the pending specDir matches the requested specDir. When a different specDir is requested, allow the call to proceed (it will close the old watcher in the try block).
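A minimal sketch of that Map-based guard; everything except pendingWatches, taskId, and specDir is an illustrative assumption, and the setImmediate await stands in for closing and reopening the real chokidar watcher:

```typescript
// Sketch: Map-based pending guard that drops only true duplicates
// (same taskId AND same specDir), letting worktree re-watches through.
class WatchGuard {
  // taskId -> specDir currently being set up
  private pendingWatches = new Map<string, string>();
  readonly setups: string[] = []; // records calls that passed the guard

  async watch(taskId: string, specDir: string): Promise<void> {
    if (this.pendingWatches.get(taskId) === specDir) {
      return; // true duplicate: same task and same directory
    }
    // A different specDir (e.g. the worktree re-watch) proceeds and
    // will replace the old watcher.
    this.pendingWatches.set(taskId, specDir);
    try {
      await new Promise((resolve) => setImmediate(resolve)); // simulated async setup
      this.setups.push(specDir);
    } finally {
      // Only clear the entry if no newer call has overwritten it.
      if (this.pendingWatches.get(taskId) === specDir) {
        this.pendingWatches.delete(taskId);
      }
    }
  }
}
```

Under this scheme, two concurrent calls for the same task and directory collapse into one setup, while a concurrent call carrying the worktree path still goes through.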

🟡 [NEW-002] [MEDIUM] pendingWatches does not coordinate with unwatch(), allowing watcher leak on quick task exit

📁 apps/frontend/src/main/file-watcher.ts:27

unwatch() has no awareness of pendingWatches. If unwatch() runs during watch()'s await (agent exits quickly after start): unwatch() closes old watcher and deletes from map, then watch() resumes creating a NEW watcher that is stored but never cleaned up. The leaked file watcher remains open until app shutdown (unwatchAll) or next watch() for the same taskId. Fix: Add a cancellation mechanism so unwatch() can signal watch() to abort after its await points.

Suggested fix:

Add a cancelledWatches Set. In unwatch(), add taskId to cancelledWatches. In watch(), after each await point, check if cancelledWatches.has(taskId) and return early before creating the new watcher. Clean up cancelledWatches in both methods' finally blocks.
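An illustrative sketch of that cancellation handshake; the watcher objects here are stand-ins for real chokidar watchers (a setTimeout close simulates the slow watcher.close() await), and all names except cancelledWatches are assumptions, not the actual implementation:

```typescript
// Sketch: unwatch() signals an in-flight watch() via cancelledWatches,
// so watch() aborts after its await point instead of leaking a watcher.
type Watcher = { close(): Promise<void> };

const makeWatcher = (_specDir: string): Watcher => ({
  // slow close simulates awaiting chokidar's watcher.close()
  close: () => new Promise<void>((resolve) => setTimeout(resolve, 10)),
});

class CancellableWatcher {
  private watchers = new Map<string, Watcher>();
  private cancelledWatches = new Set<string>();

  get count(): number {
    return this.watchers.size;
  }

  async watch(taskId: string, specDir: string): Promise<void> {
    this.cancelledWatches.delete(taskId); // a fresh watch supersedes a stale cancel
    const old = this.watchers.get(taskId);
    if (old) {
      this.watchers.delete(taskId);
      await old.close(); // unwatch() may run while we are suspended here
    }
    if (this.cancelledWatches.has(taskId)) {
      this.cancelledWatches.delete(taskId);
      return; // unwatch() won the race: do not create a leaked watcher
    }
    this.watchers.set(taskId, makeWatcher(specDir));
  }

  async unwatch(taskId: string): Promise<void> {
    this.cancelledWatches.add(taskId); // signal any in-flight watch()
    const w = this.watchers.get(taskId);
    if (w) {
      this.watchers.delete(taskId);
      await w.close();
    }
  }
}
```

The key ordering is that watch() re-checks cancelledWatches after every await before registering the new watcher, so a quick task exit during setup leaves nothing behind.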

🟡 [NEW-003] [MEDIUM] unwatch() promise not handled in non-async exit handler — inconsistent with watch() .catch() pattern

📁 apps/frontend/src/main/ipc-handlers/agent-events-handlers.ts:128

fileWatcher.unwatch(taskId) at line 128 is called in a non-async callback without .catch(). Since unwatch() is async and internally awaits watcher.close(), a rejection produces an unhandled promise rejection. This is inconsistent with the .catch() pattern this PR applied to all watch() calls (agent-events-handlers.ts:246, execution-handlers.ts:224, 729, 1169).

Suggested fix:

Add .catch() handler: fileWatcher.unwatch(taskId).catch((err) => { console.error(`[agent-events-handlers] Failed to unwatch for ${taskId}:`, err); });

🔵 [CMT-001] [LOW] Shadowed mainSpecDir re-declaration in autoRestart block

📁 apps/frontend/src/main/ipc-handlers/task/execution-handlers.ts:1175

const mainSpecDir is declared at line 935 in the outer try block and again at line 1175 inside the nested autoRestart block. Both compute path.join(project.path, specsBaseDir, task.specId) from the same inputs. The inner declaration is redundant and shadows the outer variable which is already in scope.

Suggested fix:

Remove the redundant const mainSpecDir declaration at line 1175 and use the outer variable directly.

This review was generated by Auto Claude.


Labels

area/frontend (This is frontend only) · bug (Something isn't working) · size/M (Medium, 100-499 lines)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Subtasks not generated and not updated correctly since v2.7.6-beta.3 (PRD tests included)

1 participant