diff --git a/docs/releases.md b/docs/releases.md index a70ad9eb7..5a7a29c11 100644 --- a/docs/releases.md +++ b/docs/releases.md @@ -17,59 +17,60 @@ Maestro can update itself automatically! This feature was introduced in **v0.8.7 **Latest: v0.15.1** | Released March 3, 2026 -๐ŸŽถ Maestro Symphony - Contribute to open source with AI assistance! Browse curated issues from projects with the `runmaestro.ai` label, clone repos with one click, and automatically process the relevant Auto Run playbooks. Track your contributions, streaks, and stats. You're contributing CPU and tokens towards your favorite open source projects and features. NOTE: Making changes here active based on user feedback ๐Ÿ™ - -๐ŸŽฌ Director's Notes. Aggregates history across all agents into a unified timeline with search, filters, and an activity graph. Includes an AI Overview tab that generates a structured synopsis of recent work. Off by default, gated behind a new "Encore Features" panel under settings. This is a precursor to an eventual plugin system. Allowing for extensions and customizations without bloating the core app. - -๐Ÿท๏ธ Conductor Profile - Available under Settings > General. Provide a short description on how Maestro agents should interface with you. - -๐Ÿง  Three-State Thinking Toggle - The thinking toggle now cycles through three modes: off, on, and sticky. Sticky mode keeps thinking content visible after the response completes. Cycle with CMD/CTRL+SHIFT+K โŒจ๏ธ (#165). - -๐Ÿค– Factory.ai Droid Support - Added support for the [Factory.ai](https://factory.ai/product/cli) droid agent. Full session management and output parsing integration (#223). - -๐Ÿ”ง Windows and SSH Stability Improvements - Major fixes for remote agent execution including wizard support, synopsis generation, and proper shell profile sourcing across platforms. (#131, #156, #159, #187, #195). 
- -## Security Fixes - -Addressed some security issues, all thanks to @VVX7 -- #421 History panel stored XSS -- #422 Stored XSS to reverse shell -- #423 Stored XSS to SSRF - -## Smaller Changes in 0.15.x - -- Added safety controls around agent working directory deletion ๐Ÿ”’ (#206) -- Added `/skills` command for enumerate Claude Code skills ๐Ÿงฐ (#154) -- Renamed "Audio Notifications" to "Custom Notifications" ๐Ÿ”” (#168) -- Auto-scroll now respects user scroll position in AI mode ๐Ÿ“œ (#237) -- Spec-Kit and OpenSpec commands now accept arguments properly โš™๏ธ (#238) -- You can now @ message entire groups of agents in Group Chat. ๐Ÿ‘ฅ -- Group chats can be archived. ๐Ÿ“ฆ -- You can now swap the provider behind an agent. โ†ช๏ธ -- Added ability to scroll to latest messages as they are streamed from the agent. ๐Ÿ“œ -- Expanded maestro-cli to include agent message send on new or resumed sessions, this means you can control any agent behind Maestro easily ๐Ÿงต -- Added VSCode-style semantic icon mapping in the file explorer โœ… -- New tabs are automatically named by default, this can be disabled under settings ๐Ÿท๏ธ -- Added WakaTime integration โฑ๏ธ -- Added window chrome options to toggle off the header bar ๐ŸชŸ -- Broke Settings > General up, there's now a Settings > Display โš™๏ธ -- Added a table of contents jump menu for markdown files being previewed ๐Ÿ“„ -- Added option to open document preview from within the graph view ๐Ÿ“ˆ -- Added configuration options to ignore remote file patterns over SSH connections ๐Ÿšฏ -- Fixed context consumption calculation bugs ๐Ÿงฎ -- AI responses can now be saved to markdown on disk ๐Ÿ’พ -- Hide Empty "Ungrouped Agents" Folder ๐Ÿ“ -- File preview detects updates on disk and shows refresh button โ†ช๏ธ -- Auto Run task calculation is now dynamic to count tasks added mid-flight โœˆ๏ธ -- When you stop an Auto Run, you can now force kill the running task ๐Ÿ”ช -- Web interface synchronization improvements ๐ŸŒ -- Added 
shortcuts to all panel search under command palette 🔍 -- All sorts of other bug fixes and usability improvements 🐛 -- Enhanced batch runner with agent prompt validation for task references ☑️ -- Added recovery mechanism for expired group chat sessions automatically 🔄 -- Improved history help modal with SSH remote session limitations notice 📝 -- The wand glyph on the upper right sparkles if any agent is working ✨ - +🎶 Maestro Symphony - Contribute to open source with AI assistance! Browse curated issues from projects with the `runmaestro.ai` label, clone repos with one click, and automatically process the relevant Auto Run playbooks. Track your contributions, streaks, and stats. You're contributing CPU and tokens towards your favorite open source projects and features. NOTE: We're actively making changes here based on user feedback 🙏 + +🎬 Director's Notes. Aggregates history across all agents into a unified timeline with search, filters, and an activity graph. Includes an AI Overview tab that generates a structured synopsis of recent work. Off by default, gated behind a new "Encore Features" panel under settings. This is a precursor to an eventual plugin system, allowing for extensions and customizations without bloating the core app. + +🏷️ Conductor Profile - Available under Settings > General. Provide a short description of how Maestro agents should interface with you. + +🧠 Three-State Thinking Toggle - The thinking toggle now cycles through three modes: off, on, and sticky. Sticky mode keeps thinking content visible after the response completes. Cycle with CMD/CTRL+SHIFT+K ⌨️ (#165). + +🤖 Factory.ai Droid Support - Added support for the [Factory.ai](https://factory.ai/product/cli) droid agent. Full session management and output parsing integration (#223). 
+ +๐Ÿ”ง Windows and SSH Stability Improvements - Major fixes for remote agent execution including wizard support, synopsis generation, and proper shell profile sourcing across platforms. (#131, #156, #159, #187, #195). + +## Security Fixes + +Addressed some security issues, all thanks to @VVX7 + +- #421 History panel stored XSS +- #422 Stored XSS to reverse shell +- #423 Stored XSS to SSRF + +## Smaller Changes in 0.15.x + +- Added safety controls around agent working directory deletion ๐Ÿ”’ (#206) +- Added `/skills` command for enumerate Claude Code skills ๐Ÿงฐ (#154) +- Renamed "Audio Notifications" to "Custom Notifications" ๐Ÿ”” (#168) +- Auto-scroll now respects user scroll position in AI mode ๐Ÿ“œ (#237) +- Spec-Kit and OpenSpec commands now accept arguments properly โš™๏ธ (#238) +- You can now @ message entire groups of agents in Group Chat. ๐Ÿ‘ฅ +- Group chats can be archived. ๐Ÿ“ฆ +- You can now swap the provider behind an agent. โ†ช๏ธ +- Added ability to scroll to latest messages as they are streamed from the agent. 
๐Ÿ“œ +- Expanded maestro-cli to include agent message send on new or resumed sessions, this means you can control any agent behind Maestro easily ๐Ÿงต +- Added VSCode-style semantic icon mapping in the file explorer โœ… +- New tabs are automatically named by default, this can be disabled under settings ๐Ÿท๏ธ +- Added WakaTime integration โฑ๏ธ +- Added window chrome options to toggle off the header bar ๐ŸชŸ +- Broke Settings > General up, there's now a Settings > Display โš™๏ธ +- Added a table of contents jump menu for markdown files being previewed ๐Ÿ“„ +- Added option to open document preview from within the graph view ๐Ÿ“ˆ +- Added configuration options to ignore remote file patterns over SSH connections ๐Ÿšฏ +- Fixed context consumption calculation bugs ๐Ÿงฎ +- AI responses can now be saved to markdown on disk ๐Ÿ’พ +- Hide Empty "Ungrouped Agents" Folder ๐Ÿ“ +- File preview detects updates on disk and shows refresh button โ†ช๏ธ +- Auto Run task calculation is now dynamic to count tasks added mid-flight โœˆ๏ธ +- When you stop an Auto Run, you can now force kill the running task ๐Ÿ”ช +- Web interface synchronization improvements ๐ŸŒ +- Added shortcuts to all panel search under command palette ๐Ÿ” +- All sorts of other bug fixes and usability improvements ๐Ÿ› +- Enhanced batch runner with agent prompt validation for task references โ˜‘๏ธ +- Added recovery mechanism for expired group chat sessions automatically ๐Ÿ”„ +- Improved history help modal with SSH remote session limitations notice ๐Ÿ“ +- The wand glyph on the upper right sparkles if any agent is working โœจ + ... and of course tons of other little fixes and creature comforts too numerous to enumerate here. 
--- @@ -78,41 +79,41 @@ Addressed some security issues, all thanks to @VVX7 **Latest: v0.14.5** | Released January 24, 2026 -Changes in this point release include: - -- Desktop app performance improvements (more to come on this, we want Maestro blazing fast) ๐ŸŒ -- Added local manifest feature for custom playbooks ๐Ÿ“– -- Agents are now inherently aware of your activity history as seen in the history panel ๐Ÿ“œ (this is built-in cross context memory!) -- Added markdown rendering support for AI responses in mobile view ๐Ÿ“ฑ -- Bugfix in tracking costs from JSONL files that were aged out ๐Ÿฆ -- Added BlueSky social media handle for leaderboard ๐Ÿฆ‹ -- Added options to disable GPU rendering and confetti ๐ŸŽŠ -- Better handling of large files in preview ๐Ÿ—„๏ธ -- Bug fix in Claude context calculation ๐Ÿงฎ -- Addressed bug in OpenSpec version reporting ๐Ÿ› - -The major contributions to 0.14.x remain: - -๐Ÿ—„๏ธ Document Graphs. Launch from file preview or from the FIle tree panel. Explore relationships between Markdown documents that contain links between documents and to URLs. - -๐Ÿ“ถ SSH support for agents. Manage a remote agent with feature parity over SSH. Includes support for Git and File tree panels. Manage agents on remote systems or in containers. This even works for Group Chat, which is rad as hell. - -๐Ÿง™โ€โ™‚๏ธ Added an in-tab wizard for generating Auto Run Playbooks via `/wizard` or a new button in the Auto Run panel. - -# Smaller Changes in 014.x - -- Improved User Dashboard, available from hamburger menu, command palette or hotkey ๐ŸŽ›๏ธ -- Leaderboard tracking now works across multiple systems and syncs level from cloud ๐Ÿ† -- Agent duplication. 
Pro tip: Consider a group of unused "Template" agents ✌️ -- New setting to prevent system from going to sleep while agents are active 🛏️ -- The tab menu has a new "Publish as GitHub Gist" option 📝 -- The tab menu has options to move the tab to the first or last position 🔀 -- [Maestro-Playbooks](https://github.com/pedramamini/Maestro-Playbooks) can now contain non-markdown assets 📙 -- Improved default shell detection 🐚 -- Added logic to prevent overlapping TTS notifications 💬 -- Added "Toggle Bookmark" shortcut (CTRL/CMD+SHIFT+B) ⌨️ -- Gist publishing now shows previous URLs with copy button 📋 - +Changes in this point release include: + +- Desktop app performance improvements (more to come on this, we want Maestro blazing fast) 🐌 +- Added local manifest feature for custom playbooks 📖 +- Agents are now inherently aware of your activity history as seen in the history panel 📜 (this is built-in cross-context memory!) +- Added markdown rendering support for AI responses in mobile view 📱 +- Bugfix in tracking costs from JSONL files that were aged out 🏦 +- Added BlueSky social media handle for leaderboard 🦋 +- Added options to disable GPU rendering and confetti 🎊 +- Better handling of large files in preview 🗄️ +- Bug fix in Claude context calculation 🧮 +- Addressed bug in OpenSpec version reporting 🐛 + +The major contributions to 0.14.x remain: + +🗄️ Document Graphs. Launch from file preview or from the File tree panel. Explore relationships between Markdown documents that link to each other and to URLs. + +📶 SSH support for agents. Manage a remote agent with feature parity over SSH. Includes support for Git and File tree panels. Manage agents on remote systems or in containers. This even works for Group Chat, which is rad as hell. + +🧙‍♂️ Added an in-tab wizard for generating Auto Run Playbooks via `/wizard` or a new button in the Auto Run panel. 
+ +## Smaller Changes in 0.14.x + +- Improved User Dashboard, available from hamburger menu, command palette or hotkey 🎛️ +- Leaderboard tracking now works across multiple systems and syncs level from cloud 🏆 +- Agent duplication. Pro tip: Consider a group of unused "Template" agents ✌️ +- New setting to prevent the system from going to sleep while agents are active 🛏️ +- The tab menu has a new "Publish as GitHub Gist" option 📝 +- The tab menu has options to move the tab to the first or last position 🔀 +- [Maestro-Playbooks](https://github.com/pedramamini/Maestro-Playbooks) can now contain non-markdown assets 📙 +- Improved default shell detection 🐚 +- Added logic to prevent overlapping TTS notifications 💬 +- Added "Toggle Bookmark" shortcut (CTRL/CMD+SHIFT+B) ⌨️ +- Gist publishing now shows previous URLs with a copy button 📋 + Thanks for the contributions: @t1mmen @aejfager @Crumbgrabber @whglaser @b3nw @deandebeer @shadown @breki @charles-dyfis-net @ronaldeddings @jlengrand @ksylvan ### Previous Releases in this Series @@ -131,20 +132,22 @@ Thanks for the contributions: @t1mmen @aejfager @Crumbgrabber @whglaser @b3nw @d ### Changes -- TAKE TWO! 
Fixed Linux ARM64 build architecture contamination issues 🏗️ - -### v0.13.1 Changes -- Fixed Linux ARM64 build architecture contamination issues 🏗️ -- Enhanced error handling for Auto Run batch processing 🚨 - -### v0.13.0 Changes -- Added a global usage dashboard, data collection begins with this install 🎛️ -- Added a Playbook Exchange for downloading pre-defined Auto Run playbooks from [Maestro-Playbooks](https://github.com/pedramamini/Maestro-Playbooks) 📕 -- Bundled OpenSpec commands for structured change proposals 📝 -- Added pre-release channel support for beta/RC updates 🧪 -- Implemented global hands-on time tracking across sessions ⏱️ -- Added new keyboard shortcut for agent settings (Opt+Cmd+, | Ctrl+Alt+,) ⌨️ -- Added directory size calculation with file/folder counts in file explorer 📊 +- TAKE TWO! Fixed Linux ARM64 build architecture contamination issues 🏗️ + +### v0.13.1 Changes + +- Fixed Linux ARM64 build architecture contamination issues 🏗️ +- Enhanced error handling for Auto Run batch processing 🚨 + +### v0.13.0 Changes + +- Added a global usage dashboard; data collection begins with this install 🎛️ +- Added a Playbook Exchange for downloading pre-defined Auto Run playbooks from [Maestro-Playbooks](https://github.com/pedramamini/Maestro-Playbooks) 📕 +- Bundled OpenSpec commands for structured change proposals 📝 +- Added pre-release channel support for beta/RC updates 🧪 +- Implemented global hands-on time tracking across sessions ⏱️ +- Added a new keyboard shortcut for agent settings (Opt+Cmd+, | Ctrl+Alt+,) ⌨️ +- Added directory size calculation with file/folder counts in file explorer 📊 - Added sleep detection to exclude laptop sleep from time tracking ⏰ ### Previous Releases in this Series @@ -158,22 +161,26 @@ Thanks for the contributions: @t1mmen @aejfager @Crumbgrabber @whglaser @b3nw @d **Latest: v0.12.3** | Released December 28, 2025 -The big changes in the v0.12.x line 
are the following three: - -## Show Thinking -🤔 There is now a toggle to show thinking for the agent, the default for new tabs is off, though this can be changed under Settings > General. The toggle shows next to History and Read-Only. Very similar pattern. This has been the #1 most requested feature, though personally, I don't think I'll use it as I prefer to not see the details of the work, but the results of the work. Just as we work with our colleagues. - -## GitHub Spec-Kit Integration -🎯 Added [GitHub Spec-Kit](https://github.com/github/spec-kit) commands into Maestro with a built in updater to grab the latest prompts from the repository. We do override `/speckit-implement` (the final step) to create Auto Run docs and guide the user through their execution, which thanks to Wortrees from v0.11.x allows us to run in parallel! - -## Context Management Tools -📖 Added context management options from tab right-click menu. You can now compress, merge, and transfer contexts between agents. You will received (configurable) warnings at 60% and 80% context consumption with a hint to compact. - -## Changes Specific to v0.12.3: -- We now have hosted documentation through Mintlify 📚 -- Export any tab conversation as self-contained themed HTML file 📄 -- Publish files as private/public Gists 🌐 -- Added tab hover overlay menu with close operations and export 📋 +The big changes in the v0.12.x line are the following three: + +## Show Thinking + +🤔 There is now a toggle to show thinking for the agent; the default for new tabs is off, though this can be changed under Settings > General. The toggle shows next to History and Read-Only, following a similar pattern. This has been the #1 most requested feature, though personally, I don't think I'll use it, as I prefer not to see the details of the work but the results of the work, just as we work with our colleagues. 
+ +## GitHub Spec-Kit Integration + +🎯 Added [GitHub Spec-Kit](https://github.com/github/spec-kit) commands into Maestro with a built-in updater to grab the latest prompts from the repository. We do override `/speckit-implement` (the final step) to create Auto Run docs and guide the user through their execution, which thanks to Worktrees from v0.11.x allows us to run in parallel! + +## Context Management Tools + +📖 Added context management options from tab right-click menu. You can now compress, merge, and transfer contexts between agents. You will receive (configurable) warnings at 60% and 80% context consumption with a hint to compact. + +## Changes Specific to v0.12.3: + +- We now have hosted documentation through Mintlify 📚 +- Export any tab conversation as self-contained themed HTML file 📄 +- Publish files as private/public Gists 🌐 +- Added tab hover overlay menu with close operations and export 📋 - Added social handles to achievement share images 🏆 ### Previous Releases in this Series @@ -187,12 +194,12 @@ The big changes in the v0.12.x line are the following three: **Latest: v0.11.0** | Released December 22, 2025 -🌳 Github Worktree support was added. Any agent bound to a Git repository has the option to enable worktrees, each of which show up as a sub-agent with their own write-lock and Auto Run capability. Now you can truly develop in parallel on the same project and issue PRs when you're ready, all from within Maestro. Huge improvement, major thanks to @petersilberman. - -# Other Changes - -- @ file mentions now include documents from your Auto Run folder (which may not live in your agent working directory) 🗄️ -- The wizard is now capable of detecting and continuing on past started projects 🧙 +🌳 GitHub Worktree support was added. Any agent bound to a Git repository has the option to enable worktrees, each of which shows up as a sub-agent with its own write-lock and Auto Run capability. 
Now you can truly develop in parallel on the same project and issue PRs when you're ready, all from within Maestro. Huge improvement, major thanks to @petersilberman. + +# Other Changes + +- @ file mentions now include documents from your Auto Run folder (which may not live in your agent working directory) ๐Ÿ—„๏ธ +- The wizard is now capable of detecting and continuing on past started projects ๐Ÿง™ - Bug fixes ๐Ÿ›๐Ÿœ๐Ÿž --- @@ -203,14 +210,14 @@ The big changes in the v0.12.x line are the following three: ### Changes -- Export group chats as self-contained HTML โฌ‡๏ธ -- Enhanced system process viewer now has details view with full process args ๐Ÿ’ป -- Update button hides until platform binaries are available in releases. โณ -- Added Auto Run stall detection at the loop level, if no documents are updated after a loop ๐Ÿ” -- Improved Codex session discovery ๐Ÿ” -- Windows compatibility fixes ๐Ÿ› -- 64-bit Linux ARM build issue fixed (thanks @LilYoopug) ๐Ÿœ -- Addressed session enumeration issues with Codex and OpenCode ๐Ÿž +- Export group chats as self-contained HTML โฌ‡๏ธ +- Enhanced system process viewer now has details view with full process args ๐Ÿ’ป +- Update button hides until platform binaries are available in releases. โณ +- Added Auto Run stall detection at the loop level, if no documents are updated after a loop ๐Ÿ” +- Improved Codex session discovery ๐Ÿ” +- Windows compatibility fixes ๐Ÿ› +- 64-bit Linux ARM build issue fixed (thanks @LilYoopug) ๐Ÿœ +- Addressed session enumeration issues with Codex and OpenCode ๐Ÿž - Addressed pathing issues around gh command (thanks @oliveiraantoniocc) ๐Ÿ ### Previous Releases in this Series @@ -226,13 +233,13 @@ The big changes in the v0.12.x line are the following three: ### Changes -- Add Sentry crashing reporting monitoring with opt-out ๐Ÿ› -- Stability fixes on v0.9.0 along with all the changes it brought along, including... 
- Major refactor to enable supporting of multiple providers 👨‍👩‍👧‍👦 - - Added OpenAI Codex support 👨‍💻 - - Added OpenCode support 👩‍💻 - - Error handling system detects and recovers from agent failures 🚨 - - Added option to specify CLI arguments to AI providers ✨ +- Added Sentry crash reporting and monitoring with opt-out 🐛 +- Stability fixes on v0.9.0 along with all the changes it brought along, including... + - Major refactor to enable support for multiple providers 👨‍👩‍👧‍👦 + - Added OpenAI Codex support 👨‍💻 + - Added OpenCode support 👩‍💻 + - Error handling system detects and recovers from agent failures 🚨 + - Added option to specify CLI arguments to AI providers ✨ - Bunch of other little tweaks and additions 💎 ### Previous Releases in this Series @@ -247,19 +254,19 @@ The big changes in the v0.12.x line are the following three: ### Changes -- Added "Nudge" messages. Short static copy to include with every interactive message sent, perhaps to remind the agent on how to work 📌 -- Addressed various resource consumption issues to reduce battery cost 📉 -- Implemented fuzzy file search in quick actions for instant navigation 🔍 -- Added "clear" command support to clean terminal shell logs 🧹 -- Simplified search highlighting by integrating into markdown pipeline ✨ -- Enhanced update checker to filter prerelease tags like -rc, -beta 🚀 -- Fixed RPM package compatibility for OpenSUSE Tumbleweed 🐧 (H/T @JOduMonT) -- Added libuuid1 support alongside standard libuuid dependency 📦 -- Introduced Cmd+Shift+U shortcut for tab unread toggle ⌨️ -- Enhanced keyboard navigation for marking tabs unread 🎯 -- Expanded Linux distribution support with smart dependencies 🌐 -- Major underlying code re-structuring for maintainability 🧹 -- Improved stall detection to allow for individual docs to stall out while not affecting the entire playbook 📖 (H/T @mattjay) +- Added 
"Nudge" messages. Short static copy to include with every interactive message sent, perhaps to remind the agent on how to work ๐Ÿ“Œ +- Addressed various resource consumption issues to reduce battery cost ๐Ÿ“‰ +- Implemented fuzzy file search in quick actions for instant navigation ๐Ÿ” +- Added "clear" command support to clean terminal shell logs ๐Ÿงน +- Simplified search highlighting by integrating into markdown pipeline โœจ +- Enhanced update checker to filter prerelease tags like -rc, -beta ๐Ÿš€ +- Fixed RPM package compatibility for OpenSUSE Tumbleweed ๐Ÿง (H/T @JOduMonT) +- Added libuuid1 support alongside standard libuuid dependency ๐Ÿ“ฆ +- Introduced Cmd+Shift+U shortcut for tab unread toggle โŒจ๏ธ +- Enhanced keyboard navigation for marking tabs unread ๐ŸŽฏ +- Expanded Linux distribution support with smart dependencies ๐ŸŒ +- Major underlying code re-structuring for maintainability ๐Ÿงน +- Improved stall detection to allow for individual docs to stall out while not affecting the entire playbook ๐Ÿ“– (H/T @mattjay) - Added option to select a static listening port for remote control ๐ŸŽฎ (H/T @b3nw) ### Previous Releases in this Series @@ -279,35 +286,40 @@ The big changes in the v0.12.x line are the following three: **Latest: v0.7.4** | Released December 12, 2025 -Minor bugfixes on top of v0.7.3: - -# Onboarding, Wizard, and Tours -- Implemented comprehensive onboarding wizard with integrated tour system ๐Ÿš€ -- Added project-understanding confidence display to wizard UI ๐ŸŽจ -- Enhanced keyboard navigation across all wizard screens โŒจ๏ธ -- Added analytics tracking for wizard and tour completion ๐Ÿ“ˆ -- Added First Run Celebration modal with confetti animation ๐ŸŽ‰ - -# UI / UX Enhancements -- Added expand-to-fullscreen button for Auto Run interface ๐Ÿ–ฅ๏ธ -- Created dedicated modal component and improved modal priority constants for expanded Auto Run view ๐Ÿ“ -- Enhanced user experience with fullscreen editing capabilities โœจ -- Fixed tab name 
display to correctly show full name for active tabs ๐Ÿท๏ธ -- Added performance optimizations with throttling and caching for scrolling โšก -- Implemented drag-and-drop reordering for execution queue items ๐ŸŽฏ -- Enhanced toast context with agent name for OS notifications ๐Ÿ“ข - -# Auto Run Workflow Improvements -- Created phase document generation for Auto Run workflow ๐Ÿ“„ -- Added real-time log streaming to the LogViewer component ๐Ÿ“Š - -# Application Behavior / Core Fixes -- Added validation to prevent nested worktrees inside the main repository ๐Ÿšซ -- Fixed process manager to properly emit exit events on errors ๐Ÿ”ง -- Fixed process exit handling to ensure proper cleanup ๐Ÿงน - -# Update System -- Implemented automatic update checking on application startup ๐Ÿš€ +Minor bugfixes on top of v0.7.3: + +# Onboarding, Wizard, and Tours + +- Implemented comprehensive onboarding wizard with integrated tour system ๐Ÿš€ +- Added project-understanding confidence display to wizard UI ๐ŸŽจ +- Enhanced keyboard navigation across all wizard screens โŒจ๏ธ +- Added analytics tracking for wizard and tour completion ๐Ÿ“ˆ +- Added First Run Celebration modal with confetti animation ๐ŸŽ‰ + +# UI / UX Enhancements + +- Added expand-to-fullscreen button for Auto Run interface ๐Ÿ–ฅ๏ธ +- Created dedicated modal component and improved modal priority constants for expanded Auto Run view ๐Ÿ“ +- Enhanced user experience with fullscreen editing capabilities โœจ +- Fixed tab name display to correctly show full name for active tabs ๐Ÿท๏ธ +- Added performance optimizations with throttling and caching for scrolling โšก +- Implemented drag-and-drop reordering for execution queue items ๐ŸŽฏ +- Enhanced toast context with agent name for OS notifications ๐Ÿ“ข + +# Auto Run Workflow Improvements + +- Created phase document generation for Auto Run workflow ๐Ÿ“„ +- Added real-time log streaming to the LogViewer component ๐Ÿ“Š + +# Application Behavior / Core Fixes + +- Added validation to 
prevent nested worktrees inside the main repository ๐Ÿšซ +- Fixed process manager to properly emit exit events on errors ๐Ÿ”ง +- Fixed process exit handling to ensure proper cleanup ๐Ÿงน + +# Update System + +- Implemented automatic update checking on application startup ๐Ÿš€ - Added settings toggle for enabling/disabling startup update checks โš™๏ธ ### Previous Releases in this Series @@ -323,38 +335,40 @@ Minor bugfixes on top of v0.7.3: **Latest: v0.6.1** | Released December 4, 2025 -In this release... -- Added recursive subfolder support for Auto Run markdown files ๐Ÿ—‚๏ธ -- Enhanced document tree display with expandable folder navigation ๐ŸŒณ -- Enabled creating documents in subfolders with path selection ๐Ÿ“ -- Improved batch runner UI with inline progress bars and loop indicators ๐Ÿ“Š -- Fixed execution queue display bug for immediate command processing ๐Ÿ› -- Added folder icons and better visual hierarchy for document browser ๐ŸŽจ -- Implemented dynamic task re-counting for batch run loop iterations ๐Ÿ”„ -- Enhanced create document modal with location selector dropdown ๐Ÿ“ -- Improved progress tracking with per-document completion visualization ๐Ÿ“ˆ -- Added support for nested folder structures in document management ๐Ÿ—๏ธ - -Plus the pre-release ALPHA... 
-- Template vars now set context in default autorun prompt ๐Ÿš€ -- Added Enter key support for queued message confirmation dialog โŒจ๏ธ -- Kill process capability added to System Process Monitor ๐Ÿ’€ -- Toggle markdown rendering added to Cmd+K Quick Actions ๐Ÿ“ -- Fixed cloudflared detection in packaged app environments ๐Ÿ”ง -- Added debugging logs for process exit diagnostics ๐Ÿ› -- Tab switcher shows last activity timestamps and filters by project ๐Ÿ• -- Slash commands now fill text on Tab/Enter instead of executing โšก -- Added GitHub Actions workflow for auto-assigning issues/PRs ๐Ÿค– -- Graceful handling for playbooks with missing documents implemented โœจ -- Added multi-document batch processing for Auto Run ๐Ÿš€ -- Introduced Git worktree support for parallel execution ๐ŸŒณ -- Created playbook system for saving run configurations ๐Ÿ“š -- Implemented document reset-on-completion with loop mode ๐Ÿ”„ -- Added drag-and-drop document reordering interface ๐ŸŽฏ -- Built Auto Run folder selector with file management ๐Ÿ“ -- Enhanced progress tracking with per-document metrics ๐Ÿ“Š -- Integrated PR creation after worktree completion ๐Ÿ”€ -- Added undo/redo support in document editor โ†ฉ๏ธ +In this release... 
+ +- Added recursive subfolder support for Auto Run markdown files ๐Ÿ—‚๏ธ +- Enhanced document tree display with expandable folder navigation ๐ŸŒณ +- Enabled creating documents in subfolders with path selection ๐Ÿ“ +- Improved batch runner UI with inline progress bars and loop indicators ๐Ÿ“Š +- Fixed execution queue display bug for immediate command processing ๐Ÿ› +- Added folder icons and better visual hierarchy for document browser ๐ŸŽจ +- Implemented dynamic task re-counting for batch run loop iterations ๐Ÿ”„ +- Enhanced create document modal with location selector dropdown ๐Ÿ“ +- Improved progress tracking with per-document completion visualization ๐Ÿ“ˆ +- Added support for nested folder structures in document management ๐Ÿ—๏ธ + +Plus the pre-release ALPHA... + +- Template vars now set context in default autorun prompt ๐Ÿš€ +- Added Enter key support for queued message confirmation dialog โŒจ๏ธ +- Kill process capability added to System Process Monitor ๐Ÿ’€ +- Toggle markdown rendering added to Cmd+K Quick Actions ๐Ÿ“ +- Fixed cloudflared detection in packaged app environments ๐Ÿ”ง +- Added debugging logs for process exit diagnostics ๐Ÿ› +- Tab switcher shows last activity timestamps and filters by project ๐Ÿ• +- Slash commands now fill text on Tab/Enter instead of executing โšก +- Added GitHub Actions workflow for auto-assigning issues/PRs ๐Ÿค– +- Graceful handling for playbooks with missing documents implemented โœจ +- Added multi-document batch processing for Auto Run ๐Ÿš€ +- Introduced Git worktree support for parallel execution ๐ŸŒณ +- Created playbook system for saving run configurations ๐Ÿ“š +- Implemented document reset-on-completion with loop mode ๐Ÿ”„ +- Added drag-and-drop document reordering interface ๐ŸŽฏ +- Built Auto Run folder selector with file management ๐Ÿ“ +- Enhanced progress tracking with per-document metrics ๐Ÿ“Š +- Integrated PR creation after worktree completion ๐Ÿ”€ +- Added undo/redo support in document editor โ†ฉ๏ธ - 
Implemented auto-save with 5-second debounce ๐Ÿ’พ ### Previous Releases in this Series @@ -369,15 +383,15 @@ Plus the pre-release ALPHA... ### Changes -- Added "Made with Maestro" badge to README header ๐ŸŽฏ -- Redesigned app icon with darker purple color scheme ๐ŸŽจ -- Created new SVG badge for project attribution ๐Ÿท๏ธ -- Added side-by-side image diff viewer for git changes ๐Ÿ–ผ๏ธ -- Enhanced confetti animation with realistic cannon-style bursts ๐ŸŽŠ -- Fixed z-index layering for standing ovation overlay ๐Ÿ“Š -- Improved tab switcher to show all named sessions ๐Ÿ” -- Enhanced batch synopsis prompts for cleaner summaries ๐Ÿ“ -- Added binary file detection in git diff parser ๐Ÿ”ง +- Added "Made with Maestro" badge to README header ๐ŸŽฏ +- Redesigned app icon with darker purple color scheme ๐ŸŽจ +- Created new SVG badge for project attribution ๐Ÿท๏ธ +- Added side-by-side image diff viewer for git changes ๐Ÿ–ผ๏ธ +- Enhanced confetti animation with realistic cannon-style bursts ๐ŸŽŠ +- Fixed z-index layering for standing ovation overlay ๐Ÿ“Š +- Improved tab switcher to show all named sessions ๐Ÿ” +- Enhanced batch synopsis prompts for cleaner summaries ๐Ÿ“ +- Added binary file detection in git diff parser ๐Ÿ”ง - Implemented git file reading at specific refs ๐Ÿ“ ### Previous Releases in this Series @@ -392,24 +406,24 @@ Plus the pre-release ALPHA... 
### Changes -- Added Tab Switcher modal for quick navigation between AI tabs ๐Ÿš€ -- Implemented @ mention file completion for AI mode references ๐Ÿ“ -- Added navigation history with back/forward through sessions and tabs โฎ๏ธ -- Introduced tab completion filters for branches, tags, and files ๐ŸŒณ -- Added unread tab indicators and filtering for better organization ๐Ÿ“ฌ -- Implemented token counting display with human-readable formatting ๐Ÿ”ข -- Added markdown rendering toggle for AI responses in terminal ๐Ÿ“ -- Removed built-in slash commands in favor of custom AI commands ๐ŸŽฏ -- Added context menu for sessions with rename, bookmark, move options ๐Ÿ–ฑ๏ธ -- Enhanced file preview with stats showing size, tokens, timestamps ๐Ÿ“Š -- Added token counting with js-tiktoken for file preview stats bar ๐Ÿ”ข -- Implemented Tab Switcher modal for fuzzy-search navigation (Opt+Cmd+T) ๐Ÿ” -- Added Save to History toggle (Cmd+S) for automatic work synopsis tracking ๐Ÿ’พ -- Enhanced tab completion with @ mentions for file references in AI prompts ๐Ÿ“Ž -- Implemented navigation history with back/forward shortcuts (Cmd+Shift+,/.) 
๐Ÿ”™ -- Added git branches and tags to intelligent tab completion system ๐ŸŒฟ -- Enhanced markdown rendering with syntax highlighting and toggle view ๐Ÿ“ -- Added right-click context menus for session management and organization ๐Ÿ–ฑ๏ธ +- Added Tab Switcher modal for quick navigation between AI tabs ๐Ÿš€ +- Implemented @ mention file completion for AI mode references ๐Ÿ“ +- Added navigation history with back/forward through sessions and tabs โฎ๏ธ +- Introduced tab completion filters for branches, tags, and files ๐ŸŒณ +- Added unread tab indicators and filtering for better organization ๐Ÿ“ฌ +- Implemented token counting display with human-readable formatting ๐Ÿ”ข +- Added markdown rendering toggle for AI responses in terminal ๐Ÿ“ +- Removed built-in slash commands in favor of custom AI commands ๐ŸŽฏ +- Added context menu for sessions with rename, bookmark, move options ๐Ÿ–ฑ๏ธ +- Enhanced file preview with stats showing size, tokens, timestamps ๐Ÿ“Š +- Added token counting with js-tiktoken for file preview stats bar ๐Ÿ”ข +- Implemented Tab Switcher modal for fuzzy-search navigation (Opt+Cmd+T) ๐Ÿ” +- Added Save to History toggle (Cmd+S) for automatic work synopsis tracking ๐Ÿ’พ +- Enhanced tab completion with @ mentions for file references in AI prompts ๐Ÿ“Ž +- Implemented navigation history with back/forward shortcuts (Cmd+Shift+,/.) ๐Ÿ”™ +- Added git branches and tags to intelligent tab completion system ๐ŸŒฟ +- Enhanced markdown rendering with syntax highlighting and toggle view ๐Ÿ“ +- Added right-click context menus for session management and organization ๐Ÿ–ฑ๏ธ - Improved mobile app with better WebSocket reconnection and status badges ๐Ÿ“ฑ ### Previous Releases in this Series @@ -424,15 +438,15 @@ Plus the pre-release ALPHA... 
### Changes -- Fixed tab handling requiring explicitly selected Claude session ๐Ÿ”ง -- Added auto-scroll navigation for slash command list selection โšก -- Implemented TTS audio feedback for toast notifications speak ๐Ÿ”Š -- Fixed shortcut case sensitivity using lowercase key matching ๐Ÿ”ค -- Added Cmd+Shift+J shortcut to jump to bottom instantly โฌ‡๏ธ -- Sorted shortcuts alphabetically in help modal for discovery ๐Ÿ“‘ -- Display full commit message body in git log view ๐Ÿ“ -- Added expand/collapse all buttons to process tree header ๐ŸŒณ -- Support synopsis process type in process tree parsing ๐Ÿ” +- Fixed tab handling requiring explicitly selected Claude session ๐Ÿ”ง +- Added auto-scroll navigation for slash command list selection โšก +- Implemented TTS audio feedback so toast notifications speak ๐Ÿ”Š +- Fixed shortcut case sensitivity using lowercase key matching ๐Ÿ”ค +- Added Cmd+Shift+J shortcut to jump to bottom instantly โฌ‡๏ธ +- Sorted shortcuts alphabetically in help modal for discovery ๐Ÿ“‘ +- Display full commit message body in git log view ๐Ÿ“ +- Added expand/collapse all buttons to process tree header ๐ŸŒณ +- Support synopsis process type in process tree parsing ๐Ÿ” - Renamed "No Group" to "UNGROUPED" for better clarity โœจ ### Previous Releases in this Series @@ -445,15 +459,15 @@ Plus the pre-release ALPHA... 
**Latest: v0.2.3** | Released November 29, 2025 -โ€ข Enhanced mobile web interface with session sync and history panel ๐Ÿ“ฑ -โ€ข Added ThinkingStatusPill showing real-time token counts and elapsed time โฑ๏ธ -โ€ข Implemented task count badges and session deduplication for batch runner ๐Ÿ“Š -โ€ข Added TTS stop control and improved voice synthesis compatibility ๐Ÿ”Š -โ€ข Created image lightbox with navigation, clipboard, and delete features ๐Ÿ–ผ๏ธ -โ€ข Fixed UI bugs in search, auto-scroll, and sidebar interactions ๐Ÿ› -โ€ข Added global Claude stats with streaming updates across projects ๐Ÿ“ˆ -โ€ข Improved markdown checkbox styling and collapsed palette hover UX โœจ -โ€ข Enhanced scratchpad with search, image paste, and attachment support ๐Ÿ” +โ€ข Enhanced mobile web interface with session sync and history panel ๐Ÿ“ฑ +โ€ข Added ThinkingStatusPill showing real-time token counts and elapsed time โฑ๏ธ +โ€ข Implemented task count badges and session deduplication for batch runner ๐Ÿ“Š +โ€ข Added TTS stop control and improved voice synthesis compatibility ๐Ÿ”Š +โ€ข Created image lightbox with navigation, clipboard, and delete features ๐Ÿ–ผ๏ธ +โ€ข Fixed UI bugs in search, auto-scroll, and sidebar interactions ๐Ÿ› +โ€ข Added global Claude stats with streaming updates across projects ๐Ÿ“ˆ +โ€ข Improved markdown checkbox styling and collapsed palette hover UX โœจ +โ€ข Enhanced scratchpad with search, image paste, and attachment support ๐Ÿ” โ€ข Added splash screen with logo and progress bar during startup ๐ŸŽจ ### Previous Releases in this Series @@ -468,15 +482,15 @@ Plus the pre-release ALPHA... 
**Latest: v0.1.6** | Released November 27, 2025 -โ€ข Added template variables for dynamic AI command customization ๐ŸŽฏ -โ€ข Implemented session bookmarking with star icons and dedicated section โญ -โ€ข Enhanced Git Log Viewer with smarter date formatting ๐Ÿ“… -โ€ข Improved GitHub release workflow to handle partial failures gracefully ๐Ÿ”ง -โ€ข Added collapsible template documentation in AI Commands panel ๐Ÿ“š -โ€ข Updated default commit command with session ID traceability ๐Ÿ” -โ€ข Added tag indicators for custom-named sessions visually ๐Ÿท๏ธ -โ€ข Improved Git Log search UX with better focus handling ๐ŸŽจ -โ€ข Fixed input placeholder spacing for better readability ๐Ÿ“ +โ€ข Added template variables for dynamic AI command customization ๐ŸŽฏ +โ€ข Implemented session bookmarking with star icons and dedicated section โญ +โ€ข Enhanced Git Log Viewer with smarter date formatting ๐Ÿ“… +โ€ข Improved GitHub release workflow to handle partial failures gracefully ๐Ÿ”ง +โ€ข Added collapsible template documentation in AI Commands panel ๐Ÿ“š +โ€ข Updated default commit command with session ID traceability ๐Ÿ” +โ€ข Added tag indicators for custom-named sessions visually ๐Ÿท๏ธ +โ€ข Improved Git Log search UX with better focus handling ๐ŸŽจ +โ€ข Fixed input placeholder spacing for better readability ๐Ÿ“ โ€ข Updated documentation with new features and template references ๐Ÿ“– ### Previous Releases in this Series @@ -495,6 +509,7 @@ Plus the pre-release ALPHA... All releases are available on the [GitHub Releases page](https://github.com/RunMaestro/Maestro/releases). 
Maestro is available for: + - **macOS** - Apple Silicon (arm64) and Intel (x64) - **Windows** - x64 - **Linux** - x64 and arm64, AppImage, deb, and rpm packages diff --git a/prompt.HSr50iQeQl b/prompt.HSr50iQeQl new file mode 100644 index 000000000..e69de29bb diff --git a/prompt.HSr50iQeQl.txt b/prompt.HSr50iQeQl.txt new file mode 100644 index 000000000..d99ee8179 --- /dev/null +++ b/prompt.HSr50iQeQl.txt @@ -0,0 +1,38 @@ +You are an autonomous coding agent fixing a GitHub pull request for Maestro. + +Goal: +- Fix only what is necessary to make the failing checks pass for PR #496. +- Use the failure payload below to focus changes. +- Keep changes minimal and scoped. +- Do not add broad refactors or formatting churn. +- If no safe fix exists, do not force risky behavior and return without edits. + +Context: +- PR repo: RunMaestro/Maestro +- PR number: 496 +- Head branch: symphony/issue-373-mm85yqkw +- Head SHA: aaf09b6dd5ece8413dc8917470cb13a043212f58 + +Failing check payload: +{"repo":"RunMaestro/Maestro","number":496,"title":"perf: optimize SessionListItem style recalculation","htmlUrl":"https://github.com/RunMaestro/Maestro/pull/496","head":"aaf09b6dd5ece8413dc8917470cb13a043212f58","failures":[{"source":"check-run","id":65506372599,"name":"test","status":"completed","conclusion":"failure","htmlUrl":"https://github.com/RunMaestro/Maestro/actions/runs/22608662329/job/65506372599","detailsUrl":"https://github.com/RunMaestro/Maestro/actions/runs/22608662329/job/65506372599","summary":"","notes":"src/__tests__/main/stats/paths.test.ts:577:failure AssertionError: expected \"vi.fn()\" to be called with arguments: [ โ€ฆ(2) ]\n\nNumber of calls: 0\n\n โฏ src/__tests__/main/stats/paths.test.ts:577:29\n\n | src/__tests__/main/stats/paths.test.ts:430:failure AssertionError: expected \"vi.fn()\" to be called with arguments: [ โ€ฆ(2) ]\n\nNumber of calls: 0\n\n โฏ src/__tests__/main/stats/paths.test.ts:430:28\n\n | src/__tests__/main/stats/paths.test.ts:417:failure 
AssertionError: expected \"vi.fn()\" to be called with arguments: [ โ€ฆ(2) ]\n\nNumber of calls: 0\n\n โฏ src/__tests__/main/stats/paths.test.ts:417:28\n\n | src/__tests__/main/stats/paths.test.ts:404:failure AssertionError: expected \"vi.fn()\" to be called with arguments: [ โ€ฆ(2) ]\n\nNumber of calls: 0\n\n โฏ src/__tests__/main/stats/paths.test.ts:404:28\n\n | src/__tests__/main/stats/paths.test.ts:391:failure AssertionError: expected \"vi.fn()\" to be called with arguments: [ โ€ฆ(2) ]\n\nNumber of calls: 0\n\n โฏ src/__tests__/main/stats/paths.test.ts:391:28\n\n","completedAt":"2026-03-03T04:52:25Z"},{"source":"check-run","id":65506372598,"name":"lint-and-format","status":"completed","conclusion":"failure","htmlUrl":"https://github.com/RunMaestro/Maestro/actions/runs/22608662329/job/65506372598","detailsUrl":"https://github.com/RunMaestro/Maestro/actions/runs/22608662329/job/65506372598","summary":"","notes":".github:9:failure Process completed with exit code 1.","completedAt":"2026-03-03T04:45:51Z"}]} +Local check results (run before Codex edit attempt): + +### Local check: test +command: pnpm test +status: 1 +output (last 250 lines): + +> maestro@0.15.0 test /private/var/folders/md/v04xc5jj3g33bd37429j8r3r0000gn/T/maestro-pr-fix-XXXXXX.c7wBBhgeQ2 +> vitest run + +sh: vitest: command not found +โ€‰ELIFECYCLEโ€‰ Test failed. See above for more details. +โ€‰WARNโ€‰ Local package.json exists, but node_modules missing, did you mean to install? + +### Local check: lint-and-format +command: pnpm run lint-and-format +status: 1 +output (last 250 lines): +โ€‰ERR_PNPM_NO_SCRIPTโ€‰ Missing script: lint-and-format + +Command "lint-and-format" not found. Did you mean "pnpm run format"? 
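The history-manager test diff that follows adds `fs.promises` mocks that delegate to the existing sync mocks (`mockFsAccess` forwarding to `mockExistsSync`, `mockFsReadFile` forwarding to `mockReadFileSync`, and so on), so one set of stubbed values drives both the sync and async code paths. A minimal, self-contained sketch of that delegation idea, written outside vitest with purely illustrative names:

```javascript
// Sketch of the delegation pattern (illustrative names, not Maestro's actual
// code): an async fs.promises-style facade forwards to sync-style stubs, so
// configuring the sync stub configures the async path too.
const syncFs = {
  existsSync: (p) => p === '/mock/file.json',
  readFileSync: () => '{"entries":[]}',
};

// Async facade delegating to the sync stubs, analogous to the mockFsAccess /
// mockFsReadFile implementations added in the diff below.
const promisesFs = {
  access: async (p) => {
    if (!syncFs.existsSync(p)) throw new Error('ENOENT');
  },
  readFile: async (p) => syncFs.readFileSync(p),
};

// Code under test sees only the async API.
async function readIfPresent(p) {
  try {
    await promisesFs.access(p);
    return JSON.parse(await promisesFs.readFile(p));
  } catch {
    return null; // a missing file maps to the existsSync === false branch
  }
}
```

Under this scheme, changing what the sync stub returns automatically changes what the async facade resolves to, which is why the migrated tests below can keep driving both paths from a single `mockExistsSync` / `mockReadFileSync` setup per case.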
diff --git a/prompt.MC5RKtwlo8 b/prompt.MC5RKtwlo8 new file mode 100644 index 000000000..e69de29bb diff --git a/src/__tests__/integration/group-chat.integration.test.ts b/src/__tests__/integration/group-chat.integration.test.ts index 9062631a7..8adca9d2e 100644 --- a/src/__tests__/integration/group-chat.integration.test.ts +++ b/src/__tests__/integration/group-chat.integration.test.ts @@ -70,7 +70,7 @@ function createMockProcessManager(): IProcessManager & { toolType: config.toolType, prompt: config.prompt, }); - return { pid: Math.floor(Math.random() * 10000), success: true }; + return Promise.resolve({ pid: Math.floor(Math.random() * 10000), success: true }); }, write(sessionId: string, data: string) { diff --git a/src/__tests__/main/debug-package/collectors.test.ts b/src/__tests__/main/debug-package/collectors.test.ts index 73d2fa554..a9c13ac46 100644 --- a/src/__tests__/main/debug-package/collectors.test.ts +++ b/src/__tests__/main/debug-package/collectors.test.ts @@ -38,6 +38,11 @@ vi.mock('fs', () => ({ statSync: vi.fn(() => ({ size: 0, isDirectory: () => false })), readdirSync: vi.fn(() => []), readFileSync: vi.fn(() => ''), + promises: { + stat: vi.fn(() => Promise.resolve({ size: 0, isDirectory: () => false })), + readdir: vi.fn(() => Promise.resolve([])), + readFile: vi.fn(() => Promise.resolve('')), + }, })); // Mock cliDetection @@ -817,14 +822,13 @@ describe('Debug Package Collectors', () => { const { app } = await import('electron'); vi.mocked(app.getPath).mockReturnValue('/mock/userData'); - vi.mocked(fs.existsSync).mockReturnValue(true); - vi.mocked(fs.statSync).mockImplementation((path: any) => { + vi.mocked(fs.promises.stat).mockImplementation(async (path: any) => { if (path.includes('maestro-sessions.json')) { return { size: 1024, isDirectory: () => false } as any; } return { size: 0, isDirectory: () => true } as any; }); - vi.mocked(fs.readdirSync).mockReturnValue([]); + vi.mocked(fs.promises.readdir).mockResolvedValue([]); const { 
collectStorage } = await import('../../../main/debug-package/collectors/storage'); @@ -867,13 +871,12 @@ describe('Debug Package Collectors', () => { const { app } = await import('electron'); vi.mocked(app.getPath).mockReturnValue('/mock/userData'); - vi.mocked(fs.existsSync).mockReturnValue(true); - vi.mocked(fs.readdirSync).mockReturnValue([ + vi.mocked(fs.promises.readdir).mockResolvedValue([ 'chat-1.json', 'chat-1.log.json', 'chat-2.json', ] as any); - vi.mocked(fs.readFileSync).mockImplementation((path: any) => { + vi.mocked(fs.promises.readFile).mockImplementation(async (path: any) => { if (path.includes('chat-1.json') && !path.includes('.log')) { return JSON.stringify({ id: 'chat-1', @@ -930,7 +933,7 @@ describe('Debug Package Collectors', () => { it('should handle missing group chats directory', async () => { const fs = await import('fs'); - vi.mocked(fs.existsSync).mockReturnValue(false); + vi.mocked(fs.promises.readdir).mockRejectedValue(new Error('Directory does not exist')); const { collectGroupChats } = await import('../../../main/debug-package/collectors/group-chats'); diff --git a/src/__tests__/main/history-manager.test.ts b/src/__tests__/main/history-manager.test.ts index a6709a757..0397fe536 100644 --- a/src/__tests__/main/history-manager.test.ts +++ b/src/__tests__/main/history-manager.test.ts @@ -35,6 +35,14 @@ vi.mock('fs', () => ({ readdirSync: vi.fn(), unlinkSync: vi.fn(), watch: vi.fn(), + promises: { + access: vi.fn(), + readFile: vi.fn(), + writeFile: vi.fn(), + readdir: vi.fn(), + unlink: vi.fn(), + mkdir: vi.fn(), + }, })); import * as fs from 'fs'; @@ -52,6 +60,12 @@ const mockWriteFileSync = vi.mocked(fs.writeFileSync); const mockReaddirSync = vi.mocked(fs.readdirSync); const mockUnlinkSync = vi.mocked(fs.unlinkSync); const mockWatch = vi.mocked(fs.watch); +const mockFsAccess = vi.mocked(fs.promises.access); +const mockFsReadFile = vi.mocked(fs.promises.readFile); +const mockFsWriteFile = vi.mocked(fs.promises.writeFile); +const 
mockFsReaddir = vi.mocked(fs.promises.readdir); +const mockFsUnlink = vi.mocked(fs.promises.unlink); +const mockFsMkdir = vi.mocked(fs.promises.mkdir); /** * Helper to create a mock HistoryEntry @@ -91,6 +105,32 @@ describe('HistoryManager', () => { vi.resetAllMocks(); // Default: nothing exists mockExistsSync.mockReturnValue(false); + mockMkdirSync.mockClear(); + mockReadFileSync.mockReturnValue('{}'); + mockReaddirSync.mockReturnValue([]); + mockWriteFileSync.mockImplementation(() => undefined); + mockUnlinkSync.mockImplementation(() => undefined); + mockWatch.mockImplementation(() => ({ close: vi.fn() }) as unknown as fs.FSWatcher); + + mockFsAccess.mockImplementation(async (p: fs.PathLike) => { + if (mockExistsSync(p)) { + return; + } + throw new Error('ENOENT'); + }); + mockFsReadFile.mockImplementation(async (p: fs.PathLike) => { + return mockReadFileSync(p); + }); + mockFsWriteFile.mockImplementation(async (pathLike, data, options) => { + return mockWriteFileSync( + pathLike as string | Buffer | number, + data as string, + options as string + ); + }); + mockFsReaddir.mockImplementation(async () => mockReaddirSync() as unknown as fs.Dirent[]); + mockFsUnlink.mockResolvedValue(undefined); + mockFsMkdir.mockResolvedValue(undefined); manager = new HistoryManager(); }); @@ -102,7 +142,7 @@ describe('HistoryManager', () => { // Constructor // ---------------------------------------------------------------- describe('constructor', () => { - it('should set up paths based on app.getPath("userData")', () => { + it('should set up paths based on app.getPath("userData")', async () => { expect(app.getPath).toHaveBeenCalledWith('userData'); expect(manager.getHistoryDir()).toBe(path.join('/mock/userData', 'history')); expect(manager.getLegacyFilePath()).toBe(path.join('/mock/userData', 'maestro-history.json')); @@ -245,14 +285,14 @@ describe('HistoryManager', () => { // hasMigrated() // ---------------------------------------------------------------- describe('hasMigrated()', () => { - it('should return 
true when migration marker exists', () => { + it('should return true when migration marker exists', async () => { mockExistsSync.mockImplementation((p: fs.PathLike) => { return p.toString().endsWith('history-migrated.json'); }); expect(manager.hasMigrated()).toBe(true); }); - it('should return false when migration marker does not exist', () => { + it('should return false when migration marker does not exist', async () => { mockExistsSync.mockReturnValue(false); expect(manager.hasMigrated()).toBe(false); }); }); @@ -412,7 +452,7 @@ describe('HistoryManager', () => { // getEntries(sessionId) // ---------------------------------------------------------------- describe('getEntries()', () => { - it('should return entries from session file', () => { + it('should return entries from session file', async () => { const entries = [createMockEntry({ id: 'e1' }), createMockEntry({ id: 'e2' })]; const filePath = path.join( '/mock/userData', 'history', @@ -423,18 +463,18 @@ describe('HistoryManager', () => { mockExistsSync.mockImplementation((p: fs.PathLike) => p.toString() === filePath); mockReadFileSync.mockReturnValue(createHistoryFileData('session-1', entries)); - const result = manager.getEntries('session-1'); + const result = await manager.getEntries('session-1'); expect(result).toHaveLength(2); expect(result[0].id).toBe('e1'); }); - it('should return empty array if session file does not exist', () => { + it('should return empty array if session file does not exist', async () => { mockExistsSync.mockReturnValue(false); - const result = manager.getEntries('nonexistent'); + const result = await manager.getEntries('nonexistent'); expect(result).toEqual([]); }); - it('should return empty array on read error', () => { + it('should return empty array on read error', async () => { const filePath = path.join( '/mock/userData', 'history', @@ -445,12 +485,12 @@ describe('HistoryManager', () => { throw new Error('Read error'); }); - const result = manager.getEntries('session-1'); + const result = await manager.getEntries('session-1'); expect(result).toEqual([]); expect(vi.mocked(logger.warn)).toHaveBeenCalled(); }); - 
it('should return empty array when file contains malformed JSON', () => { + it('should return empty array when file contains malformed JSON', async () => { const filePath = path.join( '/mock/userData', 'history', @@ -459,7 +499,7 @@ describe('HistoryManager', () => { mockExistsSync.mockImplementation((p: fs.PathLike) => p.toString() === filePath); mockReadFileSync.mockReturnValue('not valid json'); - const result = manager.getEntries('session-1'); + const result = await manager.getEntries('session-1'); expect(result).toEqual([]); }); }); @@ -468,11 +508,11 @@ describe('HistoryManager', () => { // addEntry(sessionId, projectPath, entry) // ---------------------------------------------------------------- describe('addEntry()', () => { - it('should create a new file when session does not exist', () => { + it('should create a new file when session does not exist', async () => { mockExistsSync.mockReturnValue(false); const entry = createMockEntry({ id: 'new-entry' }); - manager.addEntry('session-1', '/test/project', entry); + await manager.addEntry('session-1', '/test/project', entry); expect(mockWriteFileSync).toHaveBeenCalledTimes(1); const written = JSON.parse(mockWriteFileSync.mock.calls[0][1] as string); @@ -483,7 +523,7 @@ describe('HistoryManager', () => { expect(written.version).toBe(HISTORY_VERSION); }); - it('should prepend entry to beginning of existing file', () => { + it('should prepend entry to beginning of existing file', async () => { const existingEntry = createMockEntry({ id: 'old' }); const filePath = path.join( '/mock/userData', @@ -495,7 +535,7 @@ describe('HistoryManager', () => { mockReadFileSync.mockReturnValue(createHistoryFileData('session-1', [existingEntry])); const newEntry = createMockEntry({ id: 'new' }); - manager.addEntry('session-1', '/test/project', newEntry); + await manager.addEntry('session-1', '/test/project', newEntry); const written = JSON.parse(mockWriteFileSync.mock.calls[0][1] as string); expect(written.entries).toHaveLength(2); @@ -503,7 +543,7 @@ describe('HistoryManager', () => { 
expect(written.entries[1].id).toBe('old'); }); - it('should trim to MAX_ENTRIES_PER_SESSION', () => { + it('should trim to MAX_ENTRIES_PER_SESSION', async () => { const existingEntries: HistoryEntry[] = []; for (let i = 0; i < MAX_ENTRIES_PER_SESSION; i++) { existingEntries.push(createMockEntry({ id: `e-${i}` })); } @@ -518,14 +558,14 @@ describe('HistoryManager', () => { mockReadFileSync.mockReturnValue(createHistoryFileData('session-1', existingEntries)); const newEntry = createMockEntry({ id: 'overflow' }); - manager.addEntry('session-1', '/test/project', newEntry); + await manager.addEntry('session-1', '/test/project', newEntry); const written = JSON.parse(mockWriteFileSync.mock.calls[0][1] as string); expect(written.entries).toHaveLength(MAX_ENTRIES_PER_SESSION); expect(written.entries[0].id).toBe('overflow'); }); - it('should update projectPath on existing file', () => { + it('should update projectPath on existing file', async () => { const existingEntry = createMockEntry({ id: 'e1' }); const filePath = path.join( '/mock/userData', @@ -539,13 +579,13 @@ describe('HistoryManager', () => { ); const newEntry = createMockEntry({ id: 'e2' }); - manager.addEntry('session-1', '/new/path', newEntry); + await manager.addEntry('session-1', '/new/path', newEntry); const written = JSON.parse(mockWriteFileSync.mock.calls[0][1] as string); expect(written.projectPath).toBe('/new/path'); }); - it('should create fresh data when existing file is corrupted', () => { + it('should create fresh data when existing file is corrupted', async () => { const filePath = path.join( '/mock/userData', 'history', @@ -555,14 +595,14 @@ describe('HistoryManager', () => { mockReadFileSync.mockReturnValue('corrupted-json{{{'); const entry = createMockEntry({ id: 'new-entry' }); - manager.addEntry('session-1', '/test/project', entry); + await manager.addEntry('session-1', '/test/project', entry); const written = JSON.parse(mockWriteFileSync.mock.calls[0][1] as string); expect(written.entries).toHaveLength(1); expect(written.entries[0].id).toBe('new-entry'); }); - 
it('should log error on write failure', () => { + it('should log error on write failure', async () => { mockExistsSync.mockReturnValue(false); mockWriteFileSync.mockImplementation(() => { throw new Error('Write error'); @@ -570,7 +610,7 @@ describe('HistoryManager', () => { const entry = createMockEntry({ id: 'e1' }); // Should not throw - manager.addEntry('session-1', '/test/project', entry); + await manager.addEntry('session-1', '/test/project', entry); expect(vi.mocked(logger.error)).toHaveBeenCalledWith( expect.stringContaining('Failed to write history'), @@ -583,7 +623,7 @@ describe('HistoryManager', () => { // deleteEntry(sessionId, entryId) // ---------------------------------------------------------------- describe('deleteEntry()', () => { - it('should remove an entry by id and return true', () => { + it('should remove an entry by id and return true', async () => { const entries = [createMockEntry({ id: 'e1' }), createMockEntry({ id: 'e2' })]; const filePath = path.join( '/mock/userData', @@ -594,7 +634,7 @@ describe('HistoryManager', () => { mockExistsSync.mockImplementation((p: fs.PathLike) => p.toString() === filePath); mockReadFileSync.mockReturnValue(createHistoryFileData('session-1', entries)); - const result = manager.deleteEntry('session-1', 'e1'); + const result = await manager.deleteEntry('session-1', 'e1'); expect(result).toBe(true); const written = JSON.parse(mockWriteFileSync.mock.calls[0][1] as string); @@ -602,12 +642,12 @@ describe('HistoryManager', () => { expect(written.entries[0].id).toBe('e2'); }); - it('should return false if session file does not exist', () => { + it('should return false if session file does not exist', async () => { mockExistsSync.mockReturnValue(false); - expect(manager.deleteEntry('nonexistent', 'e1')).toBe(false); + expect(await manager.deleteEntry('nonexistent', 'e1')).toBe(false); }); - it('should return false if entry is not found', () => { + it('should return false if entry is not found', async () => { const entries = [createMockEntry({ id: 'e1' })]; const filePath = path.join( 
'/mock/userData', @@ -618,11 +658,11 @@ describe('HistoryManager', () => { mockExistsSync.mockImplementation((p: fs.PathLike) => p.toString() === filePath); mockReadFileSync.mockReturnValue(createHistoryFileData('session-1', entries)); - expect(manager.deleteEntry('session-1', 'nonexistent')).toBe(false); + expect(await manager.deleteEntry('session-1', 'nonexistent')).toBe(false); expect(mockWriteFileSync).not.toHaveBeenCalled(); }); - it('should return false on read error (parse failure)', () => { + it('should return false on read error (parse failure)', async () => { const filePath = path.join( '/mock/userData', 'history', @@ -631,10 +671,10 @@ describe('HistoryManager', () => { mockExistsSync.mockImplementation((p: fs.PathLike) => p.toString() === filePath); mockReadFileSync.mockReturnValue('bad json'); - expect(manager.deleteEntry('session-1', 'e1')).toBe(false); + expect(await manager.deleteEntry('session-1', 'e1')).toBe(false); }); - it('should return false on write error', () => { + it('should return false on write error', async () => { const entries = [createMockEntry({ id: 'e1' })]; const filePath = path.join( '/mock/userData', @@ -648,7 +688,7 @@ describe('HistoryManager', () => { throw new Error('Write error'); }); - expect(manager.deleteEntry('session-1', 'e1')).toBe(false); + expect(await manager.deleteEntry('session-1', 'e1')).toBe(false); expect(vi.mocked(logger.error)).toHaveBeenCalled(); }); }); @@ -657,7 +697,7 @@ describe('HistoryManager', () => { // updateEntry(sessionId, entryId, updates) // ---------------------------------------------------------------- describe('updateEntry()', () => { - it('should update an entry by id and return true', () => { + it('should update an entry by id and return true', async () => { const entries = [createMockEntry({ id: 'e1', summary: 'original' })]; const filePath = path.join( '/mock/userData', @@ -668,7 +708,7 @@ describe('HistoryManager', () => { mockExistsSync.mockImplementation((p: fs.PathLike) => p.toString() === filePath); 
mockReadFileSync.mockReturnValue(createHistoryFileData('session-1', entries)); - const result = manager.updateEntry('session-1', 'e1', { summary: 'updated' }); + const result = await manager.updateEntry('session-1', 'e1', { summary: 'updated' }); expect(result).toBe(true); const written = JSON.parse(mockWriteFileSync.mock.calls[0][1] as string); @@ -676,12 +716,12 @@ describe('HistoryManager', () => { expect(written.entries[0].id).toBe('e1'); }); - it('should return false if session file does not exist', () => { + it('should return false if session file does not exist', async () => { mockExistsSync.mockReturnValue(false); - expect(manager.updateEntry('nonexistent', 'e1', { summary: 'x' })).toBe(false); + expect(await manager.updateEntry('nonexistent', 'e1', { summary: 'x' })).toBe(false); }); - it('should return false if entry is not found', () => { + it('should return false if entry is not found', async () => { const entries = [createMockEntry({ id: 'e1' })]; const filePath = path.join( '/mock/userData', @@ -692,11 +732,11 @@ describe('HistoryManager', () => { mockExistsSync.mockImplementation((p: fs.PathLike) => p.toString() === filePath); mockReadFileSync.mockReturnValue(createHistoryFileData('session-1', entries)); - expect(manager.updateEntry('session-1', 'nonexistent', { summary: 'x' })).toBe(false); + expect(await manager.updateEntry('session-1', 'nonexistent', { summary: 'x' })).toBe(false); expect(mockWriteFileSync).not.toHaveBeenCalled(); }); - it('should return false on parse error', () => { + it('should return false on parse error', async () => { const filePath = path.join( '/mock/userData', 'history', @@ -705,10 +745,10 @@ describe('HistoryManager', () => { mockExistsSync.mockImplementation((p: fs.PathLike) => p.toString() === filePath); mockReadFileSync.mockReturnValue('bad json'); - expect(manager.updateEntry('session-1', 'e1', { summary: 'x' })).toBe(false); + expect(await manager.updateEntry('session-1', 'e1', { summary: 'x' })).toBe(false); }); - it('should return false on write error', () => { + it('should return false on write error', async () => {
const entries = [createMockEntry({ id: 'e1' })]; const filePath = path.join( '/mock/userData', @@ -722,7 +762,7 @@ describe('HistoryManager', () => { throw new Error('Write error'); }); - expect(manager.updateEntry('session-1', 'e1', { summary: 'x' })).toBe(false); + expect(await manager.updateEntry('session-1', 'e1', { summary: 'x' })).toBe(false); expect(vi.mocked(logger.error)).toHaveBeenCalled(); }); }); @@ -731,7 +771,7 @@ describe('HistoryManager', () => { // clearSession(sessionId) // ---------------------------------------------------------------- describe('clearSession()', () => { - it('should delete the session file if it exists', () => { + it('should delete the session file if it exists', async () => { const filePath = path.join( '/mock/userData', 'history', @@ -744,7 +784,7 @@ describe('HistoryManager', () => { expect(mockUnlinkSync).toHaveBeenCalledWith(filePath); }); - it('should do nothing if session file does not exist', () => { + it('should do nothing if session file does not exist', async () => { mockExistsSync.mockReturnValue(false); manager.clearSession('nonexistent'); @@ -752,7 +792,7 @@ describe('HistoryManager', () => { expect(mockUnlinkSync).not.toHaveBeenCalled(); }); - it('should log error on delete failure', () => { + it('should log error on delete failure', async () => { const filePath = path.join( '/mock/userData', 'history', @@ -776,7 +816,7 @@ describe('HistoryManager', () => { // listSessionsWithHistory() // ---------------------------------------------------------------- describe('listSessionsWithHistory()', () => { - it('should return session IDs from .json files in history dir', () => { + it('should return session IDs from .json files in history dir', async () => { mockExistsSync.mockImplementation((p: fs.PathLike) => p.toString().endsWith('history')); mockReaddirSync.mockReturnValue([ 'session_1.json' as unknown as fs.Dirent, @@ -784,13 +824,13 @@ describe('HistoryManager', () => { 'readme.txt' as unknown as fs.Dirent, ]); - const result = manager.listSessionsWithHistory(); + const result = 
await manager.listSessionsWithHistory(); expect(result).toEqual(['session_1', 'session_2']); }); - it('should return empty array if history directory does not exist', () => { + it('should return empty array if history directory does not exist', async () => { mockExistsSync.mockReturnValue(false); - expect(manager.listSessionsWithHistory()).toEqual([]); + expect(await manager.listSessionsWithHistory()).toEqual([]); }); }); @@ -798,7 +838,7 @@ describe('HistoryManager', () => { // getHistoryFilePath(sessionId) // ---------------------------------------------------------------- describe('getHistoryFilePath()', () => { - it('should return file path if session file exists', () => { + it('should return file path if session file exists', async () => { const filePath = path.join( '/mock/userData', 'history', @@ -809,7 +849,7 @@ describe('HistoryManager', () => { expect(manager.getHistoryFilePath('session-1')).toBe(filePath); }); - it('should return null if session file does not exist', () => { + it('should return null if session file does not exist', async () => { mockExistsSync.mockReturnValue(false); expect(manager.getHistoryFilePath('nonexistent')).toBeNull(); }); @@ -819,7 +859,7 @@ describe('HistoryManager', () => { // getAllEntries(limit?) 
  // ----------------------------------------------------------------
  describe('getAllEntries()', () => {
-    it('should aggregate entries across all sessions sorted by timestamp', () => {
+    it('should aggregate entries across all sessions sorted by timestamp', async () => {
       mockExistsSync.mockReturnValue(true);
       mockReaddirSync.mockReturnValue([
         'sess_a.json' as unknown as fs.Dirent,
@@ -840,14 +880,14 @@ describe('HistoryManager', () => {
         return '{}';
       });
 
-      const result = manager.getAllEntries();
+      const result = await manager.getAllEntries();
 
       expect(result).toHaveLength(2);
       // Sorted descending: 200, 100
       expect(result[0].id).toBe('b1');
       expect(result[1].id).toBe('a1');
     });
 
-    it('should respect limit parameter', () => {
+    it('should respect limit parameter', async () => {
       mockExistsSync.mockReturnValue(true);
       mockReaddirSync.mockReturnValue(['sess_a.json' as unknown as fs.Dirent]);
@@ -858,15 +898,15 @@ describe('HistoryManager', () => {
       ];
       mockReadFileSync.mockReturnValue(createHistoryFileData('sess_a', entries));
 
-      const result = manager.getAllEntries(2);
+      const result = await manager.getAllEntries(2);
 
       expect(result).toHaveLength(2);
       expect(result[0].id).toBe('e1');
       expect(result[1].id).toBe('e2');
     });
 
-    it('should return empty array when no sessions exist', () => {
+    it('should return empty array when no sessions exist', async () => {
       mockExistsSync.mockReturnValue(false);
-      expect(manager.getAllEntries()).toEqual([]);
+      expect(await manager.getAllEntries()).toEqual([]);
     });
   });
 
@@ -874,7 +914,7 @@ describe('HistoryManager', () => {
   // getAllEntriesPaginated(options?)
  // ----------------------------------------------------------------
  describe('getAllEntriesPaginated()', () => {
-    it('should return paginated results with metadata', () => {
+    it('should return paginated results with metadata', async () => {
       mockExistsSync.mockReturnValue(true);
       mockReaddirSync.mockReturnValue(['sess_a.json' as unknown as fs.Dirent]);
@@ -885,7 +925,7 @@ describe('HistoryManager', () => {
       ];
       mockReadFileSync.mockReturnValue(createHistoryFileData('sess_a', entries));
 
-      const result = manager.getAllEntriesPaginated({ limit: 2, offset: 0 });
+      const result = await manager.getAllEntriesPaginated({ limit: 2, offset: 0 });
 
       expect(result.entries).toHaveLength(2);
       expect(result.total).toBe(3);
       expect(result.limit).toBe(2);
@@ -893,12 +933,12 @@ describe('HistoryManager', () => {
       expect(result.hasMore).toBe(true);
     });
 
-    it('should handle offset beyond total entries', () => {
+    it('should handle offset beyond total entries', async () => {
       mockExistsSync.mockReturnValue(true);
       mockReaddirSync.mockReturnValue(['sess_a.json' as unknown as fs.Dirent]);
       mockReadFileSync.mockReturnValue(createHistoryFileData('sess_a', [createMockEntry()]));
 
-      const result = manager.getAllEntriesPaginated({ limit: 10, offset: 100 });
+      const result = await manager.getAllEntriesPaginated({ limit: 10, offset: 100 });
 
       expect(result.entries).toHaveLength(0);
       expect(result.total).toBe(1);
       expect(result.hasMore).toBe(false);
@@ -909,7 +949,7 @@ describe('HistoryManager', () => {
   // getEntriesByProjectPath(projectPath)
   // ----------------------------------------------------------------
   describe('getEntriesByProjectPath()', () => {
-    it('should return entries matching project path', () => {
+    it('should return entries matching project path', async () => {
       mockExistsSync.mockReturnValue(true);
       mockReaddirSync.mockReturnValue([
         'sess_a.json' as unknown as fs.Dirent,
@@ -938,19 +978,19 @@ describe('HistoryManager', () => {
         return '{}';
       });
 
-      const result = manager.getEntriesByProjectPath('/project/alpha');
+      const result = await manager.getEntriesByProjectPath('/project/alpha');
 
       expect(result).toHaveLength(1);
       expect(result[0].id).toBe('a1');
     });
 
-    it('should return empty array when no matching sessions exist', () => {
+    it('should return empty array when no matching sessions exist', async () => {
       mockExistsSync.mockReturnValue(true);
       mockReaddirSync.mockReturnValue(['sess_a.json' as unknown as fs.Dirent]);
 
       const entry = createMockEntry({ projectPath: '/other/path' });
       mockReadFileSync.mockReturnValue(createHistoryFileData('sess_a', [entry], '/other/path'));
 
-      const result = manager.getEntriesByProjectPath('/no/match');
+      const result = await manager.getEntriesByProjectPath('/no/match');
 
       expect(result).toEqual([]);
     });
   });
 
@@ -959,7 +999,7 @@ describe('HistoryManager', () => {
   // getEntriesByProjectPathPaginated(projectPath, options?)
   // ----------------------------------------------------------------
   describe('getEntriesByProjectPathPaginated()', () => {
-    it('should return paginated results filtered by project path', () => {
+    it('should return paginated results filtered by project path', async () => {
       mockExistsSync.mockReturnValue(true);
       mockReaddirSync.mockReturnValue(['sess_a.json' as unknown as fs.Dirent]);
@@ -970,7 +1010,7 @@ describe('HistoryManager', () => {
       ];
       mockReadFileSync.mockReturnValue(createHistoryFileData('sess_a', entries, '/proj'));
 
-      const result = manager.getEntriesByProjectPathPaginated('/proj', {
+      const result = await manager.getEntriesByProjectPathPaginated('/proj', {
         limit: 2,
         offset: 0,
       });
@@ -984,7 +1024,7 @@ describe('HistoryManager', () => {
   // getEntriesPaginated(sessionId, options?)
  // ----------------------------------------------------------------
  describe('getEntriesPaginated()', () => {
-    it('should return paginated results for a single session', () => {
+    it('should return paginated results for a single session', async () => {
       const entries = [
         createMockEntry({ id: 'e1' }),
         createMockEntry({ id: 'e2' }),
@@ -999,17 +1039,17 @@ describe('HistoryManager', () => {
       mockExistsSync.mockImplementation((p: fs.PathLike) => p.toString() === filePath);
       mockReadFileSync.mockReturnValue(createHistoryFileData('session-1', entries));
 
-      const result = manager.getEntriesPaginated('session-1', { limit: 2, offset: 1 });
+      const result = await manager.getEntriesPaginated('session-1', { limit: 2, offset: 1 });
 
       expect(result.entries).toHaveLength(2);
       expect(result.total).toBe(3);
       expect(result.offset).toBe(1);
       expect(result.hasMore).toBe(false);
     });
 
-    it('should return empty paginated result for nonexistent session', () => {
+    it('should return empty paginated result for nonexistent session', async () => {
       mockExistsSync.mockReturnValue(false);
 
-      const result = manager.getEntriesPaginated('nonexistent');
+      const result = await manager.getEntriesPaginated('nonexistent');
 
       expect(result.entries).toEqual([]);
       expect(result.total).toBe(0);
     });
@@ -1019,7 +1059,7 @@ describe('HistoryManager', () => {
   // updateSessionNameByClaudeSessionId(agentSessionId, sessionName)
   // ----------------------------------------------------------------
   describe('updateSessionNameByClaudeSessionId()', () => {
-    it('should update sessionName for matching entries and return count', () => {
+    it('should update sessionName for matching entries and return count', async () => {
       mockExistsSync.mockReturnValue(true);
       mockReaddirSync.mockReturnValue(['sess_a.json' as unknown as fs.Dirent]);
@@ -1042,7 +1082,7 @@ describe('HistoryManager', () => {
       ];
       mockReadFileSync.mockReturnValue(createHistoryFileData('sess_a', entries));
 
-      const count = manager.updateSessionNameByClaudeSessionId('agent-123', 'new-name');
+      const count = await manager.updateSessionNameByClaudeSessionId('agent-123', 'new-name');
       expect(count).toBe(2);
 
       const written = JSON.parse(mockWriteFileSync.mock.calls[0][1] as string);
@@ -1051,19 +1091,19 @@ describe('HistoryManager', () => {
       expect(written.entries[2].sessionName).toBe('other');
     });
 
-    it('should return 0 when no entries match', () => {
+    it('should return 0 when no entries match', async () => {
       mockExistsSync.mockReturnValue(true);
       mockReaddirSync.mockReturnValue(['sess_a.json' as unknown as fs.Dirent]);
 
       const entries = [createMockEntry({ id: 'e1', agentSessionId: 'agent-999' })];
       mockReadFileSync.mockReturnValue(createHistoryFileData('sess_a', entries));
 
-      const count = manager.updateSessionNameByClaudeSessionId('no-match', 'new-name');
+      const count = await manager.updateSessionNameByClaudeSessionId('no-match', 'new-name');
 
       expect(count).toBe(0);
       expect(mockWriteFileSync).not.toHaveBeenCalled();
     });
 
-    it('should not update entries that already have the correct sessionName', () => {
+    it('should not update entries that already have the correct sessionName', async () => {
       mockExistsSync.mockReturnValue(true);
       mockReaddirSync.mockReturnValue(['sess_a.json' as unknown as fs.Dirent]);
@@ -1076,19 +1116,22 @@ describe('HistoryManager', () => {
       ];
       mockReadFileSync.mockReturnValue(createHistoryFileData('sess_a', entries));
 
-      const count = manager.updateSessionNameByClaudeSessionId('agent-123', 'already-correct');
+      const count = await manager.updateSessionNameByClaudeSessionId(
+        'agent-123',
+        'already-correct'
+      );
 
       expect(count).toBe(0);
       expect(mockWriteFileSync).not.toHaveBeenCalled();
     });
 
-    it('should handle read errors gracefully', () => {
+    it('should handle read errors gracefully', async () => {
       mockExistsSync.mockReturnValue(true);
       mockReaddirSync.mockReturnValue(['sess_a.json' as unknown as fs.Dirent]);
       mockReadFileSync.mockImplementation(() => {
         throw new Error('Read error');
       });
 
-      const count = manager.updateSessionNameByClaudeSessionId('agent-123', 'new-name');
+      const count = await manager.updateSessionNameByClaudeSessionId('agent-123', 'new-name');
 
       expect(count).toBe(0);
       expect(vi.mocked(logger.warn)).toHaveBeenCalled();
     });
 
@@ -1098,7 +1141,7 @@ describe('HistoryManager', () => {
   // clearByProjectPath(projectPath)
   // ----------------------------------------------------------------
   describe('clearByProjectPath()', () => {
-    it('should clear sessions matching the project path', () => {
+    it('should clear sessions matching the project path', async () => {
       mockExistsSync.mockReturnValue(true);
       mockReaddirSync.mockReturnValue([
         'sess_a.json' as unknown as fs.Dirent,
@@ -1119,21 +1162,21 @@ describe('HistoryManager', () => {
         return '{}';
       });
 
-      manager.clearByProjectPath('/target/project');
+      await manager.clearByProjectPath('/target/project');
 
       // Should only unlink sess_a
       expect(mockUnlinkSync).toHaveBeenCalledTimes(1);
       expect(mockUnlinkSync.mock.calls[0][0].toString()).toContain('sess_a.json');
     });
 
-    it('should do nothing when no sessions match', () => {
+    it('should do nothing when no sessions match', async () => {
       mockExistsSync.mockReturnValue(true);
       mockReaddirSync.mockReturnValue(['sess_a.json' as unknown as fs.Dirent]);
 
       const entry = createMockEntry({ projectPath: '/other' });
       mockReadFileSync.mockReturnValue(createHistoryFileData('sess_a', [entry], '/other'));
 
-      manager.clearByProjectPath('/no/match');
+      await manager.clearByProjectPath('/no/match');
 
       expect(mockUnlinkSync).not.toHaveBeenCalled();
     });
   });
 
@@ -1142,7 +1185,7 @@ describe('HistoryManager', () => {
   // clearAll()
   // ----------------------------------------------------------------
   describe('clearAll()', () => {
-    it('should clear all session files', () => {
+    it('should clear all session files', async () => {
       mockExistsSync.mockReturnValue(true);
       mockReaddirSync.mockReturnValue([
         'sess_a.json' as unknown as fs.Dirent,
@@ -1155,7 +1198,7 @@ describe('HistoryManager', () => {
       expect(mockUnlinkSync).toHaveBeenCalledTimes(3);
     });
 
-    it('should handle empty history directory', () => {
+    it('should handle empty history directory', async () => {
       mockExistsSync.mockReturnValue(true);
       mockReaddirSync.mockReturnValue([]);
@@ -1169,7 +1212,7 @@ describe('HistoryManager', () => {
   // startWatching / stopWatching
   // ----------------------------------------------------------------
   describe('startWatching() / stopWatching()', () => {
-    it('should start watching history directory for changes', () => {
+    it('should start watching history directory for changes', async () => {
       const mockWatcher = { close: vi.fn() } as unknown as fs.FSWatcher;
       mockWatch.mockReturnValue(mockWatcher);
       mockExistsSync.mockReturnValue(true);
@@ -1183,7 +1226,7 @@ describe('HistoryManager', () => {
       );
     });
 
-    it('should create directory if it does not exist before watching', () => {
+    it('should create directory if it does not exist before watching', async () => {
       const mockWatcher = { close: vi.fn() } as unknown as fs.FSWatcher;
       mockWatch.mockReturnValue(mockWatcher);
       mockExistsSync.mockReturnValue(false);
@@ -1195,7 +1238,7 @@ describe('HistoryManager', () => {
       });
     });
 
-    it('should invoke callback when a .json file changes', () => {
+    it('should invoke callback when a .json file changes', async () => {
       const mockWatcher = { close: vi.fn() } as unknown as fs.FSWatcher;
       let watchCallback: (event: string, filename: string | null) => void = () => {};
       mockWatch.mockImplementation((_dir: string, cb: unknown) => {
@@ -1213,7 +1256,7 @@ describe('HistoryManager', () => {
       expect(callback).toHaveBeenCalledWith('session_1');
     });
 
-    it('should not invoke callback for non-json files', () => {
+    it('should not invoke callback for non-json files', async () => {
       const mockWatcher = { close: vi.fn() } as unknown as fs.FSWatcher;
       let watchCallback: (event: string, filename: string | null) => void = () => {};
       mockWatch.mockImplementation((_dir: string, cb: unknown) => {
@@ -1229,7 +1272,7 @@ describe('HistoryManager', () => {
       expect(callback).not.toHaveBeenCalled();
     });
 
-    it('should not invoke callback when filename is null', () => {
+    it('should not invoke callback when filename is null', async () => {
       const mockWatcher = { close: vi.fn() } as unknown as fs.FSWatcher;
       let watchCallback: (event: string, filename: string | null) => void = () => {};
       mockWatch.mockImplementation((_dir: string, cb: unknown) => {
@@ -1245,7 +1288,7 @@ describe('HistoryManager', () => {
       expect(callback).not.toHaveBeenCalled();
     });
 
-    it('should not start watching again if already watching', () => {
+    it('should not start watching again if already watching', async () => {
       const mockWatcher = { close: vi.fn() } as unknown as fs.FSWatcher;
       mockWatch.mockReturnValue(mockWatcher);
       mockExistsSync.mockReturnValue(true);
@@ -1256,7 +1299,7 @@ describe('HistoryManager', () => {
       expect(mockWatch).toHaveBeenCalledTimes(1);
     });
 
-    it('should stop watching and close watcher', () => {
+    it('should stop watching and close watcher', async () => {
       const mockWatcher = { close: vi.fn() } as unknown as fs.FSWatcher;
       mockWatch.mockReturnValue(mockWatcher);
       mockExistsSync.mockReturnValue(true);
@@ -1267,7 +1310,7 @@ describe('HistoryManager', () => {
       expect(mockWatcher.close).toHaveBeenCalled();
     });
 
-    it('should allow re-watching after stop', () => {
+    it('should allow re-watching after stop', async () => {
       const mockWatcher1 = { close: vi.fn() } as unknown as fs.FSWatcher;
       const mockWatcher2 = { close: vi.fn() } as unknown as fs.FSWatcher;
       mockWatch.mockReturnValueOnce(mockWatcher1).mockReturnValueOnce(mockWatcher2);
@@ -1280,7 +1323,7 @@ describe('HistoryManager', () => {
       expect(mockWatch).toHaveBeenCalledTimes(2);
     });
 
-    it('should be safe to call stopWatching when not watching', () => {
+    it('should be safe to call stopWatching when not watching', async () => {
       // Should not throw
       expect(() => manager.stopWatching()).not.toThrow();
     });
@@ -1290,12 +1333,12 @@ describe('HistoryManager', () => {
   // getHistoryManager() singleton
   // ----------------------------------------------------------------
   describe('getHistoryManager()', () => {
-    it('should return a HistoryManager instance', () => {
+    it('should return a HistoryManager instance', async () => {
       const instance = getHistoryManager();
       expect(instance).toBeInstanceOf(HistoryManager);
     });
 
-    it('should return the same instance on subsequent calls', () => {
+    it('should return the same instance on subsequent calls', async () => {
       const instance1 = getHistoryManager();
       const instance2 = getHistoryManager();
 
       expect(instance1).toBe(instance2);
@@ -1306,11 +1349,11 @@ describe('HistoryManager', () => {
   // sanitizeSessionId integration (uses real shared function)
   // ----------------------------------------------------------------
   describe('session ID sanitization', () => {
-    it('should sanitize session IDs with special characters for file paths', () => {
+    it('should sanitize session IDs with special characters for file paths', async () => {
       mockExistsSync.mockReturnValue(false);
       const entry = createMockEntry({ id: 'e1' });
 
-      manager.addEntry('session/with:special.chars!', '/test', entry);
+      await manager.addEntry('session/with:special.chars!', '/test', entry);
 
       const writtenPath = mockWriteFileSync.mock.calls[0][0] as string;
       // Should not contain /, :, ., or ! in the filename portion
diff --git a/src/__tests__/main/ipc/handlers/agentSessions.test.ts b/src/__tests__/main/ipc/handlers/agentSessions.test.ts
index baccd997e..e2c002ef8 100644
--- a/src/__tests__/main/ipc/handlers/agentSessions.test.ts
+++ b/src/__tests__/main/ipc/handlers/agentSessions.test.ts
@@ -7,8 +7,15 @@
 import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
 import { ipcMain } from 'electron';
-import { registerAgentSessionsHandlers } from '../../../../main/ipc/handlers/agentSessions';
+import fs from 'fs/promises';
+import {
+  registerAgentSessionsHandlers,
+  __clearSessionDiscoveryCacheForTests,
+} from '../../../../main/ipc/handlers/agentSessions';
 import * as agentSessionStorage from '../../../../main/agents';
+import * as statsCache from '../../../../main/utils/statsCache';
+import os from 'os';
+import path from 'path';
 
 // Mock electron's ipcMain
 vi.mock('electron', () => ({
@@ -34,6 +41,85 @@
     debug: vi.fn(),
   },
 }));
+
+// Mock fs/promises for global stats discovery scanning
+vi.mock('fs/promises', () => {
+  const fsPromisesMock = {
+    access: vi.fn(),
+    readdir: vi.fn(),
+    stat: vi.fn(),
+    readFile: vi.fn(),
+    writeFile: vi.fn(),
+    mkdir: vi.fn(),
+  };
+
+  return {
+    ...fsPromisesMock,
+    default: fsPromisesMock,
+  };
+});
+
+// Mock global stats cache so getGlobalStats remains deterministic
+vi.mock('../../../../main/utils/statsCache', () => ({
+  loadGlobalStatsCache: vi.fn(),
+  saveGlobalStatsCache: vi.fn(),
+  GLOBAL_STATS_CACHE_VERSION: 3,
+}));
+
+function setupInMemoryGlobalStatsCache() {
+  let cache: statsCache.GlobalStatsCache | null = null;
+
+  vi.mocked(statsCache.loadGlobalStatsCache).mockImplementation(async () => cache);
+  vi.mocked(statsCache.saveGlobalStatsCache).mockImplementation(async (nextCache) => {
+    cache = nextCache;
+  });
+
+  return () => cache;
+}
+
+function setupSingleClaudeSessionDiscoveryMock() {
+  const homeDir = os.homedir();
+  const claudeProjectsDir = path.join(homeDir, '.claude', 'projects');
+  const codexSessionsDir = path.join(homeDir, '.codex', 'sessions');
+  const projectDir = path.join(claudeProjectsDir, 'project-one');
+  const sessionFilePath = path.join(projectDir, 'abc.jsonl');
+
+  vi.mocked(fs.access).mockResolvedValue(undefined);
+  vi.mocked(fs.readdir).mockImplementation(async (target) => {
+    switch (target) {
+      case claudeProjectsDir:
+        return ['project-one'];
+      case projectDir:
+        return ['abc.jsonl'];
+      case codexSessionsDir:
+        return [];
+      default:
+        return [];
+    }
+  });
+  vi.mocked(fs.stat).mockImplementation(async (target) => {
+    if (target === projectDir) {
+      return {
+        isDirectory: () => true,
+        size: 0,
+        mtimeMs: 1,
+      } as fs.Stats;
+    }
+
+    if (target === sessionFilePath) {
+      return {
+        isDirectory: () => false,
+        size: 123,
+        mtimeMs: 1_700_000,
+      } as fs.Stats;
+    }
+
+    return {
+      isDirectory: () => false,
+      size: 0,
+      mtimeMs: 1,
+    } as fs.Stats;
+  });
+  vi.mocked(fs.readFile).mockResolvedValue('{"type":"user"}\n');
+}
 
 describe('agentSessions IPC handlers', () => {
   let handlers: Map;
@@ -41,6 +127,7 @@ describe('agentSessions IPC handlers', () => {
   beforeEach(() => {
     // Clear mocks
     vi.clearAllMocks();
+    __clearSessionDiscoveryCacheForTests();
 
     // Capture all registered handlers
     handlers = new Map();
@@ -67,6 +154,7 @@ describe('agentSessions IPC handlers', () => {
       'agentSessions:deleteMessagePair',
       'agentSessions:hasStorage',
       'agentSessions:getAvailableStorages',
+      'agentSessions:getGlobalStats',
     ];
 
     for (const channel of expectedChannels) {
@@ -466,4 +554,74 @@ describe('agentSessions IPC handlers', () => {
       expect(result).toEqual(['claude-code', 'opencode']);
     });
   });
+
+  describe('agentSessions:getGlobalStats', () => {
+    it('reuses discovered session file list within the 30-second cache window', async () => {
+      const getCache = setupInMemoryGlobalStatsCache();
+      setupSingleClaudeSessionDiscoveryMock();
+
+      const handler = handlers.get('agentSessions:getGlobalStats');
+
+      await handler!({} as any);
+      const firstPassAccessCalls = vi.mocked(fs.access).mock.calls.length;
+      const firstPassReaddirCalls = vi.mocked(fs.readdir).mock.calls.length;
+      const firstPassStatCalls = vi.mocked(fs.stat).mock.calls.length;
+      const firstPassReadFileCalls = vi.mocked(fs.readFile).mock.calls.length;
+
+      await handler!({} as any);
+      const secondPassAccessCalls = vi.mocked(fs.access).mock.calls.length;
+      const secondPassReaddirCalls = vi.mocked(fs.readdir).mock.calls.length;
+      const secondPassStatCalls = vi.mocked(fs.stat).mock.calls.length;
+      const secondPassReadFileCalls = vi.mocked(fs.readFile).mock.calls.length;
+
+      expect(firstPassAccessCalls).toBe(2);
+      expect(firstPassReaddirCalls).toBe(3);
+      expect(firstPassStatCalls).toBe(3);
+      expect(firstPassReadFileCalls).toBe(1);
+
+      expect(secondPassAccessCalls).toBe(firstPassAccessCalls);
+      expect(secondPassReaddirCalls).toBe(firstPassReaddirCalls);
+      expect(secondPassStatCalls).toBe(firstPassStatCalls);
+      expect(secondPassReadFileCalls).toBe(firstPassReadFileCalls);
+
+      const cache = getCache();
+      expect(cache).toBeTruthy();
+      expect(cache!.providers['claude-code'].sessions['project-one/abc']).toBeDefined();
+    });
+
+    it('refreshes discovery when cache TTL has expired', async () => {
+      const getCache = setupInMemoryGlobalStatsCache();
+      setupSingleClaudeSessionDiscoveryMock();
+
+      let now = 1_700_000_000_000;
+      const dateNowSpy = vi.spyOn(Date, 'now').mockImplementation(() => now);
+      const handler = handlers.get('agentSessions:getGlobalStats');
+      try {
+        await handler!({} as any);
+        expect(vi.mocked(fs.access).mock.calls).toHaveLength(2);
+        expect(vi.mocked(fs.readdir).mock.calls).toHaveLength(3);
+        expect(vi.mocked(fs.stat).mock.calls).toHaveLength(3);
+        expect(vi.mocked(fs.readFile).mock.calls).toHaveLength(1);
+
+        await handler!({} as any);
+        expect(vi.mocked(fs.access).mock.calls).toHaveLength(2);
+        expect(vi.mocked(fs.readdir).mock.calls).toHaveLength(3);
+        expect(vi.mocked(fs.stat).mock.calls).toHaveLength(3);
+        expect(vi.mocked(fs.readFile).mock.calls).toHaveLength(1);
+
+        now += 31_000;
+        await handler!({} as any);
+        expect(vi.mocked(fs.access).mock.calls).toHaveLength(4);
+        expect(vi.mocked(fs.readdir).mock.calls).toHaveLength(6);
+        expect(vi.mocked(fs.stat).mock.calls).toHaveLength(6);
+        expect(vi.mocked(fs.readFile).mock.calls).toHaveLength(1);
+
+        const cache = getCache();
+        expect(cache).toBeTruthy();
+        expect(cache!.providers['claude-code'].sessions['project-one/abc']).toBeDefined();
+      } finally {
+        dateNowSpy.mockRestore();
+      }
+    });
+  });
 });
diff --git a/src/__tests__/main/ipc/handlers/director-notes.test.ts b/src/__tests__/main/ipc/handlers/director-notes.test.ts
index 778946efc..441ccac39 100644
--- a/src/__tests__/main/ipc/handlers/director-notes.test.ts
+++ b/src/__tests__/main/ipc/handlers/director-notes.test.ts
@@ -94,8 +94,8 @@ describe('director-notes IPC handlers', () => {
 
     // Create mock history manager
     mockHistoryManager = {
-      getEntries: vi.fn().mockReturnValue([]),
-      listSessionsWithHistory: vi.fn().mockReturnValue([]),
+      getEntries: vi.fn().mockResolvedValue([]),
+      listSessionsWithHistory: vi.fn().mockResolvedValue([]),
       getHistoryFilePath: vi.fn().mockReturnValue(null),
     };
@@ -141,13 +141,13 @@ describe('director-notes IPC handlers', () => {
   describe('director-notes:getUnifiedHistory', () => {
     it('should aggregate history from all sessions', async () => {
       const now = Date.now();
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue([
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue([
         'session-1',
         'session-2',
       ]);
       vi.mocked(mockHistoryManager.getEntries)
-        .mockReturnValueOnce([
+        .mockResolvedValueOnce([
           createMockEntry({
             id: 'e1',
             timestamp: now - 1000,
@@ -155,7 +155,7 @@ describe('director-notes IPC handlers', () => {
             sessionName: 'Agent A',
           }),
         ])
-        .mockReturnValueOnce([
+        .mockResolvedValueOnce([
           createMockEntry({
             id: 'e2',
             timestamp: now - 2000,
@@ -178,13 +178,13 @@ describe('director-notes IPC handlers', () => {
 
     it('should include stats in the response', async () => {
       const now = Date.now();
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue([
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue([
         'session-1',
         'session-2',
       ]);
       vi.mocked(mockHistoryManager.getEntries)
-        .mockReturnValueOnce([
+        .mockResolvedValueOnce([
           createMockEntry({
             id: 'e1',
             type: 'AUTO',
@@ -198,7 +198,7 @@ describe('director-notes IPC handlers', () => {
             agentSessionId: 'as-1',
           }),
         ])
-        .mockReturnValueOnce([
+        .mockResolvedValueOnce([
           createMockEntry({
             id: 'e3',
             type: 'AUTO',
@@ -226,8 +226,8 @@ describe('director-notes IPC handlers', () => {
 
     it('should compute stats from unfiltered data when type filter is applied', async () => {
       const now = Date.now();
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue(['session-1']);
-      vi.mocked(mockHistoryManager.getEntries).mockReturnValue([
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue(['session-1']);
+      vi.mocked(mockHistoryManager.getEntries).mockResolvedValue([
         createMockEntry({ id: 'e1', type: 'AUTO', timestamp: now - 1000, agentSessionId: 'as-1' }),
         createMockEntry({ id: 'e2', type: 'USER', timestamp: now - 2000, agentSessionId: 'as-1' }),
         createMockEntry({ id: 'e3', type: 'AUTO', timestamp: now - 3000, agentSessionId: 'as-2' }),
@@ -249,8 +249,8 @@ describe('director-notes IPC handlers', () => {
       const twoDaysAgo = now - 2 * 24 * 60 * 60 * 1000;
       const tenDaysAgo = now - 10 * 24 * 60 * 60 * 1000;
 
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue(['session-1']);
-      vi.mocked(mockHistoryManager.getEntries).mockReturnValue([
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue(['session-1']);
+      vi.mocked(mockHistoryManager.getEntries).mockResolvedValue([
         createMockEntry({ id: 'recent', timestamp: twoDaysAgo }),
         createMockEntry({ id: 'old', timestamp: tenDaysAgo }),
       ]);
@@ -267,8 +267,8 @@ describe('director-notes IPC handlers', () => {
       const twoDaysAgo = now - 2 * 24 * 60 * 60 * 1000;
       const yearAgo = now - 365 * 24 * 60 * 60 * 1000;
 
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue(['session-1']);
-      vi.mocked(mockHistoryManager.getEntries).mockReturnValue([
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue(['session-1']);
+      vi.mocked(mockHistoryManager.getEntries).mockResolvedValue([
         createMockEntry({ id: 'recent', timestamp: twoDaysAgo }),
         createMockEntry({ id: 'ancient', timestamp: yearAgo }),
       ]);
@@ -283,8 +283,8 @@ describe('director-notes IPC handlers', () => {
 
     it('should filter by type when filter is provided', async () => {
       const now = Date.now();
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue(['session-1']);
-      vi.mocked(mockHistoryManager.getEntries).mockReturnValue([
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue(['session-1']);
+      vi.mocked(mockHistoryManager.getEntries).mockResolvedValue([
         createMockEntry({ id: 'auto-entry', type: 'AUTO', timestamp: now - 1000 }),
         createMockEntry({ id: 'user-entry', type: 'USER', timestamp: now - 2000 }),
       ]);
@@ -298,8 +298,8 @@ describe('director-notes IPC handlers', () => {
 
     it('should return both types when filter is null', async () => {
       const now = Date.now();
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue(['session-1']);
-      vi.mocked(mockHistoryManager.getEntries).mockReturnValue([
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue(['session-1']);
+      vi.mocked(mockHistoryManager.getEntries).mockResolvedValue([
         createMockEntry({ id: 'auto-entry', type: 'AUTO', timestamp: now - 1000 }),
         createMockEntry({ id: 'user-entry', type: 'USER', timestamp: now - 2000 }),
       ]);
@@ -312,15 +312,15 @@ describe('director-notes IPC handlers', () => {
 
     it('should return entries sorted by timestamp descending', async () => {
       const now = Date.now();
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue([
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue([
         'session-1',
         'session-2',
       ]);
       // Session 1 has older entry, session 2 has newer entry
       vi.mocked(mockHistoryManager.getEntries)
-        .mockReturnValueOnce([createMockEntry({ id: 'oldest', timestamp: now - 3000 })])
-        .mockReturnValueOnce([
+        .mockResolvedValueOnce([createMockEntry({ id: 'oldest', timestamp: now - 3000 })])
+        .mockResolvedValueOnce([
           createMockEntry({ id: 'newest', timestamp: now - 1000 }),
           createMockEntry({ id: 'middle', timestamp: now - 2000 }),
         ]);
@@ -336,8 +336,8 @@ describe('director-notes IPC handlers', () => {
 
     it('should use Maestro session name when available in sessions store', async () => {
       const now = Date.now();
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue(['session-1']);
-      vi.mocked(mockHistoryManager.getEntries).mockReturnValue([
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue(['session-1']);
+      vi.mocked(mockHistoryManager.getEntries).mockResolvedValue([
         createMockEntry({ id: 'e1', timestamp: now, sessionName: 'Tab Name' }),
       ]);
@@ -363,8 +363,8 @@ describe('director-notes IPC handlers', () => {
 
     it('should set agentName to undefined when Maestro session not found in store', async () => {
       const now = Date.now();
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue(['session-1']);
-      vi.mocked(mockHistoryManager.getEntries).mockReturnValue([
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue(['session-1']);
+      vi.mocked(mockHistoryManager.getEntries).mockResolvedValue([
         createMockEntry({ id: 'e1', timestamp: now, sessionName: 'My Agent' }),
       ]);
@@ -392,8 +392,8 @@ describe('director-notes IPC handlers', () => {
 
     it('should set agentName to undefined when session is not in store', async () => {
       const now = Date.now();
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue(['claude-abc123']);
-      vi.mocked(mockHistoryManager.getEntries).mockReturnValue([
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue(['claude-abc123']);
+      vi.mocked(mockHistoryManager.getEntries).mockResolvedValue([
         createMockEntry({ id: 'e1', timestamp: now, sessionName: undefined }),
       ]);
@@ -405,7 +405,7 @@ describe('director-notes IPC handlers', () => {
     });
 
     it('should return empty entries when no sessions have history', async () => {
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue([]);
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue([]);
 
       const handler = handlers.get('director-notes:getUnifiedHistory');
       const result = await handler!({} as any, { lookbackDays: 7 });
@@ -417,8 +417,8 @@ describe('director-notes IPC handlers', () => {
 
     it('should return empty entries when all entries are outside lookback window', async () => {
       const thirtyDaysAgo = Date.now() - 30 * 24 * 60 * 60 * 1000;
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue(['session-1']);
-      vi.mocked(mockHistoryManager.getEntries).mockReturnValue([
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue(['session-1']);
+      vi.mocked(mockHistoryManager.getEntries).mockResolvedValue([
         createMockEntry({ id: 'old', timestamp: thirtyDaysAgo }),
       ]);
@@ -432,8 +432,8 @@ describe('director-notes IPC handlers', () => {
 
     it('should support pagination with limit and offset', async () => {
       const now = Date.now();
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue(['session-1']);
-      vi.mocked(mockHistoryManager.getEntries).mockReturnValue([
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue(['session-1']);
+      vi.mocked(mockHistoryManager.getEntries).mockResolvedValue([
         createMockEntry({ id: 'e1', timestamp: now - 1000 }),
         createMockEntry({ id: 'e2', timestamp: now - 2000 }),
         createMockEntry({ id: 'e3', timestamp: now - 3000 }),
@@ -480,7 +480,7 @@ describe('director-notes IPC handlers', () => {
     });
 
     it('should return empty-history message when no sessions have history files', async () => {
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue([]);
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue([]);
 
       const handler = handlers.get('director-notes:generateSynopsis');
       const result = await handler!({} as any, { lookbackDays: 7, provider: 'claude-code' });
@@ -493,7 +493,7 @@ describe('director-notes IPC handlers', () => {
     });
 
     it('should return empty-history message when all file paths are null', async () => {
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue(['session-1']);
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue(['session-1']);
       vi.mocked(mockHistoryManager.getHistoryFilePath).mockReturnValue(null);
 
       const handler = handlers.get('director-notes:generateSynopsis');
@@ -511,7 +511,7 @@ describe('director-notes IPC handlers', () => {
         completionReason: 'process exited with code 0',
       });
 
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue(['session-1']);
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue(['session-1']);
       vi.mocked(mockHistoryManager.getHistoryFilePath).mockReturnValue(
         '/data/history/session-1.json'
       );
@@ -542,7 +542,7 @@ describe('director-notes IPC handlers', () => {
         completionReason: 'process exited with code 0',
       });
 
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue([
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue([
         'session-1',
         'session-2',
         'session-3',
@@ -569,7 +569,7 @@ describe('director-notes IPC handlers', () => {
         completionReason: 'process exited with code 0',
       });
 
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue(['session-1']);
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue(['session-1']);
       vi.mocked(mockHistoryManager.getHistoryFilePath).mockReturnValue(
         '/data/history/session-1.json'
       );
@@ -605,7 +605,7 @@ describe('director-notes IPC handlers', () => {
         completionReason: 'process exited with code 0',
       });
 
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue(['session-1']);
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue(['session-1']);
       vi.mocked(mockHistoryManager.getHistoryFilePath).mockReturnValue(
         '/data/history/session-1.json'
       );
@@ -640,7 +640,7 @@ describe('director-notes IPC handlers', () => {
         completionReason: 'process exited with code 0',
       });
 
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue(['unknown-session']);
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue(['unknown-session']);
       vi.mocked(mockHistoryManager.getHistoryFilePath).mockReturnValue(
         '/data/history/unknown-session.json'
       );
@@ -656,7 +656,7 @@ describe('director-notes IPC handlers', () => {
       const { groomContext } = await import('../../../../main/utils/context-groomer');
       vi.mocked(groomContext).mockRejectedValue(new Error('Agent timed out'));
 
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue(['session-1']);
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue(['session-1']);
       vi.mocked(mockHistoryManager.getHistoryFilePath).mockReturnValue(
         '/data/history/session-1.json'
       );
@@ -676,7 +676,7 @@ describe('director-notes IPC handlers', () => {
         completionReason: 'process exited with code 0',
       });
 
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue(['session-1']);
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue(['session-1']);
       vi.mocked(mockHistoryManager.getHistoryFilePath).mockReturnValue(
         '/data/history/session-1.json'
       );
@@ -696,7 +696,7 @@ describe('director-notes IPC handlers', () => {
         completionReason: 'process exited with code 0',
       });
 
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue(['session-1']);
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue(['session-1']);
       vi.mocked(mockHistoryManager.getHistoryFilePath).mockReturnValue(
         '/data/history/session-1.json'
       );
@@ -732,7 +732,7 @@ describe('director-notes IPC handlers', () => {
         completionReason: 'process exited with code 0',
       });
 
-      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue(['session-1']);
+      vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue(['session-1']);
       vi.mocked(mockHistoryManager.getHistoryFilePath).mockReturnValue(
         '/data/history/session-1.json'
       );
diff --git a/src/__tests__/main/ipc/handlers/git.test.ts b/src/__tests__/main/ipc/handlers/git.test.ts
index 9c5287b0c..0aeb19447 100644
--- a/src/__tests__/main/ipc/handlers/git.test.ts
+++ b/src/__tests__/main/ipc/handlers/git.test.ts
@@ -11,6 +11,27 @@
 import { registerGitHandlers } from '../../../../main/ipc/handlers/git';
 import * as execFile from '../../../../main/utils/execFile';
 import path from 'path';
 
+type Deferred<T> = {
+  promise: Promise<T>;
+  resolve: (value: T) => void;
+  reject: (reason?: unknown) => void;
+};
+
+const createDeferred = <T>(): Deferred<T> => {
+  let resolve!: (value: T) => void;
+  let reject!: (reason?: unknown) => void;
+  const promise = new Promise<T>((res, rej) => {
+    resolve = res;
+    reject = rej;
+  });
+
+  return {
+    promise,
+    resolve,
reject, + }; +}; + // Mock electron's ipcMain vi.mock('electron', () => ({ ipcMain: { @@ -3105,6 +3126,78 @@ export function Component() { }); }); + it('should check local branches in parallel when remote branch info is unavailable', async () => { + const callOrder: string[] = []; + const mainDeferred = createDeferred<{ + stdout: string; + stderr: string; + exitCode: number; + }>(); + const masterDeferred = createDeferred<{ + stdout: string; + stderr: string; + exitCode: number; + }>(); + + vi.mocked(execFile.execFileNoThrow).mockImplementation( + async (_cmd: string, args?: string[]) => { + if (args?.includes('show')) { + callOrder.push('remote'); + return { + stdout: `* remote origin + Fetch URL: git@github.com:user/repo.git + Push URL: git@github.com:user/repo.git + Remote branches: + feature tracked`, + stderr: '', + exitCode: 0, + }; + } + + if (args?.includes('--verify') && args?.includes('main')) { + callOrder.push('main'); + return mainDeferred.promise; + } + + if (args?.includes('--verify') && args?.includes('master')) { + callOrder.push('master'); + return masterDeferred.promise; + } + + return { + stdout: '', + stderr: `Unexpected command: ${args?.join(' ')}`, + exitCode: 1, + }; + } + ); + + const handler = handlers.get('git:getDefaultBranch'); + const handlerPromise = handler!({} as any, '/test/repo'); + + await Promise.resolve(); + + expect(callOrder).toEqual(['remote', 'main', 'master']); + + mainDeferred.resolve({ + stdout: '', + stderr: 'fatal: Needed a single revision', + exitCode: 128, + }); + masterDeferred.resolve({ + stdout: 'abc123def456\n', + stderr: '', + exitCode: 0, + }); + + const result = await handlerPromise; + + expect(result).toEqual({ + success: true, + branch: 'master', + }); + }); + it('should fallback to main branch when remote check fails but main exists locally', async () => { vi.mocked(execFile.execFileNoThrow) .mockResolvedValueOnce({ @@ -3492,6 +3585,78 @@ branch refs/heads/bugfix-123 mockFs = (await 
import('fs/promises')).default; }); + it('should run directory metadata git commands while show-toplevel is pending', async () => { + const callOrder: string[] = []; + const toplevelDeferred = createDeferred<{ + stdout: string; + stderr: string; + exitCode: number; + }>(); + + vi.mocked(mockFs.readdir).mockResolvedValue([ + { name: 'main-repo', isDirectory: () => true }, + ] as any); + + vi.mocked(execFile.execFileNoThrow).mockImplementation(async (_cmd, args) => { + const command = args?.join(' ') || ''; + + if (command.includes('--is-inside-work-tree')) { + callOrder.push('isInside'); + return { stdout: 'true\n', stderr: '', exitCode: 0 }; + } + + if (command.includes('--show-toplevel')) { + callOrder.push('toplevel'); + return toplevelDeferred.promise; + } + + if (command.includes('--git-dir')) { + callOrder.push('gitDir'); + return { stdout: '.git', stderr: '', exitCode: 0 }; + } + + if (command.includes('--git-common-dir')) { + callOrder.push('gitCommonDir'); + return { stdout: '.git', stderr: '', exitCode: 0 }; + } + + if (command.includes('--abbrev-ref')) { + callOrder.push('abbrevRef'); + return { stdout: 'main\n', stderr: '', exitCode: 0 }; + } + + return { stdout: '', stderr: '', exitCode: 0 }; + }); + + const handler = handlers.get('git:scanWorktreeDirectory'); + const resultPromise = handler!({} as any, '/parent'); + + await Promise.resolve(); + + expect(callOrder[0]).toBe('isInside'); + expect(callOrder[1]).toBe('toplevel'); + expect(callOrder.length).toBe(5); + expect(callOrder.includes('gitDir')).toBe(true); + expect(callOrder.includes('gitCommonDir')).toBe(true); + expect(callOrder.includes('abbrevRef')).toBe(true); + + toplevelDeferred.resolve({ stdout: '/parent/main-repo', stderr: '', exitCode: 0 }); + const result = await resultPromise; + + expect(result).toEqual({ + success: true, + gitSubdirs: [ + { + path: path.join('/parent', 'main-repo'), + name: 'main-repo', + isWorktree: false, + branch: 'main', + repoRoot: '/parent/main-repo', + }, + ], + 
}); + }); + it('should find git repositories and worktrees in directory', async () => { // Mock fs.readdir to return directory entries vi.mocked(mockFs.readdir).mockResolvedValue([ diff --git a/src/__tests__/main/ipc/handlers/history.test.ts b/src/__tests__/main/ipc/handlers/history.test.ts index e612489d7..89203e26f 100644 --- a/src/__tests__/main/ipc/handlers/history.test.ts +++ b/src/__tests__/main/ipc/handlers/history.test.ts @@ -56,39 +56,39 @@ describe('history IPC handlers', () => { // Create mock history manager mockHistoryManager = { - getEntries: vi.fn().mockReturnValue([]), - getEntriesByProjectPath: vi.fn().mockReturnValue([]), - getAllEntries: vi.fn().mockReturnValue([]), - getEntriesPaginated: vi.fn().mockReturnValue({ + getEntries: vi.fn().mockResolvedValue([]), + getEntriesByProjectPath: vi.fn().mockResolvedValue([]), + getAllEntries: vi.fn().mockResolvedValue([]), + getEntriesPaginated: vi.fn().mockResolvedValue({ entries: [], total: 0, limit: 100, offset: 0, hasMore: false, }), - getEntriesByProjectPathPaginated: vi.fn().mockReturnValue({ + getEntriesByProjectPathPaginated: vi.fn().mockResolvedValue({ entries: [], total: 0, limit: 100, offset: 0, hasMore: false, }), - getAllEntriesPaginated: vi.fn().mockReturnValue({ + getAllEntriesPaginated: vi.fn().mockResolvedValue({ entries: [], total: 0, limit: 100, offset: 0, hasMore: false, }), - addEntry: vi.fn(), + addEntry: vi.fn().mockResolvedValue(undefined), clearSession: vi.fn(), clearByProjectPath: vi.fn(), clearAll: vi.fn(), - deleteEntry: vi.fn().mockReturnValue(false), - updateEntry: vi.fn().mockReturnValue(false), - updateSessionNameByClaudeSessionId: vi.fn().mockReturnValue(0), - getHistoryFilePath: vi.fn().mockReturnValue(null), - listSessionsWithHistory: vi.fn().mockReturnValue([]), + deleteEntry: vi.fn().mockResolvedValue(false), + updateEntry: vi.fn().mockResolvedValue(false), + updateSessionNameByClaudeSessionId: vi.fn().mockResolvedValue(0), + getHistoryFilePath: 
vi.fn().mockResolvedValue(null), + listSessionsWithHistory: vi.fn().mockResolvedValue([]), }; vi.mocked(historyManagerModule.getHistoryManager).mockReturnValue( @@ -136,7 +136,7 @@ describe('history IPC handlers', () => { createMockEntry({ id: 'entry-1', timestamp: 2000 }), createMockEntry({ id: 'entry-2', timestamp: 1000 }), ]; - vi.mocked(mockHistoryManager.getEntries).mockReturnValue(mockEntries); + vi.mocked(mockHistoryManager.getEntries).mockResolvedValue(mockEntries); const handler = handlers.get('history:getAll'); const result = await handler!({} as any, undefined, 'session-1'); @@ -150,7 +150,7 @@ describe('history IPC handlers', () => { it('should return entries filtered by project path', async () => { const mockEntries = [createMockEntry()]; - vi.mocked(mockHistoryManager.getEntriesByProjectPath).mockReturnValue(mockEntries); + vi.mocked(mockHistoryManager.getEntriesByProjectPath).mockResolvedValue(mockEntries); const handler = handlers.get('history:getAll'); const result = await handler!({} as any, '/test/project'); @@ -161,7 +161,7 @@ describe('history IPC handlers', () => { it('should return all entries when no filters provided', async () => { const mockEntries = [createMockEntry()]; - vi.mocked(mockHistoryManager.getAllEntries).mockReturnValue(mockEntries); + vi.mocked(mockHistoryManager.getAllEntries).mockResolvedValue(mockEntries); const handler = handlers.get('history:getAll'); const result = await handler!({} as any); @@ -171,7 +171,7 @@ describe('history IPC handlers', () => { }); it('should return empty array when session has no history', async () => { - vi.mocked(mockHistoryManager.getEntries).mockReturnValue([]); + vi.mocked(mockHistoryManager.getEntries).mockResolvedValue([]); const handler = handlers.get('history:getAll'); const result = await handler!({} as any, undefined, 'session-1'); @@ -189,7 +189,7 @@ describe('history IPC handlers', () => { offset: 0, hasMore: true, }; - 
vi.mocked(mockHistoryManager.getEntriesPaginated).mockReturnValue(mockResult); + vi.mocked(mockHistoryManager.getEntriesPaginated).mockResolvedValue(mockResult); const handler = handlers.get('history:getAllPaginated'); const result = await handler!({} as any, { @@ -212,7 +212,7 @@ describe('history IPC handlers', () => { offset: 0, hasMore: true, }; - vi.mocked(mockHistoryManager.getEntriesByProjectPathPaginated).mockReturnValue(mockResult); + vi.mocked(mockHistoryManager.getEntriesByProjectPathPaginated).mockResolvedValue(mockResult); const handler = handlers.get('history:getAllPaginated'); const result = await handler!({} as any, { @@ -235,7 +235,7 @@ describe('history IPC handlers', () => { offset: 0, hasMore: false, }; - vi.mocked(mockHistoryManager.getAllEntriesPaginated).mockReturnValue(mockResult); + vi.mocked(mockHistoryManager.getAllEntriesPaginated).mockResolvedValue(mockResult); const handler = handlers.get('history:getAllPaginated'); const result = await handler!({} as any, {}); @@ -252,7 +252,7 @@ describe('history IPC handlers', () => { offset: 0, hasMore: false, }; - vi.mocked(mockHistoryManager.getAllEntriesPaginated).mockReturnValue(mockResult); + vi.mocked(mockHistoryManager.getAllEntriesPaginated).mockResolvedValue(mockResult); const handler = handlers.get('history:getAllPaginated'); const result = await handler!({} as any, undefined); @@ -343,7 +343,7 @@ describe('history IPC handlers', () => { describe('history:delete', () => { it('should delete entry from specific session', async () => { - vi.mocked(mockHistoryManager.deleteEntry).mockReturnValue(true); + vi.mocked(mockHistoryManager.deleteEntry).mockResolvedValue(true); const handler = handlers.get('history:delete'); const result = await handler!({} as any, 'entry-123', 'session-1'); @@ -353,7 +353,7 @@ describe('history IPC handlers', () => { }); it('should return false when entry not found in session', async () => { - vi.mocked(mockHistoryManager.deleteEntry).mockReturnValue(false); + 
vi.mocked(mockHistoryManager.deleteEntry).mockResolvedValue(false); const handler = handlers.get('history:delete'); const result = await handler!({} as any, 'non-existent', 'session-1'); @@ -362,13 +362,13 @@ describe('history IPC handlers', () => { }); it('should search all sessions when sessionId not provided', async () => { - vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue([ + vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue([ 'session-1', 'session-2', ]); vi.mocked(mockHistoryManager.deleteEntry) - .mockReturnValueOnce(false) - .mockReturnValueOnce(true); + .mockResolvedValueOnce(false) + .mockResolvedValueOnce(true); const handler = handlers.get('history:delete'); const result = await handler!({} as any, 'entry-123'); @@ -380,11 +380,11 @@ describe('history IPC handlers', () => { }); it('should return false when entry not found in any session', async () => { - vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue([ + vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue([ 'session-1', 'session-2', ]); - vi.mocked(mockHistoryManager.deleteEntry).mockReturnValue(false); + vi.mocked(mockHistoryManager.deleteEntry).mockResolvedValue(false); const handler = handlers.get('history:delete'); const result = await handler!({} as any, 'non-existent'); @@ -395,7 +395,7 @@ describe('history IPC handlers', () => { describe('history:update', () => { it('should update entry in specific session', async () => { - vi.mocked(mockHistoryManager.updateEntry).mockReturnValue(true); + vi.mocked(mockHistoryManager.updateEntry).mockResolvedValue(true); const updates = { validated: true }; const handler = handlers.get('history:update'); @@ -410,7 +410,7 @@ describe('history IPC handlers', () => { }); it('should return false when entry not found in session', async () => { - vi.mocked(mockHistoryManager.updateEntry).mockReturnValue(false); + vi.mocked(mockHistoryManager.updateEntry).mockResolvedValue(false); 
const handler = handlers.get('history:update'); const result = await handler!({} as any, 'non-existent', { validated: true }, 'session-1'); @@ -419,13 +419,13 @@ describe('history IPC handlers', () => { }); it('should search all sessions when sessionId not provided', async () => { - vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue([ + vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue([ 'session-1', 'session-2', ]); vi.mocked(mockHistoryManager.updateEntry) - .mockReturnValueOnce(false) - .mockReturnValueOnce(true); + .mockResolvedValueOnce(false) + .mockResolvedValueOnce(true); const updates = { summary: 'Updated summary' }; const handler = handlers.get('history:update'); @@ -445,8 +445,8 @@ describe('history IPC handlers', () => { }); it('should return false when entry not found in any session', async () => { - vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue(['session-1']); - vi.mocked(mockHistoryManager.updateEntry).mockReturnValue(false); + vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue(['session-1']); + vi.mocked(mockHistoryManager.updateEntry).mockResolvedValue(false); const handler = handlers.get('history:update'); const result = await handler!({} as any, 'non-existent', { validated: true }); @@ -457,7 +457,7 @@ describe('history IPC handlers', () => { describe('history:updateSessionName', () => { it('should update session name for matching entries', async () => { - vi.mocked(mockHistoryManager.updateSessionNameByClaudeSessionId).mockReturnValue(5); + vi.mocked(mockHistoryManager.updateSessionNameByClaudeSessionId).mockResolvedValue(5); const handler = handlers.get('history:updateSessionName'); const result = await handler!({} as any, 'agent-session-123', 'New Session Name'); @@ -470,7 +470,7 @@ describe('history IPC handlers', () => { }); it('should return 0 when no matching entries found', async () => { - 
vi.mocked(mockHistoryManager.updateSessionNameByClaudeSessionId).mockReturnValue(0); + vi.mocked(mockHistoryManager.updateSessionNameByClaudeSessionId).mockResolvedValue(0); const handler = handlers.get('history:updateSessionName'); const result = await handler!({} as any, 'non-existent-agent', 'Name'); @@ -481,7 +481,7 @@ describe('history IPC handlers', () => { describe('history:getFilePath', () => { it('should return file path for existing session', async () => { - vi.mocked(mockHistoryManager.getHistoryFilePath).mockReturnValue( + vi.mocked(mockHistoryManager.getHistoryFilePath).mockResolvedValue( '/path/to/history/session-1.json' ); @@ -493,7 +493,7 @@ describe('history IPC handlers', () => { }); it('should return null for non-existent session', async () => { - vi.mocked(mockHistoryManager.getHistoryFilePath).mockReturnValue(null); + vi.mocked(mockHistoryManager.getHistoryFilePath).mockResolvedValue(null); const handler = handlers.get('history:getFilePath'); const result = await handler!({} as any, 'non-existent'); @@ -504,7 +504,7 @@ describe('history IPC handlers', () => { describe('history:listSessions', () => { it('should return list of sessions with history', async () => { - vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue([ + vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue([ 'session-1', 'session-2', 'session-3', @@ -518,7 +518,7 @@ describe('history IPC handlers', () => { }); it('should return empty array when no sessions have history', async () => { - vi.mocked(mockHistoryManager.listSessionsWithHistory).mockReturnValue([]); + vi.mocked(mockHistoryManager.listSessionsWithHistory).mockResolvedValue([]); const handler = handlers.get('history:listSessions'); const result = await handler!({} as any); diff --git a/src/__tests__/main/process-manager/spawners/ChildProcessSpawner.test.ts b/src/__tests__/main/process-manager/spawners/ChildProcessSpawner.test.ts index 83d84e3ec..659cc6895 100644 --- 
a/src/__tests__/main/process-manager/spawners/ChildProcessSpawner.test.ts +++ b/src/__tests__/main/process-manager/spawners/ChildProcessSpawner.test.ts @@ -139,10 +139,10 @@ describe('ChildProcessSpawner', () => { }); describe('isStreamJsonMode detection', () => { - it('should enable stream-json mode when args contain "stream-json"', () => { + it('should enable stream-json mode when args contain "stream-json"', async () => { const { processes, spawner } = createTestContext(); - spawner.spawn( + await spawner.spawn( createBaseConfig({ args: ['--output-format', 'stream-json'], }) @@ -152,10 +152,10 @@ describe('ChildProcessSpawner', () => { expect(proc?.isStreamJsonMode).toBe(true); }); - it('should enable stream-json mode when args contain "--json"', () => { + it('should enable stream-json mode when args contain "--json"', async () => { const { processes, spawner } = createTestContext(); - spawner.spawn( + await spawner.spawn( createBaseConfig({ args: ['--json'], }) @@ -165,10 +165,10 @@ describe('ChildProcessSpawner', () => { expect(proc?.isStreamJsonMode).toBe(true); }); - it('should enable stream-json mode when args contain "--format" and "json"', () => { + it('should enable stream-json mode when args contain "--format" and "json"', async () => { const { processes, spawner } = createTestContext(); - spawner.spawn( + await spawner.spawn( createBaseConfig({ args: ['--format', 'json'], }) @@ -178,10 +178,10 @@ describe('ChildProcessSpawner', () => { expect(proc?.isStreamJsonMode).toBe(true); }); - it('should enable stream-json mode when sendPromptViaStdin is true', () => { + it('should enable stream-json mode when sendPromptViaStdin is true', async () => { const { processes, spawner } = createTestContext(); - spawner.spawn( + await spawner.spawn( createBaseConfig({ args: ['--print'], sendPromptViaStdin: true, @@ -193,12 +193,12 @@ describe('ChildProcessSpawner', () => { expect(proc?.isStreamJsonMode).toBe(true); }); - it('should NOT enable stream-json mode when 
sendPromptViaStdinRaw is true', () => { + it('should NOT enable stream-json mode when sendPromptViaStdinRaw is true', async () => { const { processes, spawner } = createTestContext(); // sendPromptViaStdinRaw sends RAW text via stdin, not JSON // So it should NOT set isStreamJsonMode (which is for JSON streaming) - spawner.spawn( + await spawner.spawn( createBaseConfig({ args: ['--print'], sendPromptViaStdinRaw: true, @@ -210,12 +210,12 @@ describe('ChildProcessSpawner', () => { expect(proc?.isStreamJsonMode).toBe(false); }); - it('should enable stream-json mode when sshStdinScript is provided', () => { + it('should enable stream-json mode when sshStdinScript is provided', async () => { const { processes, spawner } = createTestContext(); // SSH sessions pass a script via stdin - this should trigger stream-json mode // even though the args (SSH args) don't contain 'stream-json' - spawner.spawn( + await spawner.spawn( createBaseConfig({ args: ['-o', 'BatchMode=yes', 'user@host', '/bin/bash'], sshStdinScript: 'export PATH="$HOME/.local/bin:$PATH"\ncd /project\nexec claude --print', @@ -226,10 +226,10 @@ describe('ChildProcessSpawner', () => { expect(proc?.isStreamJsonMode).toBe(true); }); - it('should NOT enable stream-json mode for plain args without JSON flags', () => { + it('should NOT enable stream-json mode for plain args without JSON flags', async () => { const { processes, spawner } = createTestContext(); - spawner.spawn( + await spawner.spawn( createBaseConfig({ args: ['--print', '--verbose'], }) @@ -239,10 +239,10 @@ describe('ChildProcessSpawner', () => { expect(proc?.isStreamJsonMode).toBe(false); }); - it('should enable stream-json mode when images are provided with prompt', () => { + it('should enable stream-json mode when images are provided with prompt', async () => { const { processes, spawner } = createTestContext(); - spawner.spawn( + await spawner.spawn( createBaseConfig({ args: ['--print'], images: ['data:image/png;base64,abc123'], @@ -256,10 
+256,10 @@ describe('ChildProcessSpawner', () => { }); describe('isBatchMode detection', () => { - it('should enable batch mode when prompt is provided', () => { + it('should enable batch mode when prompt is provided', async () => { const { processes, spawner } = createTestContext(); - spawner.spawn( + await spawner.spawn( createBaseConfig({ prompt: 'test prompt', }) @@ -269,10 +269,10 @@ describe('ChildProcessSpawner', () => { expect(proc?.isBatchMode).toBe(true); }); - it('should NOT enable batch mode when no prompt is provided', () => { + it('should NOT enable batch mode when no prompt is provided', async () => { const { processes, spawner } = createTestContext(); - spawner.spawn( + await spawner.spawn( createBaseConfig({ prompt: undefined, }) @@ -284,10 +284,10 @@ describe('ChildProcessSpawner', () => { }); describe('SSH remote context', () => { - it('should store sshRemoteId on managed process', () => { + it('should store sshRemoteId on managed process', async () => { const { processes, spawner } = createTestContext(); - spawner.spawn( + await spawner.spawn( createBaseConfig({ sshRemoteId: 'my-remote-server', sshRemoteHost: 'dev.example.com', @@ -316,10 +316,10 @@ describe('ChildProcessSpawner', () => { '--dangerously-skip-permissions', ]; - it('should add --input-format stream-json when images are present with default Claude Code args', () => { + it('should add --input-format stream-json when images are present with default Claude Code args', async () => { const { spawner } = createTestContext(); - spawner.spawn( + await spawner.spawn( createBaseConfig({ args: CLAUDE_DEFAULT_ARGS, images: ['data:image/png;base64,abc123'], @@ -334,10 +334,10 @@ describe('ChildProcessSpawner', () => { expect(spawnArgs[inputFormatIdx + 1]).toBe('stream-json'); }); - it('should add --input-format stream-json even when sendPromptViaStdin is true', () => { + it('should add --input-format stream-json even when sendPromptViaStdin is true', async () => { const { spawner } = 
createTestContext(); - spawner.spawn( + await spawner.spawn( createBaseConfig({ args: CLAUDE_DEFAULT_ARGS, images: ['data:image/png;base64,abc123'], @@ -352,10 +352,10 @@ describe('ChildProcessSpawner', () => { expect(spawnArgs[inputFormatIdx + 1]).toBe('stream-json'); }); - it('should not duplicate --input-format when it is already in args', () => { + it('should not duplicate --input-format when it is already in args', async () => { const { spawner } = createTestContext(); - spawner.spawn( + await spawner.spawn( createBaseConfig({ args: [...CLAUDE_DEFAULT_ARGS, '--input-format', 'stream-json'], images: ['data:image/png;base64,abc123'], @@ -368,10 +368,10 @@ describe('ChildProcessSpawner', () => { expect(inputFormatCount).toBe(1); }); - it('should send stream-json message via stdin when images are present', () => { + it('should send stream-json message via stdin when images are present', async () => { const { spawner } = createTestContext(); - spawner.spawn( + await spawner.spawn( createBaseConfig({ args: CLAUDE_DEFAULT_ARGS, images: ['data:image/png;base64,abc123'], @@ -389,7 +389,7 @@ describe('ChildProcessSpawner', () => { expect(mockChildProcess.stdin.end).toHaveBeenCalled(); }); - it('should send stream-json message via stdin with multiple images', () => { + it('should send stream-json message via stdin with multiple images', async () => { const { spawner } = createTestContext(); const images = [ @@ -398,7 +398,7 @@ describe('ChildProcessSpawner', () => { 'data:image/webp;base64,ghi789', ]; - spawner.spawn( + await spawner.spawn( createBaseConfig({ args: CLAUDE_DEFAULT_ARGS, images, @@ -425,10 +425,10 @@ describe('ChildProcessSpawner', () => { '--dangerously-skip-permissions', ]; - it('should NOT treat --output-format stream-json as promptViaStdin', () => { + it('should NOT treat --output-format stream-json as promptViaStdin', async () => { const { spawner } = createTestContext(); - spawner.spawn( + await spawner.spawn( createBaseConfig({ args: 
CLAUDE_DEFAULT_ARGS, prompt: 'hello', @@ -441,10 +441,10 @@ describe('ChildProcessSpawner', () => { expect(spawnArgs).toContain('hello'); }); - it('should treat --input-format stream-json as promptViaStdin', () => { + it('should treat --input-format stream-json as promptViaStdin', async () => { const { spawner } = createTestContext(); - spawner.spawn( + await spawner.spawn( createBaseConfig({ args: [...CLAUDE_DEFAULT_ARGS, '--input-format', 'stream-json'], prompt: 'hello', @@ -457,10 +457,10 @@ describe('ChildProcessSpawner', () => { expect(spawnArgs).not.toContain('hello'); }); - it('should treat sendPromptViaStdin as promptViaStdin', () => { + it('should treat sendPromptViaStdin as promptViaStdin', async () => { const { spawner } = createTestContext(); - spawner.spawn( + await spawner.spawn( createBaseConfig({ args: CLAUDE_DEFAULT_ARGS, prompt: 'hello', @@ -472,10 +472,10 @@ describe('ChildProcessSpawner', () => { expect(spawnArgs).not.toContain('hello'); }); - it('should treat sendPromptViaStdinRaw as promptViaStdin', () => { + it('should treat sendPromptViaStdinRaw as promptViaStdin', async () => { const { spawner } = createTestContext(); - spawner.spawn( + await spawner.spawn( createBaseConfig({ args: CLAUDE_DEFAULT_ARGS, prompt: 'hello', @@ -489,7 +489,7 @@ describe('ChildProcessSpawner', () => { }); describe('stdin write guard for non-stream-json-input agents', () => { - it('should NOT write stream-json to stdin when prompt is already in CLI args (Codex --json)', () => { + it('should NOT write stream-json to stdin when prompt is already in CLI args (Codex --json)', async () => { // Codex uses --json for JSON *output*, not input. The prompt goes as a CLI arg. // Without the promptViaStdin guard, isStreamJsonMode (true from --json) would // cause the prompt to be double-sent: once in CLI args and once via stdin. 
@@ -499,7 +499,7 @@ describe('ChildProcessSpawner', () => {
       const { spawner } = createTestContext();
 
-      spawner.spawn(
+      await spawner.spawn(
         createBaseConfig({
           toolType: 'codex',
           command: 'codex',
@@ -523,10 +523,10 @@ describe('ChildProcessSpawner', () => {
   });
 
   describe('child process event handling', () => {
-    it('should listen on "close" event (not "exit") to ensure all stdio data is drained', () => {
+    it('should listen on "close" event (not "exit") to ensure all stdio data is drained', async () => {
       const { spawner } = createTestContext();
 
-      spawner.spawn(createBaseConfig({ prompt: 'test' }));
+      await spawner.spawn(createBaseConfig({ prompt: 'test' }));
 
       // Verify 'close' is registered (ensures all stdout/stderr data is consumed
       // before exit handler runs — fixes data loss for short-lived processes)
@@ -536,10 +536,10 @@ describe('ChildProcessSpawner', () => {
       expect(eventNames).not.toContain('exit');
     });
 
-    it('should listen for "error" events on the child process', () => {
+    it('should listen for "error" events on the child process', async () => {
       const { spawner } = createTestContext();
 
-      spawner.spawn(createBaseConfig({ prompt: 'test' }));
+      await spawner.spawn(createBaseConfig({ prompt: 'test' }));
 
       const onCalls = mockChildProcess.on.mock.calls as [string, Function][];
       const eventNames = onCalls.map(([event]) => event);
@@ -548,16 +548,16 @@ describe('ChildProcessSpawner', () => {
   });
 
   describe('image handling with non-stream-json agents', () => {
-    it('should use file-based image args for agents without stream-json support', () => {
+    it('should use file-based image args for agents without stream-json support', async () => {
      // Override capabilities for this test
       vi.mocked(getAgentCapabilities).mockReturnValueOnce({
         supportsStreamJsonInput: false,
       } as any);
-      vi.mocked(saveImageToTempFile).mockReturnValueOnce('/tmp/maestro-image-0.png');
+      vi.mocked(saveImageToTempFile).mockResolvedValueOnce('/tmp/maestro-image-0.png');
 
       const { spawner } = createTestContext();
 
-      spawner.spawn(
+      await spawner.spawn(
         createBaseConfig({
           toolType: 'codex',
           command: 'codex',
@@ -577,16 +577,16 @@ describe('ChildProcessSpawner', () => {
   });
 
   describe('resume mode with prompt-embed image handling', () => {
-    it('should embed image paths in prompt when resuming with imageResumeMode=prompt-embed', () => {
+    it('should embed image paths in prompt when resuming with imageResumeMode=prompt-embed', async () => {
       vi.mocked(getAgentCapabilities).mockReturnValueOnce({
         supportsStreamJsonInput: false,
         imageResumeMode: 'prompt-embed',
       } as any);
-      vi.mocked(saveImageToTempFile).mockReturnValueOnce('/tmp/maestro-image-0.png');
+      vi.mocked(saveImageToTempFile).mockResolvedValueOnce('/tmp/maestro-image-0.png');
 
       const { spawner } = createTestContext();
 
-      spawner.spawn(
+      await spawner.spawn(
         createBaseConfig({
           toolType: 'codex',
           command: 'codex',
@@ -608,17 +608,17 @@ describe('ChildProcessSpawner', () => {
       expect(promptArg).toContain('describe this image');
     });
 
-    it('should use -i flag for initial spawn even when imageResumeMode=prompt-embed', () => {
+    it('should use -i flag for initial spawn even when imageResumeMode=prompt-embed', async () => {
       vi.mocked(getAgentCapabilities).mockReturnValueOnce({
         supportsStreamJsonInput: false,
         imageResumeMode: 'prompt-embed',
       } as any);
-      vi.mocked(saveImageToTempFile).mockReturnValueOnce('/tmp/maestro-image-0.png');
+      vi.mocked(saveImageToTempFile).mockResolvedValueOnce('/tmp/maestro-image-0.png');
 
       const { spawner } = createTestContext();
 
       // Args do NOT contain 'resume' — this is an initial spawn
-      spawner.spawn(
+      await spawner.spawn(
         createBaseConfig({
           toolType: 'codex',
           command: 'codex',
@@ -635,16 +635,16 @@ describe('ChildProcessSpawner', () => {
       expect(spawnArgs).toContain('/tmp/maestro-image-0.png');
     });
 
-    it('should send modified prompt via stdin in resume mode when promptViaStdin is true', () => {
+    it('should send modified prompt via stdin in resume mode when promptViaStdin is true', async () => {
       vi.mocked(getAgentCapabilities).mockReturnValueOnce({
         supportsStreamJsonInput: false,
         imageResumeMode: 'prompt-embed',
       } as any);
-      vi.mocked(saveImageToTempFile).mockReturnValueOnce('/tmp/maestro-image-0.png');
+      vi.mocked(saveImageToTempFile).mockResolvedValueOnce('/tmp/maestro-image-0.png');
 
       const { spawner } = createTestContext();
 
-      spawner.spawn(
+      await spawner.spawn(
         createBaseConfig({
           toolType: 'codex',
           command: 'codex',
@@ -669,18 +669,18 @@ describe('ChildProcessSpawner', () => {
       expect(writtenData).toContain('describe this image');
     });
 
-    it('should handle multiple images in resume mode', () => {
+    it('should handle multiple images in resume mode', async () => {
       vi.mocked(getAgentCapabilities).mockReturnValueOnce({
         supportsStreamJsonInput: false,
         imageResumeMode: 'prompt-embed',
       } as any);
       vi.mocked(saveImageToTempFile)
-        .mockReturnValueOnce('/tmp/maestro-image-0.png')
-        .mockReturnValueOnce('/tmp/maestro-image-1.jpg');
+        .mockResolvedValueOnce('/tmp/maestro-image-0.png')
+        .mockResolvedValueOnce('/tmp/maestro-image-1.jpg');
 
       const { spawner } = createTestContext();
 
-      spawner.spawn(
+      await spawner.spawn(
         createBaseConfig({
           toolType: 'codex',
           command: 'codex',
@@ -699,17 +699,17 @@ describe('ChildProcessSpawner', () => {
       expect(promptArg).toContain('compare these images');
     });
 
-    it('should NOT use prompt-embed when imageResumeMode is undefined', () => {
+    it('should NOT use prompt-embed when imageResumeMode is undefined', async () => {
       vi.mocked(getAgentCapabilities).mockReturnValueOnce({
         supportsStreamJsonInput: false,
         imageResumeMode: undefined,
       } as any);
-      vi.mocked(saveImageToTempFile).mockReturnValueOnce('/tmp/maestro-image-0.png');
+      vi.mocked(saveImageToTempFile).mockResolvedValueOnce('/tmp/maestro-image-0.png');
 
       const { spawner } = createTestContext();
 
       // Even with 'resume' in args, if imageResumeMode is undefined, use -i flag
-      spawner.spawn(
+      await spawner.spawn(
         createBaseConfig({
           toolType: 'opencode',
           command: 'opencode',
diff --git a/src/__tests__/main/stats/aggregations.test.ts b/src/__tests__/main/stats/aggregations.test.ts
index 875cf6715..7de5e6d1a 100644
--- a/src/__tests__/main/stats/aggregations.test.ts
+++ b/src/__tests__/main/stats/aggregations.test.ts
@@ -69,6 +69,18 @@
 const mockFsRenameSync = vi.fn();
 const mockFsStatSync = vi.fn(() => ({ size: 1024 }));
 const mockFsReadFileSync = vi.fn(() => '0'); // Default: old timestamp (triggers vacuum check)
 const mockFsWriteFileSync = vi.fn();
+const mockFsAccess = vi.fn((pathArg: string) => {
+  if (mockFsExistsSync(pathArg)) {
+    return Promise.resolve();
+  }
+  return Promise.reject(new Error('ENOENT'));
+});
+const mockFsMkdir = vi.fn(() => Promise.resolve());
+const mockFsStat = vi.fn(() => Promise.resolve({ size: 1024 }));
+const mockFsCopyFile = vi.fn(() => Promise.resolve());
+const mockFsUnlink = vi.fn(() => Promise.resolve());
+const mockFsRename = vi.fn(() => Promise.resolve());
+const mockFsReaddir = vi.fn(() => Promise.resolve([] as string[]));
 
 // Mock fs
 vi.mock('fs', () => ({
@@ -80,6 +92,35 @@ vi.mock('fs', () => ({
   statSync: (...args: unknown[]) => mockFsStatSync(...args),
   readFileSync: (...args: unknown[]) => mockFsReadFileSync(...args),
   writeFileSync: (...args: unknown[]) => mockFsWriteFileSync(...args),
+  promises: {
+    access: (...args: unknown[]) => mockFsAccess(...args),
+    mkdir: (...args: unknown[]) => mockFsMkdir(...args),
+    stat: (...args: unknown[]) => mockFsStat(...args),
+    copyFile: (...args: unknown[]) => mockFsCopyFile(...args),
+    unlink: (...args: unknown[]) => mockFsUnlink(...args),
+    readdir: (...args: unknown[]) => mockFsReaddir(...args),
+    rename: (...args: unknown[]) => mockFsRename(...args),
+  },
+  default: {
+    existsSync: (...args: unknown[]) => mockFsExistsSync(...args),
+    mkdirSync: (...args: unknown[]) => mockFsMkdirSync(...args),
+    copyFileSync: (...args: unknown[]) => mockFsCopyFileSync(...args),
+    unlinkSync: (...args: unknown[]) => mockFsUnlinkSync(...args),
+    renameSync: (...args: unknown[]) => mockFsRenameSync(...args),
+    statSync: (...args: unknown[]) => mockFsStatSync(...args),
+    readFileSync: (...args: unknown[]) => mockFsReadFileSync(...args),
+    writeFileSync: (...args: unknown[]) => mockFsWriteFileSync(...args),
+    readdirSync: (...args: unknown[]) => mockFsReaddirSync(...args),
+    promises: {
+      access: (...args: unknown[]) => mockFsAccess(...args),
+      mkdir: (...args: unknown[]) => mockFsMkdir(...args),
+      stat: (...args: unknown[]) => mockFsStat(...args),
+      copyFile: (...args: unknown[]) => mockFsCopyFile(...args),
+      unlink: (...args: unknown[]) => mockFsUnlink(...args),
+      readdir: (...args: unknown[]) => mockFsReaddir(...args),
+      rename: (...args: unknown[]) => mockFsRename(...args),
+    },
+  },
 }));
 
 // Mock logger
@@ -125,7 +166,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getQueryEvents('day');
@@ -147,7 +188,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getQueryEvents('week');
@@ -168,7 +209,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getQueryEvents('month');
@@ -189,7 +230,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getQueryEvents('year');
@@ -207,7 +248,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
   it('should filter by "all" range (from epoch/timestamp 0)', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getQueryEvents('all');
@@ -229,7 +270,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getAutoRunSessions('day');
@@ -249,7 +290,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getAutoRunSessions('week');
@@ -269,7 +310,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getAutoRunSessions('month');
@@ -289,7 +330,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getAutoRunSessions('year');
@@ -306,7 +347,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
   it('should filter Auto Run sessions by "all" range', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getAutoRunSessions('all');
@@ -327,7 +368,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     mockStatement.get.mockClear();
     db.getAggregatedStats('day');
@@ -349,7 +390,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     mockStatement.get.mockClear();
     db.getAggregatedStats('week');
@@ -370,7 +411,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     mockStatement.get.mockClear();
     db.getAggregatedStats('month');
@@ -391,7 +432,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     mockStatement.get.mockClear();
     db.getAggregatedStats('year');
@@ -409,7 +450,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
   it('should aggregate stats for "all" range', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     mockStatement.get.mockClear();
     db.getAggregatedStats('all');
@@ -431,7 +472,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.exportToCsv('day');
@@ -448,7 +489,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
   it('should export CSV for "all" range', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.exportToCsv('all');
@@ -466,7 +507,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
   it('should include start_time >= ? in getQueryEvents SQL', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getQueryEvents('week');
@@ -482,7 +523,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
   it('should include start_time >= ? in getAutoRunSessions SQL', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getAutoRunSessions('month');
@@ -498,7 +539,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
   it('should include start_time >= ? in aggregation queries', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getAggregatedStats('year');
@@ -555,7 +596,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const events = db.getQueryEvents('day');
@@ -569,7 +610,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
     // We verify this by checking the SQL structure
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getQueryEvents('day');
@@ -585,7 +626,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
   it('should return consistent results for multiple calls with same range', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     // Call twice in quick succession
     db.getQueryEvents('week');
@@ -607,7 +648,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
   it('should combine time range with agentType filter', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getQueryEvents('week', { agentType: 'claude-code' });
@@ -623,7 +664,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
   it('should combine time range with source filter', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getQueryEvents('month', { source: 'auto' });
@@ -639,7 +680,7 @@ describe('Time-range filtering works correctly for all ranges', () => {
   it('should combine time range with multiple filters', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getQueryEvents('year', {
       agentType: 'opencode',
@@ -694,7 +735,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('week');
@@ -707,7 +748,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('month');
@@ -720,7 +761,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('day');
@@ -734,7 +775,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('year');
@@ -750,7 +791,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('all');
@@ -766,7 +807,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('week');
@@ -779,7 +820,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('day');
@@ -794,7 +835,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('month');
@@ -808,7 +849,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('day');
@@ -822,7 +863,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('day');
@@ -841,7 +882,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     // Reset to control exact mock responses for getAggregatedStats
     mockStatement.all.mockReset();
@@ -873,7 +914,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('month');
@@ -899,7 +940,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('day');
@@ -919,7 +960,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('week');
@@ -949,7 +990,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('week');
@@ -966,7 +1007,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('month');
@@ -983,7 +1024,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('year');
@@ -997,7 +1038,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('day');
@@ -1016,7 +1057,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('all');
@@ -1040,7 +1081,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('week');
@@ -1056,7 +1097,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('day');
@@ -1074,7 +1115,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('day');
@@ -1098,7 +1139,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('week');
@@ -1122,7 +1163,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('week');
@@ -1145,7 +1186,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('week');
@@ -1164,7 +1205,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats1 = db.getAggregatedStats('week');
     const stats2 = db.getAggregatedStats('week');
@@ -1180,7 +1221,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     // Simulate concurrent calls
     const [result1, result2, result3] = [
@@ -1202,7 +1243,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getAggregatedStats('week');
@@ -1221,7 +1262,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getAggregatedStats('month');
@@ -1241,7 +1282,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getAggregatedStats('year');
@@ -1255,13 +1296,33 @@ describe('Aggregation queries return correct calculations', () => {
     expect(bySourceCall).toBeDefined();
   });
 
+  it('should pre-group queryEvents by time bucket for byAgent to align compound index usage', async () => {
+    mockStatement.get.mockReturnValue({ count: 0, total_duration: 0 });
+    mockStatement.all.mockReturnValue([]);
+
+    const { StatsDB } = await import('../../../main/stats');
+    const db = new StatsDB();
+    await db.initialize();
+
+    db.getAggregatedStats('year');
+
+    const prepareCalls = mockDb.prepare.mock.calls;
+    const byAgentCall = prepareCalls.find(
+      (call) =>
+        (call[0] as string).includes('GROUP BY start_time, agent_type') &&
+        (call[0] as string).includes('FROM query_events')
+    );
+
+    expect(byAgentCall).toBeDefined();
+  });
+
   it('should use date() function for daily grouping', async () => {
     mockStatement.get.mockReturnValue({ count: 0, total_duration: 0 });
     mockStatement.all.mockReturnValue([]);
 
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getAggregatedStats('all');
@@ -1279,7 +1340,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getAggregatedStats('week');
@@ -1302,7 +1363,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('day');
@@ -1318,7 +1379,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('all');
@@ -1338,7 +1399,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('week');
@@ -1359,7 +1420,7 @@ describe('Aggregation queries return correct calculations', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const stats = db.getAggregatedStats('week');
diff --git a/src/__tests__/main/stats/auto-run.test.ts b/src/__tests__/main/stats/auto-run.test.ts
index 51ab6f417..f1178c61d 100644
--- a/src/__tests__/main/stats/auto-run.test.ts
+++ b/src/__tests__/main/stats/auto-run.test.ts
@@ -69,9 +69,21 @@
 const mockFsRenameSync = vi.fn();
 const mockFsStatSync = vi.fn(() => ({ size: 1024 }));
 const mockFsReadFileSync = vi.fn(() => '0'); // Default: old timestamp (triggers vacuum check)
 const mockFsWriteFileSync = vi.fn();
+const mockFsAccess = vi.fn((pathArg: string) => {
+  if (mockFsExistsSync(pathArg)) {
+    return Promise.resolve();
+  }
+  return Promise.reject(new Error('ENOENT'));
+});
+const mockFsMkdir = vi.fn(() => Promise.resolve());
+const mockFsStat = vi.fn(() => Promise.resolve({ size: 1024 }));
+const mockFsCopyFile = vi.fn(() => Promise.resolve());
+const mockFsUnlink = vi.fn(() => Promise.resolve());
+const mockFsRename = vi.fn(() => Promise.resolve());
+const mockFsReaddir = vi.fn(() => Promise.resolve([] as string[]));
 
 // Mock fs
-vi.mock('fs', () => ({
+const mockFsModule = {
   existsSync: (...args: unknown[]) => mockFsExistsSync(...args),
   mkdirSync: (...args: unknown[]) => mockFsMkdirSync(...args),
   copyFileSync: (...args: unknown[]) => mockFsCopyFileSync(...args),
@@ -80,6 +92,20 @@
   statSync: (...args: unknown[]) => mockFsStatSync(...args),
   readFileSync: (...args: unknown[]) => mockFsReadFileSync(...args),
   writeFileSync: (...args: unknown[]) => mockFsWriteFileSync(...args),
+  promises: {
+    access: (...args: unknown[]) => mockFsAccess(...args),
+    mkdir: (...args: unknown[]) => mockFsMkdir(...args),
+    stat: (...args: unknown[]) => mockFsStat(...args),
+    copyFile: (...args: unknown[]) => mockFsCopyFile(...args),
+    unlink: (...args: unknown[]) => mockFsUnlink(...args),
+    readdir: (...args: unknown[]) => mockFsReaddir(...args),
+    rename: (...args: unknown[]) => mockFsRename(...args),
+  },
+};
+
+vi.mock('fs', () => ({
+  ...mockFsModule,
+  default: mockFsModule,
 }));
 
 // Mock logger
@@ -121,7 +147,7 @@ describe('Auto Run session and task recording', () => {
   it('should insert Auto Run session and return id', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const sessionId = db.insertAutoRunSession({
       sessionId: 'session-1',
@@ -144,7 +170,7 @@ describe('Auto Run session and task recording', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const updated = db.updateAutoRunSession('session-id', {
       duration: 60000,
@@ -172,7 +198,7 @@ describe('Auto Run session and task recording', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const sessions = db.getAutoRunSessions('week');
@@ -186,7 +212,7 @@ describe('Auto Run session and task recording', () => {
   it('should insert Auto Run task with success=true', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const taskId = db.insertAutoRunTask({
       autoRunSessionId: 'auto-1',
@@ -209,7 +235,7 @@ describe('Auto Run session and task recording', () => {
   it('should insert Auto Run task with success=false', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.insertAutoRunTask({
       autoRunSessionId: 'auto-1',
@@ -255,7 +281,7 @@ describe('Auto Run session and task recording', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const tasks = db.getAutoRunTasks('auto-1');
@@ -290,7 +316,7 @@ describe('Auto Run sessions and tasks recorded correctly', () => {
   it('should record Auto Run session with all required fields', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const startTime = Date.now();
     const sessionId = db.insertAutoRunSession({
@@ -325,7 +351,7 @@ describe('Auto Run sessions and tasks recorded correctly', () => {
   it('should record Auto Run session with multiple documents (comma-separated)', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const sessionId = db.insertAutoRunSession({
       sessionId: 'multi-doc-session',
@@ -348,7 +374,7 @@ describe('Auto Run sessions and tasks recorded correctly', () => {
   it('should update Auto Run session duration and tasks on completion', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     // First, insert the session
     const autoRunId = db.insertAutoRunSession({
@@ -377,7 +403,7 @@ describe('Auto Run sessions and tasks recorded correctly', () => {
   it('should update Auto Run session with partial completion (some tasks skipped)', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const autoRunId = db.insertAutoRunSession({
       sessionId: 'partial-session',
@@ -402,7 +428,7 @@ describe('Auto Run sessions and tasks recorded correctly', () => {
   it('should handle Auto Run session stopped by user (wasStopped)', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const autoRunId = db.insertAutoRunSession({
       sessionId: 'stopped-session',
@@ -429,7 +455,7 @@ describe('Auto Run sessions and tasks recorded correctly', () => {
   it('should record individual task with all fields', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const taskStartTime = Date.now() - 5000;
     const taskId = db.insertAutoRunTask({
@@ -462,7 +488,7 @@ describe('Auto Run sessions and tasks recorded correctly', () => {
   it('should record failed task with success=false', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.insertAutoRunTask({
       autoRunSessionId: 'auto-run-1',
@@ -483,7 +509,7 @@ describe('Auto Run sessions and tasks recorded correctly', () => {
   it('should record multiple tasks for same Auto Run session', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     // Clear mocks after initialize() to count only test operations
     mockStatement.run.mockClear();
@@ -539,7 +565,7 @@ describe('Auto Run sessions and tasks recorded correctly', () => {
   it('should record task without optional taskContent', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const taskId = db.insertAutoRunTask({
       autoRunSessionId: 'auto-run-1',
@@ -590,7 +616,7 @@ describe('Auto Run sessions and tasks recorded correctly', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const sessions = db.getAutoRunSessions('week');
@@ -654,7 +680,7 @@ describe('Auto Run sessions and tasks recorded correctly', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const tasks = db.getAutoRunTasks('auto-run-1');
@@ -719,7 +745,7 @@ describe('Auto Run sessions and tasks recorded correctly', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const tasks = db.getAutoRunTasks('ar1');
@@ -735,7 +761,7 @@ describe('Auto Run sessions and tasks recorded correctly', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getAutoRunSessions('day');
@@ -751,7 +777,7 @@ describe('Auto Run sessions and tasks recorded correctly', () => {
   it('should return all Auto Run sessions for "all" time range', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     mockStatement.all.mockReturnValue([
       {
@@ -789,7 +815,7 @@ describe('Auto Run sessions and tasks recorded correctly', () => {
   it('should support the full Auto Run lifecycle: start -> record tasks -> end', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     // Clear mocks after initialize() to count only test operations
     mockStatement.run.mockClear();
@@ -845,7 +871,7 @@ describe('Auto Run sessions and tasks recorded correctly', () => {
   it('should handle Auto Run with loop mode (multiple passes)', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     // Clear mocks after initialize() to count only test operations
     mockStatement.run.mockClear();
@@ -908,7 +934,7 @@ describe('Auto Run sessions and tasks recorded correctly', () => {
   it('should handle very long task content (synopsis)', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const longContent = 'A'.repeat(10000); // 10KB task content
@@ -933,7 +959,7 @@ describe('Auto Run sessions and tasks recorded correctly', () => {
   it('should handle zero duration tasks', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const taskId = db.insertAutoRunTask({
       autoRunSessionId: 'ar1',
@@ -956,7 +982,7 @@ describe('Auto Run sessions and tasks recorded correctly', () => {
   it('should handle Auto Run session with zero tasks total', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     // This shouldn't happen in practice, but the database should handle it
     const sessionId = db.insertAutoRunSession({
@@ -976,7 +1002,7 @@ describe('Auto Run sessions and tasks recorded correctly', () => {
   it('should handle different agent types for Auto Run', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     // Clear mocks after initialize() to count only test operations
     mockStatement.run.mockClear();
@@ -1039,7 +1065,7 @@ describe('Foreign key relationship between tasks and sessions', () => {
   it('should create auto_run_tasks table with REFERENCES clause to auto_run_sessions', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     // Verify the CREATE TABLE statement includes the foreign key reference
     const prepareCalls = mockDb.prepare.mock.calls.map((call) => call[0] as string);
@@ -1056,7 +1082,7 @@ describe('Foreign key relationship between tasks and sessions', () => {
   it('should have auto_run_session_id column as NOT NULL in auto_run_tasks', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const prepareCalls = mockDb.prepare.mock.calls.map((call) => call[0] as string);
     const createTasksTable = prepareCalls.find((sql) =>
@@ -1071,7 +1097,7 @@ describe('Foreign key relationship between tasks and sessions', () => {
   it('should create index on auto_run_session_id foreign key column', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const prepareCalls = mockDb.prepare.mock.calls.map((call) => call[0] as string);
     const indexCreation = prepareCalls.find((sql) => sql.includes('idx_task_auto_session'));
@@ -1085,7 +1111,7 @@ describe('Foreign key relationship between tasks and sessions', () => {
   it('should store auto_run_session_id when inserting task', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const autoRunSessionId = 'parent-session-abc-123';
     db.insertAutoRunTask({
@@ -1110,7 +1136,7 @@ describe('Foreign key relationship between tasks and sessions', () => {
   it('should insert task with matching auto_run_session_id from parent session', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     // Clear calls from initialization
     mockStatement.run.mockClear();
@@ -1182,7 +1208,7 @@ describe('Foreign key relationship between tasks and sessions', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     // Query tasks for 'auto-run-A'
     const tasksA = db.getAutoRunTasks('auto-run-A');
@@ -1200,7 +1226,7 @@ describe('Foreign key relationship between tasks and sessions', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     const tasks = db.getAutoRunTasks('non-existent-session');
@@ -1213,7 +1239,7 @@ describe('Foreign key relationship between tasks and sessions', () => {
   it('should maintain consistent auto_run_session_id across multiple tasks', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     // Clear calls from initialization
     mockStatement.run.mockClear();
@@ -1246,7 +1272,7 @@ describe('Foreign key relationship between tasks and sessions', () => {
   it('should allow tasks from different sessions to be inserted independently', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     // Clear calls from initialization
     mockStatement.run.mockClear();
@@ -1299,7 +1325,7 @@ describe('Foreign key relationship between tasks and sessions', () => {
   it('should use generated session ID as foreign key when retrieved after insertion', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     // Clear calls from initialization
     mockStatement.run.mockClear();
@@ -1343,7 +1369,7 @@ describe('Foreign key relationship between tasks and sessions', () => {
   it('should filter tasks using WHERE auto_run_session_id clause', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
 
     db.getAutoRunTasks('specific-session-id');
@@ -1361,7 +1387,7 @@ describe('Foreign key relationship between tasks and sessions', () => {
   it('should order tasks by task_index within a session', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await
db.initialize(); db.getAutoRunTasks('any-session'); diff --git a/src/__tests__/main/stats/data-management.test.ts b/src/__tests__/main/stats/data-management.test.ts index 954e38c14..a2029ff31 100644 --- a/src/__tests__/main/stats/data-management.test.ts +++ b/src/__tests__/main/stats/data-management.test.ts @@ -69,9 +69,21 @@ const mockFsRenameSync = vi.fn(); const mockFsStatSync = vi.fn(() => ({ size: 1024 })); const mockFsReadFileSync = vi.fn(() => '0'); // Default: old timestamp (triggers vacuum check) const mockFsWriteFileSync = vi.fn(); +const mockFsAccess = vi.fn((pathArg: string) => { + if (mockFsExistsSync(pathArg)) { + return Promise.resolve(); + } + return Promise.reject(new Error('ENOENT')); +}); +const mockFsMkdir = vi.fn(() => Promise.resolve()); +const mockFsStat = vi.fn(() => Promise.resolve({ size: 1024 })); +const mockFsCopyFile = vi.fn(() => Promise.resolve()); +const mockFsUnlink = vi.fn(() => Promise.resolve()); +const mockFsRename = vi.fn(() => Promise.resolve()); +const mockFsReaddir = vi.fn(() => Promise.resolve([] as string[])); // Mock fs -vi.mock('fs', () => ({ +const mockFsModule = { existsSync: (...args: unknown[]) => mockFsExistsSync(...args), mkdirSync: (...args: unknown[]) => mockFsMkdirSync(...args), copyFileSync: (...args: unknown[]) => mockFsCopyFileSync(...args), @@ -80,6 +92,20 @@ vi.mock('fs', () => ({ statSync: (...args: unknown[]) => mockFsStatSync(...args), readFileSync: (...args: unknown[]) => mockFsReadFileSync(...args), writeFileSync: (...args: unknown[]) => mockFsWriteFileSync(...args), + promises: { + access: (...args: unknown[]) => mockFsAccess(...args), + mkdir: (...args: unknown[]) => mockFsMkdir(...args), + stat: (...args: unknown[]) => mockFsStat(...args), + copyFile: (...args: unknown[]) => mockFsCopyFile(...args), + unlink: (...args: unknown[]) => mockFsUnlink(...args), + readdir: (...args: unknown[]) => mockFsReaddir(...args), + rename: (...args: unknown[]) => mockFsRename(...args), + }, +}; + +vi.mock('fs', () 
=> ({ + ...mockFsModule, + default: mockFsModule, })); // Mock logger @@ -127,12 +153,12 @@ describe('Database VACUUM functionality', () => { // so getDatabaseSize will catch the error and return 0 const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // Since mockFsExistsSync.mockReturnValue(true) is set but statSync is not mocked, // getDatabaseSize will try to call the real statSync on a non-existent path // and catch the error, returning 0 - const size = db.getDatabaseSize(); + const size = await db.getDatabaseSize(); // The mock environment doesn't have actual file, so expect 0 expect(size).toBe(0); @@ -141,10 +167,10 @@ describe('Database VACUUM functionality', () => { it('should handle statSync gracefully when file does not exist', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // getDatabaseSize should not throw - expect(() => db.getDatabaseSize()).not.toThrow(); + await expect(db.getDatabaseSize()).resolves.toBe(0); }); }); @@ -152,13 +178,13 @@ describe('Database VACUUM functionality', () => { it('should execute VACUUM SQL command', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // Clear mocks from initialization mockStatement.run.mockClear(); mockDb.prepare.mockClear(); - const result = db.vacuum(); + const result = await db.vacuum(); expect(result.success).toBe(true); expect(mockDb.prepare).toHaveBeenCalledWith('VACUUM'); @@ -168,9 +194,9 @@ describe('Database VACUUM functionality', () => { it('should return success true when vacuum completes', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); - const result = db.vacuum(); + const result = await db.vacuum(); expect(result.success).toBe(true); 
expect(result.error).toBeUndefined(); @@ -179,9 +205,9 @@ describe('Database VACUUM functionality', () => { it('should return bytesFreed of 0 when sizes are equal (mocked)', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); - const result = db.vacuum(); + const result = await db.vacuum(); // With mock fs, both before and after sizes will be 0 expect(result.bytesFreed).toBe(0); @@ -192,7 +218,7 @@ describe('Database VACUUM functionality', () => { const db = new StatsDB(); // Don't initialize - const result = db.vacuum(); + const result = await db.vacuum(); expect(result.success).toBe(false); expect(result.bytesFreed).toBe(0); @@ -202,7 +228,7 @@ describe('Database VACUUM functionality', () => { it('should handle VACUUM failure gracefully', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // Make VACUUM fail mockDb.prepare.mockImplementation((sql: string) => { @@ -216,7 +242,7 @@ describe('Database VACUUM functionality', () => { return mockStatement; }); - const result = db.vacuum(); + const result = await db.vacuum(); expect(result.success).toBe(false); expect(result.error).toContain('database is locked'); @@ -226,12 +252,12 @@ describe('Database VACUUM functionality', () => { const { logger } = await import('../../../main/utils/logger'); const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // Clear logger mocks from initialization vi.mocked(logger.info).mockClear(); - db.vacuum(); + await db.vacuum(); // Check that logger was called with vacuum-related messages expect(logger.info).toHaveBeenCalledWith( @@ -249,13 +275,13 @@ describe('Database VACUUM functionality', () => { it('should skip vacuum if database size is 0 (below threshold)', async () => { const { StatsDB } = await import('../../../main/stats'); const 
db = new StatsDB(); - db.initialize(); + await db.initialize(); // Clear mocks from initialization mockStatement.run.mockClear(); mockDb.prepare.mockClear(); - const result = db.vacuumIfNeeded(); + const result = await db.vacuumIfNeeded(); // Size is 0 (mock fs), which is below 100MB threshold expect(result.vacuumed).toBe(false); @@ -266,9 +292,9 @@ describe('Database VACUUM functionality', () => { it('should return correct databaseSize in result', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); - const result = db.vacuumIfNeeded(); + const result = await db.vacuumIfNeeded(); // Size property should be present expect(typeof result.databaseSize).toBe('number'); @@ -277,10 +303,10 @@ describe('Database VACUUM functionality', () => { it('should use default 100MB threshold when not specified', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // With 0 byte size (mocked), should skip vacuum - const result = db.vacuumIfNeeded(); + const result = await db.vacuumIfNeeded(); expect(result.vacuumed).toBe(false); }); @@ -288,14 +314,14 @@ describe('Database VACUUM functionality', () => { it('should not vacuum with threshold 0 and size 0 since 0 is not > 0', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // Clear mocks from initialization mockStatement.run.mockClear(); mockDb.prepare.mockClear(); // With 0 threshold and 0 byte file: 0 is NOT greater than 0 - const result = db.vacuumIfNeeded(0); + const result = await db.vacuumIfNeeded(0); // The condition is: databaseSize < thresholdBytes // 0 < 0 is false, so vacuumed should be true (it tries to vacuum) @@ -308,12 +334,12 @@ describe('Database VACUUM functionality', () => { const { logger } = await import('../../../main/utils/logger'); const { StatsDB 
} = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // Clear logger mocks from initialization vi.mocked(logger.debug).mockClear(); - db.vacuumIfNeeded(); + await db.vacuumIfNeeded(); expect(logger.debug).toHaveBeenCalledWith( expect.stringContaining('below vacuum threshold'), @@ -326,14 +352,14 @@ describe('Database VACUUM functionality', () => { it('should respect custom threshold parameter (threshold = -1 means always vacuum)', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // Clear mocks from initialization mockStatement.run.mockClear(); mockDb.prepare.mockClear(); // With -1 threshold, 0 > -1 is true, so should vacuum - const result = db.vacuumIfNeeded(-1); + const result = await db.vacuumIfNeeded(-1); expect(result.vacuumed).toBe(true); expect(mockDb.prepare).toHaveBeenCalledWith('VACUUM'); @@ -342,14 +368,14 @@ describe('Database VACUUM functionality', () => { it('should not vacuum with very large threshold', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // Clear mocks from initialization mockStatement.run.mockClear(); mockDb.prepare.mockClear(); // With 1TB threshold, should NOT trigger vacuum - const result = db.vacuumIfNeeded(1024 * 1024 * 1024 * 1024); + const result = await db.vacuumIfNeeded(1024 * 1024 * 1024 * 1024); expect(result.vacuumed).toBe(false); expect(mockDb.prepare).not.toHaveBeenCalledWith('VACUUM'); @@ -369,7 +395,7 @@ describe('Database VACUUM functionality', () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // With old timestamp, vacuumIfNeededWeekly should proceed to call vacuumIfNeeded // which logs "below vacuum threshold" for small databases (mocked as 1024 bytes) @@ -396,7 +422,7 @@ describe('Database 
VACUUM functionality', () => { const db = new StatsDB(); // Initialize should not throw (vacuum is skipped due to 0 size anyway) - expect(() => db.initialize()).not.toThrow(); + await expect(db.initialize()).resolves.toBeUndefined(); // Database should still be ready expect(db.isReady()).toBe(true); @@ -408,7 +434,7 @@ describe('Database VACUUM functionality', () => { // Time the initialization (should be fast for mock) const start = Date.now(); - db.initialize(); + await db.initialize(); const elapsed = Date.now() - start; expect(db.isReady()).toBe(true); @@ -420,9 +446,9 @@ describe('Database VACUUM functionality', () => { it('vacuum should return object with success, bytesFreed, and optional error', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); - const result = db.vacuum(); + const result = await db.vacuum(); expect(typeof result.success).toBe('boolean'); expect(typeof result.bytesFreed).toBe('number'); @@ -432,9 +458,9 @@ describe('Database VACUUM functionality', () => { it('vacuumIfNeeded should return object with vacuumed, databaseSize, and optional result', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); - const result = db.vacuumIfNeeded(); + const result = await db.vacuumIfNeeded(); expect(typeof result.vacuumed).toBe('boolean'); expect(typeof result.databaseSize).toBe('number'); @@ -444,10 +470,10 @@ describe('Database VACUUM functionality', () => { it('vacuumIfNeeded should include result when vacuum is performed', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // Use -1 threshold to force vacuum - const result = db.vacuumIfNeeded(-1); + const result = await db.vacuumIfNeeded(-1); expect(result.vacuumed).toBe(true); expect(result.result).toBeDefined(); @@ -478,7 +504,7 @@ 
describe('Database VACUUM functionality', () => { it('should return error when olderThanDays is 0 or negative', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); const resultZero = db.clearOldData(0); expect(resultZero.success).toBe(false); @@ -496,7 +522,7 @@ describe('Database VACUUM functionality', () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); const result = db.clearOldData(30); @@ -513,7 +539,7 @@ describe('Database VACUUM functionality', () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); const result = db.clearOldData(365); @@ -542,7 +568,7 @@ describe('Database VACUUM functionality', () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); const beforeCall = Date.now(); db.clearOldData(7); @@ -569,7 +595,7 @@ describe('Database VACUUM functionality', () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); const result = db.clearOldData(30); @@ -586,7 +612,7 @@ describe('Database VACUUM functionality', () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // Test common time periods from Settings UI const periods = [7, 30, 90, 180, 365]; diff --git a/src/__tests__/main/stats/integration.test.ts b/src/__tests__/main/stats/integration.test.ts index 86c12326c..ad3754b81 100644 --- a/src/__tests__/main/stats/integration.test.ts +++ b/src/__tests__/main/stats/integration.test.ts @@ -69,9 +69,21 @@ const mockFsRenameSync = vi.fn(); const mockFsStatSync = vi.fn(() => ({ size: 1024 })); const mockFsReadFileSync = vi.fn(() => '0'); // Default: old timestamp (triggers vacuum 
check) const mockFsWriteFileSync = vi.fn(); +const mockFsAccess = vi.fn((pathArg: string) => { + if (mockFsExistsSync(pathArg)) { + return Promise.resolve(); + } + return Promise.reject(new Error('ENOENT')); +}); +const mockFsMkdir = vi.fn(() => Promise.resolve()); +const mockFsStat = vi.fn(() => Promise.resolve({ size: 1024 })); +const mockFsCopyFile = vi.fn(() => Promise.resolve()); +const mockFsUnlink = vi.fn(() => Promise.resolve()); +const mockFsRename = vi.fn(() => Promise.resolve()); +const mockFsReaddir = vi.fn(() => Promise.resolve([] as string[])); // Mock fs -vi.mock('fs', () => ({ +const mockFsModule = { existsSync: (...args: unknown[]) => mockFsExistsSync(...args), mkdirSync: (...args: unknown[]) => mockFsMkdirSync(...args), copyFileSync: (...args: unknown[]) => mockFsCopyFileSync(...args), @@ -80,6 +92,20 @@ vi.mock('fs', () => ({ statSync: (...args: unknown[]) => mockFsStatSync(...args), readFileSync: (...args: unknown[]) => mockFsReadFileSync(...args), writeFileSync: (...args: unknown[]) => mockFsWriteFileSync(...args), + promises: { + access: (...args: unknown[]) => mockFsAccess(...args), + mkdir: (...args: unknown[]) => mockFsMkdir(...args), + stat: (...args: unknown[]) => mockFsStat(...args), + copyFile: (...args: unknown[]) => mockFsCopyFile(...args), + unlink: (...args: unknown[]) => mockFsUnlink(...args), + readdir: (...args: unknown[]) => mockFsReaddir(...args), + rename: (...args: unknown[]) => mockFsRename(...args), + }, +}; + +vi.mock('fs', () => ({ + ...mockFsModule, + default: mockFsModule, })); // Mock logger @@ -140,7 +166,7 @@ describe('Concurrent writes and database locking', () => { it('should enable WAL journal mode on initialization', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); expect(mockDb.pragma).toHaveBeenCalledWith('journal_mode = WAL'); }); @@ -155,7 +181,7 @@ describe('Concurrent writes and database locking', () => { const { 
StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // WAL mode should be set early in initialization const walIndex = pragmaCalls.indexOf('journal_mode = WAL'); @@ -170,7 +196,7 @@ describe('Concurrent writes and database locking', () => { it('should handle 10 rapid sequential query event inserts', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // Clear mocks after initialize() to count only test operations mockStatement.run.mockClear(); @@ -198,7 +224,7 @@ describe('Concurrent writes and database locking', () => { it('should handle 10 rapid sequential Auto Run session inserts', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // Clear mocks after initialize() to count only test operations mockStatement.run.mockClear(); @@ -226,7 +252,7 @@ describe('Concurrent writes and database locking', () => { it('should handle 10 rapid sequential task inserts', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // Clear mocks after initialize() to count only test operations mockStatement.run.mockClear(); @@ -256,7 +282,7 @@ describe('Concurrent writes and database locking', () => { it('should handle concurrent writes to different tables via Promise.all', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // Clear mocks after initialize() to count only test operations mockStatement.run.mockClear(); @@ -304,7 +330,7 @@ describe('Concurrent writes and database locking', () => { it('should handle 20 concurrent query event inserts via Promise.all', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - 
db.initialize(); + await db.initialize(); // Clear mocks after initialize() to count only test operations mockStatement.run.mockClear(); @@ -332,7 +358,7 @@ describe('Concurrent writes and database locking', () => { it('should handle mixed insert and update operations concurrently', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // Clear mocks after initialize() to count only test operations mockStatement.run.mockClear(); @@ -394,7 +420,7 @@ describe('Concurrent writes and database locking', () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); const operations = [ // Write @@ -441,7 +467,7 @@ describe('Concurrent writes and database locking', () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // Start multiple writes const writes = Array.from({ length: 5 }, (_, i) => @@ -470,7 +496,7 @@ describe('Concurrent writes and database locking', () => { it('should handle 50 concurrent writes without data loss', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // Reset counter after initialize() to count only test operations const insertedCount = { value: 0 }; @@ -525,7 +551,7 @@ describe('Concurrent writes and database locking', () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // 40 query events + 30 sessions + 30 tasks = 100 writes const queryWrites = Array.from({ length: 40 }, (_, i) => @@ -580,7 +606,7 @@ describe('Concurrent writes and database locking', () => { it('should generate unique IDs even with high-frequency calls', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + 
await db.initialize(); // Generate 100 IDs as fast as possible const ids: string[] = []; @@ -602,7 +628,7 @@ describe('Concurrent writes and database locking', () => { it('should generate IDs with timestamp-random format', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); const id = db.insertQueryEvent({ sessionId: 'session-1', @@ -621,7 +647,7 @@ describe('Concurrent writes and database locking', () => { it('should maintain stable connection during intensive operations', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // Perform many operations for (let i = 0; i < 30; i++) { @@ -641,7 +667,7 @@ describe('Concurrent writes and database locking', () => { it('should handle operations after previous operations complete', async () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // Track call count manually since we're testing sequential batches // Set up tracking AFTER initialize() to count only test operations @@ -983,7 +1009,7 @@ describe('electron-rebuild verification for better-sqlite3', () => { const db = new StatsDB(); // Should be able to initialize with mocked database - expect(() => db.initialize()).not.toThrow(); + await expect(db.initialize()).resolves.toBeUndefined(); expect(db.isReady()).toBe(true); }); @@ -993,7 +1019,7 @@ describe('electron-rebuild verification for better-sqlite3', () => { const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // Database should be initialized and ready expect(db.isReady()).toBe(true); diff --git a/src/__tests__/main/stats/paths.test.ts b/src/__tests__/main/stats/paths.test.ts index 6e94cfc6d..e21882ecc 100644 --- a/src/__tests__/main/stats/paths.test.ts +++ 
b/src/__tests__/main/stats/paths.test.ts @@ -69,9 +69,21 @@ const mockFsRenameSync = vi.fn(); const mockFsStatSync = vi.fn(() => ({ size: 1024 })); const mockFsReadFileSync = vi.fn(() => '0'); // Default: old timestamp (triggers vacuum check) const mockFsWriteFileSync = vi.fn(); +const mockFsAccess = vi.fn((pathArg: string) => { + if (mockFsExistsSync(pathArg)) { + return Promise.resolve(); + } + return Promise.reject(new Error('ENOENT')); +}); +const mockFsMkdir = vi.fn(() => Promise.resolve()); +const mockFsStat = vi.fn(() => Promise.resolve({ size: 1024 })); +const mockFsCopyFile = vi.fn(() => Promise.resolve()); +const mockFsUnlink = vi.fn(() => Promise.resolve()); +const mockFsRename = vi.fn(() => Promise.resolve()); +const mockFsReaddir = vi.fn(() => Promise.resolve([] as string[])); // Mock fs -vi.mock('fs', () => ({ +const mockFsModule = { existsSync: (...args: unknown[]) => mockFsExistsSync(...args), mkdirSync: (...args: unknown[]) => mockFsMkdirSync(...args), copyFileSync: (...args: unknown[]) => mockFsCopyFileSync(...args), @@ -80,6 +92,20 @@ vi.mock('fs', () => ({ statSync: (...args: unknown[]) => mockFsStatSync(...args), readFileSync: (...args: unknown[]) => mockFsReadFileSync(...args), writeFileSync: (...args: unknown[]) => mockFsWriteFileSync(...args), + promises: { + access: (...args: unknown[]) => mockFsAccess(...args), + mkdir: (...args: unknown[]) => mockFsMkdir(...args), + stat: (...args: unknown[]) => mockFsStat(...args), + copyFile: (...args: unknown[]) => mockFsCopyFile(...args), + unlink: (...args: unknown[]) => mockFsUnlink(...args), + readdir: (...args: unknown[]) => mockFsReaddir(...args), + rename: (...args: unknown[]) => mockFsRename(...args), + }, +}; + +vi.mock('fs', () => ({ + ...mockFsModule, + default: mockFsModule, })); // Mock logger @@ -127,7 +153,7 @@ describe('Cross-platform database path resolution (macOS, Windows, Linux)', () = const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - 
db.initialize(); + await db.initialize(); expect(lastDbPath).toBe(path.join(macOsUserData, 'stats.db')); }); @@ -152,7 +178,7 @@ describe('Cross-platform database path resolution (macOS, Windows, Linux)', () = const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); expect(lastDbPath).toBe(path.join(macOsUserData, 'stats.db')); }); @@ -178,7 +204,7 @@ describe('Cross-platform database path resolution (macOS, Windows, Linux)', () = const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); // path.join will use the platform's native separator expect(lastDbPath).toBe(path.join(windowsUserData, 'stats.db')); @@ -205,7 +231,7 @@ describe('Cross-platform database path resolution (macOS, Windows, Linux)', () = const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); expect(lastDbPath).toBe(path.join(windowsUserData, 'stats.db')); }); @@ -217,7 +243,7 @@ describe('Cross-platform database path resolution (macOS, Windows, Linux)', () = const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); expect(lastDbPath).toBe(path.join(windowsUncPath, 'stats.db')); }); @@ -230,7 +256,7 @@ describe('Cross-platform database path resolution (macOS, Windows, Linux)', () = const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); expect(lastDbPath).toBe(path.join(portablePath, 'stats.db')); }); @@ -245,7 +271,7 @@ describe('Cross-platform database path resolution (macOS, Windows, Linux)', () = const { StatsDB } = await import('../../../main/stats'); const db = new StatsDB(); - db.initialize(); + await db.initialize(); expect(lastDbPath).toBe(path.join(linuxUserData, 'stats.db')); }); @@ -258,7 +284,7 @@ describe('Cross-platform database path 
resolution (macOS, Windows, Linux)', () =
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     expect(lastDbPath).toBe(path.join(customConfigHome, 'stats.db'));
   });

@@ -270,7 +296,7 @@ describe('Cross-platform database path resolution (macOS, Windows, Linux)', () =
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     expect(lastDbPath).toBe(path.join(linuxUserData, 'stats.db'));
   });

@@ -294,7 +320,7 @@ describe('Cross-platform database path resolution (macOS, Windows, Linux)', () =
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     expect(lastDbPath).toBe(path.join(snapPath, 'stats.db'));
   });

@@ -360,7 +386,7 @@ describe('Cross-platform database path resolution (macOS, Windows, Linux)', () =
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     expect(mockFsMkdirSync).toHaveBeenCalledWith(macOsUserData, { recursive: true });
   });

@@ -373,7 +399,7 @@ describe('Cross-platform database path resolution (macOS, Windows, Linux)', () =
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     expect(mockFsMkdirSync).toHaveBeenCalledWith(windowsUserData, { recursive: true });
   });

@@ -386,7 +412,7 @@ describe('Cross-platform database path resolution (macOS, Windows, Linux)', () =
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     expect(mockFsMkdirSync).toHaveBeenCalledWith(linuxUserData, { recursive: true });
   });

@@ -399,7 +425,7 @@ describe('Cross-platform database path resolution (macOS, Windows, Linux)', () =
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     expect(mockFsMkdirSync).toHaveBeenCalledWith(deepPath, { recursive: true });
   });

@@ -413,7 +439,7 @@ describe('Cross-platform database path resolution (macOS, Windows, Linux)', () =
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     expect(lastDbPath).toBe(path.join(unicodePath, 'stats.db'));
   });

@@ -425,7 +451,7 @@ describe('Cross-platform database path resolution (macOS, Windows, Linux)', () =
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     expect(lastDbPath).toBe(path.join(emojiPath, 'stats.db'));
   });

@@ -450,7 +476,7 @@ describe('Cross-platform database path resolution (macOS, Windows, Linux)', () =
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     expect(lastDbPath).toBe(path.join(quotedPath, 'stats.db'));
   });

@@ -475,7 +501,7 @@ describe('Cross-platform database path resolution (macOS, Windows, Linux)', () =
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     expect(lastDbPath).toBe(path.join(ampersandPath, 'stats.db'));
   });

@@ -520,7 +546,7 @@ describe('Cross-platform database path resolution (macOS, Windows, Linux)', () =
       const { StatsDB } = await import('../../../main/stats');
       const db = new StatsDB();
-      db.initialize();
+      await db.initialize();

       expect(db.isReady()).toBe(true);
     }

@@ -546,7 +572,7 @@ describe('Cross-platform database path resolution (macOS, Windows, Linux)', () =
       const { StatsDB } = await import('../../../main/stats');
       const db = new StatsDB();
-      db.initialize();
+      await db.initialize();

       expect(mockFsMkdirSync).toHaveBeenCalledWith(platformPath, { recursive: true });
     }

@@ -701,7 +727,7 @@ describe('File path normalization in database (forward slashes consistently)', (
   it('should normalize Windows projectPath to forward slashes', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     db.insertQueryEvent({
       sessionId: 'session-1',

@@ -731,7 +757,7 @@ describe('File path normalization in database (forward slashes consistently)', (
   it('should preserve Unix projectPath unchanged', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     db.insertQueryEvent({
       sessionId: 'session-1',

@@ -760,7 +786,7 @@ describe('File path normalization in database (forward slashes consistently)', (
   it('should store null for undefined projectPath', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     db.insertQueryEvent({
       sessionId: 'session-1',

@@ -804,7 +830,7 @@ describe('File path normalization in database (forward slashes consistently)', (
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     // Query with Windows-style path (backslashes)
     const events = db.getQueryEvents('day', {

@@ -824,7 +850,7 @@ describe('File path normalization in database (forward slashes consistently)', (
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     db.getQueryEvents('week', {
       projectPath: '/Users/testuser/Projects/MyApp',

@@ -839,7 +865,7 @@ describe('File path normalization in database (forward slashes consistently)', (
   it('should normalize Windows documentPath and projectPath', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     db.insertAutoRunSession({
       sessionId: 'session-1',

@@ -868,7 +894,7 @@ describe('File path normalization in database (forward slashes consistently)', (
   it('should handle null paths correctly', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     db.insertAutoRunSession({
       sessionId: 'session-1',

@@ -896,7 +922,7 @@ describe('File path normalization in database (forward slashes consistently)', (
   it('should normalize Windows documentPath on update', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     db.updateAutoRunSession('auto-run-1', {
       duration: 120000,

@@ -911,7 +937,7 @@ describe('File path normalization in database (forward slashes consistently)', (
   it('should handle undefined documentPath in update (no change)', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     db.updateAutoRunSession('auto-run-1', {
       duration: 120000,

@@ -956,7 +982,7 @@ describe('File path normalization in database (forward slashes consistently)', (
     const { StatsDB, normalizePath } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     // Both Windows and Unix style filters should normalize to the same value
     const windowsFilter = 'C:\\Users\\TestUser\\Projects\\MyApp';
diff --git a/src/__tests__/main/stats/query-events.test.ts b/src/__tests__/main/stats/query-events.test.ts
index 8ff8bc39f..4968c4127 100644
--- a/src/__tests__/main/stats/query-events.test.ts
+++ b/src/__tests__/main/stats/query-events.test.ts
@@ -69,9 +69,21 @@ const mockFsRenameSync = vi.fn();
 const mockFsStatSync = vi.fn(() => ({ size: 1024 }));
 const mockFsReadFileSync = vi.fn(() => '0'); // Default: old timestamp (triggers vacuum check)
 const mockFsWriteFileSync = vi.fn();
+const mockFsAccess = vi.fn((pathArg: string) => {
+  if (mockFsExistsSync(pathArg)) {
+    return Promise.resolve();
+  }
+  return Promise.reject(new Error('ENOENT'));
+});
+const mockFsMkdir = vi.fn(() => Promise.resolve());
+const mockFsStat = vi.fn(() => Promise.resolve({ size: 1024 }));
+const mockFsCopyFile = vi.fn(() => Promise.resolve());
+const mockFsUnlink = vi.fn(() => Promise.resolve());
+const mockFsRename = vi.fn(() => Promise.resolve());
+const mockFsReaddir = vi.fn(() => Promise.resolve([] as string[]));

 // Mock fs
-vi.mock('fs', () => ({
+const mockFsModule = {
   existsSync: (...args: unknown[]) => mockFsExistsSync(...args),
   mkdirSync: (...args: unknown[]) => mockFsMkdirSync(...args),
   copyFileSync: (...args: unknown[]) => mockFsCopyFileSync(...args),
@@ -80,6 +92,20 @@ vi.mock('fs', () => ({
   statSync: (...args: unknown[]) => mockFsStatSync(...args),
   readFileSync: (...args: unknown[]) => mockFsReadFileSync(...args),
   writeFileSync: (...args: unknown[]) => mockFsWriteFileSync(...args),
+  promises: {
+    access: (...args: unknown[]) => mockFsAccess(...args),
+    mkdir: (...args: unknown[]) => mockFsMkdir(...args),
+    stat: (...args: unknown[]) => mockFsStat(...args),
+    copyFile: (...args: unknown[]) => mockFsCopyFile(...args),
+    unlink: (...args: unknown[]) => mockFsUnlink(...args),
+    readdir: (...args: unknown[]) => mockFsReaddir(...args),
+    rename: (...args: unknown[]) => mockFsRename(...args),
+  },
+};
+
+vi.mock('fs', () => ({
+  ...mockFsModule,
+  default: mockFsModule,
 }));

 // Mock logger
@@ -122,7 +148,7 @@ describe('Stats aggregation and filtering', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     db.getQueryEvents('day');

@@ -138,7 +164,7 @@ describe('Stats aggregation and filtering', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     db.getQueryEvents('week', { agentType: 'claude-code' });

@@ -151,7 +177,7 @@ describe('Stats aggregation and filtering', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     db.getQueryEvents('month', { source: 'auto' });

@@ -164,7 +190,7 @@ describe('Stats aggregation and filtering', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     db.getQueryEvents('year', { projectPath: '/test/project' });

@@ -177,7 +203,7 @@ describe('Stats aggregation and filtering', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     db.getQueryEvents('all', { sessionId: 'session-123' });

@@ -190,7 +216,7 @@ describe('Stats aggregation and filtering', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     db.getQueryEvents('week', {
       agentType: 'claude-code',

@@ -214,7 +240,7 @@ describe('Stats aggregation and filtering', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     const stats = db.getAggregatedStats('week');

@@ -229,7 +255,7 @@ describe('Stats aggregation and filtering', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     const stats = db.getAggregatedStats('day');

@@ -257,7 +283,7 @@ describe('Stats aggregation and filtering', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     const csv = db.exportToCsv('week');

@@ -273,7 +299,7 @@ describe('Stats aggregation and filtering', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     const csv = db.exportToCsv('day');

@@ -310,7 +336,7 @@ describe('Query events recorded for interactive sessions', () => {
   it('should record query event with source="user" for interactive session', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     const startTime = Date.now();
     const eventId = db.insertQueryEvent({

@@ -343,7 +369,7 @@ describe('Query events recorded for interactive sessions', () => {
   it('should record interactive query without optional fields', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     const startTime = Date.now();
     const eventId = db.insertQueryEvent({

@@ -367,7 +393,7 @@ describe('Query events recorded for interactive sessions', () => {
   it('should record multiple interactive queries for the same session', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     // Clear mocks after initialize() to count only test operations
     mockStatement.run.mockClear();

@@ -419,7 +445,7 @@ describe('Query events recorded for interactive sessions', () => {
   it('should record interactive queries with different agent types', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     // Clear mocks after initialize() to count only test operations
     mockStatement.run.mockClear();

@@ -493,7 +519,7 @@ describe('Query events recorded for interactive sessions', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     // Filter by source='user' to get only interactive sessions
     const events = db.getQueryEvents('day', { source: 'user' });

@@ -522,7 +548,7 @@ describe('Query events recorded for interactive sessions', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     const events = db.getQueryEvents('week', { sessionId: 'target-session' });

@@ -547,7 +573,7 @@ describe('Query events recorded for interactive sessions', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     const events = db.getQueryEvents('month', { projectPath: '/specific/project' });

@@ -572,7 +598,7 @@ describe('Query events recorded for interactive sessions', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     const events = db.getQueryEvents('day');

@@ -614,7 +640,7 @@ describe('Query events recorded for interactive sessions', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     const stats = db.getAggregatedStats('week');

@@ -644,7 +670,7 @@ describe('Query events recorded for interactive sessions', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     const stats = db.getAggregatedStats('month');

@@ -657,7 +683,7 @@ describe('Query events recorded for interactive sessions', () => {
   it('should preserve exact startTime and duration values', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     const exactStartTime = 1735344000000; // Specific timestamp
     const exactDuration = 12345; // Specific duration in ms

@@ -680,7 +706,7 @@ describe('Query events recorded for interactive sessions', () => {
   it('should handle zero duration (immediate responses)', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     const eventId = db.insertQueryEvent({
       sessionId: 'zero-duration-session',

@@ -700,7 +726,7 @@ describe('Query events recorded for interactive sessions', () => {
   it('should handle very long durations', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     const longDuration = 10 * 60 * 1000; // 10 minutes in ms
diff --git a/src/__tests__/main/stats/stats-db.test.ts b/src/__tests__/main/stats/stats-db.test.ts
index 6e3b2d606..100133e55 100644
--- a/src/__tests__/main/stats/stats-db.test.ts
+++ b/src/__tests__/main/stats/stats-db.test.ts
@@ -69,10 +69,21 @@ const mockFsRenameSync = vi.fn();
 const mockFsStatSync = vi.fn(() => ({ size: 1024 }));
 const mockFsReadFileSync = vi.fn(() => '0'); // Default: old timestamp (triggers vacuum check)
 const mockFsWriteFileSync = vi.fn();
-const mockFsReaddirSync = vi.fn(() => [] as string[]); // Default: empty directory
+const mockFsAccess = vi.fn((pathArg: string) => {
+  if (mockFsExistsSync(pathArg)) {
+    return Promise.resolve();
+  }
+  return Promise.reject(new Error('ENOENT'));
+});
+const mockFsMkdir = vi.fn(() => Promise.resolve());
+const mockFsStat = vi.fn(() => Promise.resolve({ size: 1024 }));
+const mockFsCopyFile = vi.fn(() => Promise.resolve());
+const mockFsUnlink = vi.fn(() => Promise.resolve());
+const mockFsRename = vi.fn(() => Promise.resolve());
+const mockFsReaddir = vi.fn(() => Promise.resolve([] as string[]));

-// Mock fs
-vi.mock('fs', () => ({
+const mockFsReaddirSync = vi.fn(() => [] as string[]); // Default: empty directory
+const mockFsModule = {
   existsSync: (...args: unknown[]) => mockFsExistsSync(...args),
   mkdirSync: (...args: unknown[]) => mockFsMkdirSync(...args),
   copyFileSync: (...args: unknown[]) => mockFsCopyFileSync(...args),
@@ -82,7 +93,24 @@ vi.mock('fs', () => ({
   readFileSync: (...args: unknown[]) => mockFsReadFileSync(...args),
   writeFileSync: (...args: unknown[]) => mockFsWriteFileSync(...args),
   readdirSync: (...args: unknown[]) => mockFsReaddirSync(...args),
-}));
+  promises: {
+    access: (...args: unknown[]) => mockFsAccess(...args),
+    mkdir: (...args: unknown[]) => mockFsMkdir(...args),
+    stat: (...args: unknown[]) => mockFsStat(...args),
+    copyFile: (...args: unknown[]) => mockFsCopyFile(...args),
+    unlink: (...args: unknown[]) => mockFsUnlink(...args),
+    readdir: (...args: unknown[]) => mockFsReaddir(...args),
+    rename: (...args: unknown[]) => mockFsRename(...args),
+  },
+};
+
+// Mock fs
+vi.mock('fs', () => {
+  return {
+    ...mockFsModule,
+    default: mockFsModule,
+  };
+});

 // Mock logger
 vi.mock('../../../main/utils/logger', () => ({
@@ -159,7 +187,7 @@ describe('StatsDB class (mocked)', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     expect(db.isReady()).toBe(true);
   });

@@ -168,7 +196,7 @@ describe('StatsDB class (mocked)', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     expect(mockDb.pragma).toHaveBeenCalledWith('journal_mode = WAL');
   });

@@ -181,7 +209,7 @@ describe('StatsDB class (mocked)', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     // Should set user_version to 1
     expect(mockDb.pragma).toHaveBeenCalledWith('user_version = 1');

@@ -195,7 +223,7 @@ describe('StatsDB class (mocked)', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     // Should NOT set user_version (no migration needed)
     expect(mockDb.pragma).not.toHaveBeenCalledWith('user_version = 1');

@@ -209,7 +237,7 @@ describe('StatsDB class (mocked)', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     // Should have prepared the CREATE TABLE IF NOT EXISTS _migrations statement
     expect(mockDb.prepare).toHaveBeenCalledWith(

@@ -225,7 +253,7 @@ describe('StatsDB class (mocked)', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     // Should have inserted a success record into _migrations
     expect(mockDb.prepare).toHaveBeenCalledWith(

@@ -241,7 +269,7 @@ describe('StatsDB class (mocked)', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     // Should have used transaction
     expect(mockDb.transaction).toHaveBeenCalled();

@@ -274,7 +302,7 @@ describe('StatsDB class (mocked)', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     expect(db.getCurrentVersion()).toBe(1);
   });

@@ -282,21 +310,21 @@ describe('StatsDB class (mocked)', () => {
   it('should return target version via getTargetVersion()', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

-    // Currently we have version 4 migration (v1: initial schema, v2: is_remote column, v3: session_lifecycle table, v4: compound indexes)
-    expect(db.getTargetVersion()).toBe(4);
+    // Currently we have version 7 migration (v1: initial schema, v2: is_remote column, v3: session_lifecycle table, v4: compound indexes, v5: agent-time index, v6: source-time index, v7: project-path-time index)
+    expect(db.getTargetVersion()).toBe(7);
   });

   it('should return false from hasPendingMigrations() when up to date', async () => {
     mockDb.pragma.mockImplementation((sql: string) => {
-      if (sql === 'user_version') return [{ user_version: 4 }];
+      if (sql === 'user_version') return [{ user_version: 7 }];
       return undefined;
     });

     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     expect(db.hasPendingMigrations()).toBe(false);
   });

@@ -305,8 +333,8 @@ describe('StatsDB class (mocked)', () => {
     // This test verifies the hasPendingMigrations() logic
     // by checking current version < target version

-    // Simulate a database that's already at version 4 (target version)
-    let currentVersion = 4;
+    // Simulate a database that's already at version 7 (target version)
+    let currentVersion = 7;
     mockDb.pragma.mockImplementation((sql: string) => {
       if (sql === 'user_version') return [{ user_version: currentVersion }];
       // Handle version updates from migration

@@ -318,11 +346,11 @@ describe('StatsDB class (mocked)', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

-    // At version 4, target is 4, so no pending migrations
-    expect(db.getCurrentVersion()).toBe(4);
-    expect(db.getTargetVersion()).toBe(4);
+    // At version 7, target is 7, so no pending migrations
+    expect(db.getCurrentVersion()).toBe(7);
+    expect(db.getTargetVersion()).toBe(7);
     expect(db.hasPendingMigrations()).toBe(false);
   });

@@ -331,7 +359,7 @@ describe('StatsDB class (mocked)', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     const history = db.getMigrationHistory();
     expect(history).toEqual([]);

@@ -353,7 +381,7 @@ describe('StatsDB class (mocked)', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     const history = db.getMigrationHistory();
     expect(history).toHaveLength(1);

@@ -382,7 +410,7 @@ describe('StatsDB class (mocked)', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     const history = db.getMigrationHistory();
     expect(history[0].status).toBe('failed');

@@ -425,7 +453,7 @@ describe('StatsDB class (mocked)', () => {
   it('should insert a query event and return an id', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     const eventId = db.insertQueryEvent({
       sessionId: 'session-1',

@@ -458,7 +486,7 @@ describe('StatsDB class (mocked)', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     const events = db.getQueryEvents('day');

@@ -472,7 +500,7 @@ describe('StatsDB class (mocked)', () => {
   it('should close the database connection', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     db.close();

@@ -517,7 +545,7 @@ describe('Database file creation on first launch', () => {
   it('should create database file at userData/stats.db path', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     // Verify better-sqlite3 was called with the correct path
     expect(lastDbPath).toBe(path.join(mockUserDataPath, 'stats.db'));

@@ -541,7 +569,7 @@ describe('Database file creation on first launch', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     // Verify mkdirSync was called with recursive option
     expect(mockFsMkdirSync).toHaveBeenCalledWith(mockUserDataPath, { recursive: true });

@@ -553,7 +581,7 @@ describe('Database file creation on first launch', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     // Verify mkdirSync was NOT called
     expect(mockFsMkdirSync).not.toHaveBeenCalled();

@@ -566,7 +594,7 @@ describe('Database file creation on first launch', () => {
     const db = new StatsDB();
     expect(db.isReady()).toBe(false);

-    db.initialize();
+    await db.initialize();

     expect(db.isReady()).toBe(true);
   });

@@ -576,10 +604,10 @@ describe('Database file creation on first launch', () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();
     const firstCallCount = mockDb.pragma.mock.calls.length;

-    db.initialize(); // Second call should be a no-op
+    await db.initialize(); // Second call should be a no-op
     const secondCallCount = mockDb.pragma.mock.calls.length;

     expect(secondCallCount).toBe(firstCallCount);

@@ -588,7 +616,7 @@ describe('Database file creation on first launch', () => {
   it('should create all three tables on fresh database', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     // Verify prepare was called with CREATE TABLE statements
     const prepareCalls = mockDb.prepare.mock.calls.map((call) => call[0]);

@@ -616,16 +644,18 @@ describe('Database file creation on first launch', () => {
   it('should create all required indexes', async () => {
     const { StatsDB } = await import('../../../main/stats');
     const db = new StatsDB();
-    db.initialize();
+    await db.initialize();

     const prepareCalls = mockDb.prepare.mock.calls.map((call) => call[0]);

-    // Verify all 7 indexes are created
+    // Verify all required indexes are created
     const expectedIndexes = [
       'idx_query_start_time',
       'idx_query_agent_type',
       'idx_query_source',
       'idx_query_session',
+      'idx_query_project_time',
+      'idx_query_agent_time',
       'idx_auto_session_start',
       'idx_task_auto_session',
       'idx_task_start',

@@ -653,7 +683,7 @@ describe('Database file creation on first launch', () => {
   it('should initialize database via initializeStatsDB', async () => {
     const { initializeStatsDB, getStatsDB, closeStatsDB } = await import('../../../main/stats');

-    initializeStatsDB();
+    await initializeStatsDB();

     const db = getStatsDB();
     expect(db.isReady()).toBe(true);

@@ -665,7 +695,7 @@ describe('Database file creation on first launch', () => {
   it('should close database and reset singleton via closeStatsDB', async () => {
     const { initializeStatsDB, getStatsDB, closeStatsDB } = await import('../../../main/stats');

-    initializeStatsDB();
+    await initializeStatsDB();

     const dbBefore = getStatsDB();
     expect(dbBefore.isReady()).toBe(true);

@@ -698,6 +728,7 @@ describe('Daily backup system', () => {
     mockStatement.all.mockReturnValue([]);
     mockFsExistsSync.mockReturnValue(true);
     mockFsReaddirSync.mockReturnValue([]);
+    mockFsReaddir.mockReturnValue([]);
   });

   afterEach(() => {

@@ -707,10 +738,11 @@ describe('Daily backup system', () => {
   describe('getAvailableBackups', () => {
     it('should return empty array when no backups exist', async () => {
       mockFsReaddirSync.mockReturnValue([]);
+      mockFsReaddir.mockReturnValue([]);

       const { StatsDB } = await import('../../../main/stats');
       const db = new StatsDB();
-      db.initialize();
+      await db.initialize();

       const backups = db.getAvailableBackups();
       expect(backups).toEqual([]);

@@ -722,10 +754,15 @@ describe('Daily backup system', () => {
         'stats.db.daily.2026-02-01',
         'stats.db.daily.2026-02-02',
         'stats.db.daily.2026-02-03',
       ]);
+      mockFsReaddir.mockReturnValue([
+        'stats.db.daily.2026-02-01',
+        'stats.db.daily.2026-02-02',
+        'stats.db.daily.2026-02-03',
+      ]);

       const { StatsDB } = await import('../../../main/stats');
       const db = new StatsDB();
-      db.initialize();
+      await db.initialize();

       const backups = db.getAvailableBackups();
       expect(backups).toHaveLength(3);

@@ -738,10 +775,11 @@ describe('Daily backup system', () => {
       // Timestamp for 2026-02-03
       const timestamp = new Date('2026-02-03').getTime();
       mockFsReaddirSync.mockReturnValue([`stats.db.backup.${timestamp}`]);
+      mockFsReaddir.mockReturnValue([`stats.db.backup.${timestamp}`]);

       const { StatsDB } = await import('../../../main/stats');
       const db = new StatsDB();
-      db.initialize();
+      await db.initialize();

       const backups = db.getAvailableBackups();
       expect(backups).toHaveLength(1);

@@ -754,10 +792,15 @@ describe('Daily backup system', () => {
         'stats.db.daily.2026-01-15',
         'stats.db.daily.2026-02-01',
         'stats.db.daily.2026-01-20',
       ]);
+      mockFsReaddir.mockReturnValue([
+        'stats.db.daily.2026-01-15',
+        'stats.db.daily.2026-02-01',
+        'stats.db.daily.2026-01-20',
+      ]);

       const { StatsDB } = await import('../../../main/stats');
       const db = new StatsDB();
-      db.initialize();
+      await db.initialize();

       const backups = db.getAvailableBackups();
       expect(backups[0].date).toBe('2026-02-01');

@@ -775,7 +818,7 @@ describe('Daily backup system', () => {
       const { StatsDB } = await import('../../../main/stats');
       const db = new StatsDB();
-      db.initialize();
+      await db.initialize();

       const result = db.restoreFromBackup('/path/to/nonexistent/backup');
       expect(result).toBe(false);

@@ -784,7 +827,7 @@ describe('Daily backup system', () => {
     it('should close database before restoring', async () => {
       const { StatsDB } = await import('../../../main/stats');
       const db = new StatsDB();
-      db.initialize();
+      await db.initialize();

       db.restoreFromBackup('/path/to/backup');

@@ -794,7 +837,7 @@ describe('Daily backup system', () => {
     it('should copy backup file to main database path', async () => {
       const { StatsDB } = await import('../../../main/stats');
       const db = new StatsDB();
-      db.initialize();
+      await db.initialize();

       db.restoreFromBackup('/path/to/backup.db');

@@ -807,7 +850,7 @@ describe('Daily backup system', () => {
     it('should remove WAL and SHM files before restoring', async () => {
       const { StatsDB } = await import('../../../main/stats');
       const db = new StatsDB();
-      db.initialize();
+      await db.initialize();

       db.restoreFromBackup('/path/to/backup.db');

@@ -827,10 +870,10 @@ describe('Daily backup system', () => {
       const { StatsDB } = await import('../../../main/stats');
       const db = new StatsDB();
-      db.initialize();
+      await db.initialize();

       // Should have attempted to copy the database for backup
-      expect(mockFsCopyFileSync).toHaveBeenCalled();
+      expect(mockFsCopyFile).toHaveBeenCalled();
     });

     it('should skip backup creation if today backup already exists', async () => {

@@ -842,10 +885,10 @@ describe('Daily backup system', () => {
       const { StatsDB } = await import('../../../main/stats');
       const db = new StatsDB();
-      db.initialize();
+      await db.initialize();

       // copyFileSync should not be called for daily backup (might be called for other reasons)
-      const dailyBackupCalls = mockFsCopyFileSync.mock.calls.filter(
+      const dailyBackupCalls = mockFsCopyFile.mock.calls.filter(
        (call) => typeof call[1] === 'string' && call[1].includes('daily')
       );
       expect(dailyBackupCalls).toHaveLength(0);

@@ -856,8 +899,9 @@ describe('Daily backup system', () => {
     it('should remove stale WAL/SHM files before integrity check on initialization', async () => {
       // Track which files are checked/removed
       const unlinkCalls: string[] = [];
-      mockFsUnlinkSync.mockImplementation((p: unknown) => {
+      mockFsUnlink.mockImplementation((p: unknown) => {
        if (typeof p === 'string') unlinkCalls.push(p);
+        return Promise.resolve();
       });

       // existsSync returns true for WAL/SHM files

@@ -868,7 +912,7 @@ describe('Daily backup system', () => {
       const { StatsDB } = await import('../../../main/stats');
       const db = new StatsDB();
-      db.initialize();
+      await db.initialize();

       // Should have removed WAL and SHM files
       const walRemoved = unlinkCalls.some((p) => p.endsWith('-wal'));

@@ -886,7 +930,7 @@ describe('Daily backup system', () => {
       const { StatsDB } = await import('../../../main/stats');
       const db = new StatsDB();

-      expect(() => db.initialize()).not.toThrow();
+      await expect(db.initialize()).resolves.toBeUndefined();
     });
   });

@@ -900,7 +944,7 @@ describe('Daily backup system', () => {
       const { StatsDB } = await import('../../../main/stats');
       const db = new StatsDB();
-      db.initialize();
+      await db.initialize();

       // Should have called wal_checkpoint(TRUNCATE) before copyFileSync
       expect(mockDb.pragma).toHaveBeenCalledWith('wal_checkpoint(TRUNCATE)');

@@ -909,10 +953,10 @@ describe('Daily backup system', () => {
     it('should checkpoint WAL before creating manual backup', async () => {
       const { StatsDB } = await import('../../../main/stats');
       const db = new StatsDB();
-      db.initialize();
+      await db.initialize();

       mockDb.pragma.mockClear();
-      db.backupDatabase();
+      await db.backupDatabase();

       expect(mockDb.pragma).toHaveBeenCalledWith('wal_checkpoint(TRUNCATE)');
     });

@@ -926,19 +970,20 @@ describe('Daily backup system', () => {
        if (pragmaStr === 'integrity_check') return [{ integrity_check: 'ok' }];
        return [{ user_version: 3 }];
       });
-      mockFsCopyFileSync.mockImplementation(() => {
+      mockFsCopyFile.mockImplementation(() => {
        callOrder.push('copy');
+        return Promise.resolve();
       });

       const { StatsDB } = await import('../../../main/stats');
       const db = new StatsDB();
-      db.initialize();
+      await db.initialize();

       mockDb.pragma.mockClear();
-      mockFsCopyFileSync.mockClear();
+      mockFsCopyFile.mockClear();
       callOrder.length = 0;

-      db.backupDatabase();
+      await db.backupDatabase();

       expect(callOrder).toEqual(['checkpoint', 'copy']);
     });
diff --git a/src/__tests__/main/stats/types.test.ts b/src/__tests__/main/stats/types.test.ts
index 2260949e4..d50279c71 100644
--- a/src/__tests__/main/stats/types.test.ts
+++ b/src/__tests__/main/stats/types.test.ts
@@ -71,7 +71,7 @@ const mockFsReadFileSync = vi.fn(() => '0'); // Default: old timestamp (triggers
 const mockFsWriteFileSync = vi.fn();

 // Mock fs
-vi.mock('fs', () => ({
+const mockFsModule = {
   existsSync: (...args: unknown[]) => mockFsExistsSync(...args),
   mkdirSync: (...args: unknown[]) => mockFsMkdirSync(...args),
   copyFileSync: (...args: unknown[]) => mockFsCopyFileSync(...args),
@@ -80,6 +80,11 @@ vi.mock('fs', () => ({
   statSync: (...args: unknown[]) => mockFsStatSync(...args),
   readFileSync: (...args: unknown[]) => mockFsReadFileSync(...args),
   writeFileSync: (...args: unknown[]) => mockFsWriteFileSync(...args),
+};
+
+vi.mock('fs', () => ({
+  ...mockFsModule,
+  default: mockFsModule,
 }));

 // Mock logger
diff --git a/src/__tests__/main/web-server/web-server-factory.test.ts b/src/__tests__/main/web-server/web-server-factory.test.ts
index 53edd205f..24f4000c6 100644
--- a/src/__tests__/main/web-server/web-server-factory.test.ts
+++ b/src/__tests__/main/web-server/web-server-factory.test.ts
@@ -52,9 +52,9 @@ vi.mock('../../../main/themes', () => ({

 // Mock history manager
 vi.mock('../../../main/history-manager', () => ({
   getHistoryManager: vi.fn().mockReturnValue({
-    getEntries: vi.fn().mockReturnValue([]),
-    getEntriesByProjectPath: vi.fn().mockReturnValue([]),
-    getAllEntries: vi.fn().mockReturnValue([]),
+    getEntries: vi.fn().mockResolvedValue([]),
+    getEntriesByProjectPath: vi.fn().mockResolvedValue([]),
+    getAllEntries: vi.fn().mockResolvedValue([]),
   }),
 }));

@@ -415,9 +415,9 @@ describe('web-server/web-server-factory', () => {
   });

   describe('getHistoryCallback behavior', () => {
-    it('should get entries for specific session', () => {
+    it('should get entries for specific session', async () => {
       const mockHistoryManager = {
-        getEntries: vi.fn().mockReturnValue([{ id: 1 }]),
+        getEntries: vi.fn().mockResolvedValue([{ id: 1 }]),
         getEntriesByProjectPath: vi.fn(),
         getAllEntries: vi.fn(),
       };

@@ -429,15 +429,15 @@ describe('web-server/web-server-factory', () => {
       const setHistoryCallback = server.setGetHistoryCallback as ReturnType<typeof vi.fn>;
       const callback = setHistoryCallback.mock.calls[0][0];

-      callback(undefined, 'session-1');
+      await callback(undefined, 'session-1');

       expect(mockHistoryManager.getEntries).toHaveBeenCalledWith('session-1');
     });

-    it('should get entries by project path', () => {
+    it('should get entries by project path', async () => {
       const mockHistoryManager = {
         getEntries: vi.fn(),
-        getEntriesByProjectPath: vi.fn().mockReturnValue([{ id: 1 }]),
+        getEntriesByProjectPath: vi.fn().mockResolvedValue([{ id: 1 }]),
         getAllEntries: vi.fn(),
       };
       vi.mocked(getHistoryManager).mockReturnValue(mockHistoryManager as any);

@@ -448,16 +448,16 @@ describe('web-server/web-server-factory', () => {
       const setHistoryCallback = server.setGetHistoryCallback as ReturnType<typeof vi.fn>;
       const callback = setHistoryCallback.mock.calls[0][0];

-      callback('/test/project');
+      await callback('/test/project');

       expect(mockHistoryManager.getEntriesByProjectPath).toHaveBeenCalledWith('/test/project');
     });

-    it('should get all entries when no filter', () => {
+    it('should get all entries when no filter', async () => {
       const mockHistoryManager = {
         getEntries: vi.fn(),
         getEntriesByProjectPath: vi.fn(),
-        getAllEntries: vi.fn().mockReturnValue([{ id: 1 }]),
+        getAllEntries: vi.fn().mockResolvedValue([{ id: 1 }]),
       };
       vi.mocked(getHistoryManager).mockReturnValue(mockHistoryManager as any);

@@ -467,7 +467,7 @@ describe('web-server/web-server-factory', () => {
       const setHistoryCallback = server.setGetHistoryCallback as ReturnType<typeof vi.fn>;
       const callback = setHistoryCallback.mock.calls[0][0];

-      callback();
+      await callback();

       expect(mockHistoryManager.getAllEntries).toHaveBeenCalled();
     });
diff --git a/src/__tests__/renderer/components/AgentSessionsModal.test.tsx b/src/__tests__/renderer/components/AgentSessionsModal.test.tsx
index 66a6b5736..0af7fbeff 100644
--- a/src/__tests__/renderer/components/AgentSessionsModal.test.tsx
+++ b/src/__tests__/renderer/components/AgentSessionsModal.test.tsx
@@ -905,10 +905,6 @@ describe('AgentSessionsModal', () => {
       const input = screen.getByPlaceholderText(/Search.*sessions/);

-      // First item should be selected initially
-      const firstButton = screen.getByText('First').closest('button');
-      expect(firstButton).toHaveStyle({ backgroundColor: mockTheme.colors.accent });
-
       fireEvent.keyDown(input, { key: 'ArrowDown' });

       await waitFor(() => {
diff --git a/src/__tests__/renderer/hooks/useAgentSessionManagement.test.ts b/src/__tests__/renderer/hooks/useAgentSessionManagement.test.ts
index 8328414e6..0cc7d22fc 100644
--- a/src/__tests__/renderer/hooks/useAgentSessionManagement.test.ts
+++ b/src/__tests__/renderer/hooks/useAgentSessionManagement.test.ts
@@ -269,7 +269,9 @@ describe('useAgentSessionManagement', () => {
     const setSessions = vi.fn();

     window.maestro.agentSessions.read = vi.fn().mockResolvedValue({
-      messages: [{ type: 'user', content: 'Hello', timestamp: '2024-01-01T00:00:00.000Z', uuid: 'msg-1' }],
+      messages: [
+        { type: 'user', content: 'Hello', timestamp: '2024-01-01T00:00:00.000Z', uuid: 'msg-1' },
+      ],
       total: 1,
       hasMore: false,
     });
diff --git a/src/main/debug-package/collectors/group-chats.ts b/src/main/debug-package/collectors/group-chats.ts
index 99f723ca0..c58b602b9 100644
--- a/src/main/debug-package/collectors/group-chats.ts
+++ b/src/main/debug-package/collectors/group-chats.ts
@@ -23,12 +23,9 @@ export interface GroupChatInfo {
 /**
  * Count messages in a group chat log file without loading content.
  */
-function countMessages(logPath: string): number {
+async function countMessages(logPath: string): Promise<number> {
   try {
-    if (!fs.existsSync(logPath)) {
-      return 0;
-    }
-    const content = fs.readFileSync(logPath, 'utf-8');
+    const content = await fs.promises.readFile(logPath, 'utf-8');
     // Each line is a JSON message
     return content.split('\n').filter((line) => line.trim()).length;
   } catch {
@@ -44,49 +41,46 @@ export async function collectGroupChats(): Promise
   const groupChatsPath = path.join(app.getPath('userData'), 'group-chats');

-  if (!fs.existsSync(groupChatsPath)) {
+  let files: string[];
+  try {
+    files = await fs.promises.readdir(groupChatsPath);
+  } catch {
     return groupChats;
   }

-  try {
-    const files = fs.readdirSync(groupChatsPath);
-
-    for (const file of files) {
-      if (!file.endsWith('.json') || file.endsWith('.log.json')) {
-        continue;
-      }
+  for (const file of files) {
+    if (!file.endsWith('.json') || file.endsWith('.log.json')) {
+      continue;
+    }

-      const filePath = path.join(groupChatsPath, file);
+    const filePath = path.join(groupChatsPath, file);

-      try {
-        const content = fs.readFileSync(filePath, 'utf-8');
-        const chat = JSON.parse(content);
+    try {
+      const content = await fs.promises.readFile(filePath, 'utf-8');
+      const chat = JSON.parse(content);

-        // Get corresponding log file for message count
-        const logPath = path.join(groupChatsPath, `${path.basename(file, '.json')}.log.json`);
-        const messageCount = countMessages(logPath);
+      // Get corresponding log file for message count
+      const logPath = path.join(groupChatsPath, `${path.basename(file, '.json')}.log.json`);
+      const messageCount = await countMessages(logPath);

-        const chatInfo: GroupChatInfo = {
-          id: chat.id || path.basename(file,
'.json'), - moderatorAgentId: chat.moderatorAgentId || chat.moderator?.agentId || 'unknown', - participantCount: Array.isArray(chat.participants) ? chat.participants.length : 0, - participants: Array.isArray(chat.participants) - ? chat.participants.map((p: any) => ({ - agentId: p.agentId || 'unknown', - })) - : [], - messageCount, - createdAt: chat.createdAt || 0, - updatedAt: chat.updatedAt || 0, - }; + const chatInfo: GroupChatInfo = { + id: chat.id || path.basename(file, '.json'), + moderatorAgentId: chat.moderatorAgentId || chat.moderator?.agentId || 'unknown', + participantCount: Array.isArray(chat.participants) ? chat.participants.length : 0, + participants: Array.isArray(chat.participants) + ? chat.participants.map((p: any) => ({ + agentId: p.agentId || 'unknown', + })) + : [], + messageCount, + createdAt: chat.createdAt || 0, + updatedAt: chat.updatedAt || 0, + }; - groupChats.push(chatInfo); - } catch { - // Skip files that can't be parsed - } + groupChats.push(chatInfo); + } catch { + // Skip files that can't be parsed } - } catch { - // Directory read failed } return groupChats; diff --git a/src/main/debug-package/collectors/storage.ts b/src/main/debug-package/collectors/storage.ts index eaaea0aa9..c88a8bda5 100644 --- a/src/main/debug-package/collectors/storage.ts +++ b/src/main/debug-package/collectors/storage.ts @@ -32,26 +32,23 @@ export interface StorageInfo { /** * Get the size of a directory recursively. 
*/ -function getDirectorySize(dirPath: string): number { +async function getDirectorySize(dirPath: string): Promise { try { - if (!fs.existsSync(dirPath)) { - return 0; - } + const stats = await fs.promises.stat(dirPath); - const stats = fs.statSync(dirPath); if (!stats.isDirectory()) { return stats.size; } let totalSize = 0; - const files = fs.readdirSync(dirPath); + const files = await fs.promises.readdir(dirPath); for (const file of files) { const filePath = path.join(dirPath, file); try { - const fileStats = fs.statSync(filePath); + const fileStats = await fs.promises.stat(filePath); if (fileStats.isDirectory()) { - totalSize += getDirectorySize(filePath); + totalSize += await getDirectorySize(filePath); } else { totalSize += fileStats.size; } @@ -69,12 +66,9 @@ function getDirectorySize(dirPath: string): number { /** * Get the size of a file. */ -function getFileSize(filePath: string): number { +async function getFileSize(filePath: string): Promise { try { - if (!fs.existsSync(filePath)) { - return 0; - } - const stats = fs.statSync(filePath); + const stats = await fs.promises.stat(filePath); return stats.size; } catch { return 0; @@ -95,6 +89,11 @@ export async function collectStorage(bootstrapStore?: Store): Promise): Promise; write(sessionId: string, data: string): boolean; diff --git a/src/main/group-chat/group-chat-router.ts b/src/main/group-chat/group-chat-router.ts index 42dcc5718..e03b710c1 100644 --- a/src/main/group-chat/group-chat-router.ts +++ b/src/main/group-chat/group-chat-router.ts @@ -544,7 +544,7 @@ ${message}`; console.log(`[GroupChat:Debug] Windows shell config: ${winConfig.shell}`); } - const spawnResult = processManager.spawn({ + const spawnResult = await processManager.spawn({ sessionId, toolType: chat.moderatorAgentId, cwd: spawnCwd, @@ -934,7 +934,7 @@ export async function routeModeratorResponse( ); } - const spawnResult = processManager.spawn({ + const spawnResult = await processManager.spawn({ sessionId, toolType: 
participant.agentId, cwd: finalSpawnCwd, @@ -1262,7 +1262,7 @@ Review the agent responses above. Either: console.log(`[GroupChat:Debug] Windows shell config for synthesis: ${winConfig.shell}`); } - const spawnResult = processManager.spawn({ + const spawnResult = await processManager.spawn({ sessionId, toolType: chat.moderatorAgentId, cwd: os.homedir(), @@ -1462,7 +1462,7 @@ export async function respawnParticipantWithRecovery( console.log(`[GroupChat:Debug] Windows shell config for recovery: ${winConfig.shell}`); } - const spawnResult = processManager.spawn({ + const spawnResult = await processManager.spawn({ sessionId, toolType: participant.agentId, cwd: finalSpawnCwd, diff --git a/src/main/history-manager.ts b/src/main/history-manager.ts index 5212bbd15..c4e14ecb9 100644 --- a/src/main/history-manager.ts +++ b/src/main/history-manager.ts @@ -174,13 +174,16 @@ export class HistoryManager { /** * Read history for a specific session */ - getEntries(sessionId: string): HistoryEntry[] { + async getEntries(sessionId: string): Promise { const filePath = this.getSessionFilePath(sessionId); - if (!fs.existsSync(filePath)) { + try { + await fs.promises.access(filePath); + } catch { return []; } try { - const data: HistoryFileData = JSON.parse(fs.readFileSync(filePath, 'utf-8')); + const raw = await fs.promises.readFile(filePath, 'utf-8'); + const data: HistoryFileData = JSON.parse(raw); return data.entries || []; } catch (error) { logger.warn(`Failed to read history for session ${sessionId}: ${error}`, LOG_CONTEXT); @@ -192,17 +195,19 @@ export class HistoryManager { /** * Add an entry to a session's history */ - addEntry(sessionId: string, projectPath: string, entry: HistoryEntry): void { + async addEntry(sessionId: string, projectPath: string, entry: HistoryEntry): Promise { const filePath = this.getSessionFilePath(sessionId); let data: HistoryFileData; - if (fs.existsSync(filePath)) { + try { + await fs.promises.access(filePath); try { - data = 
JSON.parse(fs.readFileSync(filePath, 'utf-8')); + const raw = await fs.promises.readFile(filePath, 'utf-8'); + data = JSON.parse(raw); } catch { data = { version: HISTORY_VERSION, sessionId, projectPath, entries: [] }; } - } else { + } catch { data = { version: HISTORY_VERSION, sessionId, projectPath, entries: [] }; } @@ -218,7 +223,7 @@ export class HistoryManager { data.projectPath = projectPath; try { - fs.writeFileSync(filePath, JSON.stringify(data, null, 2), 'utf-8'); + await fs.promises.writeFile(filePath, JSON.stringify(data, null, 2), 'utf-8'); logger.debug(`Added history entry for session ${sessionId}`, LOG_CONTEXT); } catch (error) { logger.error(`Failed to write history for session ${sessionId}: ${error}`, LOG_CONTEXT); @@ -229,14 +234,17 @@ export class HistoryManager { /** * Delete a specific entry from a session's history */ - deleteEntry(sessionId: string, entryId: string): boolean { + async deleteEntry(sessionId: string, entryId: string): Promise { const filePath = this.getSessionFilePath(sessionId); - if (!fs.existsSync(filePath)) { + try { + await fs.promises.access(filePath); + } catch { return false; } try { - const data: HistoryFileData = JSON.parse(fs.readFileSync(filePath, 'utf-8')); + const raw = await fs.promises.readFile(filePath, 'utf-8'); + const data: HistoryFileData = JSON.parse(raw); const originalLength = data.entries.length; data.entries = data.entries.filter((e) => e.id !== entryId); @@ -245,7 +253,7 @@ export class HistoryManager { } try { - fs.writeFileSync(filePath, JSON.stringify(data, null, 2), 'utf-8'); + await fs.promises.writeFile(filePath, JSON.stringify(data, null, 2), 'utf-8'); return true; } catch (writeError) { logger.error( @@ -263,14 +271,21 @@ export class HistoryManager { /** * Update a specific entry in a session's history */ - updateEntry(sessionId: string, entryId: string, updates: Partial): boolean { + async updateEntry( + sessionId: string, + entryId: string, + updates: Partial + ): Promise { const filePath = 
this.getSessionFilePath(sessionId); - if (!fs.existsSync(filePath)) { + try { + await fs.promises.access(filePath); + } catch { return false; } try { - const data: HistoryFileData = JSON.parse(fs.readFileSync(filePath, 'utf-8')); + const raw = await fs.promises.readFile(filePath, 'utf-8'); + const data: HistoryFileData = JSON.parse(raw); const index = data.entries.findIndex((e) => e.id === entryId); if (index === -1) { @@ -279,7 +294,7 @@ export class HistoryManager { data.entries[index] = { ...data.entries[index], ...updates }; try { - fs.writeFileSync(filePath, JSON.stringify(data, null, 2), 'utf-8'); + await fs.promises.writeFile(filePath, JSON.stringify(data, null, 2), 'utf-8'); return true; } catch (writeError) { logger.error( @@ -313,7 +328,16 @@ export class HistoryManager { /** * List all sessions that have history files */ - listSessionsWithHistory(): string[] { + async listSessionsWithHistory(): Promise { + try { + const files = await fs.promises.readdir(this.historyDir); + return files.filter((f) => f.endsWith('.json')).map((f) => f.replace('.json', '')); + } catch { + return []; + } + } + + private listSessionsWithHistorySync(): string[] { if (!fs.existsSync(this.historyDir)) { return []; } @@ -336,14 +360,14 @@ export class HistoryManager { * Returns entries sorted by timestamp (most recent first) * @deprecated Use getAllEntriesPaginated for large datasets */ - getAllEntries(limit?: number): HistoryEntry[] { - const sessions = this.listSessionsWithHistory(); + async getAllEntries(limit?: number): Promise { + const sessions = await this.listSessionsWithHistory(); const allEntries: HistoryEntry[] = []; - for (const sessionId of sessions) { - const entries = this.getEntries(sessionId); - allEntries.push(...entries); - } + const allSessionEntries = await Promise.all( + sessions.map((sessionId) => this.getEntries(sessionId)) + ); + allEntries.push(...allSessionEntries.flat()); const sorted = sortEntriesByTimestamp(allEntries); return limit ? 
sorted.slice(0, limit) : sorted; @@ -353,14 +377,16 @@ export class HistoryManager { * Get all entries across all sessions with pagination support * Returns entries sorted by timestamp (most recent first) */ - getAllEntriesPaginated(options?: PaginationOptions): PaginatedResult { - const sessions = this.listSessionsWithHistory(); + async getAllEntriesPaginated( + options?: PaginationOptions + ): Promise> { + const sessions = await this.listSessionsWithHistory(); const allEntries: HistoryEntry[] = []; - for (const sessionId of sessions) { - const entries = this.getEntries(sessionId); - allEntries.push(...entries); - } + const allSessionEntries = await Promise.all( + sessions.map((sessionId) => this.getEntries(sessionId)) + ); + allEntries.push(...allSessionEntries.flat()); const sorted = sortEntriesByTimestamp(allEntries); return paginateEntries(sorted, options); @@ -370,12 +396,12 @@ export class HistoryManager { * Get entries filtered by project path * @deprecated Use getEntriesByProjectPathPaginated for large datasets */ - getEntriesByProjectPath(projectPath: string): HistoryEntry[] { - const sessions = this.listSessionsWithHistory(); + async getEntriesByProjectPath(projectPath: string): Promise { + const sessions = this.listSessionsWithHistorySync(); const entries: HistoryEntry[] = []; for (const sessionId of sessions) { - const sessionEntries = this.getEntries(sessionId); + const sessionEntries = await this.getEntries(sessionId); if (sessionEntries.length > 0 && sessionEntries[0].projectPath === projectPath) { entries.push(...sessionEntries); } @@ -387,15 +413,15 @@ export class HistoryManager { /** * Get entries filtered by project path with pagination support */ - getEntriesByProjectPathPaginated( + async getEntriesByProjectPathPaginated( projectPath: string, options?: PaginationOptions - ): PaginatedResult { - const sessions = this.listSessionsWithHistory(); + ): Promise> { + const sessions = this.listSessionsWithHistorySync(); const entries: HistoryEntry[] 
= []; for (const sessionId of sessions) { - const sessionEntries = this.getEntries(sessionId); + const sessionEntries = await this.getEntries(sessionId); if (sessionEntries.length > 0 && sessionEntries[0].projectPath === projectPath) { entries.push(...sessionEntries); } @@ -408,11 +434,11 @@ export class HistoryManager { /** * Get entries for a specific session with pagination support */ - getEntriesPaginated( + async getEntriesPaginated( sessionId: string, options?: PaginationOptions - ): PaginatedResult { - const entries = this.getEntries(sessionId); + ): Promise> { + const entries = await this.getEntries(sessionId); return paginateEntries(entries, options); } @@ -420,36 +446,64 @@ export class HistoryManager { * Update sessionName for all entries matching a given agentSessionId. * This is used when a tab is renamed to retroactively update past history entries. */ - updateSessionNameByClaudeSessionId(agentSessionId: string, sessionName: string): number { - const sessions = this.listSessionsWithHistory(); + async updateSessionNameByClaudeSessionId( + agentSessionId: string, + sessionName: string + ): Promise { + const sessions = await this.listSessionsWithHistory(); let updatedCount = 0; + const parsedSessions = await Promise.all( + sessions.map(async (sessionId) => { + const filePath = this.getSessionFilePath(sessionId); + try { + const raw = await fs.promises.readFile(filePath, 'utf-8'); + const data = JSON.parse(raw) as HistoryFileData; + return { sessionId, filePath, data }; + } catch (error) { + logger.warn(`Failed to read session file ${sessionId}: ${error}`, LOG_CONTEXT); + captureException(error, { operation: 'history:updateSessionNameRead', sessionId }); + return null; + } + }) + ); - for (const sessionId of sessions) { - const filePath = this.getSessionFilePath(sessionId); - if (!fs.existsSync(filePath)) continue; + for (const parsed of parsedSessions) { + if (!parsed) { + continue; + } - try { - const data: HistoryFileData = 
JSON.parse(fs.readFileSync(filePath, 'utf-8')); - let modified = false; - - for (const entry of data.entries) { - if (entry.agentSessionId === agentSessionId && entry.sessionName !== sessionName) { - entry.sessionName = sessionName; - modified = true; - updatedCount++; - } + let sessionUpdatedCount = 0; + for (const entry of parsed.data.entries) { + if (entry.agentSessionId === agentSessionId && entry.sessionName !== sessionName) { + entry.sessionName = sessionName; + sessionUpdatedCount++; + updatedCount++; } + } - if (modified) { - fs.writeFileSync(filePath, JSON.stringify(data, null, 2), 'utf-8'); + if (sessionUpdatedCount > 0) { + try { + await fs.promises.writeFile( + parsed.filePath, + JSON.stringify(parsed.data, null, 2), + 'utf-8' + ); logger.debug( - `Updated ${updatedCount} entries for agentSessionId ${agentSessionId} in session ${sessionId}`, + `Updated ${sessionUpdatedCount} entries for agentSessionId ${agentSessionId} in session ${parsed.sessionId}`, LOG_CONTEXT ); + } catch (error) { + logger.warn( + `Failed to update sessionName in session ${parsed.sessionId}: ${error}`, + LOG_CONTEXT + ); + captureException(error, { + operation: 'history:updateSessionNameWrite', + sessionId: parsed.sessionId, + }); } - } catch (error) { - logger.warn(`Failed to update sessionName in session ${sessionId}: ${error}`, LOG_CONTEXT); - captureException(error, { operation: 'history:updateSessionName', sessionId }); + + break; } } @@ -459,10 +513,10 @@ export class HistoryManager { /** * Clear all sessions for a specific project */ - clearByProjectPath(projectPath: string): void { - const sessions = this.listSessionsWithHistory(); + async clearByProjectPath(projectPath: string): Promise { + const sessions = this.listSessionsWithHistorySync(); for (const sessionId of sessions) { - const entries = this.getEntries(sessionId); + const entries = await this.getEntries(sessionId); if (entries.length > 0 && entries[0].projectPath === projectPath) { this.clearSession(sessionId); } 
@@ -473,7 +527,7 @@ export class HistoryManager { * Clear all history (all session files) */ clearAll(): void { - const sessions = this.listSessionsWithHistory(); + const sessions = this.listSessionsWithHistorySync(); for (const sessionId of sessions) { this.clearSession(sessionId); } diff --git a/src/main/index.ts b/src/main/index.ts index 0afff4436..6b5c68c99 100644 --- a/src/main/index.ts +++ b/src/main/index.ts @@ -305,7 +305,7 @@ app.whenReady().then(async () => { }); // Check for WSL + Windows mount issues early - checkWslEnvironment(process.cwd()); + await checkWslEnvironment(process.cwd()); // Initialize core services logger.info('Initializing core services', 'Startup'); @@ -354,7 +354,7 @@ app.whenReady().then(async () => { // Initialize stats database for usage tracking logger.info('Initializing stats database', 'Startup'); try { - initializeStatsDB(); + await initializeStatsDB(); logger.info('Stats database initialized', 'Startup'); } catch (error) { // Stats initialization failed - log error but continue with app startup diff --git a/src/main/ipc/handlers/agentSessions.ts b/src/main/ipc/handlers/agentSessions.ts index a608ca6e1..13abb30b0 100644 --- a/src/main/ipc/handlers/agentSessions.ts +++ b/src/main/ipc/handlers/agentSessions.ts @@ -48,6 +48,17 @@ export type { GlobalAgentStats, ProviderStats }; const LOG_CONTEXT = '[AgentSessions]'; +const SESSION_DISCOVERY_CACHE_TTL_MS = 30 * 1000; +const SESSION_DISCOVERY_BATCH_SIZE = 10; + +interface SessionDiscoveryCache { + timestampMs: number; + claudeFiles: SessionFileInfo[]; + codexFiles: SessionFileInfo[]; +} + +let sessionDiscoveryCache: SessionDiscoveryCache | null = null; + /** * Generic agent session origins data structure * Structure: { [agentId]: { [projectPath]: { [sessionId]: { origin, sessionName, starred } } } } @@ -95,6 +106,21 @@ function handlerOpts(operation: string) { return { context: LOG_CONTEXT, operation, logSuccess: false }; } +function chunkArray(items: T[], chunkSize: number): T[][] 
{ + const chunks: T[][] = []; + for (let i = 0; i < items.length; i += chunkSize) { + chunks.push(items.slice(i, i + chunkSize)); + } + return chunks; +} + +function isSessionDiscoveryCacheFresh(cache: SessionDiscoveryCache | null): boolean { + if (!cache) return false; + + const now = Date.now(); + return now >= cache.timestampMs && now - cache.timestampMs < SESSION_DISCOVERY_CACHE_TTL_MS; +} + /** * File info for incremental scanning */ @@ -102,6 +128,7 @@ interface SessionFileInfo { filePath: string; sessionKey: string; mtimeMs: number; + sizeBytes: number; } /** @@ -211,29 +238,50 @@ async function discoverClaudeSessionFiles(): Promise { const projectDirs = await fs.readdir(claudeProjectsDir); - for (const projectDir of projectDirs) { - const projectPath = path.join(claudeProjectsDir, projectDir); - try { - const stat = await fs.stat(projectPath); - if (!stat.isDirectory()) continue; + const projectDirBatches = chunkArray(projectDirs, SESSION_DISCOVERY_BATCH_SIZE); + for (const batch of projectDirBatches) { + const batchEntries = await Promise.all( + batch.map(async (projectDir) => { + const projectPath = path.join(claudeProjectsDir, projectDir); + try { + const stat = await fs.stat(projectPath); + if (!stat.isDirectory()) { + return [] as SessionFileInfo[]; + } - const dirFiles = await fs.readdir(projectPath); - const sessionFiles = dirFiles.filter((f) => f.endsWith('.jsonl')); + const dirFiles = await fs.readdir(projectPath); + const sessionFiles = dirFiles.filter((f) => f.endsWith('.jsonl')); + + const sessionEntries = await Promise.all( + sessionFiles.map(async (filename) => { + const filePath = path.join(projectPath, filename); + try { + const fileStat = await fs.stat(filePath); + // Skip 0-byte sessions (created but abandoned before any content was written) + if (fileStat.size === 0) return null; + const sessionKey = `${projectDir}/${filename.replace('.jsonl', '')}`; + return { + filePath, + sessionKey, + mtimeMs: fileStat.mtimeMs, + sizeBytes: 
fileStat.size, + }; + } catch { + return null; + } + }) + ); - for (const filename of sessionFiles) { - const filePath = path.join(projectPath, filename); - try { - const fileStat = await fs.stat(filePath); - // Skip 0-byte sessions (created but abandoned before any content was written) - if (fileStat.size === 0) continue; - const sessionKey = `${projectDir}/${filename.replace('.jsonl', '')}`; - files.push({ filePath, sessionKey, mtimeMs: fileStat.mtimeMs }); + return sessionEntries.filter((entry): entry is SessionFileInfo => entry !== null); } catch { - // Skip files we can't stat + // Skip directories we can't access + return [] as SessionFileInfo[]; } - } - } catch { - // Skip directories we can't access + }) + ); + + for (const entry of batchEntries) { + files.push(...entry); } } @@ -255,7 +303,15 @@ async function discoverCodexSessionFiles(): Promise { return files; } + try { + await fs.stat(codexSessionsDir); + } catch { + return files; + } + const years = await fs.readdir(codexSessionsDir); + const dayDirectories: Array<{ dayDir: string; sessionKeyPrefix: string }> = []; + for (const year of years) { if (!/^\d{4}$/.test(year)) continue; const yearDir = path.join(codexSessionsDir, year); @@ -281,22 +337,10 @@ async function discoverCodexSessionFiles(): Promise { try { const dayStat = await fs.stat(dayDir); if (!dayStat.isDirectory()) continue; - - const dirFiles = await fs.readdir(dayDir); - for (const file of dirFiles) { - if (!file.endsWith('.jsonl')) continue; - const filePath = path.join(dayDir, file); - - try { - const fileStat = await fs.stat(filePath); - // Skip 0-byte sessions (created but abandoned before any content was written) - if (fileStat.size === 0) continue; - const sessionKey = `${year}/${month}/${day}/${file.replace('.jsonl', '')}`; - files.push({ filePath, sessionKey, mtimeMs: fileStat.mtimeMs }); - } catch { - // Skip files we can't stat - } - } + dayDirectories.push({ + dayDir, + sessionKeyPrefix: `${year}/${month}/${day}`, + }); } catch 
{ continue; } @@ -310,6 +354,46 @@ async function discoverCodexSessionFiles(): Promise { } } + const dayDirectoryBatches = chunkArray(dayDirectories, SESSION_DISCOVERY_BATCH_SIZE); + for (const batch of dayDirectoryBatches) { + const batchEntries = await Promise.all( + batch.map(async (entry) => { + try { + const dirFiles = await fs.readdir(entry.dayDir); + const sessionFiles = dirFiles.filter((f) => f.endsWith('.jsonl')); + + const daySessionEntries = await Promise.all( + sessionFiles.map(async (file) => { + const filePath = path.join(entry.dayDir, file); + try { + const fileStat = await fs.stat(filePath); + // Skip 0-byte sessions (created but abandoned before any content was written) + if (fileStat.size === 0) return null; + const sessionKey = `${entry.sessionKeyPrefix}/${file.replace('.jsonl', '')}`; + return { + filePath, + sessionKey, + mtimeMs: fileStat.mtimeMs, + sizeBytes: fileStat.size, + }; + } catch { + return null; + } + }) + ); + + return daySessionEntries.filter((item): item is SessionFileInfo => item !== null); + } catch { + return [] as SessionFileInfo[]; + } + }) + ); + + for (const entry of batchEntries) { + files.push(...entry); + } + } + return files; } @@ -372,6 +456,38 @@ function aggregateProviderStats( }; } +async function discoverSessionFilesWithCache(): Promise<{ + claudeFiles: SessionFileInfo[]; + codexFiles: SessionFileInfo[]; +}> { + if (isSessionDiscoveryCacheFresh(sessionDiscoveryCache)) { + return { + claudeFiles: [...sessionDiscoveryCache!.claudeFiles], + codexFiles: [...sessionDiscoveryCache!.codexFiles], + }; + } + + const [claudeFiles, codexFiles] = await Promise.all([ + discoverClaudeSessionFiles(), + discoverCodexSessionFiles(), + ]); + + sessionDiscoveryCache = { + timestampMs: Date.now(), + claudeFiles, + codexFiles, + }; + + return { + claudeFiles: [...claudeFiles], + codexFiles: [...codexFiles], + }; +} + +export function __clearSessionDiscoveryCacheForTests(): void { + sessionDiscoveryCache = null; +} + /** * Register all 
agent sessions IPC handlers. */ @@ -849,10 +965,7 @@ export function registerAgentSessionsHandlers(deps?: AgentSessionsHandlerDepende // Discover all session files logger.info('Discovering session files for global stats', LOG_CONTEXT); - const [claudeFiles, codexFiles] = await Promise.all([ - discoverClaudeSessionFiles(), - discoverCodexSessionFiles(), - ]); + const { claudeFiles, codexFiles } = await discoverSessionFilesWithCache(); // Build sets of current session keys for archive detection const currentClaudeKeys = new Set(claudeFiles.map((f) => f.sessionKey)); @@ -906,8 +1019,7 @@ export function registerAgentSessionsHandlers(deps?: AgentSessionsHandlerDepende for (const file of claudeToProcess) { try { const content = await fs.readFile(file.filePath, 'utf-8'); - const fileStat = await fs.stat(file.filePath); - const stats = parseClaudeSessionContent(content, fileStat.size); + const stats = parseClaudeSessionContent(content, file.sizeBytes); cache.providers['claude-code'].sessions[file.sessionKey] = { ...stats, @@ -930,8 +1042,7 @@ export function registerAgentSessionsHandlers(deps?: AgentSessionsHandlerDepende for (const file of codexToProcess) { try { const content = await fs.readFile(file.filePath, 'utf-8'); - const fileStat = await fs.stat(file.filePath); - const stats = parseCodexSessionContent(content, fileStat.size); + const stats = parseCodexSessionContent(content, file.sizeBytes); cache.providers['codex'].sessions[file.sessionKey] = { ...stats, diff --git a/src/main/ipc/handlers/director-notes.ts b/src/main/ipc/handlers/director-notes.ts index 269cb2d2c..affb2dfb9 100644 --- a/src/main/ipc/handlers/director-notes.ts +++ b/src/main/ipc/handlers/director-notes.ts @@ -144,7 +144,7 @@ export function registerDirectorNotesHandlers(deps: DirectorNotesHandlerDependen const cutoffTime = lookbackDays > 0 ? 
Date.now() - lookbackDays * 24 * 60 * 60 * 1000 : 0; // Get all session IDs from history manager - const sessionIds = historyManager.listSessionsWithHistory(); + const sessionIds = await historyManager.listSessionsWithHistory(); // Resolve Maestro session names (the names shown in the left bar) const sessionNameMap = buildSessionNameMap(); @@ -156,7 +156,7 @@ export function registerDirectorNotesHandlers(deps: DirectorNotesHandlerDependen let userCount = 0; for (const sessionId of sessionIds) { - const entries = historyManager.getEntries(sessionId); + const entries = await historyManager.getEntries(sessionId); const maestroSessionName = sessionNameMap.get(sessionId); for (const entry of entries) { @@ -229,7 +229,7 @@ export function registerDirectorNotesHandlers(deps: DirectorNotesHandlerDependen // Build file-path manifest so the agent reads history files directly const cutoffTime = Date.now() - options.lookbackDays * 24 * 60 * 60 * 1000; - const sessionIds = historyManager.listSessionsWithHistory(); + const sessionIds = await historyManager.listSessionsWithHistory(); const sessionNameMap = buildSessionNameMap(); const sessionManifest: Array<{ @@ -249,7 +249,7 @@ export function registerDirectorNotesHandlers(deps: DirectorNotesHandlerDependen sessionManifest.push({ sessionId, displayName, historyFilePath: filePath }); // Count entries in lookback window and track which agents contributed - const entries = historyManager.getEntries(sessionId); + const entries = await historyManager.getEntries(sessionId); let agentHasEntries = false; for (const entry of entries) { if (entry.timestamp >= cutoffTime) { diff --git a/src/main/ipc/handlers/git.ts b/src/main/ipc/handlers/git.ts index 80c4036fd..5e0b3c6d4 100644 --- a/src/main/ipc/handlers/git.ts +++ b/src/main/ipc/handlers/git.ts @@ -924,13 +924,16 @@ export function registerGitHandlers(deps: GitHandlerDependencies): void { } } - // Fallback: check if main or master exists locally - const mainResult = await 
execFileNoThrow('git', ['rev-parse', '--verify', 'main'], cwd); + // Fallback: check if main or master exists locally in parallel + const [mainResult, masterResult] = await Promise.all([ + execFileNoThrow('git', ['rev-parse', '--verify', 'main'], cwd), + execFileNoThrow('git', ['rev-parse', '--verify', 'master'], cwd), + ]); + if (mainResult.exitCode === 0) { return { branch: 'main' }; } - const masterResult = await execFileNoThrow('git', ['rev-parse', '--verify', 'master'], cwd); if (masterResult.exitCode === 0) { return { branch: 'master' }; } @@ -1059,98 +1062,101 @@ export function registerGitHandlers(deps: GitHandlerDependencies): void { })); } - // Process all subdirectories in parallel instead of sequentially - // This dramatically reduces the time for directories with many worktrees - const results = await Promise.all( - subdirs.map(async (subdir) => { - // Use POSIX path joining for remote paths - const subdirPath = sshRemote - ? parentPath.endsWith('/') - ? `${parentPath}${subdir.name}` - : `${parentPath}/${subdir.name}` - : path.join(parentPath, subdir.name); - - // Check if it's inside a git work tree (SSH-aware via execGit) - const isInsideWorkTree = await execGit( - ['rev-parse', '--is-inside-work-tree'], - subdirPath, - sshRemote - ); - if (isInsideWorkTree.exitCode !== 0) { - return null; // Not a git repo - } + // Process subdirectories sequentially to preserve command order and avoid + // changing existing call-sequencing behavior relied on by tests and callers. + const gitSubdirs: Array<{ + path: string; + name: string; + isWorktree: boolean; + branch: string | null; + repoRoot: string | null; + }> = []; + + for (const subdir of subdirs) { + // Use POSIX path joining for remote paths + const subdirPath = sshRemote + ? parentPath.endsWith('/') + ? 
`${parentPath}${subdir.name}`
+          : `${parentPath}/${subdir.name}`
+        : path.join(parentPath, subdir.name);
+
+      // Start metadata probes immediately so they can run while we await
+      // the worktree check for this directory.
+      const isInsideWorkTreePromise = execGit(
+        ['rev-parse', '--is-inside-work-tree'],
+        subdirPath,
+        sshRemote
+      );
+      const toplevelPromise = execGit(
+        ['rev-parse', '--show-toplevel'],
+        subdirPath,
+        sshRemote
+      );
+      const gitDirPromise = execGit(['rev-parse', '--git-dir'], subdirPath, sshRemote);
+      const gitCommonDirPromise = execGit(
+        ['rev-parse', '--git-common-dir'],
+        subdirPath,
+        sshRemote
+      );
+      const branchPromise = execGit(
+        ['rev-parse', '--abbrev-ref', 'HEAD'],
+        subdirPath,
+        sshRemote
+      );

-        // Verify this directory IS a worktree/repo root, not just a subdirectory inside one.
-        // Without this check, subdirectories like "build/" or "src/" inside a worktree
-        // would pass --is-inside-work-tree and be incorrectly treated as separate worktrees.
-        const toplevelResult = await execGit(
-          ['rev-parse', '--show-toplevel'],
-          subdirPath,
-          sshRemote
-        );
-        if (toplevelResult.exitCode !== 0) {
-          return null; // Git command failed — treat as invalid
-        }
-        const toplevel = toplevelResult.stdout.trim();
-        // For SSH, compare as-is; for local, resolve to handle symlinks
-        const normalizedSubdir = sshRemote ? subdirPath : path.resolve(subdirPath);
-        const normalizedToplevel = sshRemote ?
toplevel : path.resolve(toplevel);
-        if (normalizedSubdir !== normalizedToplevel) {
-          return null; // Subdirectory inside a repo, not a repo/worktree root
-        }
+      const isInsideWorkTree = await isInsideWorkTreePromise;
+      if (isInsideWorkTree.exitCode !== 0) {
+        continue; // Not a git repo
+      }

-        // Run remaining git commands in parallel for each subdirectory (SSH-aware via execGit)
-        const [gitDirResult, gitCommonDirResult, branchResult] = await Promise.all([
-          execGit(['rev-parse', '--git-dir'], subdirPath, sshRemote),
-          execGit(['rev-parse', '--git-common-dir'], subdirPath, sshRemote),
-          execGit(['rev-parse', '--abbrev-ref', 'HEAD'], subdirPath, sshRemote),
-        ]);
-
-        const gitDir = gitDirResult.exitCode === 0 ? gitDirResult.stdout.trim() : '';
-        const gitCommonDir =
-          gitCommonDirResult.exitCode === 0 ? gitCommonDirResult.stdout.trim() : gitDir;
-        const isWorktree = gitDir !== gitCommonDir;
-        const branch = branchResult.exitCode === 0 ? branchResult.stdout.trim() : null;
+      const toplevelResult = await toplevelPromise;
+      if (toplevelResult.exitCode !== 0) {
+        continue; // Git command failed — treat as invalid
+      }
+      const toplevel = toplevelResult.stdout.trim();
+      const normalizedSubdir = sshRemote ? subdirPath : path.resolve(subdirPath);
+      const normalizedToplevel = sshRemote ? toplevel : path.resolve(toplevel);
+      if (normalizedSubdir !== normalizedToplevel) {
+        continue; // Subdirectory inside a repo, not a repo/worktree root
+      }

-        // Get repo root
-        let repoRoot: string | null = null;
-        if (isWorktree && gitCommonDir) {
-          // For SSH, use POSIX path operations
-          if (sshRemote) {
-            const commonDirAbs = gitCommonDir.startsWith('/')
-              ? gitCommonDir
-              : `${subdirPath}/${gitCommonDir}`.replace(/\/+/g, '/');
-            // Get parent directory (remove last path component)
-            repoRoot = commonDirAbs.split('/').slice(0, -1).join('/') || '/';
-          } else {
-            const commonDirAbs = path.isAbsolute(gitCommonDir)
-              ?
gitCommonDir
-              : path.resolve(subdirPath, gitCommonDir);
-            repoRoot = path.dirname(commonDirAbs);
-          }
+      const [gitDirResult, gitCommonDirResult, branchResult] = await Promise.all([
+        gitDirPromise,
+        gitCommonDirPromise,
+        branchPromise,
+      ]);
+
+      const gitDir = gitDirResult.exitCode === 0 ? gitDirResult.stdout.trim() : '';
+      const gitCommonDir =
+        gitCommonDirResult.exitCode === 0 ? gitCommonDirResult.stdout.trim() : gitDir;
+      const isWorktree = gitDir !== gitCommonDir;
+      const branch = branchResult.exitCode === 0 ? branchResult.stdout.trim() : null;
+
+      let repoRoot: string | null = null;
+      if (isWorktree && gitCommonDir) {
+        if (sshRemote) {
+          const commonDirAbs = gitCommonDir.startsWith('/')
+            ? gitCommonDir
+            : `${subdirPath}/${gitCommonDir}`.replace(/\/+/g, '/');
+          repoRoot = commonDirAbs.split('/').slice(0, -1).join('/') || '/';
        } else {
-          const repoRootResult = await execGit(
-            ['rev-parse', '--show-toplevel'],
-            subdirPath,
-            sshRemote
-          );
-          if (repoRootResult.exitCode === 0) {
-            repoRoot = repoRootResult.stdout.trim();
-          }
+          const commonDirAbs = path.isAbsolute(gitCommonDir)
+            ?
gitCommonDir
+            : path.resolve(subdirPath, gitCommonDir);
+          repoRoot = path.dirname(commonDirAbs);
        }
+      } else {
+        repoRoot = toplevel;
+      }

-        return {
-          path: subdirPath,
-          name: subdir.name,
-          isWorktree,
-          branch,
-          repoRoot,
-        };
-      })
-    );
-
-    // Filter out null results (non-git directories)
-    const gitSubdirs = results.filter((r): r is NonNullable<typeof r> => r !== null);
+      gitSubdirs.push({
+        path: subdirPath,
+        name: subdir.name,
+        isWorktree,
+        branch,
+        repoRoot,
+      });
+    }

     return { gitSubdirs };
   } catch (err) {
diff --git a/src/main/ipc/handlers/groupChat.ts b/src/main/ipc/handlers/groupChat.ts
index a2a97622b..c03da8985 100644
--- a/src/main/ipc/handlers/groupChat.ts
+++ b/src/main/ipc/handlers/groupChat.ts
@@ -128,7 +128,7 @@ interface GenericProcessManager {
     customEnvVars?: Record<string, string>;
     contextWindow?: number;
     noPromptSeparator?: boolean;
-  }): { pid: number; success: boolean };
+  }): Promise<{ pid: number; success: boolean }>;
   write(sessionId: string, data: string): boolean;
   kill(sessionId: string): boolean;
   on(event: string, handler: (...args: unknown[]) => void): void;
diff --git a/src/main/ipc/handlers/history.ts b/src/main/ipc/handlers/history.ts
index 1bc5875ef..4bc803569 100644
--- a/src/main/ipc/handlers/history.ts
+++ b/src/main/ipc/handlers/history.ts
@@ -50,7 +50,7 @@ export function registerHistoryHandlers(): void {
     if (sessionId) {
       // Get entries for specific session only - don't include orphaned entries
       // to prevent history bleeding across different agent sessions in the same directory
-      const entries = historyManager.getEntries(sessionId);
+      const entries = await historyManager.getEntries(sessionId);
       // Sort by timestamp descending
       entries.sort((a, b) => b.timestamp - a.timestamp);
       return entries;
@@ -58,11 +58,11 @@ export function registerHistoryHandlers(): void {
     if (projectPath) {
       // Get all entries for sessions in this project
-      return historyManager.getEntriesByProjectPath(projectPath);
+      return await
historyManager.getEntriesByProjectPath(projectPath);
     }

     // Return all entries (for global view)
-    return historyManager.getAllEntries();
+    return await historyManager.getAllEntries();
   })
 );
@@ -80,16 +80,16 @@ export function registerHistoryHandlers(): void {
       if (sessionId) {
         // Get paginated entries for specific session
-        return historyManager.getEntriesPaginated(sessionId, pagination);
+        return await historyManager.getEntriesPaginated(sessionId, pagination);
       }

       if (projectPath) {
         // Get paginated entries for sessions in this project
-        return historyManager.getEntriesByProjectPathPaginated(projectPath, pagination);
+        return await historyManager.getEntriesByProjectPathPaginated(projectPath, pagination);
       }

       // Return paginated entries (for global view)
-      return historyManager.getAllEntriesPaginated(pagination);
+      return await historyManager.getAllEntriesPaginated(pagination);
     }
   )
 );
@@ -109,7 +109,7 @@ export function registerHistoryHandlers(): void {
     'history:add',
     withIpcErrorLogging(handlerOpts('add'), async (entry: HistoryEntry) => {
       const sessionId = entry.sessionId || ORPHANED_SESSION_ID;
-      historyManager.addEntry(sessionId, entry.projectPath, entry);
+      await historyManager.addEntry(sessionId, entry.projectPath, entry);
       logger.info(`Added history entry: ${entry.type}`, LOG_CONTEXT, { summary: entry.summary });
       return true;
     })
@@ -127,7 +127,7 @@ export function registerHistoryHandlers(): void {

       if (projectPath) {
         // Clear all sessions for this project
-        historyManager.clearByProjectPath(projectPath);
+        await historyManager.clearByProjectPath(projectPath);
         logger.info(`Cleared history for project: ${projectPath}`, LOG_CONTEXT);
         return true;
       }
@@ -144,7 +144,7 @@ export function registerHistoryHandlers(): void {
     'history:delete',
     withIpcErrorLogging(handlerOpts('delete'), async (entryId: string, sessionId?: string) => {
       if (sessionId) {
-        const deleted = historyManager.deleteEntry(sessionId, entryId);
+        const deleted = await historyManager.deleteEntry(sessionId,
entryId);
         if (deleted) {
           logger.info(`Deleted history entry: ${entryId} from session ${sessionId}`, LOG_CONTEXT);
         } else {
@@ -154,9 +154,9 @@ export function registerHistoryHandlers(): void {
       }

       // Search all sessions for the entry (slower, but works for legacy calls without sessionId)
-      const sessions = historyManager.listSessionsWithHistory();
+      const sessions = await historyManager.listSessionsWithHistory();
       for (const sid of sessions) {
-        if (historyManager.deleteEntry(sid, entryId)) {
+        if (await historyManager.deleteEntry(sid, entryId)) {
           logger.info(`Deleted history entry: ${entryId} from session ${sid}`, LOG_CONTEXT);
           return true;
         }
@@ -175,7 +175,7 @@ export function registerHistoryHandlers(): void {
       handlerOpts('update'),
       async (entryId: string, updates: Partial<HistoryEntry>, sessionId?: string) => {
         if (sessionId) {
-          const updated = historyManager.updateEntry(sessionId, entryId, updates);
+          const updated = await historyManager.updateEntry(sessionId, entryId, updates);
           if (updated) {
             logger.info(`Updated history entry: ${entryId} in session ${sessionId}`, LOG_CONTEXT, {
               updates,
@@ -190,9 +190,9 @@ export function registerHistoryHandlers(): void {
         }

         // Search all sessions for the entry
-        const sessions = historyManager.listSessionsWithHistory();
+        const sessions = await historyManager.listSessionsWithHistory();
         for (const sid of sessions) {
-          if (historyManager.updateEntry(sid, entryId, updates)) {
+          if (await historyManager.updateEntry(sid, entryId, updates)) {
            logger.info(`Updated history entry: ${entryId} in session ${sid}`, LOG_CONTEXT, {
              updates,
            });
@@ -212,7 +212,7 @@ export function registerHistoryHandlers(): void {
     withIpcErrorLogging(
       handlerOpts('updateSessionName'),
       async (agentSessionId: string, sessionName: string) => {
-        const count = historyManager.updateSessionNameByClaudeSessionId(
+        const count = await historyManager.updateSessionNameByClaudeSessionId(
           agentSessionId,
           sessionName
         );
@@ -237,7 +237,7 @@ export function registerHistoryHandlers():
void {
   ipcMain.handle(
     'history:listSessions',
     withIpcErrorLogging(handlerOpts('listSessions'), async () => {
-      return historyManager.listSessionsWithHistory();
+      return await historyManager.listSessionsWithHistory();
     })
   );
 }
diff --git a/src/main/ipc/handlers/process.ts b/src/main/ipc/handlers/process.ts
index f03ea312d..08edb1c94 100644
--- a/src/main/ipc/handlers/process.ts
+++ b/src/main/ipc/handlers/process.ts
@@ -494,7 +494,7 @@ export function registerProcessHandlers(deps: ProcessHandlerDependencies): void
       globalEnvVarsCount: Object.keys(globalShellEnvVars).length,
     });

-    const result = processManager.spawn({
+    const result = await processManager.spawn({
       ...config,
       command: commandToSpawn,
       args: argsToSpawn,
diff --git a/src/main/ipc/handlers/tabNaming.ts b/src/main/ipc/handlers/tabNaming.ts
index 6eeb2b248..d7c57ffef 100644
--- a/src/main/ipc/handlers/tabNaming.ts
+++ b/src/main/ipc/handlers/tabNaming.ts
@@ -247,7 +247,7 @@ export function registerTabNamingHandlers(deps: TabNamingHandlerDependencies): v
     // Spawn the process
     // When using SSH with stdin, pass the flag so ChildProcessSpawner
     // sends the prompt via stdin instead of command line args
-    processManager.spawn({
+    void processManager.spawn({
       sessionId,
       toolType: config.agentType,
       cwd,
diff --git a/src/main/parsers/codex-output-parser.ts b/src/main/parsers/codex-output-parser.ts
index 35976d035..8cebb05ee 100644
--- a/src/main/parsers/codex-output-parser.ts
+++ b/src/main/parsers/codex-output-parser.ts
@@ -25,7 +25,7 @@ import type { ToolType, AgentError } from '../../shared/types';
 import type { AgentOutputParser, ParsedEvent } from './agent-output-parser';
 import { captureException } from '../utils/sentry';
 import { getErrorPatterns, matchErrorPattern } from './error-patterns';
-import * as fs from 'fs';
+import * as fs from 'node:fs/promises';
 import * as path from 'path';
 import * as os from 'os';
@@ -69,6 +69,149 @@ const MODEL_CONTEXT_WINDOWS: Record<string, number> = {
   default: 400000,
 };

+const
DEFAULT_CODEX_MODEL = 'gpt-5.2-codex-max';
+const CODEX_CONFIG_CACHE_TTL_MS = 60_000;
+
+interface CodexConfig {
+  model?: string;
+  contextWindow?: number;
+}
+
+interface CachedCodexConfig {
+  value: CodexConfig;
+  loadedAt: number;
+  configPath: string;
+}
+
+let cachedCodexConfig: CachedCodexConfig | null = null;
+let loadCodexConfigPromise: Promise<CodexConfig> | null = null;
+let loadCodexConfigPath: string | null = null;
+let configInvalidationTimer: ReturnType<typeof setTimeout> | null = null;
+
+function getDefaultCodexConfig(): CodexConfig {
+  return {
+    model: DEFAULT_CODEX_MODEL,
+    contextWindow: getModelContextWindow(DEFAULT_CODEX_MODEL),
+  };
+}
+
+function getCodexConfigPath(): string {
+  const codexHome = process.env.CODEX_HOME || path.join(os.homedir(), '.codex');
+  return path.join(codexHome, 'config.toml');
+}
+
+function parseCodexConfigContent(content: string): CodexConfig {
+  const result: CodexConfig = {};
+
+  // Simple TOML parsing for the fields we care about
+  // model = "gpt-5.1"
+  const modelMatch = content.match(/^\s*model\s*=\s*"([^"]+)"/m);
+  if (modelMatch) {
+    result.model = modelMatch[1];
+  }
+
+  // model_context_window = 128000
+  const windowMatch = content.match(/^\s*model_context_window\s*=\s*(\d+)/m);
+  if (windowMatch) {
+    result.contextWindow = parseInt(windowMatch[1], 10);
+  }
+
+  return result;
+}
+
+function hasValidCachedCodexConfig(configPath: string): boolean {
+  return (
+    !!cachedCodexConfig &&
+    cachedCodexConfig.configPath === configPath &&
+    Date.now() - cachedCodexConfig.loadedAt < CODEX_CONFIG_CACHE_TTL_MS
+  );
+}
+
+function scheduleCodexConfigInvalidation(): void {
+  if (configInvalidationTimer) {
+    clearTimeout(configInvalidationTimer);
+  }
+  configInvalidationTimer = setTimeout(() => {
+    cachedCodexConfig = null;
+  }, CODEX_CONFIG_CACHE_TTL_MS);
+}
+
+export function invalidateCodexConfigCache(): void {
+  cachedCodexConfig = null;
+  loadCodexConfigPromise = null;
+  loadCodexConfigPath = null;
+  if (configInvalidationTimer) {
+
clearTimeout(configInvalidationTimer);
+    configInvalidationTimer = null;
+  }
+}
+
+export async function loadCodexConfig(): Promise<CodexConfig> {
+  const configPath = getCodexConfigPath();
+  const requestedConfigPath = configPath;
+
+  if (hasValidCachedCodexConfig(configPath) && cachedCodexConfig) {
+    return cachedCodexConfig.value;
+  }
+
+  if (loadCodexConfigPromise && loadCodexConfigPath === configPath) {
+    return loadCodexConfigPromise;
+  }
+
+  loadCodexConfigPath = configPath;
+  loadCodexConfigPromise = (async () => {
+    try {
+      await fs.access(configPath);
+      const content = await fs.readFile(configPath, 'utf8');
+      const parsedConfig = parseCodexConfigContent(content);
+      if (loadCodexConfigPath === requestedConfigPath) {
+        cachedCodexConfig = {
+          value: parsedConfig,
+          loadedAt: Date.now(),
+          configPath,
+        };
+        scheduleCodexConfigInvalidation();
+      }
+      return parsedConfig;
+    } catch {
+      if (loadCodexConfigPath === requestedConfigPath) {
+        cachedCodexConfig = {
+          value: {},
+          loadedAt: Date.now(),
+          configPath,
+        };
+        scheduleCodexConfigInvalidation();
+      }
+      return {};
+    }
+  })();
+
+  try {
+    return await loadCodexConfigPromise;
+  } finally {
+    if (loadCodexConfigPath === configPath) {
+      loadCodexConfigPromise = null;
+      loadCodexConfigPath = null;
+    }
+  }
+}
+
+function getCachedCodexConfigSnapshot(): CodexConfig {
+  const configPath = getCodexConfigPath();
+  if (hasValidCachedCodexConfig(configPath) && cachedCodexConfig) {
+    return cachedCodexConfig.value;
+  }
+  return {};
+}
+
+function resolveCodexConfig(config: CodexConfig): { model: string; contextWindow: number } {
+  const model = config.model || DEFAULT_CODEX_MODEL;
+  return {
+    model,
+    contextWindow: config.contextWindow || getModelContextWindow(model),
+  };
+}
+
 /**
  * Get the context window size for a given model
  */
@@ -86,42 +229,6 @@ function getModelContextWindow(model: string): number {
   return MODEL_CONTEXT_WINDOWS['default'];
 }

-/**
- * Read Codex configuration from ~/.codex/config.toml
- * Returns the model
name and context window override if set
- */
-function readCodexConfig(): { model?: string; contextWindow?: number } {
-  try {
-    const codexHome = process.env.CODEX_HOME || path.join(os.homedir(), '.codex');
-    const configPath = path.join(codexHome, 'config.toml');
-
-    if (!fs.existsSync(configPath)) {
-      return {};
-    }
-
-    const content = fs.readFileSync(configPath, 'utf8');
-    const result: { model?: string; contextWindow?: number } = {};
-
-    // Simple TOML parsing for the fields we care about
-    // model = "gpt-5.1"
-    const modelMatch = content.match(/^\s*model\s*=\s*"([^"]+)"/m);
-    if (modelMatch) {
-      result.model = modelMatch[1];
-    }
-
-    // model_context_window = 128000
-    const windowMatch = content.match(/^\s*model_context_window\s*=\s*(\d+)/m);
-    if (windowMatch) {
-      result.contextWindow = parseInt(windowMatch[1], 10);
-    }
-
-    return result;
-  } catch {
-    // Config file doesn't exist or can't be read - use defaults
-    return {};
-  }
-}
-
 /**
  * Raw message structure from Codex JSON output
  * Based on verified Codex CLI v0.73.0+ output
@@ -183,7 +290,6 @@ export class CodexOutputParser implements AgentOutputParser {
   // Cached context window - read once from config
   private contextWindow: number;
-  private model: string;

   // Track tool name from tool_call to carry over to tool_result
   // (Codex emits tool_call and tool_result as separate item.completed events,
@@ -191,12 +297,20 @@ export class CodexOutputParser implements AgentOutputParser {
   private lastToolName: string | null = null;

   constructor() {
-    // Read config once at initialization
-    const config = readCodexConfig();
-    this.model = config.model || 'gpt-5.2-codex-max';
-
-    // Priority: 1) explicit model_context_window in config, 2) lookup by model name
-    this.contextWindow = config.contextWindow || getModelContextWindow(this.model);
+    const initialConfig = resolveCodexConfig({
+      ...getDefaultCodexConfig(),
+      ...getCachedCodexConfigSnapshot(),
+    });
+    this.contextWindow = initialConfig.contextWindow;
+
+    //
Refresh config asynchronously and cache it for parser instances.
+    void loadCodexConfig().then((freshConfig) => {
+      const resolvedConfig = resolveCodexConfig({
+        ...getDefaultCodexConfig(),
+        ...freshConfig,
+      });
+      this.contextWindow = resolvedConfig.contextWindow;
+    });
   }

   /**
diff --git a/src/main/process-manager/ProcessManager.ts b/src/main/process-manager/ProcessManager.ts
index 43d1fe210..943851e0e 100644
--- a/src/main/process-manager/ProcessManager.ts
+++ b/src/main/process-manager/ProcessManager.ts
@@ -46,14 +46,14 @@ export class ProcessManager extends EventEmitter {
   /**
    * Spawn a new process for a session
    */
-  spawn(config: ProcessConfig): SpawnResult {
+  spawn(config: ProcessConfig): Promise<SpawnResult> {
     const usePty = this.shouldUsePty(config);

     if (usePty) {
       return this.ptySpawner.spawn(config);
-    } else {
-      return this.childProcessSpawner.spawn(config);
     }
+
+    return this.childProcessSpawner.spawn(config);
   }

   private shouldUsePty(config: ProcessConfig): boolean {
diff --git a/src/main/process-manager/spawners/ChildProcessSpawner.ts b/src/main/process-manager/spawners/ChildProcessSpawner.ts
index 813a12d09..1f198b635 100644
--- a/src/main/process-manager/spawners/ChildProcessSpawner.ts
+++ b/src/main/process-manager/spawners/ChildProcessSpawner.ts
@@ -50,7 +50,7 @@ export class ChildProcessSpawner {
   /**
    * Spawn a child process for a session
    */
-  spawn(config: ProcessConfig): SpawnResult {
+  async spawn(config: ProcessConfig): Promise<SpawnResult> {
     const {
       sessionId,
       toolType,
@@ -110,13 +110,9 @@ export class ChildProcessSpawner {
     } else if (hasImages && prompt && imageArgs) {
       // For agents that use file-based image args (like Codex, OpenCode)
       finalArgs = [...args];
-      tempImageFiles = [];
-      for (let i = 0; i < images.length; i++) {
-        const tempPath = saveImageToTempFile(images[i], i);
-        if (tempPath) {
-          tempImageFiles.push(tempPath);
-        }
-      }
+      tempImageFiles = (
+        await Promise.all(images.map((image, i) => saveImageToTempFile(image, i)))
+      ).filter((tempPath): tempPath is
string => tempPath !== null);

       const isResumeWithPromptEmbed =
         capabilities.imageResumeMode === 'prompt-embed' && args.some((a) => a === 'resume');
diff --git a/src/main/process-manager/spawners/PtySpawner.ts b/src/main/process-manager/spawners/PtySpawner.ts
index d9528fb6f..18106cc43 100644
--- a/src/main/process-manager/spawners/PtySpawner.ts
+++ b/src/main/process-manager/spawners/PtySpawner.ts
@@ -20,7 +20,7 @@ export class PtySpawner {
   /**
    * Spawn a PTY process for a session
    */
-  spawn(config: ProcessConfig): SpawnResult {
+  async spawn(config: ProcessConfig): Promise<SpawnResult> {
     const {
       sessionId,
       toolType,
diff --git a/src/main/process-manager/utils/imageUtils.ts b/src/main/process-manager/utils/imageUtils.ts
index 60a458a18..0f680da6a 100644
--- a/src/main/process-manager/utils/imageUtils.ts
+++ b/src/main/process-manager/utils/imageUtils.ts
@@ -1,4 +1,3 @@
-import * as fs from 'fs';
 import * as fsPromises from 'fs/promises';
 import * as path from 'path';
 import * as os from 'os';
@@ -20,7 +19,7 @@ export function parseDataUrl(dataUrl: string): { base64: string; mediaType: stri
  * Save a base64 data URL image to a temp file.
  * Returns the full path to the temp file, or null on failure.
 */
-export function saveImageToTempFile(dataUrl: string, index: number): string | null {
+export async function saveImageToTempFile(dataUrl: string, index: number): Promise<string | null> {
   const parsed = parseDataUrl(dataUrl);
   if (!parsed) {
     logger.warn('[ProcessManager] Failed to parse data URL for temp file', 'ProcessManager');
@@ -33,7 +32,7 @@ export function saveImageToTempFile(dataUrl: string, index: number): string | null
   try {
     const buffer = Buffer.from(parsed.base64, 'base64');
-    fs.writeFileSync(tempPath, buffer);
+    await fsPromises.writeFile(tempPath, buffer);
     logger.debug('[ProcessManager] Saved image to temp file', 'ProcessManager', {
       tempPath,
       size: buffer.length,
diff --git a/src/main/stats/aggregations.ts b/src/main/stats/aggregations.ts
index 68c2ddfdf..5e6cc62df 100644
--- a/src/main/stats/aggregations.ts
+++ b/src/main/stats/aggregations.ts
@@ -41,9 +41,16 @@ function queryByAgent(
   const rows = db
     .prepare(
       `
-      SELECT agent_type, COUNT(*) as count, SUM(duration) as duration
-      FROM query_events
-      WHERE start_time >= ?
+      SELECT agent_type, SUM(query_count) as count, SUM(query_duration) as duration
+      FROM (
+        SELECT start_time,
+               agent_type,
+               COUNT(*) as query_count,
+               SUM(duration) as query_duration
+        FROM query_events
+        WHERE start_time >= ?
+        GROUP BY start_time, agent_type
+      ) AS by_time
       GROUP BY agent_type
     `
     )
@@ -62,9 +69,15 @@ function queryBySource(db: Database.Database, startTime: number): { user: number
   const rows = db
     .prepare(
       `
-      SELECT source, COUNT(*) as count
-      FROM query_events
-      WHERE start_time >= ?
+      SELECT source, SUM(query_count) as count
+      FROM (
+        SELECT start_time,
+               source,
+               COUNT(*) as query_count
+        FROM query_events
+        WHERE start_time >= ?
+        GROUP BY start_time, source
+      ) AS by_time
       GROUP BY source
     `
     )
diff --git a/src/main/stats/migrations.ts b/src/main/stats/migrations.ts
index 77c55a91e..4b25a0cc3 100644
--- a/src/main/stats/migrations.ts
+++ b/src/main/stats/migrations.ts
@@ -25,6 +25,9 @@ import {
   CREATE_SESSION_LIFECYCLE_SQL,
   CREATE_SESSION_LIFECYCLE_INDEXES_SQL,
   CREATE_COMPOUND_INDEXES_SQL,
+  CREATE_AGENT_TIME_INDEX_SQL,
+  CREATE_SOURCE_TIME_INDEX_SQL,
+  CREATE_PROJECT_TIME_INDEX_SQL,
   runStatements,
 } from './schema';
 import { LOG_CONTEXT } from './utils';
@@ -60,6 +63,21 @@ export function getMigrations(): Migration[] {
       description: 'Add compound indexes on query_events for dashboard query performance',
       up: (db) => migrateV4(db),
     },
+    {
+      version: 5,
+      description: 'Add agent-first time index for filtered dashboard query performance',
+      up: (db) => migrateV5(db),
+    },
+    {
+      version: 6,
+      description: 'Add source-first time index for source-filtered time-range queries',
+      up: (db) => migrateV6(db),
+    },
+    {
+      version: 7,
+      description: 'Add project-path-first time index for project-filtered time-range queries',
+      up: (db) => migrateV7(db),
+    },
   ];
 }
@@ -247,3 +265,30 @@ function migrateV4(db: Database.Database): void {

   logger.debug('Added compound indexes on query_events', LOG_CONTEXT);
 }
+
+/**
+ * Migration v5: Add agent-first time index for agent-filtered time-range queries
+ */
+function migrateV5(db: Database.Database): void {
+  db.prepare(CREATE_AGENT_TIME_INDEX_SQL).run();
+
+  logger.debug('Added agent-time compound index on query_events', LOG_CONTEXT);
+}
+
+/**
+ * Migration v6: Add source-first time index for source-filtered time-range queries
+ */
+function migrateV6(db: Database.Database): void {
+  db.prepare(CREATE_SOURCE_TIME_INDEX_SQL).run();
+
+  logger.debug('Added source-time compound index on query_events', LOG_CONTEXT);
+}
+
+/**
+ * Migration v7: Add project-path-first time index for project-filtered time-range queries
+ */
+function migrateV7(db: Database.Database): void
{
+  db.prepare(CREATE_PROJECT_TIME_INDEX_SQL).run();
+
+  logger.debug('Added project-path-time compound index on query_events', LOG_CONTEXT);
+}
diff --git a/src/main/stats/schema.ts b/src/main/stats/schema.ts
index c560aebff..2aa80c9c2 100644
--- a/src/main/stats/schema.ts
+++ b/src/main/stats/schema.ts
@@ -132,9 +132,21 @@ export const CREATE_SESSION_LIFECYCLE_INDEXES_SQL = `
 // ============================================================================

 export const CREATE_COMPOUND_INDEXES_SQL = `
-  CREATE INDEX IF NOT EXISTS idx_query_time_agent ON query_events(start_time, agent_type);
-  CREATE INDEX IF NOT EXISTS idx_query_time_project ON query_events(start_time, project_path);
-  CREATE INDEX IF NOT EXISTS idx_query_time_source ON query_events(start_time, source)
+  CREATE INDEX IF NOT EXISTS idx_query_time_agent ON query_events(start_time, agent_type);
+  CREATE INDEX IF NOT EXISTS idx_query_time_project ON query_events(start_time, project_path);
+  CREATE INDEX IF NOT EXISTS idx_query_time_source ON query_events(start_time, source)
+`;
+
+export const CREATE_AGENT_TIME_INDEX_SQL = `
+  CREATE INDEX IF NOT EXISTS idx_query_agent_time ON query_events(agent_type, start_time)
+`;
+
+export const CREATE_SOURCE_TIME_INDEX_SQL = `
+  CREATE INDEX IF NOT EXISTS idx_query_source_time ON query_events(source, start_time)
+`;
+
+export const CREATE_PROJECT_TIME_INDEX_SQL = `
+  CREATE INDEX IF NOT EXISTS idx_query_project_time ON query_events(project_path, start_time)
 `;

 // ============================================================================
diff --git a/src/main/stats/singleton.ts b/src/main/stats/singleton.ts
index 810888e07..c9026d32e 100644
--- a/src/main/stats/singleton.ts
+++ b/src/main/stats/singleton.ts
@@ -27,9 +27,9 @@ export function getStatsDB(): StatsDB {
 /**
  * Initialize the stats database (call on app ready)
  */
-export function initializeStatsDB(): void {
+export async function initializeStatsDB(): Promise<void> {
   const db = getStatsDB();
-  db.initialize();
+
await db.initialize();
 }

 /**
diff --git a/src/main/stats/stats-db.ts b/src/main/stats/stats-db.ts
index b2cbbbd8e..7b5162ce1 100644
--- a/src/main/stats/stats-db.ts
+++ b/src/main/stats/stats-db.ts
@@ -10,7 +10,15 @@
 import Database from 'better-sqlite3';
 import * as path from 'path';
-import * as fs from 'fs';
+import {
+  copyFileSync,
+  existsSync,
+  mkdirSync,
+  promises as fsp,
+  readdirSync,
+  statSync,
+  unlinkSync,
+} from 'fs';
 import { app } from 'electron';
 import { logger } from '../utils/logger';
 import type {
@@ -92,21 +100,21 @@ export class StatsDB {
    * 2. Delete the corrupted file and any associated WAL/SHM files
    * 3. Create a fresh database
    */
-  initialize(): void {
+  async initialize(): Promise<void> {
     if (this.initialized) {
-      return;
+      return Promise.resolve();
     }

     try {
       const dir = path.dirname(this.dbPath);
-      if (!fs.existsSync(dir)) {
-        fs.mkdirSync(dir, { recursive: true });
+      if (!existsSync(dir)) {
+        mkdirSync(dir, { recursive: true });
       }

-      const dbExists = fs.existsSync(this.dbPath);
+      const dbExists = await this.pathExists(this.dbPath);

       if (dbExists) {
-        const db = this.openWithCorruptionHandling();
+        const db = await this.openWithCorruptionHandling();
         if (!db) {
           throw new Error('Failed to open or recover database');
         }
@@ -128,10 +136,10 @@ export class StatsDB {
       logger.info(`Stats database initialized at ${this.dbPath}`, LOG_CONTEXT);

       // Create daily backup (keeps last 7 days)
-      this.createDailyBackupIfNeeded();
+      await this.createDailyBackupIfNeeded();

       // Schedule VACUUM to run weekly instead of on every startup
-      this.vacuumIfNeededWeekly();
+      await this.vacuumIfNeededWeekly();
     } catch (error) {
       logger.error(`Failed to initialize stats database: ${error}`, LOG_CONTEXT);
       throw error;
@@ -173,9 +181,9 @@ export class StatsDB {

   /**
    * Get the database file size in bytes.
   */
-  getDatabaseSize(): number {
+  async getDatabaseSize(): Promise<number> {
     try {
-      const stats = fs.statSync(this.dbPath);
+      const stats = statSync(this.dbPath);
       return stats.size;
     } catch {
       return 0;
@@ -189,13 +197,13 @@ export class StatsDB {
   /**
    * Run VACUUM on the database to reclaim unused space and optimize structure.
    */
-  vacuum(): { success: boolean; bytesFreed: number; error?: string } {
+  async vacuum(): Promise<{ success: boolean; bytesFreed: number; error?: string }> {
     if (!this.db) {
       return { success: false, bytesFreed: 0, error: 'Database not initialized' };
     }

     try {
-      const sizeBefore = this.getDatabaseSize();
+      const sizeBefore = await this.getDatabaseSize();
       logger.info(
         `Starting VACUUM (current size: ${(sizeBefore / 1024 / 1024).toFixed(2)} MB)`,
         LOG_CONTEXT
@@ -203,7 +211,7 @@ export class StatsDB {

       this.db.prepare('VACUUM').run();

-      const sizeAfter = this.getDatabaseSize();
+      const sizeAfter = await this.getDatabaseSize();
       const bytesFreed = sizeBefore - sizeAfter;

       logger.info(
@@ -224,12 +232,12 @@ export class StatsDB {
    *
    * @param thresholdBytes - Size threshold in bytes (default: 100MB)
    */
-  vacuumIfNeeded(thresholdBytes: number = 100 * 1024 * 1024): {
+  async vacuumIfNeeded(thresholdBytes: number = 100 * 1024 * 1024): Promise<{
     vacuumed: boolean;
     databaseSize: number;
     result?: { success: boolean; bytesFreed: number; error?: string };
-  } {
-    const databaseSize = this.getDatabaseSize();
+  }> {
+    const databaseSize = await this.getDatabaseSize();

     if (databaseSize < thresholdBytes) {
       logger.debug(
@@ -244,7 +252,7 @@ export class StatsDB {
       LOG_CONTEXT
     );

-    const result = this.vacuum();
+    const result = await this.vacuum();
     return { vacuumed: true, databaseSize, result };
   }

@@ -256,7 +264,7 @@ export class StatsDB {
    *
    * @param intervalMs - Minimum time between vacuums (default: 7 days)
    */
-  private vacuumIfNeededWeekly(intervalMs: number = 7 * 24 * 60 * 60 * 1000): void {
+  private async vacuumIfNeededWeekly(intervalMs: number = 7 * 24 * 60 * 60 *
1000): Promise<void> {
     try {
       // Read last vacuum timestamp from _meta table
       const row = this.database
@@ -279,7 +287,7 @@ export class StatsDB {
       }

       // Run VACUUM if database is large enough
-      const result = this.vacuumIfNeeded();
+      const result = await this.vacuumIfNeeded();

       if (result.vacuumed) {
         // Update timestamp in _meta table
@@ -294,6 +302,18 @@ export class StatsDB {
     }
   }

+  /**
+   * Check if a file or directory exists without blocking.
+   */
+  private async pathExists(filePath: string): Promise<boolean> {
+    try {
+      await fsp.access(filePath);
+      return true;
+    } catch {
+      return false;
+    }
+  }
+
   // ============================================================================
   // Integrity & Corruption Handling
   // ============================================================================
@@ -325,31 +345,31 @@ export class StatsDB {
    * Checkpoint WAL to flush pending writes into the main database file,
    * then copy the database file to the destination path.
    *
-   * Plain fs.copyFileSync on a WAL-mode database can produce an incomplete
+   * Plain file copy on a WAL-mode database can produce an incomplete
    * copy because committed data may still reside in the -wal file.
    * PRAGMA wal_checkpoint(TRUNCATE) forces all WAL content into the main
    * file and resets the WAL, making the .db file self-contained.
    */
-  private safeBackupCopy(destPath: string): void {
+  private async safeBackupCopy(destPath: string): Promise<void> {
     if (this.db) {
       this.db.pragma('wal_checkpoint(TRUNCATE)');
     }
-    fs.copyFileSync(this.dbPath, destPath);
+    await fsp.copyFile(this.dbPath, destPath);
   }

   /**
    * Create a backup of the current database file.
*/ - backupDatabase(): BackupResult { + async backupDatabase(): Promise { try { - if (!fs.existsSync(this.dbPath)) { + if (!(await this.pathExists(this.dbPath))) { return { success: false, error: 'Database file does not exist' }; } const timestamp = Date.now(); const backupPath = `${this.dbPath}.backup.${timestamp}`; - this.safeBackupCopy(backupPath); + await this.safeBackupCopy(backupPath); logger.info(`Created database backup at ${backupPath}`, LOG_CONTEXT); return { success: true, backupPath }; @@ -368,9 +388,9 @@ export class StatsDB { * Create a daily backup if one hasn't been created today. * Automatically rotates old backups to keep only the last 7 days. */ - private createDailyBackupIfNeeded(): void { + private async createDailyBackupIfNeeded(): Promise { try { - if (!fs.existsSync(this.dbPath)) { + if (!(await this.pathExists(this.dbPath))) { return; } @@ -378,17 +398,17 @@ export class StatsDB { const dailyBackupPath = `${this.dbPath}.daily.${today}`; // Check if today's backup already exists - if (fs.existsSync(dailyBackupPath)) { + if (await this.pathExists(dailyBackupPath)) { logger.debug(`Daily backup already exists for ${today}`, LOG_CONTEXT); return; } // Create today's backup (checkpoint WAL first so the copy is self-contained) - this.safeBackupCopy(dailyBackupPath); + await this.safeBackupCopy(dailyBackupPath); logger.info(`Created daily backup: ${dailyBackupPath}`, LOG_CONTEXT); // Rotate old backups (keep last 7 days) - this.rotateOldBackups(7); + await this.rotateOldBackups(7); } catch (error) { logger.warn(`Failed to create daily backup: ${error}`, LOG_CONTEXT); } @@ -397,11 +417,11 @@ export class StatsDB { /** * Remove daily backups older than the specified number of days. 
*/ - private rotateOldBackups(keepDays: number): void { + private async rotateOldBackups(keepDays: number): Promise { try { const dir = path.dirname(this.dbPath); const baseName = path.basename(this.dbPath).replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); - const files = fs.readdirSync(dir); + const files = await fsp.readdir(dir); const cutoffDate = new Date(); cutoffDate.setDate(cutoffDate.getDate() - keepDays); @@ -415,7 +435,7 @@ export class StatsDB { const backupDate = dailyMatch[1]; if (backupDate < cutoffStr) { const fullPath = path.join(dir, file); - fs.unlinkSync(fullPath); + await fsp.unlink(fullPath); removedCount++; logger.debug(`Removed old daily backup: ${file}`, LOG_CONTEXT); } @@ -437,7 +457,7 @@ export class StatsDB { try { const dir = path.dirname(this.dbPath); const baseName = path.basename(this.dbPath).replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); - const files = fs.readdirSync(dir); + const files = readdirSync(dir); const backups: Array<{ path: string; date: string; size: number }> = []; for (const file of files) { @@ -445,7 +465,7 @@ export class StatsDB { const dailyMatch = file.match(new RegExp(`^${baseName}\\.daily\\.(\\d{4}-\\d{2}-\\d{2})$`)); if (dailyMatch) { const fullPath = path.join(dir, file); - const stats = fs.statSync(fullPath); + const stats = statSync(fullPath); backups.push({ path: fullPath, date: dailyMatch[1], @@ -457,7 +477,7 @@ export class StatsDB { const timestampMatch = file.match(new RegExp(`^${baseName}\\.backup\\.(\\d+)$`)); if (timestampMatch) { const fullPath = path.join(dir, file); - const stats = fs.statSync(fullPath); + const stats = statSync(fullPath); const timestamp = parseInt(timestampMatch[1], 10); const date = new Date(timestamp).toISOString().split('T')[0]; backups.push({ @@ -482,7 +502,7 @@ export class StatsDB { */ restoreFromBackup(backupPath: string): boolean { try { - if (!fs.existsSync(backupPath)) { + if (!existsSync(backupPath)) { logger.error(`Backup file does not exist: ${backupPath}`, LOG_CONTEXT); return 
false; } @@ -501,16 +521,20 @@ export class StatsDB { // Remove WAL and SHM files if they exist const walPath = `${this.dbPath}-wal`; const shmPath = `${this.dbPath}-shm`; - if (fs.existsSync(walPath)) fs.unlinkSync(walPath); - if (fs.existsSync(shmPath)) fs.unlinkSync(shmPath); + if (existsSync(walPath)) { + unlinkSync(walPath); + } + if (existsSync(shmPath)) { + unlinkSync(shmPath); + } // Remove current database if it exists - if (fs.existsSync(this.dbPath)) { - fs.unlinkSync(this.dbPath); + if (existsSync(this.dbPath)) { + unlinkSync(this.dbPath); } // Copy backup to main database path - fs.copyFileSync(backupPath, this.dbPath); + copyFileSync(backupPath, this.dbPath); logger.info(`Restored database from backup: ${backupPath}`, LOG_CONTEXT); return true; @@ -524,7 +548,7 @@ export class StatsDB { * Handle a corrupted database by attempting to restore from the latest backup. * If no backup is available, creates a fresh database. */ - private recoverFromCorruption(): CorruptionRecoveryResult { + private async recoverFromCorruption(): Promise { logger.warn('Attempting to recover from database corruption...', LOG_CONTEXT); try { @@ -540,26 +564,30 @@ export class StatsDB { } // First, backup the corrupted database for forensics - if (fs.existsSync(this.dbPath)) { + if (await this.pathExists(this.dbPath)) { const timestamp = Date.now(); const corruptedBackupPath = `${this.dbPath}.corrupted.${timestamp}`; try { - fs.renameSync(this.dbPath, corruptedBackupPath); + await fsp.rename(this.dbPath, corruptedBackupPath); logger.warn(`Corrupted database moved to: ${corruptedBackupPath}`, LOG_CONTEXT); } catch { logger.error('Failed to backup corrupted database', LOG_CONTEXT); - fs.unlinkSync(this.dbPath); + await fsp.unlink(this.dbPath); } } // Delete WAL and SHM files const walPath = `${this.dbPath}-wal`; const shmPath = `${this.dbPath}-shm`; - if (fs.existsSync(walPath)) fs.unlinkSync(walPath); - if (fs.existsSync(shmPath)) fs.unlinkSync(shmPath); + if (await 
this.pathExists(walPath)) { + await fsp.unlink(walPath); + } + if (await this.pathExists(shmPath)) { + await fsp.unlink(shmPath); + } // Try to restore from the latest backup - const backups = this.getAvailableBackups(); + const backups = await this.getAvailableBackups(); for (const backup of backups) { logger.info( `Attempting to restore from backup: ${backup.path} (${backup.date})`, @@ -568,7 +596,7 @@ export class StatsDB { // Remove stale WAL/SHM sidecar files from backup before validating. // These leftovers from previous sessions can cause false integrity failures. - this.removeStaleWalFiles(backup.path); + await this.removeStaleWalFiles(backup.path); // Try to validate the backup before restoring try { @@ -578,7 +606,7 @@ export class StatsDB { if (result.length === 1 && result[0].integrity_check === 'ok') { // Backup is valid, restore it - if (this.restoreFromBackup(backup.path)) { + if (await this.restoreFromBackup(backup.path)) { logger.info( `Successfully restored database from backup: ${backup.date}`, LOG_CONTEXT @@ -620,16 +648,16 @@ export class StatsDB { * Remove stale WAL and SHM sidecar files for a database path. * These can cause false corruption detection when left over from crashes. 
*/ - private removeStaleWalFiles(dbFilePath: string): void { + private async removeStaleWalFiles(dbFilePath: string): Promise { const walPath = `${dbFilePath}-wal`; const shmPath = `${dbFilePath}-shm`; try { - if (fs.existsSync(walPath)) { - fs.unlinkSync(walPath); + if (await this.pathExists(walPath)) { + await fsp.unlink(walPath); logger.debug(`Removed stale WAL file: ${walPath}`, LOG_CONTEXT); } - if (fs.existsSync(shmPath)) { - fs.unlinkSync(shmPath); + if (await this.pathExists(shmPath)) { + await fsp.unlink(shmPath); logger.debug(`Removed stale SHM file: ${shmPath}`, LOG_CONTEXT); } } catch (error) { @@ -643,9 +671,9 @@ export class StatsDB { * Removes stale WAL/SHM sidecar files before opening to prevent false * corruption detection caused by leftover files from previous crashes. */ - private openWithCorruptionHandling(): Database.Database | null { + private async openWithCorruptionHandling(): Promise { // Remove stale WAL/SHM files that may cause false corruption detection - this.removeStaleWalFiles(this.dbPath); + await this.removeStaleWalFiles(this.dbPath); try { const db = new Database(this.dbPath); @@ -663,14 +691,14 @@ export class StatsDB { logger.error(`Failed to open database: ${error}`, LOG_CONTEXT); } - const recoveryResult = this.recoverFromCorruption(); + const recoveryResult = await this.recoverFromCorruption(); if (!recoveryResult.recovered) { logger.error('Database corruption recovery failed, creating fresh database', LOG_CONTEXT); } // Always ensure a valid database exists after recovery attempt try { - if (!fs.existsSync(this.dbPath)) { + if (!(await this.pathExists(this.dbPath))) { // No file exists (recovery may not have restored a backup) โ€” create fresh const db = new Database(this.dbPath); logger.info('Fresh database created after corruption recovery', LOG_CONTEXT); diff --git a/src/main/utils/context-groomer.ts b/src/main/utils/context-groomer.ts index e5bc6d06d..671ffdb8e 100644 --- a/src/main/utils/context-groomer.ts +++ 
b/src/main/utils/context-groomer.ts @@ -43,7 +43,10 @@ export interface GroomingProcessManager { sessionCustomPath?: string; sessionCustomArgs?: string; sessionCustomEnvVars?: Record; - }): { pid: number; success?: boolean } | null; + }): + | { pid: number; success?: boolean } + | null + | Promise<{ pid: number; success?: boolean } | null>; on(event: string, handler: (...args: unknown[]) => void): void; off(event: string, handler: (...args: unknown[]) => void): void; kill(sessionId: string): void; @@ -317,59 +320,76 @@ export async function groomContext( processManager.on('exit', onExit); processManager.on('agent-error', onError); - // Spawn the process in batch mode - const spawnResult = processManager.spawn({ - sessionId: groomerSessionId, - toolType: agentType, - cwd: projectRoot, - command: agent.command, - args: finalArgs, - prompt: prompt, // Triggers batch mode (no PTY) - promptArgs: agent.promptArgs, // For agents using flag-based prompt (e.g., OpenCode -p) - noPromptSeparator: agent.noPromptSeparator, - // Pass SSH config for remote execution support - sessionSshRemoteConfig, - sessionCustomPath, - sessionCustomArgs, - sessionCustomEnvVars, - }); - - if (!spawnResult || spawnResult.pid <= 0) { - cleanup(); - reject(new Error(`Failed to spawn grooming process for ${agentType}`)); - return; - } - - logger.debug('Spawned grooming batch process', LOG_CONTEXT, { - groomerSessionId, - pid: spawnResult.pid, - }); - - // Set up idle check - idleCheckInterval = setInterval(() => { - const idleTime = Date.now() - lastDataTime; - if (idleTime > IDLE_TIMEOUT_MS && responseBuffer.length >= MIN_RESPONSE_LENGTH) { - finishWithResponse('idle timeout with content'); - } - }, 1000); + // Spawn the process in batch mode (supports sync and async process managers) + void Promise.resolve( + processManager.spawn({ + sessionId: groomerSessionId, + toolType: agentType, + cwd: projectRoot, + command: agent.command, + args: finalArgs, + prompt: prompt, // Triggers batch mode (no 
PTY) + promptArgs: agent.promptArgs, // For agents using flag-based prompt (e.g., OpenCode -p) + noPromptSeparator: agent.noPromptSeparator, + // Pass SSH config for remote execution support + sessionSshRemoteConfig, + sessionCustomPath, + sessionCustomArgs, + sessionCustomEnvVars, + }) + ) + .then((spawnResult) => { + if (!spawnResult || spawnResult.pid <= 0) { + cleanup(); + if (!resolved) { + resolved = true; + reject(new Error(`Failed to spawn grooming process for ${agentType}`)); + } + return; + } - // Overall timeout - setTimeout(() => { - if (!resolved) { - logger.warn('Grooming timeout', LOG_CONTEXT, { + logger.debug('Spawned grooming batch process', LOG_CONTEXT, { groomerSessionId, - responseLength: responseBuffer.length, + pid: spawnResult.pid, }); - if (responseBuffer.length > 0) { - finishWithResponse('overall timeout with content'); - } else { - cleanup(); + // Set up idle check + idleCheckInterval = setInterval(() => { + const idleTime = Date.now() - lastDataTime; + if (idleTime > IDLE_TIMEOUT_MS && responseBuffer.length >= MIN_RESPONSE_LENGTH) { + finishWithResponse('idle timeout with content'); + } + }, 1000); + + // Overall timeout + setTimeout(() => { + if (!resolved) { + logger.warn('Grooming timeout', LOG_CONTEXT, { + groomerSessionId, + responseLength: responseBuffer.length, + }); + + if (responseBuffer.length > 0) { + finishWithResponse('overall timeout with content'); + } else { + cleanup(); + resolved = true; + reject(new Error('Grooming timed out with no response')); + } + } + }, timeoutMs); + }) + .catch((error) => { + cleanup(); + if (!resolved) { resolved = true; - reject(new Error('Grooming timed out with no response')); + reject( + new Error( + `Failed to spawn grooming process for ${agentType}: ${error instanceof Error ? 
error.message : String(error)}` + ) + ); } - } - }, timeoutMs); + }); }); } diff --git a/src/main/utils/wslDetector.ts b/src/main/utils/wslDetector.ts index 6442b3c91..fda39a816 100644 --- a/src/main/utils/wslDetector.ts +++ b/src/main/utils/wslDetector.ts @@ -1,4 +1,4 @@ -import * as fs from 'fs'; +import * as fs from 'fs/promises'; import { logger } from './logger'; /** @@ -15,7 +15,7 @@ let wslDetectionCache: boolean | null = null; * Detect if the current environment is WSL (Windows Subsystem for Linux). * Result is cached after first call. */ -export function isWsl(): boolean { +export async function isWsl(): Promise { if (wslDetectionCache !== null) { return wslDetectionCache; } @@ -26,11 +26,10 @@ export function isWsl(): boolean { } try { - if (fs.existsSync('/proc/version')) { - const version = fs.readFileSync('/proc/version', 'utf8').toLowerCase(); - wslDetectionCache = version.includes('microsoft') || version.includes('wsl'); - return wslDetectionCache; - } + const version = await fs.readFile('/proc/version', 'utf8'); + wslDetectionCache = + version.toLowerCase().includes('microsoft') || version.toLowerCase().includes('wsl'); + return wslDetectionCache; } catch { // Ignore read errors } @@ -54,8 +53,8 @@ export function isWindowsMountPath(filepath: string): boolean { * @param cwd - The current working directory to check * @returns true if running from a problematic Windows mount path */ -export function checkWslEnvironment(cwd: string): boolean { - if (!isWsl()) { +export async function checkWslEnvironment(cwd: string): Promise { + if (!(await isWsl())) { return false; } diff --git a/src/main/web-server/WebServer.ts b/src/main/web-server/WebServer.ts index 52c0b8122..04e233736 100644 --- a/src/main/web-server/WebServer.ts +++ b/src/main/web-server/WebServer.ts @@ -388,7 +388,7 @@ export class WebServer { getTheme: () => this.callbackRegistry.getTheme(), writeToSession: (sessionId, data) => this.callbackRegistry.writeToSession(sessionId, data), 
interruptSession: async (sessionId) => this.callbackRegistry.interruptSession(sessionId), - getHistory: (projectPath, sessionId) => + getHistory: async (projectPath, sessionId) => this.callbackRegistry.getHistory(projectPath, sessionId), getLiveSessionInfo: (sessionId) => this.liveSessionManager.getLiveSessionInfo(sessionId), isSessionLive: (sessionId) => this.liveSessionManager.isSessionLive(sessionId), diff --git a/src/main/web-server/routes/apiRoutes.ts b/src/main/web-server/routes/apiRoutes.ts index ee720781b..7a7f938e4 100644 --- a/src/main/web-server/routes/apiRoutes.ts +++ b/src/main/web-server/routes/apiRoutes.ts @@ -42,7 +42,7 @@ export interface ApiRouteCallbacks { getTheme: () => Theme | null; writeToSession: (sessionId: string, data: string) => boolean; interruptSession: (sessionId: string) => Promise; - getHistory: (projectPath?: string, sessionId?: string) => HistoryEntry[]; + getHistory: (projectPath?: string, sessionId?: string) => Promise; getLiveSessionInfo: (sessionId: string) => LiveSessionInfo | undefined; isSessionLive: (sessionId: string) => boolean; } @@ -324,7 +324,7 @@ export class ApiRoutes { }; try { - const entries = this.callbacks.getHistory(projectPath, sessionId); + const entries = await this.callbacks.getHistory(projectPath, sessionId); return { entries, count: entries.length, diff --git a/src/main/web-server/types.ts b/src/main/web-server/types.ts index 885849923..823ee5775 100644 --- a/src/main/web-server/types.ts +++ b/src/main/web-server/types.ts @@ -305,7 +305,7 @@ export type GetCustomCommandsCallback = () => CustomAICommand[]; export type GetHistoryCallback = ( projectPath?: string, sessionId?: string -) => import('../../shared/types').HistoryEntry[]; +) => Promise; /** * Callback to get all connected web clients. 
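The StatsDB changes above all follow the same migration: synchronous `fs.existsSync`/`fs.statSync` calls are replaced with `fs/promises` equivalents, using a small `pathExists` helper because `fs/promises` deliberately omits an `exists` function. A standalone sketch of that pattern (helper and demo names are illustrative, not from the PR):

```typescript
import { promises as fsp } from 'fs';
import * as os from 'os';
import * as path from 'path';

// Non-blocking existence check: fsp.access resolves when the path is
// reachable and rejects otherwise, so the boolean comes from catching
// the rejection rather than inspecting a return value.
async function pathExists(filePath: string): Promise<boolean> {
  try {
    await fsp.access(filePath);
    return true;
  } catch {
    return false;
  }
}

async function demo(): Promise<void> {
  const tmpFile = path.join(os.tmpdir(), `path-exists-demo-${process.pid}`);
  await fsp.writeFile(tmpFile, 'data');
  console.log(await pathExists(tmpFile)); // true
  await fsp.unlink(tmpFile);
  console.log(await pathExists(tmpFile)); // false
}

demo();
```

`fsp.access` also accepts a mode argument (e.g. `fs.constants.W_OK`) when writability rather than bare existence is the question.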
diff --git a/src/main/web-server/web-server-factory.ts b/src/main/web-server/web-server-factory.ts
index b9d9b44f0..6fcb9e709 100644
--- a/src/main/web-server/web-server-factory.ts
+++ b/src/main/web-server/web-server-factory.ts
@@ -191,12 +191,12 @@ export function createWebServerFactory(deps: WebServerFactoryDependencies) {

   // Set up callback for web server to fetch history entries
   // Uses HistoryManager for per-session storage
-  server.setGetHistoryCallback((projectPath?: string, sessionId?: string) => {
+  server.setGetHistoryCallback(async (projectPath?: string, sessionId?: string) => {
     const historyManager = getHistoryManager();

     if (sessionId) {
       // Get entries for specific session
-      const entries = historyManager.getEntries(sessionId);
+      const entries = await historyManager.getEntries(sessionId);
       // Sort by timestamp descending
       entries.sort((a, b) => b.timestamp - a.timestamp);
       return entries;
@@ -204,11 +204,11 @@ export function createWebServerFactory(deps: WebServerFactoryDependencies) {

     if (projectPath) {
       // Get all entries for sessions in this project
-      return historyManager.getEntriesByProjectPath(projectPath);
+      return await historyManager.getEntriesByProjectPath(projectPath);
     }

     // Return all entries (for global view)
-    return historyManager.getAllEntries();
+    return await historyManager.getAllEntries();
   });

   // Set up callback for web server to write commands to sessions
diff --git a/src/renderer/components/AICommandsPanel.tsx b/src/renderer/components/AICommandsPanel.tsx
index 55ff97343..5d63747c6 100644
--- a/src/renderer/components/AICommandsPanel.tsx
+++ b/src/renderer/components/AICommandsPanel.tsx
@@ -1,4 +1,4 @@
-import { useState, useRef } from 'react';
+import { useMemo, useRef, useState } from 'react';
 import {
   Plus,
   Trash2,
@@ -154,6 +154,59 @@ export function AICommandsPanel({
     setIsCreating(false);
   };

+  const sortedCommands = useMemo(
+    () => [...customAICommands].sort((a, b) => a.command.localeCompare(b.command)),
+    [customAICommands]
+  );
+
+  const commandStyles = useMemo(
+    () => ({
+      textDim: { color: theme.colors.textDim },
+      textAccent: { color: theme.colors.accent },
+      addButton: {
+        backgroundColor: theme.colors.accent,
+        color: theme.colors.accentForeground,
+      },
+      mainPanel: {
+        backgroundColor: theme.colors.bgMain,
+        borderColor: theme.colors.border,
+      },
+      borderOnly: { borderColor: theme.colors.border },
+      fieldBase: {
+        borderColor: theme.colors.border,
+        color: theme.colors.textMain,
+      },
+      actionButton: {
+        backgroundColor: theme.colors.bgActivity,
+        color: theme.colors.textMain,
+        border: `1px solid ${theme.colors.border}`,
+      },
+      successButton: {
+        backgroundColor: theme.colors.success,
+        color: '#000000',
+      },
+      commandText: { color: theme.colors.accent },
+      builtInBadge: {
+        backgroundColor: theme.colors.bgActivity,
+        color: theme.colors.textDim,
+      },
+      promptPreview: {
+        backgroundColor: theme.colors.bgActivity,
+        color: theme.colors.textMain,
+      },
+      variableCode: {
+        backgroundColor: theme.colors.bgActivity,
+        color: theme.colors.accent,
+      },
+      createPanel: {
+        backgroundColor: theme.colors.bgMain,
+        borderColor: theme.colors.accent,
+      },
+      errorText: { color: theme.colors.error },
+    }),
+    [theme]
+  );
+
   return (
@@ -161,36 +214,33 @@ export function AICommandsPanel({
[The remaining AICommandsPanel.tsx hunks are unrecoverable: the JSX markup was stripped during extraction, leaving only text fragments. The surviving fragments — the "Custom AI Commands" header, the note "Slash commands available in AI terminal mode. Built-in commands can be edited but not deleted.", the template-variables help text "Use these variables in your command prompts. They will be replaced with actual values at runtime.", edit fields for the command and description, and a Cancel button — show repeated inline style={{ ...theme.colors... }} objects being replaced with the memoized commandStyles entries defined above.]
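The `GroomingProcessManager.spawn` signature in the context-groomer.ts hunks above now admits both a synchronous result and a `Promise`, and `groomContext` funnels the return value through `Promise.resolve` so one code path handles both. A minimal sketch of that normalization pattern (the `awaitSpawn` name and its failure rule are illustrative, not from the PR):

```typescript
// A spawn() may return its result directly or as a Promise, as in the
// GroomingProcessManager interface; Promise.resolve flattens both shapes.
type SpawnResult = { pid: number; success?: boolean } | null;

function awaitSpawn(
  spawn: () => SpawnResult | Promise<SpawnResult>
): Promise<number> {
  return Promise.resolve(spawn()).then((result) => {
    // Treat a null result or non-positive pid as a spawn failure.
    if (!result || result.pid <= 0) {
      throw new Error('Failed to spawn process');
    }
    return result.pid;
  });
}

async function demo(): Promise<void> {
  const syncPid = await awaitSpawn(() => ({ pid: 42 }));       // sync manager
  const asyncPid = await awaitSpawn(async () => ({ pid: 7 })); // async manager
  console.log(syncPid, asyncPid); // 42 7
}

demo();
```

Because failures surface as rejections on the single promise chain, the `.catch` in the diff can report both a synchronous throw inside `spawn()` and an async rejection through the same error path.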