fix: handle milliseconds in ISO 8601 rate limit reset timestamps#1846
fix: handle milliseconds in ISO 8601 rate limit reset timestamps#1846qvidal01 wants to merge 13 commits intoAndyMik90:developfrom
Conversation
Walks users through converting ideation, roadmap, or insights output into batch specs interactively. Works from any project directory. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…rehype deps - Add ac-phase.py: interactive phased spec executor with category-based grouping - Sanitize git branch names in worktree manager to remove invalid characters - Fix terminal size detection in batch-from-discovery for non-TTY environments - Install rehype-raw and rehype-sanitize dependencies - Add desktop.env, auto-claude-desktop.sh, images/ to .gitignore Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds run_preflight() calls at the start of all CLI/runner entry points for token refresh, ollama checks, and stale lock cleanup. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The regex for extracting rate limit reset times didn't account for optional milliseconds (.000) in ISO strings, causing the trailing Z to be dropped and dates parsed as local time instead of UTC. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
|
|
|
Note Reviews pausedIt looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 WalkthroughWalkthroughAdds a preflight validation hook invoked by CLI runners, sanitizes git branch names, updates .gitignore, refines batch spec slugging, broadens a frontend timestamp regex, and introduces three interactive orchestration scripts for phased spec execution and batch management. Changes
Sequence Diagram(s)sequenceDiagram
participant CLI as CLI Entry Point
participant Hook as preflight_hook.run_preflight()
participant Env as .env loader
participant Token as Token Validator/Auto-fix
participant Ollama as Ollama API
participant FS as Venv/Locks/Node/Git checks
participant App as Main Application
CLI->>Hook: run_preflight()
Hook->>Env: load .env
Env-->>Hook: env values
Hook->>Token: validate OAuth token
alt token valid
Token-->>Hook: OK
else token missing/expired
Token->>Token: attempt auto-fix (read ~/.claude/.credentials.json)
Token-->>Hook: fixed / failed
end
Hook->>Ollama: query models (non-fatal)
Ollama-->>Hook: models / unreachable
Hook->>FS: check venv imports, node version, stale locks, git status
FS-->>Hook: report / auto-fix results
Hook-->>CLI: success / failure (exit on critical failure)
CLI->>App: continue if success
Estimated code review effort🎯 4 (Complex) | ⏱️ ~60 minutes Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 2❌ Failed checks (2 warnings)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. ✨ Finishing touches🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
Summary of ChangesHello @qvidal01, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the robustness and usability of the Auto-Claude system by introducing comprehensive preflight checks and self-healing mechanisms for its backend. It also expands the CLI toolkit with new interactive scripts designed to streamline the phased execution of specifications and facilitate the batch creation of tasks directly from discovery outputs. Additionally, a critical bug related to timestamp parsing in GitHub error handling has been addressed, improving the accuracy of rate limit reset time calculations. Highlights
Changelog
Activity
Using Gemini Code AssistThe full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips. Invoking Gemini You can request assistance from Gemini at any point by creating a comment using either
Customization To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a Limitations & Feedback Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here. You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension. Footnotes
|
There was a problem hiding this comment.
Code Review
This pull request introduces a fix for parsing ISO 8601 timestamps with milliseconds, which is a welcome correction. However, the bulk of the changes involve adding significant new tooling, including preflight checks and interactive scripts for managing specs and batches. While this new functionality looks very useful, the PR title and description should be updated to reflect the full scope of these changes.
My review focuses on the new Python scripts. I've identified several instances of hardcoded paths and IP addresses that will impact portability for other developers. I've also suggested replacing curl subprocess calls with native Python HTTP requests for better cross-platform compatibility. Additionally, there's a minor style issue with an import statement.
Please review the specific comments for details and suggestions.
| def find_auto_claude_dir(): | ||
| """Try to find the Auto-Claude installation.""" | ||
| candidates = [ | ||
| Path("/aidata/projects/Auto-Claude"), |
There was a problem hiding this comment.
The hardcoded path Path("/aidata/projects/Auto-Claude") is not portable and will cause this script to fail for other developers. This path should be configurable, for example, via a command-line argument or an environment variable. Since you already have an --auto-claude-dir argument, perhaps this path can be derived from that.
| print(f"\n{BLUE}[2/6] Checking Ollama connectivity{RESET}") | ||
|
|
||
| # Read URL from .env | ||
| ollama_url = "http://192.168.0.234:11434" |
There was a problem hiding this comment.
This hardcoded IP address http://192.168.0.234:11434 is not portable and will likely fail for other developers. A more sensible default would be http://localhost:11434, which is the standard for running Ollama locally. Alternatively, you could make the OLLAMA_BASE_URL a required environment variable and not provide a default here.
| ollama_url = "http://192.168.0.234:11434" | |
| ollama_url = "http://localhost:11434" |
| result = subprocess.run( | ||
| ["curl", "-s", "-m", "5", f"{ollama_url}/api/tags"], | ||
| capture_output=True, text=True, timeout=10 | ||
| ) |
There was a problem hiding this comment.
Using subprocess to call curl introduces a dependency on an external command-line tool that might not be available on all systems, especially on Windows. It's more robust and portable to use a Python HTTP library like the built-in urllib.request or a third-party library like requests (if it's already a dependency) to make this API call.
| # Check common project locations | ||
| project_dirs = [ | ||
| Path.home() / "projects", | ||
| Path("/aidata/projects"), |
|
|
||
| def _check_ollama(env: dict) -> bool: | ||
| """Check Ollama connectivity. Warns but doesn't fail (not always needed).""" | ||
| ollama_url = env.get("OLLAMA_BASE_URL", "http://192.168.0.234:11434") |
There was a problem hiding this comment.
Similar to preflight.py, this hardcoded IP address http://192.168.0.234:11434 is not portable. Please consider changing the default to http://localhost:11434.
| ollama_url = env.get("OLLAMA_BASE_URL", "http://192.168.0.234:11434") | |
| ollama_url = env.get("OLLAMA_BASE_URL", "http://localhost:11434") |
| result = subprocess.run( | ||
| ["curl", "-s", "-m", "3", f"{ollama_url}/api/tags"], | ||
| capture_output=True, text=True, timeout=5 | ||
| ) |
|
|
||
| def _check_stale_locks() -> None: | ||
| """Remove stale .lock files from spec directories.""" | ||
| project_dirs = [Path.home() / "projects", Path("/aidata/projects")] |
| for lock_file in pdir.glob("*/.auto-claude/specs/*/.lock"): | ||
| try: | ||
| # Only remove locks older than 1 hour | ||
| import time |
| RESET = "\033[0m" | ||
|
|
||
| def ok(msg): | ||
| print(f" {GREEN}✓{RESET} {msg}") |
Check failure
Code scanning / CodeQL
Clear-text logging of sensitive information High
| content = content.replace(old_token, new_token) | ||
| else: | ||
| content = f"CLAUDE_CODE_OAUTH_TOKEN={new_token}\n" + content | ||
| ENV_FILE.write_text(content) |
Check failure
Code scanning / CodeQL
Clear-text storage of sensitive information High
|
|
||
| ENV_FILE.write_text("\n".join(lines)) | ||
| os.environ["CLAUDE_CODE_OAUTH_TOKEN"] = new_token | ||
| print(f"[preflight] Token auto-fixed from ~/.claude/.credentials.json ({new_token[:20]}...)") |
Check failure
Code scanning / CodeQL
Clear-text logging of sensitive information High
| Usage: python preflight.py [--fix] | ||
| """ | ||
| import json | ||
| import os |
Check notice
Code scanning / CodeQL
Unused import Note
| import os | ||
| import sys | ||
| import subprocess | ||
| import time |
Check notice
Code scanning / CodeQL
Unused import Note
| if os.environ.get("SKIP_PREFLIGHT") == "1": | ||
| return True | ||
|
|
||
| _PREFLIGHT_RAN = True |
Check notice
Code scanning / CodeQL
Unused global variable Note
| import signal | ||
| import subprocess | ||
| import sys | ||
| import time |
Check notice
Code scanning / CodeQL
Unused import Note
| idx = int(raw) - 1 | ||
| if 0 <= idx < len(options): | ||
| return [idx] | ||
| except ValueError: |
Check notice
Code scanning / CodeQL
Empty except Note
| "count": count, | ||
| "label": f"Ideation ({count} ideas)", | ||
| }) | ||
| except (json.JSONDecodeError, KeyError): |
Check notice
Code scanning / CodeQL
Empty except Note
| "count": count, | ||
| "label": f"Roadmap ({count} features)", | ||
| }) | ||
| except (json.JSONDecodeError, KeyError): |
Check notice
Code scanning / CodeQL
Empty except Note
There was a problem hiding this comment.
Actionable comments posted: 26
🤖 Fix all issues with AI agents
In @.gitignore:
- Around line 178-179: The .gitignore currently lists "desktop.env" and
"auto-claude-desktop.sh" in the Misc/Development tail; remove the redundant
"desktop.env" entry if your file is already covered by the existing ".env.*"
pattern (line referencing .env.*) or keep it only if the literal filename
"desktop.env" (no dot) must be ignored, and move "auto-claude-desktop.sh" into
the existing "Auto Claude Generated" section (the block titled "Auto Claude
Generated" around lines 57–66) so generated scripts are grouped correctly.
- Line 180: The .gitignore entry "images/" is too broad and ignores any
directory named images anywhere in the repo; change it to scope to the
repository root by replacing "images/" with "/images/" (or alternatively add
explicit allow rules like "!/path/to/package/images/" for any subproject images
you want tracked) so only the top-level images directory is ignored and
package/subproject image folders are not silently excluded.
In `@apps/backend/core/worktree.py`:
- Around line 345-351: The get_branch_name method currently only strips a
limited set of characters which misses Git-specific invalid patterns; update
get_branch_name to defensively enforce git ref rules by additionally
removing/control-replacing spaces, control characters, consecutive dots (".."),
any "@{" sequence, trailing dots or ".lock", and collapse multiple hyphens, or
alternatively call out to git check-ref-format --normalize to validate/normalize
the sanitized name; ensure you reference get_branch_name and account for
upstream generate_spec_name() but still harden get_branch_name so it never
returns a branch string with forbidden patterns.
In `@apps/backend/preflight_hook.py`:
- Around line 135-168: run_preflight currently calls sys.exit(1) on failures
which kills the process; instead define a custom exception (e.g.,
PreflightError) and replace all sys.exit(1) calls in run_preflight (including
the ENV_FILE missing branch and where _check_and_fix_token() is checked) with
raising PreflightError(message), or alternatively return False from those
failure points and let the caller decide; ensure callers of run_preflight handle
the new exception/False path. Reference symbols: run_preflight, _PREFLIGHT_RAN,
ENV_FILE, _load_env_vars, _check_and_fix_token, _check_ollama,
_check_stale_locks.
- Around line 117-132: The function _check_stale_locks currently uses hardcoded
project paths (Path.home() / "projects" and Path("/aidata/projects")) and
imports time inside the loop; change it to accept a list of project directories
as a parameter (e.g., project_dirs: Iterable[Path] = None) or build the list
from the current working directory (Path.cwd()) so it doesn't scan
developer-specific paths, and move the import time to module top-level; update
callers or provide a sensible default (like [Path.cwd() / "projects"]) and keep
the stale-lock removal logic and age threshold unchanged.
- Line 92: The print statement currently exposes part of the OAuth token via
new_token[:20]; change it to a non-identifying confirmation (e.g., "Token saved
to ~/.claude/.credentials.json" or include only the token length, not any
characters). Locate the print call that references new_token (the f-string
printing "[preflight] Token auto-fixed...") and replace the interpolation of
new_token with a redacted message or token length, ensuring no substring of the
token is ever logged to stdout.
- Around line 99-114: The _check_ollama function currently hardcodes a private
IP and shells out to curl; change the default OLLAMA_BASE_URL to
"http://localhost:11434" and replace the subprocess/curl call with a stdlib HTTP
request using urllib.request (build the URL f"{ollama_url}/api/tags", use
urllib.request.urlopen with a timeout, catch URLError/HTTPError/Exception), keep
the same warning prints and return True on any failure so this remains
non-blocking; update references to the function name _check_ollama in-place and
ensure you still read env.get("OLLAMA_BASE_URL", "http://localhost:11434").
In `@apps/backend/preflight.py`:
- Line 67: The code logs sensitive OAuth token prefixes using token and
new_token in calls to ok(...)—remove any token content from logs and replace
with a non-sensitive message (e.g., "OAuth token present" or "New OAuth token
obtained") in the ok() calls; locate the uses of token and new_token in
preflight.py (the ok(...) invocations that include token[:20] and
new_token[:20]) and update them to avoid printing or formatting any part of the
token.
- Line 117: The hardcoded Ollama URL variable ollama_url should default to
"http://localhost:11434" instead of the private IP and the code should avoid
using a curl subprocess for portability; update the ollama_url assignment in
preflight.py (and mirror the same change in preflight_hook.py) to
"http://localhost:11434" and replace any subprocess curl call that probes that
URL with a cross-platform HTTP request using the project's HTTP client (e.g.,
requests or urllib) or the existing HTTP helper function so status checks use a
direct HTTP GET and handle timeouts/errors instead of shelling out.
- Around line 264-266: The print call uses an unnecessary f-string (Ruff F541)
on the branch that checks self.auto_fix; replace the f-string print(f"\n Run
with --fix to attempt auto-repair") with a normal string print("\n Run with
--fix to attempt auto-repair") (or add real placeholders if intended) in the
same block where self.auto_fix is evaluated alongside the print of Remaining so
the behavior is unchanged.
- Line 16: VENV_PYTHON currently hardcodes the Unix path
(".venv"/"bin"/"python") which fails on Windows; update the VENV_PYTHON
definition to select the platform-appropriate executable by checking the
platform (e.g., sys.platform or os.name) and using ("Scripts","python.exe") for
Windows and ("bin","python") for others—replace the current VENV_PYTHON
expression (referencing VENV_PYTHON and BACKEND_DIR) with a conditional path
construction that uses pathlib to join the correct segments.
- Around line 182-185: The hardcoded project paths in the project_dirs variable
make the code environment-specific; update the code that builds project_dirs in
preflight.py to accept configurable locations (e.g., read from an environment
variable like PROJECT_DIRS, a config file, or application settings) and fall
back to sensible defaults if not provided; modify the code that references
project_dirs to parse a delimited list or config value and construct Path
objects (referencing the project_dirs variable and any helper function you add,
e.g., load_project_dirs or get_project_dirs_from_env) so deployments and
developer machines can supply their own directories instead of relying on
~/projects and /aidata/projects.
- Around line 99-106: The current token update uses content.replace(old_token,
new_token) which can mangle other lines; change the logic in the routine around
ENV_FILE/content/old_token/new_token to perform a line-by-line update that only
touches the CLAUDE_CODE_OAUTH_TOKEN entry: read ENV_FILE.read_text() as lines,
iterate and if a line startswith "CLAUDE_CODE_OAUTH_TOKEN=" replace that whole
line with "CLAUDE_CODE_OAUTH_TOKEN=<new_token>", otherwise keep the line, and if
no such key exists prepend the new key line; then write the joined lines back
with ENV_FILE.write_text(content).
- Around line 7-12: Remove the unused imports `os` and `time` and reorder the
remaining imports into a sorted, lint-compliant block: keep `json`,
`subprocess`, and `sys` as top-level imports and `Path` from `pathlib` as a
grouped import; update the import block at the top of preflight.py (the lines
importing json, os, sys, subprocess, time and Path) so only `json`,
`subprocess`, `sys`, and `from pathlib import Path` remain and are alphabetized
to satisfy Ruff I001.
In `@scripts/ac-phase.py`:
- Around line 709-711: The current use of os.execvp and defaulting EDITOR to
"nano" is Unix-specific; replace the execvp call with a portable subprocess.run
invocation and set the default editor based on platform detection (use "notepad"
when on Windows, "nano" or the existing default on Unix). Locate the editor
variable and change the call from os.execvp(editor, [editor, str(pf)]) to using
subprocess.run([editor, str(pf)]) (or subprocess.run([editor, str(pf)],
check=True)) and add minimal exception handling around it to surface errors;
keep the environment-override behavior via os.environ.get("EDITOR", ...). Ensure
you import subprocess and use os.name or platform.system() to select the Windows
default.
- Around line 405-412: subprocess.run calls (e.g., the gen_cmd invocation in
scripts/ac-phase.py) currently omit a timeout and can hang; add a configurable
timeout parameter (e.g., pass timeout=SUBPROCESS_TIMEOUT or timeout=600) to each
subprocess.run call and handle subprocess.TimeoutExpired by catching it,
printing/logging a timeout message (similar style to the existing
KeyboardInterrupt/return False behavior), and ensuring the function returns
False on timeout; define SUBPROCESS_TIMEOUT near the top of the module or read
from configuration/env so the timeout is adjustable and apply the same change to
the other subprocess.run invocations in this file.
- Around line 505-506: The code appends phase_num to state["completed_phases"]
without checking for duplicates; update the block that currently does if not
failed: state["completed_phases"].append(phase_num) to first check whether
phase_num is already present (e.g., if phase_num not in
state["completed_phases"]: state["completed_phases"].append(phase_num)) or
convert completed_phases to a set-like structure before appending to prevent
duplicate entries when a phase is re-run.
- Line 159: The open(req_file) call (and every other open(...) call that reads
or writes JSON in scripts/ac-phase.py, e.g., the other open(...) usages that
load/save JSON) must explicitly set encoding="utf-8"; update each such call to
use open(path, "r", encoding="utf-8") for reads (and "w", encoding="utf-8" for
writes) so UTF-8 content is handled consistently across platforms (ensure you
update the open(req_file) invocation and the other JSON file opens in the same
module).
- Around line 196-202: load_phases currently opens and json.loads phases.json
without handling malformed JSON; do the same fix for load_phase_state: wrap the
json.load call in a try/except catching json.JSONDecodeError, log or print a
clear message including the file path and the exception, and return None (or a
safe fallback) on error to avoid crashing; follow the pattern used by
get_spec_title for error handling and use the unique symbols load_phases and
load_phase_state to locate and update the logic.
- Line 49: VENV_PYTHON currently hardcodes the Unix venv layout
(".venv/bin/python"); change it to select the correct venv executable path based
on the platform by importing and using the shared platform abstraction (e.g.,
isWindows()/isMacOS()/isLinux()) and building VENV_PYTHON as AUTO_CLAUDE_ROOT /
"apps" / "backend" / ".venv" / ("Scripts" / "python.exe" if isWindows() else
"bin" / "python"); also update the fallback logic that chooses the python
executable (the variable that picks str(VENV_PYTHON) if it exists) to use
isWindows() to prefer "python" on Windows and "python3" otherwise. Ensure you
import the platform helper (isWindows) from the project's platform abstraction
and do not call platform.system() directly.
In `@scripts/batch-from-discovery.py`:
- Around line 389-397: The "services" field is hardcoded to ["frontend"];
replace it with a lookup derived from the item's category by introducing a
CATEGORY_TO_SERVICE mapping and using CATEGORY_TO_SERVICE.get(cat, ["frontend"])
(or make it configurable) so tasks from "infrastructure", "security",
"documentation", "testing", etc. map to the correct service(s); update the
return block that builds the task dict (the snippet using item, cat,
TYPE_TO_WORKFLOW, EFFORT_TO_PRIORITY, EFFORT_TO_HOURS) to use this mapping
instead of the static ["frontend"].
- Around line 240-242: The dict comprehension in _normalize_roadmap_feature can
raise KeyError because it assumes every phase dict has "id"; change the
construction of phases to defensively handle missing "id" (e.g., iterate
roadmap_data.get("phases", []) and use p.get("id") to skip or default entries,
and use p.get("name", p.get("id", "")) for the value) so phases = { ... } never
accesses p["id"] directly; ensure phase_name =
phases.get(feature.get("phase_id", ""), "") remains valid with the new safe
mapping.
- Around line 170-188: The loop in _collect_insights_tasks can hit
AttributeError when iterating a JSON list whose elements (msg) aren't dicts;
update the exception handling to also catch AttributeError so non-dict list
elements don't crash the collector, e.g., extend the existing except clause that
catches json.JSONDecodeError/KeyError/TypeError to include AttributeError and
keep skipping those malformed entries while preserving the current behavior for
dict items and suggestedTasks processing.
- Around line 122-136: The ideation file read uses ideation_file.read_text()
without explicit encoding and assumes the JSON root is a dict (using data.get),
which can raise AttributeError and mis-handle non-UTF-8 systems; change the read
to ideation_file.read_text(encoding="utf-8") (and apply same pattern to other
read_text/write_text calls in the file) and update the handling after json.loads
to guard for non-dict roots (e.g., check isinstance(data, dict) before calling
data.get or else treat it as an empty dict / compute count appropriately), and
expand the except to catch AttributeError in addition to json.JSONDecodeError
and KeyError so malformed structures are handled gracefully.
- Around line 551-557: The subprocess.run call using cmd/result can hang
indefinitely; update the invocation in scripts/batch-from-discovery.py to pass a
reasonable timeout (e.g., timeout=...) and wrap the call in a try/except that
catches subprocess.TimeoutExpired and subprocess.CalledProcessError, terminating
or logging the child process and exiting non‑zero; ensure you reference the
existing cmd and run_py variables, handle cleanup (cancel/kill the process if
needed), and surface a clear error message via logging or printing before
returning/raising.
- Around line 37-38: The clear_screen function should stop checking os.name and
calling os.system directly; instead import and use the platform abstraction
(e.g., a provided is_windows() or similar utility from the platform module) and
call the terminal clear command via subprocess.run (e.g.,
subprocess.run(["cls"], check=True) for Windows or subprocess.run(["clear"],
check=True) for POSIX) inside clear_screen; update imports to include subprocess
and the platform utility and ensure you use the abstraction
(is_windows()/Platform.is_windows) to choose the command and subprocess.run with
check=True rather than os.system.
| desktop.env | ||
| auto-claude-desktop.sh |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
Consider placing desktop.env and auto-claude-desktop.sh in the appropriate existing sections.
desktop.env is already covered by .env.* on line 16, so this entry is redundant unless the file is literally named desktop.env (no dot prefix). auto-claude-desktop.sh would fit better under the "Auto Claude Generated" section (lines 57–66) rather than appended at the end under "Misc / Development."
🤖 Prompt for AI Agents
In @.gitignore around lines 178 - 179, The .gitignore currently lists
"desktop.env" and "auto-claude-desktop.sh" in the Misc/Development tail; remove
the redundant "desktop.env" entry if your file is already covered by the
existing ".env.*" pattern (line referencing .env.*) or keep it only if the
literal filename "desktop.env" (no dot) must be ignored, and move
"auto-claude-desktop.sh" into the existing "Auto Claude Generated" section (the
block titled "Auto Claude Generated" around lines 57–66) so generated scripts
are grouped correctly.
| Agents.md | ||
| desktop.env | ||
| auto-claude-desktop.sh | ||
| images/ |
There was a problem hiding this comment.
images/ is overly broad — it ignores any directory named images anywhere in the repo.
This pattern matches at every level of the tree, not just the root. If any package or sub-project has an images/ directory that should be tracked, those files will be silently ignored. Consider scoping it to the root:
Proposed fix
-images/
+/images/📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| images/ | |
| /images/ |
🤖 Prompt for AI Agents
In @.gitignore at line 180, The .gitignore entry "images/" is too broad and
ignores any directory named images anywhere in the repo; change it to scope to
the repository root by replacing "images/" with "/images/" (or alternatively add
explicit allow rules like "!/path/to/package/images/" for any subproject images
you want tracked) so only the top-level images directory is ignored and
package/subproject image folders are not silently excluded.
| def get_branch_name(self, spec_name: str) -> str: | ||
| """Get the branch name for a spec.""" | ||
| return f"auto-claude/{spec_name}" | ||
| # Sanitize spec_name: remove characters invalid in git branch names | ||
| sanitized = re.sub(r'[\[\]~^:?*\\{}]', '', spec_name) | ||
| # Collapse repeated dashes from removal | ||
| sanitized = re.sub(r'-{2,}', '-', sanitized) | ||
| return f"auto-claude/{sanitized}" |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🏁 Script executed:
# Find all calls to get_branch_name
rg -n 'get_branch_name\(' --type=py -B2 -A2Repository: AndyMik90/Auto-Claude
Length of output: 2309
🏁 Script executed:
# Check for any validation of spec_name before it reaches this function
rg -n 'spec_name' --type=py -B2 -A2 | head -100Repository: AndyMik90/Auto-Claude
Length of output: 6249
🏁 Script executed:
# Look for tests of the get_branch_name function
rg -n 'test.*get_branch_name|get_branch_name.*test' --type=py -B1 -A5Repository: AndyMik90/Auto-Claude
Length of output: 47
🏁 Script executed:
# Examine the worktree.py file around the function
head -360 apps/backend/core/worktree.py | tail -60Repository: AndyMik90/Auto-Claude
Length of output: 2603
🏁 Script executed:
# Find _generate_spec_name implementation
rg -n '_generate_spec_name' --type=py -A10 | head -60Repository: AndyMik90/Auto-Claude
Length of output: 4293
🏁 Script executed:
# Check the full test_create_worktree_with_spec_name test
rg -n 'test_create_worktree_with_spec_name' --type=py -A20Repository: AndyMik90/Auto-Claude
Length of output: 1429
🏁 Script executed:
# Look for any validation of spec_name in the codebase
rg -n 'spec_name.*validate|validate.*spec_name|spec_name.*check' --type=py -B2 -A2Repository: AndyMik90/Auto-Claude
Length of output: 47
🏁 Script executed:
# Find the actual implementation of _generate_spec_name
rg -n 'def _generate_spec_name' --type=py -A30Repository: AndyMik90/Auto-Claude
Length of output: 2753
🏁 Script executed:
# Check if spec_name can come from external sources (not just _generate_spec_name)
rg -n 'spec_name\s*=' --type=py | grep -v test | head -30Repository: AndyMik90/Auto-Claude
Length of output: 1751
🏁 Script executed:
# Find the actual generate_spec_name implementation
rg -n 'def generate_spec_name' --type=py -A20Repository: AndyMik90/Auto-Claude
Length of output: 1360
🏁 Script executed:
# Check batch_commands.py line 64 to see spec_name construction
sed -n '60,70p' apps/backend/cli/batch_commands.pyRepository: AndyMik90/Auto-Claude
Length of output: 511
🏁 Script executed:
# Get the complete generate_spec_name function
sed -n '123,180p' apps/backend/spec/pipeline/models.pyRepository: AndyMik90/Auto-Claude
Length of output: 1080
🏁 Script executed:
# Get more of the generate_spec_name function including the actual sanitization logic
sed -n '123,200p' apps/backend/spec/pipeline/models.pyRepository: AndyMik90/Auto-Claude
Length of output: 1552
🏁 Script executed:
# Get the complete generate_spec_name function
sed -n '123,210p' apps/backend/spec/pipeline/models.pyRepository: AndyMik90/Auto-Claude
Length of output: 1837
Branch name sanitization is technically incomplete but mitigated by upstream validation.
Git's branch naming rules forbid spaces, .. sequences, @{, trailing . or .lock, and control characters. However, all spec_name values reaching this function are pre-sanitized by generate_spec_name() in models.py, which strips non-alphanumeric characters (keeping only hyphens and alphanumerics). This upstream sanitization ensures that problematic patterns like "foo..bar" or "test.lock" cannot occur in practice.
The current implementation is safe and functional, but adding comprehensive git ref validation here would be good defensive practice—for instance, if spec_name ever comes from external or user-controlled input. If you want to harden this, consider extending the sanitization to handle edge cases (spaces, dots, etc.), or optionally use git check-ref-format --normalize for authoritative validation.
🤖 Prompt for AI Agents
In `@apps/backend/core/worktree.py` around lines 345 - 351, The get_branch_name
method currently only strips a limited set of characters which misses
Git-specific invalid patterns; update get_branch_name to defensively enforce git
ref rules by additionally removing/control-replacing spaces, control characters,
consecutive dots (".."), any "@{" sequence, trailing dots or ".lock", and
collapse multiple hyphens, or alternatively call out to git check-ref-format
--normalize to validate/normalize the sanitized name; ensure you reference
get_branch_name and account for upstream generate_spec_name() but still harden
get_branch_name so it never returns a branch string with forbidden patterns.
|
|
||
| ENV_FILE.write_text("\n".join(lines)) | ||
| os.environ["CLAUDE_CODE_OAUTH_TOKEN"] = new_token | ||
| print(f"[preflight] Token auto-fixed from ~/.claude/.credentials.json ({new_token[:20]}...)") |
There was a problem hiding this comment.
Security: Logging partial OAuth token to stdout.
Printing the first 20 characters of a token is enough to identify (and potentially misuse) it, especially if logs are aggregated or shared. Replace with a non-identifying confirmation:
- print(f"[preflight] Token auto-fixed from ~/.claude/.credentials.json ({new_token[:20]}...)")
+ print("[preflight] Token auto-fixed from ~/.claude/.credentials.json")📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| print(f"[preflight] Token auto-fixed from ~/.claude/.credentials.json ({new_token[:20]}...)") | |
| print("[preflight] Token auto-fixed from ~/.claude/.credentials.json") |
🤖 Prompt for AI Agents
In `@apps/backend/preflight_hook.py` at line 92, The print statement currently
exposes part of the OAuth token via new_token[:20]; change it to a
non-identifying confirmation (e.g., "Token saved to ~/.claude/.credentials.json"
or include only the token length, not any characters). Locate the print call
that references new_token (the f-string printing "[preflight] Token
auto-fixed...") and replace the interpolation of new_token with a redacted
message or token length, ensuring no substring of the token is ever logged to
stdout.
| def _check_ollama(env: dict) -> bool: | ||
| """Check Ollama connectivity. Warns but doesn't fail (not always needed).""" | ||
| ollama_url = env.get("OLLAMA_BASE_URL", "http://192.168.0.234:11434") | ||
| try: | ||
| result = subprocess.run( | ||
| ["curl", "-s", "-m", "3", f"{ollama_url}/api/tags"], | ||
| capture_output=True, text=True, timeout=5 | ||
| ) | ||
| if result.returncode != 0: | ||
| print(f"[preflight] WARNING: Ollama unreachable at {ollama_url}") | ||
| print("[preflight] Local LLM tasks (embeddings, complexity) may fail") | ||
| return True # Warn but don't block | ||
| return True | ||
| except Exception: | ||
| print(f"[preflight] WARNING: Could not reach Ollama at {ollama_url}") | ||
| return True # Warn but don't block |
There was a problem hiding this comment.
Hardcoded private IP 192.168.0.234 as default Ollama URL and curl dependency.
Two issues here:
- The default
192.168.0.234:11434is environment-specific and will fail for any other developer. Standard Ollama default ishttp://localhost:11434. - Shelling out to
curlbreaks on Windows (wherecurlmay not exist). Useurllib.requestfrom stdlib instead.
Suggested fix
def _check_ollama(env: dict) -> bool:
"""Check Ollama connectivity. Warns but doesn't fail (not always needed)."""
- ollama_url = env.get("OLLAMA_BASE_URL", "http://192.168.0.234:11434")
+ ollama_url = env.get("OLLAMA_BASE_URL", "http://localhost:11434")
try:
- result = subprocess.run(
- ["curl", "-s", "-m", "3", f"{ollama_url}/api/tags"],
- capture_output=True, text=True, timeout=5
- )
- if result.returncode != 0:
+ import urllib.request
+ req = urllib.request.Request(f"{ollama_url}/api/tags", method="GET")
+ with urllib.request.urlopen(req, timeout=3) as resp:
+ if resp.status != 200:
+ print(f"[preflight] WARNING: Ollama unreachable at {ollama_url}")
+ print("[preflight] Local LLM tasks (embeddings, complexity) may fail")
+ return True
+ except Exception:
+ print(f"[preflight] WARNING: Could not reach Ollama at {ollama_url}")
+ return True # Warn but don't block🤖 Prompt for AI Agents
In `@apps/backend/preflight_hook.py` around lines 99 - 114, The _check_ollama
function currently hardcodes a private IP and shells out to curl; change the
default OLLAMA_BASE_URL to "http://localhost:11434" and replace the
subprocess/curl call with a stdlib HTTP request using urllib.request (build the
URL f"{ollama_url}/api/tags", use urllib.request.urlopen with a timeout, catch
URLError/HTTPError/Exception), keep the same warning prints and return True on
any failure so this remains non-blocking; update references to the function name
_check_ollama in-place and ensure you still read env.get("OLLAMA_BASE_URL",
"http://localhost:11434").
| # Ideation | ||
| ideation_file = ac_dir / "ideation" / "ideation.json" | ||
| if ideation_file.exists(): | ||
| try: | ||
| data = json.loads(ideation_file.read_text()) | ||
| count = len(data.get("ideas", [])) | ||
| sources.append({ | ||
| "type": "ideation", | ||
| "file": ideation_file, | ||
| "data": data, | ||
| "count": count, | ||
| "label": f"Ideation ({count} ideas)", | ||
| }) | ||
| except (json.JSONDecodeError, KeyError): | ||
| pass |
There was a problem hiding this comment.
Specify encoding='utf-8' and handle AttributeError for malformed JSON.
read_text()without an explicit encoding uses the system default, which may not be UTF-8 on Windows — this affects allread_text()/write_text()calls in this file.- If the JSON root is a list instead of a dict,
data.get(...)raisesAttributeError, which is not caught.
♻️ Suggested fix (apply pattern to all read_text/write_text calls)
- data = json.loads(ideation_file.read_text())
+ data = json.loads(ideation_file.read_text(encoding="utf-8"))
count = len(data.get("ideas", []))
...
- except (json.JSONDecodeError, KeyError):
+ except (json.JSONDecodeError, KeyError, AttributeError):
pass🤖 Prompt for AI Agents
In `@scripts/batch-from-discovery.py` around lines 122 - 136, The ideation file
read uses ideation_file.read_text() without explicit encoding and assumes the
JSON root is a dict (using data.get), which can raise AttributeError and
mis-handle non-UTF-8 systems; change the read to
ideation_file.read_text(encoding="utf-8") (and apply same pattern to other
read_text/write_text calls in the file) and update the handling after json.loads
to guard for non-dict roots (e.g., check isinstance(data, dict) before calling
data.get or else treat it as an empty dict / compute count appropriately), and
expand the except to catch AttributeError in addition to json.JSONDecodeError
and KeyError so malformed structures are handled gracefully.
| def _collect_insights_tasks(insights_dir: Path): | ||
| """Collect task suggestions from insights output files.""" | ||
| tasks = [] | ||
| for f in sorted(insights_dir.glob("*.json")): | ||
| try: | ||
| data = json.loads(f.read_text()) | ||
| # Could be a chat messages file or a direct suggestions file | ||
| if isinstance(data, list): | ||
| for msg in data: | ||
| for t in msg.get("suggestedTasks", []): | ||
| if t.get("title"): | ||
| tasks.append(t) | ||
| elif isinstance(data, dict): | ||
| for t in data.get("suggestedTasks", data.get("tasks", [])): | ||
| if isinstance(t, dict) and t.get("title"): | ||
| tasks.append(t) | ||
| except (json.JSONDecodeError, KeyError, TypeError): | ||
| continue | ||
| return tasks |
There was a problem hiding this comment.
AttributeError not caught if list elements are non-dict.
On line 179, if an element in the JSON array is not a dict (e.g., a string), msg.get(...) raises AttributeError, which isn't in the except clause. Add it for robustness:
- except (json.JSONDecodeError, KeyError, TypeError):
+ except (json.JSONDecodeError, KeyError, TypeError, AttributeError):🤖 Prompt for AI Agents
In `@scripts/batch-from-discovery.py` around lines 170 - 188, The loop in
_collect_insights_tasks can hit AttributeError when iterating a JSON list whose
elements (msg) aren't dicts; update the exception handling to also catch
AttributeError so non-dict list elements don't crash the collector, e.g., extend
the existing except clause that catches json.JSONDecodeError/KeyError/TypeError
to include AttributeError and keep skipping those malformed entries while
preserving the current behavior for dict items and suggestedTasks processing.
| def _normalize_roadmap_feature(feature, roadmap_data): | ||
| phases = {p["id"]: p.get("name", p["id"]) for p in roadmap_data.get("phases", [])} | ||
| phase_name = phases.get(feature.get("phase_id", ""), "") |
There was a problem hiding this comment.
Potential KeyError if a phase entry lacks "id".
Line 241 accesses p["id"] twice without a guard. If any phase object in the roadmap JSON is missing the "id" key, this crashes the script with an unhandled KeyError.
🛡️ Suggested defensive fix
- phases = {p["id"]: p.get("name", p["id"]) for p in roadmap_data.get("phases", [])}
+ phases = {
+ p["id"]: p.get("name", p["id"])
+ for p in roadmap_data.get("phases", [])
+ if "id" in p
+ }🤖 Prompt for AI Agents
In `@scripts/batch-from-discovery.py` around lines 240 - 242, The dict
comprehension in _normalize_roadmap_feature can raise KeyError because it
assumes every phase dict has "id"; change the construction of phases to
defensively handle missing "id" (e.g., iterate roadmap_data.get("phases", [])
and use p.get("id") to skip or default entries, and use p.get("name",
p.get("id", "")) for the value) so phases = { ... } never accesses p["id"]
directly; ensure phase_name = phases.get(feature.get("phase_id", ""), "")
remains valid with the new safe mapping.
| return { | ||
| "title": f"[{item['id']}] {item['title']}", | ||
| "description": item["description"], | ||
| "workflow_type": TYPE_TO_WORKFLOW.get(cat, "feature"), | ||
| "services": ["frontend"], | ||
| "priority": EFFORT_TO_PRIORITY.get(effort, 5), | ||
| "complexity": "quick" if effort in ("trivial", "small") else "standard", | ||
| "estimated_hours": EFFORT_TO_HOURS.get(effort, 4), | ||
| } |
There was a problem hiding this comment.
"services" is hardcoded to ["frontend"] for all tasks.
Tasks sourced from categories like infrastructure, security, documentation, or testing may not target the frontend. This hardcoded value could produce incorrect batch task metadata. Consider deriving the service from the item's category or making it configurable.
🤖 Prompt for AI Agents
In `@scripts/batch-from-discovery.py` around lines 389 - 397, The "services" field
is hardcoded to ["frontend"]; replace it with a lookup derived from the item's
category by introducing a CATEGORY_TO_SERVICE mapping and using
CATEGORY_TO_SERVICE.get(cat, ["frontend"]) (or make it configurable) so tasks
from "infrastructure", "security", "documentation", "testing", etc. map to the
correct service(s); update the return block that builds the task dict (the
snippet using item, cat, TYPE_TO_WORKFLOW, EFFORT_TO_PRIORITY, EFFORT_TO_HOURS)
to use this mapping instead of the static ["frontend"].
| cmd = [ | ||
| sys.executable, str(run_py), | ||
| "--project", str(project_dir), | ||
| "--batch-create", str(output_file), | ||
| ] | ||
|
|
||
| result = subprocess.run(cmd, cwd=str(project_dir)) |
There was a problem hiding this comment.
subprocess.run has no timeout — script can hang indefinitely.
If the run.py batch-create process stalls (e.g., waiting on input or a network call), this script will block forever with no recovery path. Consider adding a timeout parameter and handling subprocess.TimeoutExpired.
🛡️ Suggested fix
- result = subprocess.run(cmd, cwd=str(project_dir))
+ try:
+ result = subprocess.run(cmd, cwd=str(project_dir), timeout=600)
+ except subprocess.TimeoutExpired:
+ error("Batch creation timed out after 10 minutes.")
+ sys.exit(1)🤖 Prompt for AI Agents
In `@scripts/batch-from-discovery.py` around lines 551 - 557, The subprocess.run
call using cmd/result can hang indefinitely; update the invocation in
scripts/batch-from-discovery.py to pass a reasonable timeout (e.g., timeout=...)
and wrap the call in a try/except that catches subprocess.TimeoutExpired and
subprocess.CalledProcessError, terminating or logging the child process and
exiting non‑zero; ensure you reference the existing cmd and run_py variables,
handle cleanup (cancel/kill the process if needed), and surface a clear error
message via logging or printing before returning/raising.
AndyMik90
left a comment
There was a problem hiding this comment.
🤖 Auto Claude PR Review
Merge Verdict: 🔴 BLOCKED
🔴 Blocked - 8 CI check(s) failing. Fix CI before merge.
Blocked: 8 CI check(s) failing. Fix CI before merge.
Risk Assessment
| Factor | Level | Notes |
|---|---|---|
| Complexity | High | Based on lines changed |
| Security Impact | Low | Based on security findings |
| Scope Coherence | Good | Based on structural review |
🚨 Blocking Issues (Must Fix)
- CI Failed: CI Complete
- CI Failed: CodeQL
- CI Failed: Lint Complete
- CI Failed: test-python (3.12, windows-latest)
- CI Failed: test-python (3.13, ubuntu-latest)
- CI Failed: test-python (3.12, ubuntu-latest)
- CI Failed: test-python (3.12, macos-latest)
- CI Failed: Python (Ruff)
Findings Summary
- High: 2 issue(s)
- Medium: 6 issue(s)
- Low: 6 issue(s)
Generated by Auto Claude PR Review
Findings (14 selected of 14 total)
🔵 [70e6af72c926] [LOW] OAuth token partial disclosure in log output
📁 apps/backend/preflight_hook.py:92
The preflight scripts print the first 20 characters of OAuth tokens to stdout in three locations. For tokens like 'sk-ant-oat01-...' (14-char prefix), this exposes ~6 characters of actual key material. While this is a local CLI tool and the exposure is limited, token material in stdout can be captured by CI/CD logs, terminal session recordings, or log aggregation if the tool is run in automated contexts. Best practice is to mask tokens entirely or show only the last 4 characters.
Suggested fix:
Replace token[:20] with a fully masked representation, e.g.: f'***{new_token[-4:]}' to show only the last 4 characters, or simply omit the token value entirely from the log message.
🔵 [22242def5f18] [LOW] Hardcoded expired token prefix in source code
📁 apps/backend/preflight_hook.py:52
A specific expired token prefix 'sk-ant-oat01-cnqsmZU' is committed to source code and used as a pattern match. While the token is identified as expired and the prefix alone is not sufficient for authentication, committing actual (even partial) token material to a repository is a credential hygiene issue. This pattern is checked on every CLI invocation via the preflight hook.
Suggested fix:
Remove the hardcoded token prefix. Instead, validate tokens by attempting an API call or checking token expiry metadata, rather than maintaining a list of known-expired token prefixes in source code.
🔵 [2a5074246e97] [LOW] Hardcoded internal network IP as Ollama default
📁 apps/backend/preflight_hook.py:101
The preflight scripts use 'http://192.168.0.234:11434' as the default Ollama URL, which is an internal/private network IP specific to the developer's infrastructure. The rest of the codebase consistently uses 'http://localhost:11434' as the default (see integrations/graphiti/config.py and .env.example). This inconsistency exposes internal network topology information in public source code.
Suggested fix:
Change the default to 'http://localhost:11434' to be consistent with the rest of the codebase (see .env.example line 311 and integrations/graphiti/config.py line 69).
🟠 [6292ed9f383d] [HIGH] Heavy code duplication between preflight.py and preflight_hook.py
📁 apps/backend/preflight.py:51
preflight.py and preflight_hook.py duplicate the same logic in 3 areas: (1) .env file parsing (partition-based key=value parsing) at preflight.py:51-57 and preflight_hook.py:35-40, (2) OAuth token checking with identical hardcoded expired-token prefix at preflight.py:60-61 and preflight_hook.py:46-56, (3) token auto-fix from ~/.claude/.credentials.json at preflight.py:84-110 and preflight_hook.py:61-96. The preflight_hook.py was intended to be a 'lightweight wrapper around preflight.py' (per its docstring), but instead reimplements everything from scratch. If the token-fix logic or .env parsing needs to change, it must be updated in two places.
Suggested fix:
Have preflight_hook.py import and delegate to preflight.py's PreflightCheck class (as its docstring implies), or extract shared utilities (env parsing, token fix) into a common module that both files import.
🟡 [db0a08525374] [MEDIUM] Hardcoded private network IP address as default Ollama URL
📁 apps/backend/preflight_hook.py:101
Both preflight files hardcode a private network IP '192.168.0.234:11434' as the default Ollama URL. This is environment-specific to the author's local network and will fail for any other user or CI environment. In preflight.py line 117, the same IP is hardcoded without even checking .env first (it only reads .env afterward). In preflight_hook.py:101, it's used as the fallback default. | Same hardcoded private LAN IP http://192.168.0.234:11434 duplicated from preflight.py. Both files should use http://localhost:11434 as the standard default.
Suggested fix:
Use a generic localhost default: 'http://localhost:11434' which is Ollama's standard default, or omit the check entirely if the URL isn't configured.
🟡 [8e4f445200fd] [MEDIUM] OAuth token prefix leaked to stdout in production logs
📁 apps/backend/preflight_hook.py:92
Both preflight files print the first 20 characters of OAuth tokens to stdout. These are real authentication credentials printed during normal CLI startup. In preflight.py line 67 the existing token prefix is printed, and in lines 92 and 107 (preflight_hook.py / preflight.py) the new token prefix is printed after auto-fix. This leaks credential material to terminal output, log files, and CI systems that may capture stdout.
Suggested fix:
Mask the token more aggressively, e.g., show only the first 8 chars or just confirm presence: 'OAuth token present (sk-ant-****)' or 'Token auto-fixed successfully'.
🟡 [a6cd748aed0e] [MEDIUM] Hardcoded Unix venv path breaks cross-platform support
📁 scripts/ac-phase.py:49
ac-phase.py hardcodes VENV_PYTHON as '.venv/bin/python' (Unix path), and similarly preflight.py:16 uses the same pattern. Per CLAUDE.md, this project supports Windows, macOS, and Linux, and the rule is 'Never hardcode paths. Use findExecutable() and joinPaths().' On Windows, the venv python is at '.venv/Scripts/python.exe', so this script will silently fall back to 'python3' from PATH (line 385) which may be a completely different Python without the project's dependencies.
Suggested fix:
Use platform-aware venv path: `VENV_PYTHON = AUTO_CLAUDE_ROOT / 'apps' / 'backend' / '.venv' / ('Scripts' if sys.platform == 'win32' else 'bin') / ('python.exe' if sys.platform == 'win32' else 'python')`
🟠 [e1f1e5544e5f] [HIGH] Branch name sanitization changes naming for existing worktrees with no migration
📁 apps/backend/core/worktree.py:348
The get_branch_name() method now sanitizes spec names by removing characters like [], ~, ^, etc. This changes the branch name that will be computed for any existing spec whose name contains these characters (e.g., '016-[sec-001]-fix-xss' was previously branched as 'auto-claude/016-[sec-001]-fix-xss' and now becomes 'auto-claude/016-sec-001-fix-xss'). Since get_branch_name() is called by 6 other methods (create_worktree, remove_worktree, merge_worktree, get_worktree_info, etc.), existing worktrees created before this change will have branches with the old unsanitized name, but all operations will now look for the new sanitized name — causing a mismatch.
Suggested fix:
Add a migration check: when get_worktree_info or create_worktree detects the old (unsanitized) branch exists but the new (sanitized) name doesn't, rename the branch. Alternatively, sanitize only at creation time and store the actual branch name in worktree metadata.
🔵 [be3a50250b71] [LOW] Import statement inside loop body
📁 apps/backend/preflight_hook.py:126
The 'import time' statement is placed inside the for-loop body in _check_stale_locks(). While Python caches imports so this isn't a performance bug, it's a code quality issue — the import should be at the top of the file or at least at the function level, not re-executed on every iteration.
Suggested fix:
Move 'import time' to the top of the file alongside the other imports (json, os, subprocess, sys).
🟡 [77e01453196d] [MEDIUM] Hardcoded Unix-only venv path violates cross-platform rules
📁 apps/backend/preflight.py:16
VENV_PYTHON uses .venv/bin/python which only works on Unix. The project's CLAUDE.md explicitly requires: 'Never use process.platform directly. Import from apps/backend/core/platform/'. The existing codebase at core/workspace/setup.py (line 731) correctly handles both platforms with for candidate in ('bin/python', 'Scripts/python.exe'). Searched Grep('.venv.*bin.*python|Scripts.*python', 'apps/backend/') — confirmed the existing cross-platform pattern in workspace/setup.py.
Suggested fix:
Use the same cross-platform pattern as core/workspace/setup.py: check both 'bin/python' and 'Scripts/python.exe' candidates, or use the platform abstraction from core.platform.
🟡 [7fd35cebd965] [MEDIUM] Hardcoded private LAN IP as Ollama default
📁 apps/backend/preflight.py:117
The default Ollama URL is hardcoded to http://192.168.0.234:11434 — a private LAN IP specific to one developer's setup. This will fail for any other user/environment. The standard Ollama default is http://localhost:11434. This same issue appears in preflight_hook.py:101.
Suggested fix:
Use `http://localhost:11434` as the default Ollama URL, consistent with Ollama's standard configuration. The .env override will still apply for custom setups.
🟡 [c432b51b07eb] [MEDIUM] Redefines ANSI colors instead of using existing ui.colors module
📁 apps/backend/preflight.py:18
The file defines its own ANSI color constants (RED, GREEN, YELLOW, BLUE, RESET) and output helpers (ok, warn, fail, info). The codebase already has a comprehensive ui package at apps/backend/ui/ with Color class, color(), success(), error(), warning(), info(), print_header(), print_section(), print_status() etc. Searched Grep('from.*ui import|from ui import', 'apps/backend/') — confirmed the existing ui module is used throughout the codebase (cli/, agents/, etc.).
Suggested fix:
Import from the existing `ui` module: `from ui.colors import Color, success, error, warning, info` and `from ui.formatters import print_header, print_section`. This ensures consistent terminal output and respects the ui module's color-support detection (which gracefully degrades on terminals that don't support ANSI).
🔵 [b681822cfa89] [LOW] Hardcoded filesystem paths specific to one developer's setup
📁 apps/backend/preflight.py:184
Both preflight.py (line 184) and preflight_hook.py (line 119) hardcode Path('/aidata/projects') and Path.home() / 'projects' as locations to scan for stuck locks. These are environment-specific paths that won't exist on other machines and have no configuration mechanism.
Suggested fix:
Either make these configurable via .env (e.g., AUTO_CLAUDE_PROJECT_DIRS), or remove the hardcoded `/aidata/projects` path since it's specific to one developer's setup. The `Path.home() / 'projects'` is at least somewhat conventional.
🔵 [d328614a9ca7] [LOW] Hardcoded OAuth token prefix for expiration detection
📁 apps/backend/preflight.py:61
The code checks for a specific token prefix sk-ant-oat01-cnqsmZU to detect expired tokens. This is a one-time value from a specific developer's expired token. This pattern doesn't generalize — future expired tokens will have different prefixes. Same issue in preflight_hook.py:52.
Suggested fix:
Consider validating tokens by making a lightweight API call or checking token expiry metadata rather than matching specific token prefixes.
This review was generated by Auto Claude.
Companion to ac-phase — provides a unified CLI for creating, building, QA'ing, and cleaning up batch specs from discovery outputs or JSON files. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…overy Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds interactive codebase Q&A as option 1 in the ac-batch menu and as --insights CLI flag. Supports multi-turn conversation with history and suggested starter questions. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds ideation (brainstorm improvements) and roadmap (strategic feature planning) as menu options and CLI flags. The interactive menu is now organized into Discover/Create/Build/Manage sections for clarity. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
|
|
||
| print() | ||
| try: | ||
| result = subprocess.run(cmd, cwd=str(backend_dir), |
Check notice
Code scanning / CodeQL
Unused local variable Note
| if 0 <= idx < len(suggestions): | ||
| raw = suggestions[idx] | ||
| print(f" {C_DIM}Q: {raw}{C_RESET}") | ||
| except ValueError: |
Check notice
Code scanning / CodeQL
Empty except Note
| print(f" {C_BOLD}Next steps:{C_RESET}") | ||
| print(f" • Run {C_CYAN}ac-batch --discover{C_RESET} to create specs from this roadmap") | ||
| print(f" • Or select option 4 from the menu") | ||
| except (json.JSONDecodeError, KeyError): |
Check notice
Code scanning / CodeQL
Empty except Note
| print(f" {C_BOLD}Next steps:{C_RESET}") | ||
| print(f" • Run {C_CYAN}ac-batch --discover{C_RESET} to create specs from these ideas") | ||
| print(f" • Or select option 4 from the menu") | ||
| except (json.JSONDecodeError, KeyError): |
Check notice
Code scanning / CodeQL
Empty except Note
| idx = int(raw) - 1 | ||
| if 0 <= idx < len(options): | ||
| return [idx] | ||
| except ValueError: |
Check notice
Code scanning / CodeQL
Empty except Note
| import signal | ||
| import subprocess | ||
| import sys | ||
| import time |
Check notice
Code scanning / CodeQL
Unused import Note
|
|
||
| import argparse | ||
| import json | ||
| import os |
Check notice
Code scanning / CodeQL
Unused import Note
Titles containing "/" (e.g. "SSL/TLS") caused spec directory creation to fail by interpreting the slash as a path separator. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
There was a problem hiding this comment.
Actionable comments posted: 10
🤖 Fix all issues with AI agents
In `@scripts/ac-batch.py`:
- Around line 625-627: The history list is being appended with a placeholder
"(see above)" instead of the assistant's actual reply; change the code that
launches the assistant subprocess (the call that produces `raw` or the function
invoking insights_runner) to capture the subprocess output (use
subprocess.run(..., capture_output=True, text=True) or subprocess.PIPE and
decode stdout) and replace the placeholder append (history.append({"role":
"assistant", "content": "(see above)"})) with the real response string (e.g.,
history.append({"role": "assistant", "content": assistant_output})) and add
minimal error handling if the subprocess fails so history reflects the true
assistant reply or an error marker.
- Around line 140-142: The code opens and reads/writes files without specifying
encoding which can cause Unicode errors on Windows; update the open(req_file)
call inside the try block (and every other open(...) usage) to pass
encoding="utf-8", and also call .read_text(encoding="utf-8") and
.write_text(..., encoding="utf-8") wherever Path.read_text()/write_text() are
used (the spots around the req_file read and the other file I/O occurrences
flagged). Ensure you only change the file I/O calls (e.g., the with
open(req_file) context, Path.read_text, Path.write_text) to include
encoding="utf-8" and leave the rest of the logic intact.
- Around line 325-330: The loop that calls specs_dir.iterdir() can throw
FileNotFoundError when the specs directory returned by
get_specs_dir(project_dir) doesn't exist; before iterating, ensure specs_dir
exists (e.g., check specs_dir.exists() or is_dir()) and either create it
(mkdir(parents=True, exist_ok=True)) or skip the iteration and leave spec_dir
None; update the code that uses spec_dir accordingly so it handles the
missing-directory case. Include references to get_specs_dir, specs_dir,
spec_dir, and spec_id when making the change.
- Around line 59-68: The color constants (C_RESET, C_BOLD, C_DIM, C_GREEN,
C_YELLOW, C_RED, C_CYAN, C_BLUE, C_MAGENTA) are always set and will emit ANSI
codes when stdout is not a TTY; change their initialization to be conditional on
sys.stdout.isatty() (e.g., set them to empty strings when not a TTY) so
piped/redirected output isn't garbled, and ensure any code that references these
symbols continues to work when they are empty by importing or computing them
near the top of scripts/ac-batch.py; optionally respect an override env var or
CLI flag to force colors on/off if desired.
- Around line 370-377: In action_build_spec, the QA subprocess call using
subprocess.run(qa_cmd, cwd=str(project_dir)) discards the result so failures are
ignored; change this to capture and check the return (either use
subprocess.run(..., check=True) wrapped in a try/except
subprocess.CalledProcessError or assign result = subprocess.run(...) and inspect
result.returncode), and on non-zero exit print a failure message (use same
coloring as success) and return False instead of proceeding to print "✓ Spec …
done"; keep the existing KeyboardInterrupt handling intact and ensure qa_cmd and
run_qa logic short-circuits to propagate False to callers like action_build_all
when QA fails.
- Line 56: VENV_PYTHON currently hardcodes the Unix venv layout and get_python()
falls back to PATH; change detection to use the backend platform abstraction
(import isWindows/isLinux/isMacOS from apps/backend/core/platform/) and
construct the venv interpreter path accordingly (use
".venv\\Scripts\\python.exe" on Windows, ".venv/bin/python" on Unix) so
VENV_PYTHON points to the project venv, and update get_python() to prefer that
resolved VENV_PYTHON before falling back to system python; do not use
process.platform directly.
- Around line 446-452: The QA subprocess failures are ignored because
subprocess.run(qa_cmd, cwd=str(project_dir)) doesn't check the exit code; change
the call to use subprocess.run(..., check=True) (or capture the CompletedProcess
and inspect .returncode) and catch subprocess.CalledProcessError to log/print
the failure and return False so the outer function doesn't report success on QA
failures; keep the existing KeyboardInterrupt handling around the qa_cmd
invocation and reference qa_cmd, project_dir, subprocess.run, and
subprocess.CalledProcessError when making the change.
In `@scripts/ac-phase.py`:
- Around line 35-38: The SIGINT handler _handle_sigint currently prints
"Progress has been saved" but doesn't call save_phase_state, so update
_handle_sigint to either (A) actually persist state by calling
save_phase_state(project_dir, state) (ensuring _handle_sigint has access to
project_dir and state, e.g., capture them via a closure or global) before
calling sys.exit(0), or (B) if you don't want to change handler scope, change
the message to "Progress may have been saved — run ac-phase again to resume."
Mention the related symbols run_phase, save_phase_state, project_dir, and state
when making the change.
- Around line 598-611: The input() call for the phase number can raise EOFError
which is not handled; update the try/except around input() in the choice == "2"
branch to also catch EOFError (e.g., except (ValueError, EOFError):) and handle
it similarly to the ValueError case by printing an appropriate message (using
C_RED/C_RESET) and returning to the menu flow instead of letting the exception
propagate; keep the existing behavior for valid pnum handling (finding phase,
calling run_phase, reloading specs with load_all_specs, and calling
print_phase_status).
- Around line 530-543: The two unguarded input() calls in run_all_phases (the
failure confirmation when not success and the pause confirmation before the next
phase) can raise EOFError when stdin is closed; wrap each input() in a
try/except EOFError and assign the appropriate default response instead of
letting the exception propagate: for the "Continue? [y/N]" prompt (in the block
using phase['phase'] when not success) treat EOFError as ans = "n" (default No),
and for the "Continue to next phase? [Y/n]" prompt (the pause block referencing
next_p) treat EOFError as ans = "y" (default Yes); keep the rest of the logic
unchanged and optionally also strip().lower() the default values so comparisons
behave the same.
| RUN_PY = AUTO_CLAUDE_ROOT / "apps" / "backend" / "run.py" | ||
| SPEC_RUNNER_PY = AUTO_CLAUDE_ROOT / "apps" / "backend" / "runners" / "spec_runner.py" | ||
| INSIGHTS_RUNNER_PY = AUTO_CLAUDE_ROOT / "apps" / "backend" / "runners" / "insights_runner.py" | ||
| VENV_PYTHON = AUTO_CLAUDE_ROOT / "apps" / "backend" / ".venv" / "bin" / "python" |
There was a problem hiding this comment.
Hardcoded Unix venv path breaks Windows compatibility.
VENV_PYTHON uses the Unix layout (.venv/bin/python). On Windows the venv interpreter lives at .venv\Scripts\python.exe. The fallback in get_python() (line 103) silently picks up whatever python3 is on PATH, which may not be the project venv at all — leading to missing-dependency errors at runtime.
Proposed fix
-VENV_PYTHON = AUTO_CLAUDE_ROOT / "apps" / "backend" / ".venv" / "bin" / "python"
+import platform as _platform
+
+if _platform.system() == "Windows":
+ VENV_PYTHON = AUTO_CLAUDE_ROOT / "apps" / "backend" / ".venv" / "Scripts" / "python.exe"
+else:
+ VENV_PYTHON = AUTO_CLAUDE_ROOT / "apps" / "backend" / ".venv" / "bin" / "python"Ideally, use the project's cross-platform utilities from apps/backend/core/platform/ instead of inline detection.
As per coding guidelines, **/*.{py,ts,tsx,js}: "Never use process.platform directly. Import platform abstraction from apps/frontend/src/main/platform/ or apps/backend/core/platform/. Use cross-platform utilities like isWindows(), isMacOS(), isLinux()."
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| VENV_PYTHON = AUTO_CLAUDE_ROOT / "apps" / "backend" / ".venv" / "bin" / "python" | |
| from apps.backend.core.platform import isWindows | |
| # ... other code ... | |
| if isWindows(): | |
| VENV_PYTHON = AUTO_CLAUDE_ROOT / "apps" / "backend" / ".venv" / "Scripts" / "python.exe" | |
| else: | |
| VENV_PYTHON = AUTO_CLAUDE_ROOT / "apps" / "backend" / ".venv" / "bin" / "python" |
🤖 Prompt for AI Agents
In `@scripts/ac-batch.py` at line 56, VENV_PYTHON currently hardcodes the Unix
venv layout and get_python() falls back to PATH; change detection to use the
backend platform abstraction (import isWindows/isLinux/isMacOS from
apps/backend/core/platform/) and construct the venv interpreter path accordingly
(use ".venv\\Scripts\\python.exe" on Windows, ".venv/bin/python" on Unix) so
VENV_PYTHON points to the project venv, and update get_python() to prefer that
resolved VENV_PYTHON before falling back to system python; do not use
process.platform directly.
| # Colors | ||
| C_RESET = "\033[0m" | ||
| C_BOLD = "\033[1m" | ||
| C_DIM = "\033[2m" | ||
| C_GREEN = "\033[32m" | ||
| C_YELLOW = "\033[33m" | ||
| C_RED = "\033[31m" | ||
| C_CYAN = "\033[36m" | ||
| C_BLUE = "\033[34m" | ||
| C_MAGENTA = "\033[35m" |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
ANSI escape codes are emitted unconditionally, garbling piped/redirected output.
When stdout is not a TTY (e.g. ac-batch --status > log.txt), the color codes will litter the output. A common pattern is to disable colors when not sys.stdout.isatty().
Suggested approach
+_NO_COLOR = not sys.stdout.isatty() or os.environ.get("NO_COLOR")
+
# Colors
-C_RESET = "\033[0m"
-C_BOLD = "\033[1m"
+C_RESET = "" if _NO_COLOR else "\033[0m"
+C_BOLD = "" if _NO_COLOR else "\033[1m"
# ... same for remaining codes🤖 Prompt for AI Agents
In `@scripts/ac-batch.py` around lines 59 - 68, The color constants (C_RESET,
C_BOLD, C_DIM, C_GREEN, C_YELLOW, C_RED, C_CYAN, C_BLUE, C_MAGENTA) are always
set and will emit ANSI codes when stdout is not a TTY; change their
initialization to be conditional on sys.stdout.isatty() (e.g., set them to empty
strings when not a TTY) so piped/redirected output isn't garbled, and ensure any
code that references these symbols continues to work when they are empty by
importing or computing them near the top of scripts/ac-batch.py; optionally
respect an override env var or CLI flag to force colors on/off if desired.
| try: | ||
| with open(req_file) as f: | ||
| data = json.load(f) |
There was a problem hiding this comment.
Specify encoding="utf-8" on all file I/O.
Multiple open(), read_text(), and write_text() calls throughout the file rely on the platform default encoding, which on Windows may not be UTF-8. Since spec data is likely UTF-8 JSON, this can cause UnicodeDecodeError on Windows.
Affected locations: lines 141, 285, 481, 524, 614.
Example fix (line 141)
- with open(req_file) as f:
+ with open(req_file, encoding="utf-8") as f:Apply the same pattern to the other open() calls, and pass encoding="utf-8" to read_text() / write_text() as well.
🤖 Prompt for AI Agents
In `@scripts/ac-batch.py` around lines 140 - 142, The code opens and reads/writes
files without specifying encoding which can cause Unicode errors on Windows;
update the open(req_file) call inside the try block (and every other open(...)
usage) to pass encoding="utf-8", and also call .read_text(encoding="utf-8") and
.write_text(..., encoding="utf-8") wherever Path.read_text()/write_text() are
used (the spots around the req_file read and the other file I/O occurrences
flagged). Ensure you only change the file I/O calls (e.g., the with
open(req_file) context, Path.read_text, Path.write_text) to include
encoding="utf-8" and leave the rest of the logic intact.
| specs_dir = get_specs_dir(project_dir) | ||
| spec_dir = None | ||
| for d in specs_dir.iterdir(): | ||
| if d.is_dir() and d.name.startswith(spec_id + "-"): | ||
| spec_dir = d | ||
| break |
There was a problem hiding this comment.
specs_dir.iterdir() crashes if the directory doesn't exist yet.
If no specs have been created, specs_dir (<project>/.auto-claude/specs) may not exist, and iterdir() will raise FileNotFoundError.
Proposed fix
specs_dir = get_specs_dir(project_dir)
+ if not specs_dir.exists():
+ print(f" {C_YELLOW}No specs directory found.{C_RESET}")
+ return False
spec_dir = None
for d in specs_dir.iterdir():📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| specs_dir = get_specs_dir(project_dir) | |
| spec_dir = None | |
| for d in specs_dir.iterdir(): | |
| if d.is_dir() and d.name.startswith(spec_id + "-"): | |
| spec_dir = d | |
| break | |
| specs_dir = get_specs_dir(project_dir) | |
| if not specs_dir.exists(): | |
| print(f" {C_YELLOW}No specs directory found.{C_RESET}") | |
| return False | |
| spec_dir = None | |
| for d in specs_dir.iterdir(): | |
| if d.is_dir() and d.name.startswith(spec_id + "-"): | |
| spec_dir = d | |
| break |
🤖 Prompt for AI Agents
In `@scripts/ac-batch.py` around lines 325 - 330, The loop that calls
specs_dir.iterdir() can throw FileNotFoundError when the specs directory
returned by get_specs_dir(project_dir) doesn't exist; before iterating, ensure
specs_dir exists (e.g., check specs_dir.exists() or is_dir()) and either create
it (mkdir(parents=True, exist_ok=True)) or skip the iteration and leave spec_dir
None; update the code that uses spec_dir accordingly so it handles the
missing-directory case. Include references to get_specs_dir, specs_dir,
spec_dir, and spec_id when making the change.
| try: | ||
| subprocess.run(qa_cmd, cwd=str(project_dir)) | ||
| except KeyboardInterrupt: | ||
| print(f"\n {C_YELLOW}QA interrupted for {spec_id}{C_RESET}") | ||
| return False | ||
|
|
||
| print(f" {C_GREEN}✓ Spec {spec_id} done{C_RESET}") | ||
| return True |
There was a problem hiding this comment.
QA return code not checked in action_build_spec.
When run_qa=True, the QA subprocess result is discarded (line 371). If QA fails, the function still prints "✓ Spec … done" and returns True, misrepresenting the outcome to both the user and the caller (action_build_all).
Proposed fix
try:
- subprocess.run(qa_cmd, cwd=str(project_dir))
+ qa_result = subprocess.run(qa_cmd, cwd=str(project_dir))
except KeyboardInterrupt:
print(f"\n {C_YELLOW}QA interrupted for {spec_id}{C_RESET}")
return False
+ if qa_result.returncode != 0:
+ print(f" {C_RED}QA failed for {spec_id}{C_RESET}")
+ return False🤖 Prompt for AI Agents
In `@scripts/ac-batch.py` around lines 370 - 377, In action_build_spec, the QA
subprocess call using subprocess.run(qa_cmd, cwd=str(project_dir)) discards the
result so failures are ignored; change this to capture and check the return
(either use subprocess.run(..., check=True) wrapped in a try/except
subprocess.CalledProcessError or assign result = subprocess.run(...) and inspect
result.returncode), and on non-zero exit print a failure message (use same
coloring as success) and return False instead of proceeding to print "✓ Spec …
done"; keep the existing KeyboardInterrupt handling intact and ensure qa_cmd and
run_qa logic short-circuits to propagate False to callers like action_build_all
when QA fails.
| try: | ||
| subprocess.run(qa_cmd, cwd=str(project_dir)) | ||
| except KeyboardInterrupt: | ||
| print(f"\n {C_YELLOW}QA interrupted{C_RESET}") | ||
| return False | ||
|
|
||
| return True |
There was a problem hiding this comment.
QA subprocess failures are silently ignored.
The return code of each subprocess.run(qa_cmd, ...) call is never checked. If QA fails for a spec, the loop continues and the function returns True, incorrectly signalling success.
Proposed fix
+ failed = []
for i, s in enumerate(targets, 1):
print(f" {C_CYAN}[{i}/{len(targets)}] QA for {s['id']}...{C_RESET}")
qa_cmd = [python, str(RUN_PY), "--project-dir", str(project_dir),
"--spec", s["id"], "--qa"]
print(f" {C_DIM}$ {' '.join(qa_cmd)}{C_RESET}")
try:
- subprocess.run(qa_cmd, cwd=str(project_dir))
+ result = subprocess.run(qa_cmd, cwd=str(project_dir))
+ if result.returncode != 0:
+ failed.append(s["id"])
except KeyboardInterrupt:
print(f"\n {C_YELLOW}QA interrupted{C_RESET}")
return False
- return True
+ if failed:
+ print(f" {C_RED}QA failed for: {', '.join(failed)}{C_RESET}")
+ return len(failed) == 0🤖 Prompt for AI Agents
In `@scripts/ac-batch.py` around lines 446 - 452, The QA subprocess failures are
ignored because subprocess.run(qa_cmd, cwd=str(project_dir)) doesn't check the
exit code; change the call to use subprocess.run(..., check=True) (or capture
the CompletedProcess and inspect .returncode) and catch
subprocess.CalledProcessError to log/print the failure and return False so the
outer function doesn't report success on QA failures; keep the existing
KeyboardInterrupt handling around the qa_cmd invocation and reference qa_cmd,
project_dir, subprocess.run, and subprocess.CalledProcessError when making the
change.
| # Track conversation for context | ||
| history.append({"role": "user", "content": raw}) | ||
| history.append({"role": "assistant", "content": "(see above)"}) |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
Conversation history stores a placeholder instead of the actual assistant response.
Every assistant turn is recorded as "(see above)", so subsequent calls to insights_runner with --history-file receive no meaningful prior context. If the runner relies on this history for multi-turn conversation, replies will lack continuity.
Consider capturing the subprocess output (e.g. via capture_output=True or subprocess.PIPE) and storing the real response, or document that the runner itself is responsible for maintaining its own conversation state.
🤖 Prompt for AI Agents
In `@scripts/ac-batch.py` around lines 625 - 627, The history list is being
appended with a placeholder "(see above)" instead of the assistant's actual
reply; change the code that launches the assistant subprocess (the call that
produces `raw` or the function invoking insights_runner) to capture the
subprocess output (use subprocess.run(..., capture_output=True, text=True) or
subprocess.PIPE and decode stdout) and replace the placeholder append
(history.append({"role": "assistant", "content": "(see above)"})) with the real
response string (e.g., history.append({"role": "assistant", "content":
assistant_output})) and add minimal error handling if the subprocess fails so
history reflects the true assistant reply or an error marker.
| def _handle_sigint(sig, frame): | ||
| """Handle Ctrl+C gracefully without tracebacks.""" | ||
| print(f"\n\n Interrupted. Progress has been saved — run ac-phase again to resume.") | ||
| sys.exit(0) |
There was a problem hiding this comment.
SIGINT handler claims progress is saved but doesn't actually save it.
The handler prints "Progress has been saved" but only calls sys.exit(0). If the interrupt occurs between state mutations and the next save_phase_state call (e.g., mid-loop in run_phase), progress is not saved. Either actually save state here (requires access to project_dir and state) or soften the message to "Progress may have been saved."
🤖 Prompt for AI Agents
In `@scripts/ac-phase.py` around lines 35 - 38, The SIGINT handler _handle_sigint
currently prints "Progress has been saved" but doesn't call save_phase_state, so
update _handle_sigint to either (A) actually persist state by calling
save_phase_state(project_dir, state) (ensuring _handle_sigint has access to
project_dir and state, e.g., capture them via a closure or global) before
calling sys.exit(0), or (B) if you don't want to change handler scope, change
the message to "Progress may have been saved — run ac-phase again to resume."
Mention the related symbols run_phase, save_phase_state, project_dir, and state
when making the change.
| if not success: | ||
| print(f"\n {C_YELLOW}⚠ Phase {phase['phase']} had failures. Continue? [y/N]{C_RESET} ", end="") | ||
| ans = input().strip().lower() | ||
| if ans != "y": | ||
| print(" Stopping. Fix failures and re-run.") | ||
| return | ||
|
|
||
| # Pause between phases for review | ||
| if pause and i < len(remaining) - 1: | ||
| next_p = remaining[i + 1] | ||
| print(f"\n {C_CYAN}▸ Next up: Phase {next_p['phase']}: {next_p['name']}{C_RESET}") | ||
| print(f" {C_BOLD}Continue to next phase? [Y/n]{C_RESET} ", end="") | ||
| ans = input().strip().lower() | ||
| if ans == "n": |
There was a problem hiding this comment.
Unhandled EOFError on input() calls in run_all_phases.
Lines 532 and 542 call input() without catching EOFError. If stdin is closed (piped input, CI), this crashes with a traceback. The interactive loop at line 581 correctly catches both KeyboardInterrupt and EOFError, but these calls don't.
Proposed fix
- print(f"\n {C_YELLOW}⚠ Phase {phase['phase']} had failures. Continue? [y/N]{C_RESET} ", end="")
- ans = input().strip().lower()
+ try:
+ print(f"\n {C_YELLOW}⚠ Phase {phase['phase']} had failures. Continue? [y/N]{C_RESET} ", end="")
+ ans = input().strip().lower()
+ except (KeyboardInterrupt, EOFError):
+ print("\n Stopping.")
+ return
if ans != "y":
print(" Stopping. Fix failures and re-run.")
return
# Pause between phases for review
if pause and i < len(remaining) - 1:
next_p = remaining[i + 1]
print(f"\n {C_CYAN}▸ Next up: Phase {next_p['phase']}: {next_p['name']}{C_RESET}")
- print(f" {C_BOLD}Continue to next phase? [Y/n]{C_RESET} ", end="")
- ans = input().strip().lower()
+ try:
+ print(f" {C_BOLD}Continue to next phase? [Y/n]{C_RESET} ", end="")
+ ans = input().strip().lower()
+ except (KeyboardInterrupt, EOFError):
+ print("\n Paused. Run ac-phase again to continue.")
+ return🤖 Prompt for AI Agents
In `@scripts/ac-phase.py` around lines 530 - 543, The two unguarded input() calls
in run_all_phases (the failure confirmation when not success and the pause
confirmation before the next phase) can raise EOFError when stdin is closed;
wrap each input() in a try/except EOFError and assign the appropriate default
response instead of letting the exception propagate: for the "Continue? [y/N]"
prompt (in the block using phase['phase'] when not success) treat EOFError as
ans = "n" (default No), and for the "Continue to next phase? [Y/n]" prompt (the
pause block referencing next_p) treat EOFError as ans = "y" (default Yes); keep
the rest of the logic unchanged and optionally also strip().lower() the default
values so comparisons behave the same.
| elif choice == "2": | ||
| print(f"\n Which phase? (1-{len(phases_data['phases'])}): ", end="") | ||
| try: | ||
| pnum = int(input().strip()) | ||
| phase = next((p for p in phases_data["phases"] if p["phase"] == pnum), None) | ||
| if phase: | ||
| run_phase(project_dir, phase, phases_data, specs) | ||
| specs = load_all_specs(project_dir) | ||
| print() | ||
| print_phase_status(project_dir, phases_data, specs) | ||
| else: | ||
| print(f" {C_RED}Phase {pnum} not found{C_RESET}") | ||
| except ValueError: | ||
| print(f" {C_RED}Invalid input{C_RESET}") |
There was a problem hiding this comment.
EOFError not caught for phase number input.
Line 601's input() is only wrapped in try/except ValueError. An EOFError (closed stdin) would propagate uncaught and crash.
Proposed fix
try:
pnum = int(input().strip())
- except ValueError:
+ except (ValueError, EOFError):
print(f" {C_RED}Invalid input{C_RESET}")🤖 Prompt for AI Agents
In `@scripts/ac-phase.py` around lines 598 - 611, The input() call for the phase
number can raise EOFError which is not handled; update the try/except around
input() in the choice == "2" branch to also catch EOFError (e.g., except
(ValueError, EOFError):) and handle it similarly to the ValueError case by
printing an appropriate message (using C_RED/C_RESET) and returning to the menu
flow instead of letting the exception propagate; keep the existing behavior for
valid pnum handling (finding phase, calling run_phase, reloading specs with
load_all_specs, and calling print_phase_status).
Extract [xxx-NNN] tags from batch task titles and include them with brackets in the spec directory name (e.g. 003-[ci-001]-add-csv-export). This allows ac-phase to parse categories and group specs into proper execution phases. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
There was a problem hiding this comment.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@apps/backend/cli/batch_commands.py`:
- Around line 63-65: The import re is inside the loop where task_slug is
computed (task_slug = re.sub(...)) causing a style violation (Ruff E402) and
needless repeated import; move the statement importing the re module to the
top-level imports alongside the other imports (so it is executed once at module
import time) and remove the in-loop import, leaving the task_slug logic in place
that uses task_title and re.sub calls.
There was a problem hiding this comment.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@apps/backend/cli/batch_commands.py`:
- Line 66: The current regex assigned to tag_match only captures a single
word-digits segment and will miss multi-part or alphabetic-only tags; update the
pattern used where tag_match is created (the assignment using task_title) to a
more permissive capture that allows letters, digits, hyphens and optionally
dots/slashes (e.g., change the regex to capture one or more of those characters
inside the brackets) so tags like [sec-sub-001], [hotfix], or tags with
dots/slashes are matched and the remaining title group is still captured.
- Around line 66-72: The generated directory name spec_name currently embeds
literal brackets (constructed in the block using tag_match, tag, title_slug and
spec_id) which are glob metacharacters; change the formatting to avoid '[' and
']' (e.g., build spec_name as f"{spec_id}-{tag}-{title_slug}" or
f"{spec_id}-({tag})-{title_slug}" or replace brackets with hyphens) and keep the
existing slug sanitization for title_slug; also review any code that extracts
IDs via iterdir() (the earlier ID-extraction logic) to tolerate the new naming
scheme so directory parsing still works.
| spec_name = f"{spec_id}-{task_slug}" | ||
|
|
||
| # Extract category tag like [sec-001] from title if present | ||
| tag_match = re.match(r"^\[(\w+-\d+)\]\s*(.*)", task_title) |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
Tag regex only matches a single word-digits segment.
r"^\[(\w+-\d+)\]\s*(.*)" won't match tags with multiple segments (e.g., [sec-sub-001]), purely alphabetic tags (e.g., [hotfix]), or tags with dots/slashes. If the tag taxonomy may expand, consider a more permissive capture like r"^\[([\w\-]+)\]\s*(.*)".
🤖 Prompt for AI Agents
In `@apps/backend/cli/batch_commands.py` at line 66, The current regex assigned to
tag_match only captures a single word-digits segment and will miss multi-part or
alphabetic-only tags; update the pattern used where tag_match is created (the
assignment using task_title) to a more permissive capture that allows letters,
digits, hyphens and optionally dots/slashes (e.g., change the regex to capture
one or more of those characters inside the brackets) so tags like [sec-sub-001],
[hotfix], or tags with dots/slashes are matched and the remaining title group is
still captured.
| tag_match = re.match(r"^\[(\w+-\d+)\]\s*(.*)", task_title) | ||
| if tag_match: | ||
| tag = tag_match.group(1) # e.g. "sec-001" | ||
| title_rest = tag_match.group(2) # e.g. "Remove hardcoded API key..." | ||
| title_slug = re.sub(r"[^\w\-]", "-", title_rest.lower()) | ||
| title_slug = re.sub(r"-+", "-", title_slug).strip("-")[:50] | ||
| spec_name = f"{spec_id}-[{tag}]-{title_slug}" |
There was a problem hiding this comment.
Brackets in directory names can break shell globbing and downstream tooling.
spec_name like 001-[sec-001]-remove-hardcoded contains literal [ and ], which are glob metacharacters. Any unquoted shell expansion (e.g., ls .auto-claude/specs/001-*) or tools that interpret glob patterns will misinterpret these characters. This also affects Line 56's iterdir()-based ID extraction if future code passes these paths through shell commands or subprocess without quoting.
Consider escaping or replacing brackets in the spec name:
♻️ Suggested fix — use parentheses or plain hyphens instead of brackets
- spec_name = f"{spec_id}-[{tag}]-{title_slug}"
+ spec_name = f"{spec_id}-{tag}-{title_slug}"This keeps the tag visible in the directory name without introducing glob-sensitive characters. If visual grouping of the tag is desired, parentheses (sec-001) are safer but still non-ideal; plain hyphens are the most portable.
🤖 Prompt for AI Agents
In `@apps/backend/cli/batch_commands.py` around lines 66 - 72, The generated
directory name spec_name currently embeds literal brackets (constructed in the
block using tag_match, tag, title_slug and spec_id) which are glob
metacharacters; change the formatting to avoid '[' and ']' (e.g., build
spec_name as f"{spec_id}-{tag}-{title_slug}" or
f"{spec_id}-({tag})-{title_slug}" or replace brackets with hyphens) and keep the
existing slug sanitization for title_slug; also review any code that extracts
IDs via iterdir() (the earlier ID-extraction logic) to tolerate the new naming
scheme so directory parsing still works.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
There was a problem hiding this comment.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@scripts/REFERENCE.md`:
- Around line 100-112: The fenced code block listing CLI examples (the lines
starting with "~/auto-claude.sh ...") lacks a language tag; update that code
fence to use a shell language identifier (e.g., add "bash" after the opening
backticks) so syntax highlighting applies to the CLI examples for commands like
"~/auto-claude.sh ideation", "run", "github review-pr" and others.
- Around line 138-142: The fenced code block containing the lines "ac-batch =
WHAT to build (discover → create → build)", "ac-phase = HOW to build it (phased
execution with review gates)", and "auto-claude.sh = individual commands for
one-off tasks" needs a language identifier; update the opening fence from ``` to
```text (or ```bash if you prefer command styling) so the block is annotated
(e.g., replace the triple backticks before the block with ```text).
- Around line 89-96: The fenced code block showing the CLI commands (the
ac-phase examples) is missing a language identifier; update the opening fence to
include a shell language (e.g., add "bash" after the triple backticks) so the
block becomes a bash-highlighted block and ensure the rest of the block (the
ac-phase lines) remains unchanged.
- Around line 64-85: The fenced code block showing CLI examples (commands like
"ac-batch --insights", "ac-batch --build", etc.) needs a language identifier for
syntax highlighting; update the opening fence from ``` to ```bash so the block
becomes a bash-highlighted code block in scripts/REFERENCE.md.
| ``` | ||
| Discover: | ||
| ac-batch --insights Interactive codebase Q&A | ||
| ac-batch --insights "question" One-shot question | ||
| ac-batch --ideation Brainstorm improvements | ||
| ac-batch --roadmap Strategic feature roadmap | ||
|
|
||
| Create: | ||
| ac-batch --discover Create specs from discovery outputs | ||
| ac-batch --create tasks.json Create specs from JSON file | ||
|
|
||
| Build: | ||
| ac-batch --build Build all pending specs | ||
| ac-batch --build --spec 003 Build specific spec | ||
| ac-batch --build --qa Build all + run QA | ||
| ac-batch --qa QA all built specs | ||
|
|
||
| Manage: | ||
| ac-batch --status Show all spec statuses | ||
| ac-batch --cleanup Preview cleanup | ||
| ac-batch --cleanup --confirm Delete completed specs | ||
| ``` |
There was a problem hiding this comment.
Add a language identifier to this fenced block.
Specify a language for syntax highlighting (e.g., bash for CLI examples).
✅ Suggested fix
-```
+```bash🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] 64-64: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In `@scripts/REFERENCE.md` around lines 64 - 85, The fenced code block showing CLI
examples (commands like "ac-batch --insights", "ac-batch --build", etc.) needs a
language identifier for syntax highlighting; update the opening fence from ```
to ```bash so the block becomes a bash-highlighted code block in
scripts/REFERENCE.md.
| ``` | ||
| ac-phase Interactive menu | ||
| ac-phase --status Show phase progress | ||
| ac-phase --run Run next pending phase | ||
| ac-phase --run --phase 2 Run specific phase | ||
| ac-phase --run --all Run all remaining phases | ||
| ac-phase --init Regenerate phases from specs | ||
| ``` |
There was a problem hiding this comment.
Add a language identifier to this fenced block.
Specify a language for syntax highlighting (e.g., bash for CLI examples).
✅ Suggested fix
-```
+```bash🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] 89-89: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In `@scripts/REFERENCE.md` around lines 89 - 96, The fenced code block showing the
CLI commands (the ac-phase examples) is missing a language identifier; update
the opening fence to include a shell language (e.g., add "bash" after the triple
backticks) so the block becomes a bash-highlighted block and ensure the rest of
the block (the ac-phase lines) remains unchanged.
| ``` | ||
| ~/auto-claude.sh ideation <path> Brainstorm improvements | ||
| ~/auto-claude.sh roadmap <path> Create implementation roadmap | ||
| ~/auto-claude.sh insights <path> "question" Ask about the codebase | ||
| ~/auto-claude.sh spec <path> --task "..." Create spec for a task | ||
| ~/auto-claude.sh run <path> --spec 001 Execute a spec build | ||
| ~/auto-claude.sh github <path> review-pr 42 Review a PR | ||
| ~/auto-claude.sh github <path> triage Triage issues | ||
| ~/auto-claude.sh github <path> auto-fix 456 Auto-fix an issue | ||
| ~/auto-claude.sh list <path> List specs | ||
| ~/auto-claude.sh worktrees <path> --cleanup Clean up worktrees | ||
| ~/auto-claude.sh ui Start desktop app | ||
| ``` |
There was a problem hiding this comment.
Add a language identifier to this fenced block.
Specify a language for syntax highlighting (e.g., bash for CLI examples).
✅ Suggested fix
-```
+```bash🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] 100-100: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In `@scripts/REFERENCE.md` around lines 100 - 112, The fenced code block listing
CLI examples (the lines starting with "~/auto-claude.sh ...") lacks a language
tag; update that code fence to use a shell language identifier (e.g., add "bash"
after the opening backticks) so syntax highlighting applies to the CLI examples
for commands like "~/auto-claude.sh ideation", "run", "github review-pr" and
others.
| ``` | ||
| ac-batch = WHAT to build (discover → create → build) | ||
| ac-phase = HOW to build it (phased execution with review gates) | ||
| auto-claude.sh = individual commands for one-off tasks | ||
| ``` |
There was a problem hiding this comment.
Add a language identifier to this fenced block.
If this is a conceptual snippet, use text; if you want command-like styling, use bash.
✅ Suggested fix
-```
+```text📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| ``` | |
| ac-batch = WHAT to build (discover → create → build) | |
| ac-phase = HOW to build it (phased execution with review gates) | |
| auto-claude.sh = individual commands for one-off tasks | |
| ``` |
🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] 138-138: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In `@scripts/REFERENCE.md` around lines 138 - 142, The fenced code block
containing the lines "ac-batch = WHAT to build (discover → create → build)",
"ac-phase = HOW to build it (phased execution with review gates)", and
"auto-claude.sh = individual commands for one-off tasks" needs a language
identifier; update the opening fence from ``` to ```text (or ```bash if you
prefer command styling) so the block is annotated (e.g., replace the triple
backticks before the block with ```text).
ac-batch builds were failing with BUILD BLOCKED because run.py requires human review approval. Since ac-batch is interactive and the user already confirms what to build, --force is appropriate. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Same fix as ac-batch — ac-phase was also missing --force when invoking run.py, causing BUILD BLOCKED errors. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
AndyMik90
left a comment
There was a problem hiding this comment.
🤖 Auto Claude PR Review
🔴 Follow-up Review: Blocked
🔴 Blocked - 8 CI check(s) failing. Fix CI before merge.
Resolution Status
- ✅ Resolved: 0 previous findings addressed
- ❌ Unresolved: 14 previous findings remain
- 🆕 New Issues: 11 new findings in recent changes
Finding Validation
- 🔍 Dismissed as False Positives: 0 findings were re-investigated and found to be incorrect
- ✓ Confirmed Valid: 25 findings verified as genuine issues
- 👤 Needs Human Review: 1 findings require manual verification
🚨 Blocking Issues
- quality: [UNRESOLVED] Heavy code duplication between preflight.py and preflight_hook.py
- quality: [UNRESOLVED] Hardcoded private network IP address as default Ollama URL
- quality: [UNRESOLVED] OAuth token prefix leaked to stdout in production logs
- quality: [UNRESOLVED] Hardcoded Unix venv path breaks cross-platform support
- quality: [UNRESOLVED] Branch name sanitization changes naming for existing worktrees with no migration
- quality: [UNRESOLVED] Hardcoded Unix-only venv path violates cross-platform rules
- quality: [UNRESOLVED] Hardcoded private LAN IP as Ollama default
- quality: [UNRESOLVED] Redefines ANSI colors instead of using existing ui.colors module
- quality: Token auto-fix uses naive string replace that could corrupt .env file
- quality: Preflight hook runs before argument parsing, blocking --help and --list
Verdict
BLOCKED: 8 CI checks are failing (CI Complete, CodeQL, Lint Complete, Python Ruff, test-python on all 3 platforms). Additionally, none of the 14 previous findings have been addressed — all remain unresolved. The new commits (9 commits adding ac-batch.py, ac-phase.py, batch-from-discovery.py, and related changes) introduce 6 new confirmed issues including a HIGH severity finding (preflight hook blocks --help/--list when .env is missing). The unused imports (time, os) and unused variable flagged by GitHub Advanced Security are likely contributing to the Ruff and CodeQL CI failures. Until CI passes and previous findings are addressed, this PR cannot be merged.
Review Process
Agents invoked: resolution-verifier, new-code-reviewer, comment-analyzer, finding-validator
This is an AI-generated follow-up review using parallel specialist analysis with finding validation.
Findings (25 selected of 25 total)
🔵 [70e6af72c926] [LOW] [UNRESOLVED] OAuth token partial disclosure in log output
📁 apps/backend/preflight_hook.py:92
The preflight scripts print the first 20 characters of OAuth tokens to stdout in three locations. For tokens like 'sk-ant-oat01-...' (14-char prefix), this exposes ~6 characters of actual key material. While this is a local CLI tool and the exposure is limited, token material in stdout can be captured by CI/CD logs, terminal session recordings, or log aggregation if the tool is run in automated contexts. Best practice is to mask tokens entirely or show only the last 4 characters.
Resolution note: preflight_hook.py:92 still prints token[:20]: print(f"[preflight] Token auto-fixed from ~/.claude/.credentials.json ({new_token[:20]}...)")
Suggested fix:
Replace token[:20] with a fully masked representation, e.g.: f'***{new_token[-4:]}' to show only the last 4 characters, or simply omit the token value entirely from the log message.
🔵 [22242def5f18] [LOW] [UNRESOLVED] Hardcoded expired token prefix in source code
📁 apps/backend/preflight_hook.py:52
A specific expired token prefix 'sk-ant-oat01-cnqsmZU' is committed to source code and used as a pattern match. While the token is identified as expired and the prefix alone is not sufficient for authentication, committing actual (even partial) token material to a repository is a credential hygiene issue. This pattern is checked on every CLI invocation via the preflight hook.
Resolution note: preflight_hook.py:52 still has: known_expired = ["sk-ant-oat01-cnqsmZU"]
Suggested fix:
Remove the hardcoded token prefix. Instead, validate tokens by attempting an API call or checking token expiry metadata, rather than maintaining a list of known-expired token prefixes in source code.
🔵 [2a5074246e97] [LOW] [UNRESOLVED] Hardcoded internal network IP as Ollama default
📁 apps/backend/preflight_hook.py:101
The preflight scripts use 'http://192.168.0.234:11434' as the default Ollama URL, which is an internal/private network IP specific to the developer's infrastructure. The rest of the codebase consistently uses 'http://localhost:11434' as the default (see integrations/graphiti/config.py and .env.example). This inconsistency exposes internal network topology information in public source code.
Resolution note: preflight_hook.py:101 still has: ollama_url = env.get("OLLAMA_BASE_URL", "http://192.168.0.234:11434")
Suggested fix:
Change the default to 'http://localhost:11434' to be consistent with the rest of the codebase (see .env.example line 311 and integrations/graphiti/config.py line 69).
🟠 [6292ed9f383d] [HIGH] [UNRESOLVED] Heavy code duplication between preflight.py and preflight_hook.py
📁 apps/backend/preflight.py:51
preflight.py and preflight_hook.py duplicate the same logic in 3 areas: (1) .env file parsing (partition-based key=value parsing) at preflight.py:51-57 and preflight_hook.py:35-40, (2) OAuth token checking with identical hardcoded expired-token prefix at preflight.py:60-61 and preflight_hook.py:46-56, (3) token auto-fix from ~/.claude/.credentials.json at preflight.py:84-110 and preflight_hook.py:61-96. The preflight_hook.py was intended to be a 'lightweight wrapper around preflight.py' (per its docstring), but instead reimplements everything from scratch. If the token-fix logic or .env parsing needs to change, it must be updated in two places.
Resolution note: Both preflight.py and preflight_hook.py still independently implement .env parsing (lines 51-57 vs 33-41), OAuth checking (60-67 vs 44-58), Ollama connectivity (112-149 vs 99-114), and token auto-fix (84-110 vs 61-96). No shared code or imports between files.
Suggested fix:
Have preflight_hook.py import and delegate to preflight.py's PreflightCheck class (as its docstring implies), or extract shared utilities (env parsing, token fix) into a common module that both files import.
🟡 [db0a08525374] [MEDIUM] [UNRESOLVED] Hardcoded private network IP address as default Ollama URL
📁 apps/backend/preflight_hook.py:101
Both preflight files hardcode a private network IP '192.168.0.234:11434' as the default Ollama URL. This is environment-specific to the author's local network and will fail for any other user or CI environment. In preflight.py line 117, the same IP is hardcoded without even checking .env first (it only reads .env afterward). In preflight_hook.py:101, it's used as the fallback default. | Same hardcoded private LAN IP http://192.168.0.234:11434 duplicated from preflight.py. Both files should use http://localhost:11434 as the standard default.
Resolution note: Both files still hardcode 192.168.0.234:11434. preflight_hook.py:101: env.get("OLLAMA_BASE_URL", "http://192.168.0.234:11434"), preflight.py:117: ollama_url = "http://192.168.0.234:11434"
Suggested fix:
Use a generic localhost default: 'http://localhost:11434' which is Ollama's standard default, or omit the check entirely if the URL isn't configured.
🟡 [8e4f445200fd] [MEDIUM] [UNRESOLVED] OAuth token prefix leaked to stdout in production logs
📁 apps/backend/preflight_hook.py:92
Both preflight files print the first 20 characters of OAuth tokens to stdout. These are real authentication credentials printed during normal CLI startup. In preflight.py line 67 the existing token prefix is printed, and in lines 92 and 107 (preflight_hook.py / preflight.py) the new token prefix is printed after auto-fix. This leaks credential material to terminal output, log files, and CI systems that may capture stdout.
Resolution note: Three locations still print token[:20]: preflight_hook.py:92, preflight.py:67 ok(f"OAuth token present ({token[:20]}...)"), preflight.py:107 ok(f"Token auto-fixed...({new_token[:20]}...)")
Suggested fix:
Mask the token more aggressively, e.g., show only the first 8 chars or just confirm presence: 'OAuth token present (sk-ant-****)' or 'Token auto-fixed successfully'.
🟡 [a6cd748aed0e] [MEDIUM] [UNRESOLVED] Hardcoded Unix venv path breaks cross-platform support
📁 scripts/ac-phase.py:49
ac-phase.py hardcodes VENV_PYTHON as '.venv/bin/python' (Unix path), and similarly preflight.py:16 uses the same pattern. Per CLAUDE.md, this project supports Windows, macOS, and Linux, and the rule is 'Never hardcode paths. Use findExecutable() and joinPaths().' On Windows, the venv python is at '.venv/Scripts/python.exe', so this script will silently fall back to 'python3' from PATH (line 385) which may be a completely different Python without the project's dependencies.
Resolution note: scripts/ac-phase.py:49 still has: VENV_PYTHON = AUTO_CLAUDE_ROOT / "apps" / "backend" / ".venv" / "bin" / "python"
Suggested fix:
Use platform-aware venv path: `VENV_PYTHON = AUTO_CLAUDE_ROOT / 'apps' / 'backend' / '.venv' / ('Scripts' if sys.platform == 'win32' else 'bin') / ('python.exe' if sys.platform == 'win32' else 'python')`
🟠 [e1f1e5544e5f] [HIGH] [UNRESOLVED] Branch name sanitization changes naming for existing worktrees with no migration
📁 apps/backend/core/worktree.py:348
The get_branch_name() method now sanitizes spec names by removing characters like [], ~, ^, etc. This changes the branch name that will be computed for any existing spec whose name contains these characters (e.g., '016-[sec-001]-fix-xss' was previously branched as 'auto-claude/016-[sec-001]-fix-xss' and now becomes 'auto-claude/016-sec-001-fix-xss'). Since get_branch_name() is called by 6 other methods (create_worktree, remove_worktree, merge_worktree, get_worktree_info, etc.), existing worktrees created before this change will have branches with the old unsanitized name, but all operations will now look for the new sanitized name — causing a mismatch.
Resolution note: worktree.py:347-351 still sanitizes brackets from branch names with no migration for existing worktrees: sanitized = re.sub(r'[[]~^:?*\{}]', '', spec_name)
Suggested fix:
Add a migration check: when get_worktree_info or create_worktree detects the old (unsanitized) branch exists but the new (sanitized) name doesn't, rename the branch. Alternatively, sanitize only at creation time and store the actual branch name in worktree metadata.
🔵 [be3a50250b71] [LOW] [UNRESOLVED] Import statement inside loop body
📁 apps/backend/preflight_hook.py:126
The 'import time' statement is placed inside the for-loop body in _check_stale_locks(). While Python caches imports so this isn't a performance bug, it's a code quality issue — the import should be at the top of the file or at least at the function level, not re-executed on every iteration.
Resolution note: preflight_hook.py:126 still has 'import time' inside the for-loop body in _check_stale_locks()
Suggested fix:
Move 'import time' to the top of the file alongside the other imports (json, os, subprocess, sys).
🟡 [77e01453196d] [MEDIUM] [UNRESOLVED] Hardcoded Unix-only venv path violates cross-platform rules
📁 apps/backend/preflight.py:16
VENV_PYTHON uses .venv/bin/python which only works on Unix. The project's CLAUDE.md explicitly requires: 'Never use process.platform directly. Import from apps/backend/core/platform/'. The existing codebase at core/workspace/setup.py (line 731) correctly handles both platforms with for candidate in ('bin/python', 'Scripts/python.exe'). Searched Grep('.venv.*bin.*python|Scripts.*python', 'apps/backend/') — confirmed the existing cross-platform pattern in workspace/setup.py.
Resolution note: preflight.py:16 still has: VENV_PYTHON = BACKEND_DIR / ".venv" / "bin" / "python"
Suggested fix:
Use the same cross-platform pattern as core/workspace/setup.py: check both 'bin/python' and 'Scripts/python.exe' candidates, or use the platform abstraction from core.platform.
🟡 [7fd35cebd965] [MEDIUM] [UNRESOLVED] Hardcoded private LAN IP as Ollama default
📁 apps/backend/preflight.py:117
The default Ollama URL is hardcoded to http://192.168.0.234:11434 — a private LAN IP specific to one developer's setup. This will fail for any other user/environment. The standard Ollama default is http://localhost:11434. This same issue appears in preflight_hook.py:101.
Resolution note: preflight.py:117 still has: ollama_url = "http://192.168.0.234:11434"
Suggested fix:
Use `http://localhost:11434` as the default Ollama URL, consistent with Ollama's standard configuration. The .env override will still apply for custom setups.
🟡 [c432b51b07eb] [MEDIUM] [UNRESOLVED] Redefines ANSI colors instead of using existing ui.colors module
📁 apps/backend/preflight.py:18
The file defines its own ANSI color constants (RED, GREEN, YELLOW, BLUE, RESET) and output helpers (ok, warn, fail, info). The codebase already has a comprehensive ui package at apps/backend/ui/ with Color class, color(), success(), error(), warning(), info(), print_header(), print_section(), print_status() etc. Searched Grep('from.*ui import|from ui import', 'apps/backend/') — confirmed the existing ui module is used throughout the codebase (cli/, agents/, etc.).
Resolution note: preflight.py:18-22 still defines own ANSI colors (RED, GREEN, YELLOW, BLUE, RESET) instead of importing from existing ui/colors.py module
Suggested fix:
Import from the existing `ui` module: `from ui.colors import Color, success, error, warning, info` and `from ui.formatters import print_header, print_section`. This ensures consistent terminal output and respects the ui module's color-support detection (which gracefully degrades on terminals that don't support ANSI).
🔵 [b681822cfa89] [LOW] [UNRESOLVED] Hardcoded filesystem paths specific to one developer's setup
📁 apps/backend/preflight.py:184
Both preflight.py (line 184) and preflight_hook.py (line 119) hardcode Path('/aidata/projects') and Path.home() / 'projects' as locations to scan for stuck locks. These are environment-specific paths that won't exist on other machines and have no configuration mechanism.
Resolution note: Both files still hardcode: preflight.py:182-185 and preflight_hook.py:119 use Path.home() / 'projects' and Path('/aidata/projects')
Suggested fix:
Either make these configurable via .env (e.g., AUTO_CLAUDE_PROJECT_DIRS), or remove the hardcoded `/aidata/projects` path since it's specific to one developer's setup. The `Path.home() / 'projects'` is at least somewhat conventional.
🔵 [d328614a9ca7] [LOW] [UNRESOLVED] Hardcoded OAuth token prefix for expiration detection
📁 apps/backend/preflight.py:61
The code checks for a specific token prefix sk-ant-oat01-cnqsmZU to detect expired tokens. This is a one-time value from a specific developer's expired token. This pattern doesn't generalize — future expired tokens will have different prefixes. Same issue in preflight_hook.py:52.
Resolution note: preflight.py:61 still has: if not token or token.startswith("sk-ant-oat01-cnqsmZU")
Suggested fix:
Consider validating tokens by making a lightweight API call or checking token expiry metadata rather than matching specific token prefixes.
🟡 [NEW-003] [MEDIUM] Token auto-fix uses naive string replace that could corrupt .env file
📁 apps/backend/preflight.py:100
The _fix_token method uses content.replace(old_token, new_token) which performs a global replacement across the entire .env file. If the token string appears elsewhere (comments, other variables), all occurrences get replaced. The sibling file preflight_hook.py handles this correctly with line-by-line replacement targeting only CLAUDE_CODE_OAUTH_TOKEN= lines.
Suggested fix:
Use line-by-line replacement like preflight_hook.py: parse lines, find the CLAUDE_CODE_OAUTH_TOKEN= line, replace only that line.
🔵 [NEW-006] [LOW] Branch name sanitization incomplete and removes valid characters
📁 apps/backend/core/worktree.py:348
The sanitization regex removes brackets [] which are valid git branch characters, while missing other invalid patterns: spaces, '..' sequences, '.lock' endings, and leading '-'. The regex should only remove actually invalid git ref characters.
Suggested fix:
Remove [] from the sanitization regex (they're valid in git refs). Add handling for spaces, '..', leading '-', and '.lock' endings.
🟠 [NEW-008] [HIGH] Preflight hook runs before argument parsing, blocking --help and --list
📁 apps/backend/cli/main.py:289
run_preflight() is called at the start of main() before parse_args(). If .env doesn't exist, preflight calls sys.exit(1), preventing users from running --help, --list, or any read-only command. New users who haven't configured their environment yet cannot even see usage instructions.
Suggested fix:
Move the preflight call after argument parsing, and skip for read-only commands (--help, --list, --batch-status).
🔵 [NEW-009] [LOW] os.system() used for clear_screen instead of subprocess
📁 scripts/batch-from-discovery.py:38
Uses os.system('clear') which invokes a shell. While the argument is hardcoded (no injection risk), os.system() is generally discouraged. subprocess.run(['clear']) or ANSI escape codes would be more idiomatic.
Suggested fix:
Use print('\033[2J\033[H', end='', flush=True) for portable terminal clearing.
🔵 [NEW-010] [LOW] Hardcoded developer-specific path /aidata/projects/Auto-Claude
📁 scripts/batch-from-discovery.py:407
find_auto_claude_dir() includes Path('/aidata/projects/Auto-Claude') as a candidate, which is specific to one developer's server. While non-existent paths are safely skipped, this should be replaced with an environment variable lookup.
Suggested fix:
Replace with os.environ.get('AUTO_CLAUDE_DIR') or rely on --auto-claude-dir CLI argument.
🔵 [NEW-011] [LOW] Ollama check shells out to curl instead of using Python urllib
📁 apps/backend/preflight_hook.py:103
Both preflight files use subprocess.run(['curl', ...]) for HTTP requests. curl may not be available on all platforms (especially Windows). Python's urllib.request provides the same functionality without external dependencies.
Suggested fix:
Use urllib.request.urlopen(url, timeout=3) from Python's standard library.
🔵 [CMT-001] [LOW] [FROM COMMENTS] Unused local variable 'result' (GitHub Advanced Security)
📁 scripts/ac-batch.py:619
result = subprocess.run(cmd, ...) is assigned but never checked. Unlike 8 other subprocess.run() calls in the same file that check result.returncode, this one silently ignores failures. Flagged by GitHub code scanning.
Suggested fix:
Either remove the assignment (bare subprocess.run()) or add return code checking.
🔵 [CMT-002] [LOW] [FROM COMMENTS] Unused import 'time' (GitHub Advanced Security / Ruff)
📁 scripts/ac-batch.py:32
'import time' at line 32 is never used anywhere in the script. Likely contributing to CI Ruff lint failures.
Suggested fix:
Remove 'import time' from line 32.
🔵 [CMT-003] [LOW] [FROM COMMENTS] Unused import 'time' (GitHub Advanced Security / Ruff)
📁 scripts/ac-phase.py:26
'import time' at line 26 is never used anywhere in the script. Likely contributing to CI Ruff lint failures.
Suggested fix:
Remove 'import time' from line 26.
🔵 [CMT-004] [LOW] [FROM COMMENTS] Unused import 'os' (GitHub Advanced Security / Ruff)
📁 scripts/ac-batch.py:27
'import os' at line 27 is never used anywhere in ac-batch.py. Likely contributing to CI Ruff lint failures.
Suggested fix:
Remove 'import os' from line 27.
🔵 [CMT-005] [LOW] [FROM COMMENTS] Empty except blocks without comments (GitHub Advanced Security)
📁 scripts/ac-batch.py:148
Five except blocks across ac-batch.py (lines 148, 491, 535, 604) and ac-phase.py (line 167) contain only 'pass' with no explanatory comment. Flagged by GitHub code scanning.
Suggested fix:
Add brief explanatory comments to each empty except block.
This review was generated by Auto Claude.
Summary
extractRateLimitResetTimeto handle optional milliseconds (.000) in ISO 8601 timestampstoISOString()output like2026-02-15T12:00:00.000Zhad the.000Zsuffix dropped by the regex, causingnew Date()to parse as local time instead of UTCgithub-error-parsertest "should generate fallback message when reset time has passed" to fail with a timezone-sized offsetTest plan
github-error-parser.test.tspass🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Chores