Changes from 9 commits
3 changes: 3 additions & 0 deletions .gitignore
@@ -175,3 +175,6 @@ OPUS_ANALYSIS_AND_IDEAS.md
/shared_docs
logs/security/
Agents.md
+desktop.env
+auto-claude-desktop.sh
Comment on lines +178 to +179
🧹 Nitpick | 🔵 Trivial

Consider placing desktop.env and auto-claude-desktop.sh in the appropriate existing sections.

The .env.* pattern on line 16 only matches names that start with ".env." (for example .env.desktop); a file literally named desktop.env (no leading dot) is not covered, so this entry is redundant only if the real file carries the dot prefix. auto-claude-desktop.sh would fit better under the "Auto Claude Generated" section (lines 57–66) than appended at the end under "Misc / Development."

🤖 Prompt for AI Agents
In @.gitignore around lines 178 - 179, The .gitignore currently lists
"desktop.env" and "auto-claude-desktop.sh" in the Misc/Development tail; remove
the redundant "desktop.env" entry if your file is already covered by the
existing ".env.*" pattern (line referencing .env.*) or keep it only if the
literal filename "desktop.env" (no dot) must be ignored, and move
"auto-claude-desktop.sh" into the existing "Auto Claude Generated" section (the
block titled "Auto Claude Generated" around lines 57–66) so generated scripts
are grouped correctly.

+images/
⚠️ Potential issue | 🟡 Minor

images/ is overly broad — it ignores any directory named images anywhere in the repo.

This pattern matches at every level of the tree, not just the root. If any package or sub-project has an images/ directory that should be tracked, those files will be silently ignored. Consider scoping it to the root:

Proposed fix
-images/
+/images/
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-images/
+/images/
🤖 Prompt for AI Agents
In @.gitignore at line 180, The .gitignore entry "images/" is too broad and
ignores any directory named images anywhere in the repo; change it to scope to
the repository root by replacing "images/" with "/images/" (or alternatively add
explicit allow rules like "!/path/to/package/images/" for any subproject images
you want tracked) so only the top-level images directory is ignored and
package/subproject image folders are not silently excluded.
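The anchored vs. unanchored behavior described above is easy to confirm in a throwaway repository. A minimal sketch (temp directory and the git CLI; not part of the PR):

```python
import pathlib
import subprocess
import tempfile

def is_ignored(repo: pathlib.Path, path: str) -> bool:
    # git check-ignore exits 0 when the path matches an ignore rule
    return subprocess.run(
        ["git", "-C", str(repo), "check-ignore", "-q", path]
    ).returncode == 0

repo = pathlib.Path(tempfile.mkdtemp())
subprocess.run(["git", "init", "-q", str(repo)], check=True)
(repo / "pkg" / "images").mkdir(parents=True)
(repo / "pkg" / "images" / "a.png").touch()

(repo / ".gitignore").write_text("images/\n")
print(is_ignored(repo, "pkg/images/a.png"))   # True: unanchored pattern matches at any depth

(repo / ".gitignore").write_text("/images/\n")
print(is_ignored(repo, "pkg/images/a.png"))   # False: anchored pattern only matches at the root
```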

4 changes: 3 additions & 1 deletion apps/backend/cli/batch_commands.py
@@ -60,7 +60,9 @@ def handle_batch_create_command(batch_file: str, project_dir: str) -> bool:
for idx, task in enumerate(tasks, 1):
spec_id = f"{next_id:03d}"
task_title = task.get("title", f"Task {idx}")
-task_slug = task_title.lower().replace(" ", "-")[:50]
+import re
+task_slug = re.sub(r"[^\w\-]", "-", task_title.lower())
+task_slug = re.sub(r"-+", "-", task_slug).strip("-")[:50]
spec_name = f"{spec_id}-{task_slug}"
spec_dir = specs_dir / spec_name
spec_dir.mkdir(exist_ok=True)
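The new slug logic in this hunk can be exercised standalone; a minimal sketch of the same two-step regex (the function name is mine, not the codebase's):

```python
import re

def slugify(title: str, max_len: int = 50) -> str:
    # non-word characters (anything outside [A-Za-z0-9_-]) become dashes,
    # runs of dashes collapse, then leading/trailing dashes are trimmed
    slug = re.sub(r"[^\w\-]", "-", title.lower())
    return re.sub(r"-+", "-", slug).strip("-")[:max_len]

print(slugify("Fix: user login!"))     # fix-user-login
print(slugify("  weird///title  "))    # weird-title
```

Note the diff applies [:50] after strip("-"), so a truncated slug can still end in a dash.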
4 changes: 4 additions & 0 deletions apps/backend/cli/main.py
@@ -285,6 +285,10 @@ def parse_args() -> argparse.Namespace:

def main() -> None:
"""Main CLI entry point."""
+# Preflight self-healing checks (token refresh, ollama, stale locks)
+from preflight_hook import run_preflight
+run_preflight()

# Set up environment first
setup_environment()

6 changes: 5 additions & 1 deletion apps/backend/core/worktree.py
@@ -344,7 +344,11 @@ def get_worktree_path(self, spec_name: str) -> Path:

def get_branch_name(self, spec_name: str) -> str:
"""Get the branch name for a spec."""
-return f"auto-claude/{spec_name}"
+# Sanitize spec_name: remove characters invalid in git branch names
+sanitized = re.sub(r'[\[\]~^:?*\\{}]', '', spec_name)
+# Collapse repeated dashes from removal
+sanitized = re.sub(r'-{2,}', '-', sanitized)
+return f"auto-claude/{sanitized}"
Comment on lines 345 to +351
🧹 Nitpick | 🔵 Trivial

🧩 Analysis chain

🏁 Script executed:

# Find all calls to get_branch_name
rg -n 'get_branch_name\(' --type=py -B2 -A2

Repository: AndyMik90/Auto-Claude

Length of output: 2309


🏁 Script executed:

# Check for any validation of spec_name before it reaches this function
rg -n 'spec_name' --type=py -B2 -A2 | head -100


Length of output: 6249


🏁 Script executed:

# Look for tests of the get_branch_name function
rg -n 'test.*get_branch_name|get_branch_name.*test' --type=py -B1 -A5


Length of output: 47


🏁 Script executed:

# Examine the worktree.py file around the function
head -360 apps/backend/core/worktree.py | tail -60


Length of output: 2603


🏁 Script executed:

# Find _generate_spec_name implementation
rg -n '_generate_spec_name' --type=py -A10 | head -60


Length of output: 4293


🏁 Script executed:

# Check the full test_create_worktree_with_spec_name test
rg -n 'test_create_worktree_with_spec_name' --type=py -A20


Length of output: 1429


🏁 Script executed:

# Look for any validation of spec_name in the codebase
rg -n 'spec_name.*validate|validate.*spec_name|spec_name.*check' --type=py -B2 -A2


Length of output: 47


🏁 Script executed:

# Find the actual implementation of _generate_spec_name
rg -n 'def _generate_spec_name' --type=py -A30


Length of output: 2753


🏁 Script executed:

# Check if spec_name can come from external sources (not just _generate_spec_name)
rg -n 'spec_name\s*=' --type=py | grep -v test | head -30


Length of output: 1751


🏁 Script executed:

# Find the actual generate_spec_name implementation
rg -n 'def generate_spec_name' --type=py -A20


Length of output: 1360


🏁 Script executed:

# Check batch_commands.py line 64 to see spec_name construction
sed -n '60,70p' apps/backend/cli/batch_commands.py


Length of output: 511


🏁 Script executed:

# Get the complete generate_spec_name function
sed -n '123,180p' apps/backend/spec/pipeline/models.py


Length of output: 1080


🏁 Script executed:

# Get more of the generate_spec_name function including the actual sanitization logic
sed -n '123,200p' apps/backend/spec/pipeline/models.py


Length of output: 1552


🏁 Script executed:

# Get the complete generate_spec_name function
sed -n '123,210p' apps/backend/spec/pipeline/models.py


Length of output: 1837


Branch name sanitization is technically incomplete but mitigated by upstream validation.

Git's branch naming rules forbid spaces, .. sequences, @{, trailing . or .lock, and control characters. However, all spec_name values reaching this function are pre-sanitized by generate_spec_name() in models.py, which strips non-alphanumeric characters (keeping only hyphens and alphanumerics). This upstream sanitization ensures that problematic patterns like "foo..bar" or "test.lock" cannot occur in practice.

The current implementation is safe and functional, but adding comprehensive git ref validation here would be good defensive practice—for instance, if spec_name ever comes from external or user-controlled input. If you want to harden this, consider extending the sanitization to handle edge cases (spaces, dots, etc.), or optionally use git check-ref-format --normalize for authoritative validation.

🤖 Prompt for AI Agents
In `@apps/backend/core/worktree.py` around lines 345 - 351, The get_branch_name
method currently only strips a limited set of characters which misses
Git-specific invalid patterns; update get_branch_name to defensively enforce git
ref rules by additionally removing/control-replacing spaces, control characters,
consecutive dots (".."), any "@{" sequence, trailing dots or ".lock", and
collapse multiple hyphens, or alternatively call out to git check-ref-format
--normalize to validate/normalize the sanitized name; ensure you reference
get_branch_name and account for upstream generate_spec_name() but still harden
get_branch_name so it never returns a branch string with forbidden patterns.


def worktree_exists(self, spec_name: str) -> bool:
"""Check if a worktree exists for a spec."""
276 changes: 276 additions & 0 deletions apps/backend/preflight.py
@@ -0,0 +1,276 @@
#!/usr/bin/env python3
"""Auto-Claude Preflight Check & Self-Healing Script.

Run before any Auto-Claude command to detect and fix common issues.
Usage: python preflight.py [--fix]
"""
import json
import os

Check failure on line 8 in apps/backend/preflight.py: GitHub Actions / Python (Ruff), F401 `os` imported but unused.
Check notice: Code scanning / CodeQL, unused import of 'os'.
import sys
import subprocess
import time

Check failure on line 11 in apps/backend/preflight.py: GitHub Actions / Python (Ruff), F401 `time` imported but unused.
Check notice: Code scanning / CodeQL, unused import of 'time'.
from pathlib import Path

Check failure on line 12 in apps/backend/preflight.py: GitHub Actions / Python (Ruff), I001 import block is un-sorted or un-formatted (preflight.py:7:1).
Comment on lines +7 to +12
⚠️ Potential issue | 🔴 Critical

Fix linting errors: unsorted imports and unused os, time imports (pipeline failure).

The CI pipeline is failing on this file. os and time are imported but unused, and the import block is unsorted per Ruff I001.

Proposed fix
-import json
-import os
-import sys
-import subprocess
-import time
-from pathlib import Path
+import json
+import subprocess
+import sys
+from pathlib import Path

🤖 Prompt for AI Agents
In `@apps/backend/preflight.py` around lines 7 - 12, Remove the unused imports
`os` and `time` and reorder the remaining imports into a sorted, lint-compliant
block: keep `json`, `subprocess`, and `sys` as top-level imports and `Path` from
`pathlib` as a grouped import; update the import block at the top of
preflight.py (the lines importing json, os, sys, subprocess, time and Path) so
only `json`, `subprocess`, `sys`, and `from pathlib import Path` remain and are
alphabetized to satisfy Ruff I001.


BACKEND_DIR = Path(__file__).parent
ENV_FILE = BACKEND_DIR / ".env"
VENV_PYTHON = BACKEND_DIR / ".venv" / "bin" / "python"
⚠️ Potential issue | 🟡 Minor

VENV_PYTHON path is Unix-only — breaks on Windows.

".venv" / "bin" / "python" doesn't exist on Windows where the path is .venv/Scripts/python.exe. As per coding guidelines, use cross-platform utilities or detect the platform.

Suggested fix
-VENV_PYTHON = BACKEND_DIR / ".venv" / "bin" / "python"
+script_dir, python_exe = ("Scripts", "python.exe") if sys.platform == "win32" else ("bin", "python")
+VENV_PYTHON = BACKEND_DIR / ".venv" / script_dir / python_exe
🤖 Prompt for AI Agents
In `@apps/backend/preflight.py` at line 16, VENV_PYTHON currently hardcodes the
Unix path (".venv"/"bin"/"python") which fails on Windows; update the
VENV_PYTHON definition to select the platform-appropriate executable by checking
the platform (e.g., sys.platform or os.name) and using ("Scripts","python.exe")
for Windows and ("bin","python") for others—replace the current VENV_PYTHON
expression (referencing VENV_PYTHON and BACKEND_DIR) with a conditional path
construction that uses pathlib to join the correct segments.


RED = "\033[91m"
GREEN = "\033[92m"
YELLOW = "\033[93m"
BLUE = "\033[94m"
RESET = "\033[0m"

def ok(msg):
print(f" {GREEN}✓{RESET} {msg}")

Check failure: Code scanning / CodeQL, clear-text logging of sensitive information (High). This expression logs sensitive data (password) as clear text.

def warn(msg):
print(f" {YELLOW}⚠{RESET} {msg}")

def fail(msg):
print(f" {RED}✗{RESET} {msg}")

def info(msg):
print(f" {BLUE}ℹ{RESET} {msg}")


class PreflightCheck:
def __init__(self, auto_fix=False):
self.auto_fix = auto_fix
self.issues = []
self.fixed = []

def check_env_file(self):
"""Verify .env exists and has required fields."""
print(f"\n{BLUE}[1/6] Checking .env configuration{RESET}")
if not ENV_FILE.exists():
fail(".env file not found")
self.issues.append("missing_env")
return

env = {}
with open(ENV_FILE) as f:
for line in f:
line = line.strip()
if line and not line.startswith("#") and "=" in line:
key, _, val = line.partition("=")
env[key.strip()] = val.strip()

# Check OAuth token
token = env.get("CLAUDE_CODE_OAUTH_TOKEN", "")
if not token or token.startswith("sk-ant-oat01-cnqsmZU"):
fail("OAuth token missing or known-expired")
self.issues.append("expired_token")
if self.auto_fix:
self._fix_token(env)
else:
ok(f"OAuth token present ({token[:20]}...)")
⚠️ Potential issue | 🟠 Major

Security: Logging partial OAuth token to stdout (same issue as preflight_hook.py).

Line 67 prints token[:20] and line 107 prints new_token[:20]. Token prefixes are sensitive and should not be logged.

🤖 Prompt for AI Agents
In `@apps/backend/preflight.py` at line 67, The code logs sensitive OAuth token
prefixes using token and new_token in calls to ok(...)—remove any token content
from logs and replace with a non-sensitive message (e.g., "OAuth token present"
or "New OAuth token obtained") in the ok() calls; locate the uses of token and
new_token in preflight.py (the ok(...) invocations that include token[:20] and
new_token[:20]) and update them to avoid printing or formatting any part of the
token.


# Check Ollama URL
ollama_url = env.get("OLLAMA_BASE_URL", "")
if not ollama_url:
fail("OLLAMA_BASE_URL not set")
self.issues.append("missing_ollama_url")
else:
ok(f"Ollama URL: {ollama_url}")

# Check required providers
for key in ["GRAPHITI_LLM_PROVIDER", "GRAPHITI_EMBEDDER_PROVIDER"]:
if key in env:
ok(f"{key}={env[key]}")
else:
warn(f"{key} not set")

def _fix_token(self, env):
"""Auto-fix expired OAuth token from ~/.claude/.credentials.json."""
creds_file = Path.home() / ".claude" / ".credentials.json"
if not creds_file.exists():
fail("Cannot auto-fix: ~/.claude/.credentials.json not found")
return

try:
with open(creds_file) as f:
creds = json.load(f)
new_token = creds.get("claudeAiOauth", {}).get("accessToken", "")
if not new_token:
fail("No access token in credentials file")
return

# Read and update .env
content = ENV_FILE.read_text()
old_token = env.get("CLAUDE_CODE_OAUTH_TOKEN", "")
if old_token:
content = content.replace(old_token, new_token)
else:
content = f"CLAUDE_CODE_OAUTH_TOKEN={new_token}\n" + content
ENV_FILE.write_text(content)

Check failure: Code scanning / CodeQL, clear-text storage of sensitive information (High). This expression stores sensitive data (password) as clear text.
Comment on lines +99 to +106
⚠️ Potential issue | 🟠 Major

Fragile token replacement: content.replace(old_token, new_token) may corrupt .env.

If the old token string appears in a comment or another value, str.replace will replace all occurrences. Use line-by-line replacement targeting only the CLAUDE_CODE_OAUTH_TOKEN= key, as preflight_hook.py already does.

Suggested approach (matching preflight_hook.py pattern)
-            if old_token:
-                content = content.replace(old_token, new_token)
-            else:
-                content = f"CLAUDE_CODE_OAUTH_TOKEN={new_token}\n" + content
+            lines = content.split("\n")
+            updated = False
+            for i, line in enumerate(lines):
+                if line.strip().startswith("CLAUDE_CODE_OAUTH_TOKEN="):
+                    lines[i] = f"CLAUDE_CODE_OAUTH_TOKEN={new_token}"
+                    updated = True
+                    break
+            if not updated:
+                lines.insert(0, f"CLAUDE_CODE_OAUTH_TOKEN={new_token}")
+            content = "\n".join(lines)
🤖 Prompt for AI Agents
In `@apps/backend/preflight.py` around lines 99 - 106, The current token update
uses content.replace(old_token, new_token) which can mangle other lines; change
the logic in the routine around ENV_FILE/content/old_token/new_token to perform
a line-by-line update that only touches the CLAUDE_CODE_OAUTH_TOKEN entry: read
ENV_FILE.read_text() as lines, iterate and if a line startswith
"CLAUDE_CODE_OAUTH_TOKEN=" replace that whole line with
"CLAUDE_CODE_OAUTH_TOKEN=<new_token>", otherwise keep the line, and if no such
key exists prepend the new key line; then write the joined lines back with
ENV_FILE.write_text(content).

ok(f"Token auto-fixed from ~/.claude/.credentials.json ({new_token[:20]}...)")
self.fixed.append("expired_token")
except Exception as e:
fail(f"Auto-fix failed: {e}")

def check_ollama(self):
"""Verify Ollama is reachable and models are available."""
print(f"\n{BLUE}[2/6] Checking Ollama connectivity{RESET}")

# Read URL from .env
ollama_url = "http://192.168.0.234:11434"
medium

This hardcoded IP address http://192.168.0.234:11434 is not portable and will likely fail for other developers. A more sensible default would be http://localhost:11434, which is the standard for running Ollama locally. Alternatively, you could make the OLLAMA_BASE_URL a required environment variable and not provide a default here.

Suggested change
-ollama_url = "http://192.168.0.234:11434"
+ollama_url = "http://localhost:11434"

⚠️ Potential issue | 🟠 Major

Same hardcoded private IP 192.168.0.234 as in preflight_hook.py.

This should default to http://localhost:11434 (standard Ollama default). Also uses curl subprocess which isn't cross-platform.

🤖 Prompt for AI Agents
In `@apps/backend/preflight.py` at line 117, The hardcoded Ollama URL variable
ollama_url should default to "http://localhost:11434" instead of the private IP
and the code should avoid using a curl subprocess for portability; update the
ollama_url assignment in preflight.py (and mirror the same change in
preflight_hook.py) to "http://localhost:11434" and replace any subprocess curl
call that probes that URL with a cross-platform HTTP request using the project's
HTTP client (e.g., requests or urllib) or the existing HTTP helper function so
status checks use a direct HTTP GET and handle timeouts/errors instead of
shelling out.

if ENV_FILE.exists():
with open(ENV_FILE) as f:
for line in f:
if line.strip().startswith("OLLAMA_BASE_URL="):
ollama_url = line.strip().split("=", 1)[1]

try:
result = subprocess.run(
["curl", "-s", "-m", "5", f"{ollama_url}/api/tags"],
capture_output=True, text=True, timeout=10
)
Comment on lines +125 to +128
medium

Using subprocess to call curl introduces a dependency on an external command-line tool that might not be available on all systems, especially on Windows. It's more robust and portable to use a Python HTTP library like the built-in urllib.request or a third-party library like requests (if it's already a dependency) to make this API call.
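A stdlib-only version of that probe might look like the following (a sketch; the function name and the localhost default come from the review's suggestion, not from the PR):

```python
import json
import urllib.request

def fetch_ollama_models(base_url: str = "http://localhost:11434",
                        timeout: float = 5.0) -> list[str]:
    # portable replacement for the curl subprocess: a plain HTTP GET
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]
```

An unreachable host raises urllib.error.URLError, which replaces the curl returncode check.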

if result.returncode != 0:
fail(f"Ollama unreachable at {ollama_url}")
self.issues.append("ollama_unreachable")
return

data = json.loads(result.stdout)
models = [m["name"] for m in data.get("models", [])]
ok(f"Ollama responding ({len(models)} models)")

# Check required models
required = ["qwen2.5-coder:14b", "nomic-embed-text"]
for model in required:
found = any(model in m for m in models)
if found:
ok(f"Model available: {model}")
else:
fail(f"Model missing: {model}")
self.issues.append(f"missing_model_{model}")
except Exception as e:
fail(f"Ollama check failed: {e}")
self.issues.append("ollama_error")

def check_venv(self):
"""Verify Python venv and dependencies."""
print(f"\n{BLUE}[3/6] Checking Python environment{RESET}")
if not VENV_PYTHON.exists():
fail(f"venv not found at {VENV_PYTHON}")
self.issues.append("missing_venv")
if self.auto_fix:
info("Run: cd apps/backend && python3 -m venv .venv && pip install -r requirements.txt")
return

ok(f"venv exists at {VENV_PYTHON}")

# Check key imports
try:
result = subprocess.run(
[str(VENV_PYTHON), "-c", "from core.client import create_client; print('OK')"],
capture_output=True, text=True, timeout=10, cwd=str(BACKEND_DIR)
)
if "OK" in result.stdout:
ok("Core imports working")
else:
fail(f"Import error: {result.stderr[:100]}")
self.issues.append("import_error")
except Exception as e:
fail(f"venv check failed: {e}")

def check_stuck_specs(self):
"""Find and optionally clear stuck specs/locks."""
print(f"\n{BLUE}[4/6] Checking for stuck specs/locks{RESET}")

# Check common project locations
project_dirs = [
Path.home() / "projects",
Path("/aidata/projects"),
medium

The hardcoded path Path("/aidata/projects") is specific to a particular environment. This will cause issues for other developers. This path should be made configurable, perhaps through an environment variable, or removed if it's not essential for all users.

]
Comment on lines +182 to +185
⚠️ Potential issue | 🟠 Major

Hardcoded project paths ~/projects and /aidata/projects are environment-specific.

Same issue as in preflight_hook.py. These paths are developer-machine-specific and should be parameterized or derived from configuration.

🤖 Prompt for AI Agents
In `@apps/backend/preflight.py` around lines 182 - 185, The hardcoded project
paths in the project_dirs variable make the code environment-specific; update
the code that builds project_dirs in preflight.py to accept configurable
locations (e.g., read from an environment variable like PROJECT_DIRS, a config
file, or application settings) and fall back to sensible defaults if not
provided; modify the code that references project_dirs to parse a delimited list
or config value and construct Path objects (referencing the project_dirs
variable and any helper function you add, e.g., load_project_dirs or
get_project_dirs_from_env) so deployments and developer machines can supply
their own directories instead of relying on ~/projects and /aidata/projects.
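One way to parameterize those paths, as the prompt suggests: read a pathsep-separated list from an environment variable (the variable name AUTO_CLAUDE_PROJECT_DIRS and the fallback default are hypothetical):

```python
import os
from pathlib import Path

def load_project_dirs(env_var: str = "AUTO_CLAUDE_PROJECT_DIRS") -> list[Path]:
    # e.g. AUTO_CLAUDE_PROJECT_DIRS="/srv/projects:/home/me/work" on Unix
    raw = os.environ.get(env_var, "")
    if raw:
        return [Path(p).expanduser() for p in raw.split(os.pathsep) if p]
    # sensible default with no machine-specific paths baked in
    return [Path.home() / "projects"]
```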


stuck_count = 0
for pdir in project_dirs:
if not pdir.exists():
continue
for spec_dir in pdir.glob("*/.auto-claude/specs/*/.state"):
stuck_count += 1
warn(f"State cache: {spec_dir}")

for lock_file in pdir.glob("*/.auto-claude/specs/*/.lock"):
stuck_count += 1
warn(f"Lock file: {lock_file}")
if self.auto_fix:
lock_file.unlink()
ok(f"Removed stale lock: {lock_file}")
self.fixed.append(f"lock_{lock_file.name}")

if stuck_count == 0:
ok("No stuck specs or locks found")
else:
info(f"Found {stuck_count} items (use --fix to clean)")

def check_node(self):
"""Verify Node.js version for Claude Code."""
print(f"\n{BLUE}[5/6] Checking Node.js{RESET}")
try:
result = subprocess.run(
["node", "--version"], capture_output=True, text=True, timeout=5
)
version = result.stdout.strip()
major = int(version.lstrip("v").split(".")[0])
if major >= 24:
ok(f"Node.js {version}")
else:
warn(f"Node.js {version} - Auto-Claude needs v24+")
self.issues.append("old_node")
except Exception:
warn("Node.js not found in PATH")

def check_git_status(self):
"""Check for uncommitted Auto-Claude changes in projects."""
print(f"\n{BLUE}[6/6] Checking git status{RESET}")
try:
result = subprocess.run(
["git", "status", "--porcelain"], capture_output=True, text=True,
timeout=5, cwd=str(BACKEND_DIR)
)
if result.stdout.strip():
lines = result.stdout.strip().split("\n")
warn(f"Auto-Claude repo has {len(lines)} uncommitted changes")
else:
ok("Auto-Claude repo clean")
except Exception:
warn("Could not check git status")

def run(self):
print(f"\n{'='*60}")
print(f" Auto-Claude Preflight Check {'(+ Auto-Fix)' if self.auto_fix else ''}")
print(f"{'='*60}")

self.check_env_file()
self.check_ollama()
self.check_venv()
self.check_stuck_specs()
self.check_node()
self.check_git_status()

# Summary
print(f"\n{'='*60}")
if not self.issues:
print(f" {GREEN}All checks passed! Auto-Claude is ready.{RESET}")
else:
print(f" {YELLOW}{len(self.issues)} issue(s) found", end="")
if self.fixed:
print(f", {len(self.fixed)} auto-fixed", end="")
print(f"{RESET}")
remaining = [i for i in self.issues if i not in self.fixed]
if remaining:
print(f" {RED}Remaining: {', '.join(remaining)}{RESET}")
if not self.auto_fix:
print(f"\n Run with --fix to attempt auto-repair")

Check failure on line 266 in apps/backend/preflight.py: GitHub Actions / Python (Ruff), F541 f-string without any placeholders (preflight.py:266:27).
Comment on lines +264 to +266
⚠️ Potential issue | 🟡 Minor

F-string without placeholders on line 266 (Ruff F541).

-                    print(f"\n Run with --fix to attempt auto-repair")
+                    print("\n Run with --fix to attempt auto-repair")
🤖 Prompt for AI Agents
In `@apps/backend/preflight.py` around lines 264 - 266, The print call uses an
unnecessary f-string (Ruff F541) on the branch that checks self.auto_fix;
replace the f-string print(f"\n Run with --fix to attempt auto-repair") with a
normal string print("\n Run with --fix to attempt auto-repair") (or add real
placeholders if intended) in the same block where self.auto_fix is evaluated
alongside the print of Remaining so the behavior is unchanged.

print(f"{'='*60}\n")

return len(self.issues) - len(self.fixed) == 0


if __name__ == "__main__":
auto_fix = "--fix" in sys.argv
checker = PreflightCheck(auto_fix=auto_fix)
success = checker.run()
sys.exit(0 if success else 1)