Problem
When a cron or webhook trigger fires while a user-initiated run is already active in the same project directory, Untether spawns a second engine process in the same working directory. Both processes run concurrently with no coordination.
Observed in production (2026-04-14): Scout project had a user-initiated Claude Code session (PID 1782558, phase 3 implementation) running when the shield-validation cron fired at 08:00:00 and spawned a second Claude Code process (PID 1796812) in the same /home/nathan/claude-code-tools/lba/scout directory. Both ran simultaneously with separate sessions.
In this case it was fine — the cron run was making API calls while the user run was editing source code, so they touched different files. But this is a data integrity risk when both runs might edit files:
- Git conflicts: both instances commit to the same branch, one fails
- File clobbering: simultaneous writes to the same file, last write wins silently
- Lock file contention: `uv lock`, `npm install`, etc. fight over lock files
- Build artefact corruption: concurrent builds overwrite each other's output
Proposed solution: two complementary approaches
Approach 1: concurrent project setting (run queue)
A simple boolean on projects that serialises runs:
```toml
[projects.scout]
path = "/home/nathan/claude-code-tools/lba/scout"
default_engine = "claude"
chat_id = -5243261989
concurrent = false  # queue runs instead of running simultaneously
```
Behaviour when `concurrent = false`:
- First run starts normally
- Second run (user-initiated, cron, or webhook) is queued with a notification: ⏳ Queued — waiting for active run to finish (position 1)
- When the first run completes, the queued run starts automatically (FIFO order)
- `/cancel` can cancel queued runs before they start
- Queue depth visible via `/ping` or the progress message

Default: `concurrent = true` (backwards compatible — existing behaviour unchanged)
Implementation:
- Add `concurrent: bool = True` to `ProjectConfig` in `settings.py`
- In `RunnerBridge.handle_message()`, check if a run is already active for the resolved project
- If `concurrent = false` and a run is active, enqueue the new run with a "queued" progress message
- Use `anyio.Semaphore(1)` per project path (keyed by resolved `cwd`) — the `SessionLockMixin` pattern already exists for session-level locking
- When the active run completes, release the semaphore → next queued run starts
- Queued runs show `queued · claude · 0s` progress (this pattern already exists for chat-level queuing)
- Drain integration: queued runs are cancelled on shutdown (same as `/at` pending delays)
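The queueing core can be sketched with a per-path semaphore. This sketch uses `asyncio` so it runs standalone (the codebase uses anyio, whose `Semaphore` has the same shape); `gate_for` and `run_serialised` are hypothetical names, not Untether's actual API:

```python
import asyncio
from pathlib import Path

# One gate per resolved project path — keyed by cwd, not chat_id,
# so two chats routing to the same project share a queue.
_gates: dict[Path, asyncio.Semaphore] = {}


def gate_for(cwd: Path) -> asyncio.Semaphore:
    """Return the project's gate, creating it on first use."""
    key = cwd.resolve()
    if key not in _gates:
        _gates[key] = asyncio.Semaphore(1)
    return _gates[key]


async def run_serialised(cwd: Path, run, notify_queued):
    """Start `run` once the project gate is free; fire `notify_queued`
    if another run currently holds the gate."""
    gate = gate_for(cwd)
    if gate.locked():
        notify_queued()      # e.g. "⏳ Queued — waiting for active run"
    async with gate:         # waiters resume in acquisition (FIFO) order
        return await run()
```

Releasing the gate in `async with`'s exit also covers crashes and cancellations, so a failed run cannot wedge the queue.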
Edge cases:
- Multiple chats routing to the same project: the semaphore is keyed by `cwd`, not `chat_id`, so cross-chat queueing works
- Resume runs: should acquire the semaphore before spawning (already happens implicitly since they go through `handle_message`)
- Cron with `run_once = true` + queued: the cron fires, gets queued, `run_once` is consumed — if the run eventually starts, it works; if cancelled, the cron can be re-enabled via reload
Approach 2: automatic worktree isolation for triggers
For git-backed projects with worktrees_dir configured, trigger-initiated runs (crons and webhooks) can automatically use a temporary worktree instead of the main working directory:
```toml
[projects.scout]
path = "/home/nathan/claude-code-tools/lba/scout"
worktrees_dir = ".worktrees"
worktree_base = "main"
default_engine = "claude"
trigger_worktree = true  # cron/webhook runs use a worktree automatically
```
Behaviour when `trigger_worktree = true`:
- User-initiated runs use the main working directory (normal behaviour)
- Cron/webhook runs create a temporary worktree before spawning the engine
- The worktree is cleaned up after the run completes
- Both run in parallel safely — isolated git working trees
Advantages over queuing:
- No waiting — both runs execute immediately
- Full git isolation — no conflict risk
- Existing `worktrees_dir` infrastructure can be reused
Limitations:
- Only works for git repos (not arbitrary directories)
- Some operations (database migrations, API calls with state) don't benefit from worktrees
- Worktree creation adds a few seconds of overhead
- Changes made in the worktree need to be merged back to the main branch
Recommendation
Both approaches are complementary:
| Scenario | Best approach |
| --- | --- |
| Non-git project | `concurrent = false` (queue) |
| Git project, crons edit files | `trigger_worktree = true` (worktree isolation) |
| Git project, crons only call APIs | `concurrent = true` (default — no conflict) |
| Safety-first for any project | `concurrent = false` (queue — always safe) |
For v0.35.2: Implement `concurrent = false` (Approach 1) — it's simpler, universal, and covers all cases. The worktree approach (Approach 2) can be added later if there's demand for parallel trigger execution.
Documentation updates
Add to docs/how-to/webhooks-and-cron.md:
Concurrent runs in the same directory: When a cron fires while a user-initiated run is already active in the same project, both engine instances run simultaneously in the same working directory. This works if the runs touch different files (e.g., one makes API calls while the other edits code), but risks git conflicts or file clobbering if they edit the same files.
To prevent this, set `concurrent = false` on the project:

```toml
[projects.myapp]
concurrent = false
```
Subsequent runs will queue and wait for the active run to finish.
Add to docs/how-to/troubleshooting.md:
Git conflicts from concurrent cron/user runs: If a cron trigger and a manual run both edit files in the same project simultaneously, you may see git merge conflicts or lost changes. Set `concurrent = false` on the project to serialise runs, or use `worktrees_dir` for parallel-safe execution.
Files to modify
src/untether/settings.py — add concurrent: bool = True to ProjectConfig
src/untether/runner_bridge.py — add per-project semaphore, queue logic, progress notifications
src/untether/progress.py — queue position in progress state (if needed)
tests/test_exec_bridge.py — concurrent queueing tests
tests/test_settings.py — concurrent field validation
docs/how-to/webhooks-and-cron.md — concurrent runs warning + concurrent setting
docs/how-to/troubleshooting.md — git conflict diagnostic
docs/reference/config.md — concurrent field in projects table
CLAUDE.md — mention the feature
Related