Conversation
Generations now return immediately with a 'generating' status and appear
in history right away. TTS runs in a serial background queue to avoid
GPU contention. Users can kick off multiple generations without blocking.
- Async POST /generate creates DB record immediately, queues TTS work
- Serial generation queue prevents concurrent GPU access (Metal/CUDA/CPU)
- SSE endpoint GET /generate/{id}/status for real-time completion tracking
- Retry endpoint POST /generate/{id}/retry for failed generations
- Store engine and model_size on generation records for retry support
- History cards show animated loader (react-loaders) for generating/playing
- Failed generations show retry button instead of actions menu
- Model downloads happen inline in the queue instead of rejecting with 202
- Stale 'generating' records marked as failed on server startup
- Autoplay on generate setting (default: on)
- Show engine name on generation cards
- Remove sidebar generation spinner
- Checkbox alignment fix in settings
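The serial queue described in the bullets above can be sketched as a single asyncio worker draining an `asyncio.Queue`, so only one TTS job touches the GPU at a time while POST /generate returns immediately. This is a minimal standalone sketch, not the app's actual code; `fake_tts`, `broken_tts`, and the `results` dict stand in for real inference and the database:

```python
import asyncio

async def generation_worker(queue: asyncio.Queue, results: dict) -> None:
    """Drain the queue forever; one job at a time means no GPU contention."""
    while True:
        generation_id, job = await queue.get()
        try:
            await job()  # model load + inference would happen here
            results[generation_id] = "completed"
        except Exception:
            results[generation_id] = "failed"
        finally:
            queue.task_done()

async def enqueue_generation(queue, results, generation_id, job) -> str:
    """POST /generate equivalent: record 'generating' and return at once."""
    results[generation_id] = "generating"
    await queue.put((generation_id, job))
    return generation_id

async def main() -> dict:
    queue: asyncio.Queue = asyncio.Queue()
    results: dict = {}
    worker = asyncio.create_task(generation_worker(queue, results))

    async def fake_tts():
        await asyncio.sleep(0.01)  # stands in for audio generation

    async def broken_tts():
        raise RuntimeError("GPU OOM")

    await enqueue_generation(queue, results, "gen-1", fake_tts)
    await enqueue_generation(queue, results, "gen-2", broken_tts)
    await queue.join()  # wait until the worker has drained both jobs
    worker.cancel()
    return results

results = asyncio.run(main())
```

Because the worker processes one item per loop iteration, two concurrent submissions can never run inference at the same time, which is the whole point of the serialization.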
⚠️ Caution: Review failed; the pull request is closed.

ℹ️ Recent review info: Configuration used: defaults · Review profile: CHILL · Plan: Pro · Run ID:
⛔ Files ignored due to path filters (1)
📒 Files selected for processing (10)
📝 Walkthrough

Adds asynchronous queued TTS generation with SSE status streaming, retry/status endpoints, DB fields for engine/model/status/error, frontend pending-generation tracking and SSE hook, UI status indicators (loader/retry), and related API/types/store updates.
Sequence diagram:

```mermaid
sequenceDiagram
    participant Client as Frontend (Client)
    participant Backend as Backend API
    participant DB as Database
    participant Queue as Generation Queue
    participant Worker as Async Worker
    participant SSE as SSE Stream
    Client->>Backend: POST /generate (text) → returns generation_id (202)
    Backend->>DB: create generation (status: "generating")
    Backend->>Queue: enqueue generation task
    Backend->>Client: return generation_id
    Note over Client,SSE: Client subscribes for updates
    Client->>SSE: GET /generate/{id}/status (SSE)
    SSE->>Client: stream status updates ("generating" / "completed" / "failed")
    Queue->>Worker: dispatch task
    Worker->>Worker: load model, generate audio
    Worker->>DB: save audio_path, duration, status
    Worker->>SSE: emit update
    alt completed
        SSE->>Client: "completed"
        Client->>Client: remove loader, show playable audio
    else failed
        Worker->>DB: update status "failed", set error
        SSE->>Client: "failed"
        Client->>Backend: POST /generate/{id}/retry
        Backend->>Queue: enqueue retry task
    end
```
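The SSE leg of the diagram reduces to the server emitting `data:`-framed JSON events until a terminal status arrives. A framework-agnostic sketch of just that framing (the real endpoint presumably wraps a generator like this in a streaming response; names here are illustrative):

```python
import json

def format_sse(payload: dict) -> str:
    """Frame a payload as one Server-Sent Event; the blank line ends the event."""
    return f"data: {json.dumps(payload)}\n\n"

def status_events(statuses):
    """Yield one SSE frame per status, stopping at a terminal state."""
    for status in statuses:
        yield format_sse({"status": status})
        if status in ("completed", "failed"):
            return

frames = list(status_events(["generating", "completed"]))
```

The client's `EventSource.onmessage` receives each `data:` payload and can close the connection once it sees a terminal status.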
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: 2 passed, 1 failed (warning).
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
app/src/lib/hooks/useRestoreActiveTasks.tsx (1)
66-66: ⚠️ Potential issue | 🟡 Minor: Missing `addPendingGeneration` in dependency array.

The `addPendingGeneration` function is used inside `fetchActiveTasks` but not listed in the `useCallback` dependencies. While Zustand actions are typically stable references, it should be included for correctness and to satisfy exhaustive-deps linting.

🔧 Proposed fix:

```diff
- }, [setIsGenerating, setActiveGenerationId]);
+ }, [setIsGenerating, setActiveGenerationId, addPendingGeneration]);
```

backend/main.py (1)
65-92: ⚠️ Potential issue | 🟠 Major: `/generate/stream` still bypasses the serial queue.

The new worker only protects jobs passed through `_enqueue_generation()`, but `stream_speech()` at Lines 970-1060 still calls model load + inference directly. A streamed request can overlap with a queued generation and recreate the same GPU contention/crash path this worker is meant to remove.
🧹 Nitpick comments (3)
app/src/lib/hooks/useRestoreActiveTasks.tsx (1)
36-42: Potential state desync when clearing generation state.

When `tasks.generations` is empty but `pendingGenerationIds` still has entries (e.g., from a local submission not yet reflected in active tasks), calling `setIsGenerating(false)` could incorrectly mark generation as complete while SSE subscriptions are still active. Consider checking `pendingGenerationIds.size` before clearing:

♻️ Proposed improvement:

```diff
  } else {
-   // Only clear if we were tracking a generation
-   const currentId = useGenerationStore.getState().activeGenerationId;
-   if (currentId) {
+   // Only clear activeGenerationId; let pendingGenerationIds drive isGenerating
+   const state = useGenerationStore.getState();
+   if (state.activeGenerationId && state.pendingGenerationIds.size === 0) {
      setIsGenerating(false);
-     setActiveGenerationId(null);
    }
+   setActiveGenerationId(null);
  }
```

app/src/lib/hooks/useGenerationProgress.ts (1)
99-105: SSE error handler removes generation without distinguishing transient vs permanent failures.

The `onerror` handler immediately closes the connection and removes the generation from pending state. This could cause issues if there's a brief network hiccup, as the user would lose visibility into the generation's actual status. Consider adding a retry counter or checking `readyState` before giving up:

♻️ Alternative with retry tracking:

```ts
// Add error count tracking
const errorCountsRef = useRef<Map<string, number>>(new Map());

source.onerror = () => {
  const errorCount = (errorCountsRef.current.get(id) || 0) + 1;
  errorCountsRef.current.set(id, errorCount);
  // Give up after 3 consecutive errors
  if (errorCount >= 3) {
    source.close();
    currentSources.delete(id);
    errorCountsRef.current.delete(id);
    removePendingGeneration(id);
  }
  // Otherwise let EventSource auto-reconnect
};

// Reset error count on successful message
source.onmessage = (event) => {
  errorCountsRef.current.set(id, 0);
  // ... rest of handler
};
```

app/src/stores/generationStore.ts (1)
10-11: Consider deprecating or constraining `setIsGenerating` to prevent state desync.

The legacy `setIsGenerating` setter allows direct override of `isGenerating` independent of `pendingGenerationIds`. This can cause inconsistent state:

- `setIsGenerating(false)` while `pendingGenerationIds.size > 0` → UI shows "not generating" but SSE subscriptions are active
- `setIsGenerating(true)` while `pendingGenerationIds.size === 0` → UI shows "generating" with no actual pending work

Since `addPendingGeneration` and `removePendingGeneration` now manage `isGenerating` correctly, consider either:

- Making `setIsGenerating` only set `true` (never `false`), or
- Having it sync with `pendingGenerationIds`

♻️ Option: Make setIsGenerating safer:

```diff
- setIsGenerating: (generating) => set({ isGenerating: generating }),
+ setIsGenerating: (generating) =>
+   set((state) => ({
+     // Only allow setting true; let removePendingGeneration handle false
+     isGenerating: generating || state.pendingGenerationIds.size > 0,
+   })),
```

Also applies to: 35-35
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 97993d72-869a-44cb-ae74-2fe47715ad2e
⛔ Files ignored due to path filters (2)
- `bun.lock` is excluded by `!**/*.lock`
- `tauri/src-tauri/gen/Assets.car` is excluded by `!**/gen/**`
📒 Files selected for processing (20)
- app/src/components/History/HistoryTable.tsx
- app/src/components/ServerSettings/ConnectionForm.tsx
- app/src/components/ServerSettings/GenerationSettings.tsx
- app/src/components/Sidebar.tsx
- app/src/components/ui/checkbox.tsx
- app/src/index.css
- app/src/lib/api/client.ts
- app/src/lib/api/types.ts
- app/src/lib/hooks/useGenerationForm.ts
- app/src/lib/hooks/useGenerationProgress.ts
- app/src/lib/hooks/useRestoreActiveTasks.tsx
- app/src/lib/utils/format.ts
- app/src/router.tsx
- app/src/stores/generationStore.ts
- app/src/stores/serverStore.ts
- backend/database.py
- backend/history.py
- backend/main.py
- backend/models.py
- package.json
```ts
const handleRetry = async (generationId: string) => {
  try {
    const result = await apiClient.retryGeneration(generationId);
    addPendingGeneration(result.id);
    queryClient.invalidateQueries({ queryKey: ['history'] });
```
Retry only invalidates the query; it doesn't update the accumulated list.
handleRetry() relies on invalidation, but Lines 97-100 never replace existing IDs in allHistory. Once the user has paged past the first chunk, the row can stay rendered as failed even after the backend moved it back to generating, which also leaves the retry affordance visible for duplicate clicks.
```diff
  <div
-   className="w-10 shrink-0 flex justify-end"
+   className="w-10 shrink-0 flex justify-end items-center"
+   onMouseDown={(e) => e.stopPropagation()}
+   onClick={(e) => e.stopPropagation()}
  >
```
Keep the action wrapper non-interactive.
This <div> now owns mouse handlers, so Biome is correctly flagging it as a static element with interactive behavior. Put the propagation stop on the actual trigger/button, or use a capture handler, so keyboard semantics stay with the real controls.
🧰 Tools
🪛 Biome (2.4.6)
[error] 362-366: Enforce to have the onClick mouse event with the onKeyUp, the onKeyDown, or the onKeyPress keyboard event.
(lint/a11y/useKeyWithClickEvents)
[error] 362-366: Static Elements should not be interactive.
(lint/a11y/noStaticElementInteractions)
```python
audio_path="",
duration=0,
```
Use None for “not generated yet” audio metadata.
audio_path="" and duration=0 look like valid values to the rest of the codebase. backend/history.py Lines 214-217 build Path(generation.audio_path), so '' resolves to the current working directory and failed/pending rows can blow up in delete/export paths instead of behaving like “no audio yet”.
🐛 Suggested fix:

```diff
- audio_path="",
- duration=0,
+ audio_path=None,
+ duration=None,
  ...
- gen.audio_path = ""
- gen.duration = 0
+ gen.audio_path = None
+ gen.duration = None
```

Also applies to: 847-848
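A guard along the lines this comment suggests, assuming `audio_path` becomes nullable (the helper name is illustrative, not the app's actual code):

```python
from pathlib import Path
from typing import Optional

def resolve_audio_path(audio_path: Optional[str]) -> Optional[Path]:
    """Return a Path only when audio exists; None means 'not generated yet'."""
    if not audio_path:  # covers both None and the legacy "" sentinel
        return None
    return Path(audio_path)

# The footgun the review warns about: Path("") is the current directory.
assert Path("") == Path(".")
assert resolve_audio_path(None) is None
assert resolve_audio_path("") is None
assert resolve_audio_path("generations/g1.wav") == Path("generations/g1.wav")
```

Delete/export code can then branch on `None` instead of accidentally operating on the working directory.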
```python
audio, sample_rate = await generate_chunked(
    tts_model,
    data.text,
    voice_prompt,
    language=data.language,
    seed=data.seed,
    instruct=data.instruct,
    max_chunk_chars=data.max_chunk_chars,
    crossfade_ms=data.crossfade_ms,
    trim_fn=trim_fn,
)

async def download_luxtts_background():
    try:
        await tts_model.load_model()
    except Exception as e:
        task_manager.error_download(model_name, str(e))

task_manager.start_download(model_name)
_create_background_task(download_luxtts_background())

raise HTTPException(
    status_code=202,
    detail={
        "message": "LuxTTS model is being downloaded. Please wait and try again.",
        "model_name": model_name,
        "downloading": True,
    },
)

if data.normalize:
    from .utils.audio import normalize_audio
    audio = normalize_audio(audio)
```
Retry is dropping part of the original request.
The initial worker passes max_chunk_chars, crossfade_ms, and optional normalization at Lines 785-799, but _run_retry() omits all three. Retrying long text can therefore chunk differently and a previously normalized generation comes back unnormalized, which breaks the “retry with the same params” promise from this PR.
Also applies to: 888-896
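One way to keep the "retry with the same params" promise is to persist the original request on the generation record and rebuild the `generate_chunked` kwargs from it verbatim. A sketch with a hypothetical stored-JSON schema; the real record layout and defaults may differ:

```python
import json

DEFAULTS = {"max_chunk_chars": 400, "crossfade_ms": 50, "normalize": False}

def build_retry_kwargs(record: dict) -> dict:
    """Rebuild the original request so a retry chunks and normalizes identically."""
    params = json.loads(record["request_json"])
    return {
        "text": params["text"],
        "language": params.get("language"),
        "seed": params.get("seed"),
        "instruct": params.get("instruct"),
        "max_chunk_chars": params.get("max_chunk_chars", DEFAULTS["max_chunk_chars"]),
        "crossfade_ms": params.get("crossfade_ms", DEFAULTS["crossfade_ms"]),
        "normalize": params.get("normalize", DEFAULTS["normalize"]),
    }

record = {"request_json": json.dumps(
    {"text": "hello", "max_chunk_chars": 250, "crossfade_ms": 80, "normalize": True}
)}
kwargs = build_retry_kwargs(record)
```

Replaying a single stored blob avoids the class of bug flagged here, where each retry path has to remember to forward every parameter by hand.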
```python
gen = db.query(DBGeneration).filter_by(id=generation_id).first()
if not gen:
    raise HTTPException(status_code=404, detail="Generation not found")

# Calculate duration
duration = len(audio) / sample_rate

if (gen.status or "completed") != "failed":
    raise HTTPException(status_code=400, detail="Only failed generations can be retried")

# Save audio
audio_path = config.get_generations_dir() / f"{generation_id}.wav"

# Reset the record to generating
gen.status = "generating"
gen.error = None
gen.audio_path = ""
gen.duration = 0
db.commit()
db.refresh(gen)
```
Guard retry with an atomic state flip.
This read/check/write sequence lets two concurrent retry requests both observe failed and both enqueue work for the same generation_id. Collapse it to a single conditional update, or lock the row before enqueueing, so only one request can transition failed -> generating.
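The atomic flip this comment asks for can be a single conditional UPDATE whose affected-row count tells each caller whether it won the failed → generating transition. A sketch with sqlite3 standing in for the app's ORM layer:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE generations (id TEXT PRIMARY KEY, status TEXT, error TEXT)")
db.execute("INSERT INTO generations VALUES ('g1', 'failed', 'boom')")
db.commit()

def try_begin_retry(conn: sqlite3.Connection, generation_id: str) -> bool:
    """Atomically flip failed -> generating; only one caller can succeed."""
    cur = conn.execute(
        "UPDATE generations SET status = 'generating', error = NULL "
        "WHERE id = ? AND status = 'failed'",
        (generation_id,),
    )
    conn.commit()
    return cur.rowcount == 1  # 0 rows means another request already flipped it

first = try_begin_retry(db, "g1")
second = try_begin_retry(db, "g1")  # no longer 'failed', so this loses
```

Only the request that sees `rowcount == 1` should enqueue the retry; all others return a 400/409 without touching the queue.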
…tor, fix item placement per-track
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/src/components/StoriesTab/StoryContent.tsx (1)
55-64: ⚠️ Potential issue | 🟠 Major: Only offer completed generations in this picker.

This list now pulls from history, and history includes `generating`/`failed` rows in this PR. Without a status filter, users can still add an in-flight or failed generation to a story and bypass the deferred completion flow.

Suggested change:

```diff
  return historyData.items.filter(
    (gen) =>
+     gen.status === 'completed' &&
      !storyGenerationIds.has(gen.id) &&
      (gen.text.toLowerCase().includes(query) ||
        gen.profile_name.toLowerCase().includes(query)),
  );
```
♻️ Duplicate comments (1)
app/src/lib/hooks/useGenerationProgress.ts (1)
136-150: ⚠️ Potential issue | 🟠 Major: This cleanup still tears down healthy SSE subscriptions on every re-run.

Because this effect depends on `pendingIds` and other values, the cleanup on Line 136 runs before every re-run, not just on unmount. Closing and clearing the whole map here forces all active generations to reconnect whenever one ID is added or removed.
🤖 Prompt for all review comments with AI agents

Verify each finding against the current code and only fix it if needed.

Inline comments:

In `@app/src/lib/hooks/useGenerationProgress.ts`:
- Around lines 9-13: Add 'not_found' to the `GenerationStatusEvent` status union and update the onmessage handler to treat 'not_found' as a terminal state, handled the same way as 'failed': clear the pending generation ID, stop progress tracking, and surface a user-facing error for the missing generation. This handling should not route through the transport `onerror` path (which remains for network errors); reconnection/backoff continues to apply to transient errors, while 'not_found' immediately finalizes the generation entry.

In `@backend/stories.py`:
- Around lines 273-275: The assignment `track = data.track if data.track is not None else 0` allows negative values; after that assignment, validate `track >= 0` and reject negatives (e.g., raise an HTTPException with a clear message, or return a 400) so downstream timeline logic never receives negative lanes. Alternatively enforce this in the request model with pydantic `Field(ge=0)`.
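The guard described above, written as a plain function with `ValueError` standing in for FastAPI's `HTTPException` (names are illustrative):

```python
def resolve_track(requested) -> int:
    """Default a missing track to lane 0 and reject negative lanes."""
    track = requested if requested is not None else 0
    if track < 0:
        raise ValueError("track must be >= 0")
    return track

assert resolve_track(None) == 0
assert resolve_track(3) == 3
try:
    resolve_track(-1)
    rejected = False
except ValueError:
    rejected = True
assert rejected
```

Validating in the request model with `Field(ge=0)` pushes the same check to deserialization time, so the handler never sees a negative lane at all.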
---
Outside diff comments:
In `@app/src/components/StoriesTab/StoryContent.tsx`:
- Around line 55-64: The availableGenerations list currently includes history
items regardless of completion state; update the useMemo filter in
availableGenerations to only include historyData.items whose status indicates
completion (e.g., gen.status === 'completed' or the appropriate completed enum
value), while still excluding existing storyGenerationIds and applying the
searchQuery match; refer to availableGenerations, historyData.items, gen
(history item), storyGenerationIds, and searchQuery when making this change.
---
Duplicate comments:
In `@app/src/lib/hooks/useGenerationProgress.ts`:
- Around line 136-150: The effect in useGenerationProgress is closing and
clearing the entire currentSources map on every re-run (because it depends on
pendingIds and other values), which tears down healthy SSE subscriptions; change
the cleanup so it only removes subscriptions that are no longer needed instead
of closing all sources. Concretely, in the effect that creates/updates
subscriptions (useGenerationProgress / the effect returning the cleanup), stop
calling currentSources.clear() and avoid closing every source; instead detect
which keys in currentSources are not present in the latest pendingIds (or only
close the specific source(s) for IDs that were removed) and close/remove those
entries, or move the full-map cleanup to a separate effect with an empty
dependency array so that the full teardown only runs on unmount; ensure
functions like removePendingGeneration/removePendingStoryAdd handle closing
their corresponding source from currentSources when a single ID is removed.
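The diff-only cleanup described above can be sketched as a reconciliation helper; `Closeable` stands in for `EventSource`, and the `open` callback for whatever the hook uses to create a subscription (all names here are illustrative, not the hook's actual API):

```typescript
// Anything with a close() method, e.g. an EventSource.
interface Closeable {
  close(): void;
}

// Reconcile open SSE subscriptions against the latest pending IDs instead
// of tearing the whole map down on every effect run.
function reconcileSources(
  currentSources: Map<string, Closeable>,
  pendingIds: Set<string>,
  open: (id: string) => Closeable,
): void {
  // Close and drop only subscriptions whose ID is no longer pending.
  for (const [id, source] of currentSources) {
    if (!pendingIds.has(id)) {
      source.close();
      currentSources.delete(id);
    }
  }
  // Open subscriptions for newly pending IDs; existing ones are untouched.
  for (const id of pendingIds) {
    if (!currentSources.has(id)) {
      currentSources.set(id, open(id));
    }
  }
}
```

The effect body would call this on each run, leaving the full `close()`-everything teardown to a separate unmount-only effect.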
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 95d1d4a3-f604-4db8-8083-57ac35b73d9f
📒 Files selected for processing (5)
- app/src/components/Generation/FloatingGenerateBox.tsx
- app/src/components/StoriesTab/StoryContent.tsx
- app/src/lib/hooks/useGenerationProgress.ts
- app/src/stores/generationStore.ts
- backend/stories.py
app/src/stores/generationStore.ts (outdated diff)
```typescript
  /** Whether any generation is in progress (derived convenience) */
  isGenerating: boolean;
  activeGenerationId: string | null;
  /** Map of generationId → storyId for deferred story additions */
  pendingStoryAdds: Map<string, string>;
  addPendingGeneration: (id: string) => void;
  removePendingGeneration: (id: string) => void;
  addPendingStoryAdd: (generationId: string, storyId: string) => void;
  removePendingStoryAdd: (generationId: string) => string | undefined;
  /** Legacy setter for backward compat with useRestoreActiveTasks */
  setIsGenerating: (generating: boolean) => void;
```
🛠️ Refactor suggestion | 🟠 Major
isGenerating can drift from the pending set.
The interface says this flag is derived, but Line 59 still lets callers mutate it independently of pendingGenerationIds. That leaves two sources of truth for the same state and can desync any UI still reading isGenerating. Prefer deriving it from pendingGenerationIds.size and retiring the legacy setter once callers are migrated.
Also applies to: 59-59
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/stores/generationStore.ts` around lines 6 - 15, The isGenerating
boolean is currently mutable and can drift from the single source of truth
pendingGenerationIds; make isGenerating a derived value (compute as
pendingGenerationIds.size > 0) instead of a separately mutatable property and
stop allowing external mutation via setIsGenerating; update
addPendingGeneration/removePendingGeneration to only modify pendingGenerationIds
and remove or deprecate setIsGenerating (or implement it as a no-op with a TODO)
so callers must rely on pendingGenerationIds, and ensure any UI reads
isGenerating from the derived getter rather than from a state you can set
independently.
```python
# Get track from data or default to 0
track = data.track if data.track is not None else 0
```
Validate track lower bound before creating the item.
track can currently be negative, which can create invalid lane assignments in downstream timeline logic. Add a guard here (or enforce it in the request model) so only track >= 0 is accepted.
Suggested fix

```diff
 # Get track from data or default to 0
 track = data.track if data.track is not None else 0
+if track < 0:
+    return None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/stories.py` around lines 273 - 275, The assignment "track =
data.track if data.track is not None else 0" allows negative values; after that
assignment validate that track >= 0 and reject negatives (e.g., raise an
HTTPException/ValueError with a clear message or return a 400) so downstream
timeline logic never receives negative lanes. Alternatively enforce this in the
request model (pydantic Field(ge=0)) but if fixing here, add an explicit guard
using the local variable track and abort the request when track < 0.
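The model-level alternative mentioned in the prompt can be sketched with Pydantic's `Field(ge=0)`; `StoryItemCreate` is a hypothetical name standing in for the actual request model in backend/stories.py:

```python
from typing import Optional

from pydantic import BaseModel, Field, ValidationError


class StoryItemCreate(BaseModel):
    # ge=0 rejects negative lanes at parse time, before any route code runs,
    # so downstream timeline logic never sees a negative track.
    track: Optional[int] = Field(default=0, ge=0)


try:
    StoryItemCreate(track=-1)
except ValidationError:
    print("rejected")  # FastAPI would turn this into a 422 automatically
```

With FastAPI this needs no explicit guard in the route: an out-of-range `track` fails request validation and returns a 4xx with a clear error message.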
… hide player title at small widths
…ating from pending set, handle not_found status
Summary

- Generations now return immediately with a `generating` status and the item shows up in history right away with an animated loader. Users can queue multiple generations without waiting.
- A serial background queue prevents concurrent GPU access (`addCompletedHandler` crash, CUDA conflicts, etc). Jobs execute one at a time.
- Stale `generating` records from a killed server process are cleaned up on startup.

UI changes
- `react-loaders` animated bars: `line-scale` while generating, `line-scale-pulse-out-rapid` while playing, static gray when idle

Summary by CodeRabbit
New Features
Improvements
Bug Fixes