Claude/client workflow save kQr90 #102
Conversation
Adds browser-native export/import for workflows that works in cloud deployments (Railway, Vercel) where server filesystem is unavailable. Download uses existing saveWorkflow() store action (Blob download). Upload reuses existing loadWorkflow() with a dedicated file input. https://claude.ai/code/session_01BkAUwgYckP3bTQVqw3g4Mm
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
📝 Walkthrough

Adds end-to-end multimodal prompt support (text + named image variables) and workflow import/export: new PromptPart types and a resolveImageVars utility, namedImages propagated through connectedInputs and executors, parts wired to the Gemini/Google providers, plus Header download/upload UI and related UI/node changes.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client (UI)
    participant Store as Workflow Store / Executors
    participant Utils as resolveImageVars
    participant API as Server (/api/llm or /api/generate)
    participant Provider as LLM Provider (Google/Gemini)
    Client->>Store: trigger node execution / generate
    Store->>Store: getConnectedInputs(nodeId) (includes namedImages)
    alt PromptConstructor has outputParts
        Store->>API: POST { prompt, images?, parts: outputParts, ... }
    else prompt contains `@vars` and namedImages exist
        Store->>Utils: resolveImageVars(prompt, namedImages)
        Utils-->>Store: parts (interleaved text/image)
        Store->>API: POST { prompt, images?, parts, ... }
    else no parts
        Store->>API: POST { prompt, images?, ... }
    end
    API->>Provider: map parts -> inlineData/text and forward
    Provider-->>API: response
    API-->>Store: generation result
    Store-->>Client: update node outputs / history
```
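The three branches in the diagram can be sketched as a client-side payload builder. This is only an illustration of the branching logic, not the project's actual code: the names `buildRequestBody` and `hasImageVars` are hypothetical, and `resolveImageVars` is passed in as a parameter rather than imported.

```typescript
type PromptPart =
  | { type: "text"; text: string }
  | { type: "image"; image: string }; // image: base64 data URL

interface GenerateBody {
  prompt: string;
  images?: string[];
  parts?: PromptPart[];
}

// True when the prompt references at least one named image via @varname.
const hasImageVars = (prompt: string, namedImages: Record<string, string>): boolean =>
  Object.keys(namedImages).some((name) => prompt.includes(`@${name}`));

function buildRequestBody(
  prompt: string,
  images: string[],
  outputParts: PromptPart[] | undefined,
  namedImages: Record<string, string>,
  resolveImageVars: (prompt: string, imgs: Record<string, string>) => PromptPart[],
): GenerateBody {
  // Branch 1: an upstream PromptConstructor already produced interleaved parts.
  if (outputParts && outputParts.length > 0) {
    return { prompt, images, parts: outputParts };
  }
  // Branch 2: the raw prompt references named images -> resolve into parts.
  if (hasImageVars(prompt, namedImages)) {
    return { prompt, images, parts: resolveImageVars(prompt, namedImages) };
  }
  // Branch 3: plain text-only request, no parts field.
  return { prompt, images };
}
```

The server can then branch on the presence of `parts` when mapping to provider `inlineData`/`text` entries.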
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (1 warning, 1 inconclusive)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/components/Header.tsx (1)
133-152: Extract shared file-import logic to avoid drift between upload/open paths.
`handleUploadFileChange` duplicates `handleFileChange`'s parsing/validation/loading flow. A shared helper will keep behavior consistent as validation evolves.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/Header.tsx` around lines 133 - 152, Refactor the duplicated parsing/validation/loading logic in handleUploadFileChange and handleFileChange into a single helper (e.g., parseAndLoadWorkflow) that accepts a File or file text, performs JSON.parse to WorkflowFile, validates version/nodes/edges, calls loadWorkflow(workflow) and throws or returns errors for caller handling; update handleUploadFileChange to read the file text and pass it to parseAndLoadWorkflow (and update handleFileChange to use the same helper), preserve existing alert/error behavior in the callers and ensure e.target.value is still cleared after processing.
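A shared helper along the lines the nitpick suggests might look like the following sketch. The `WorkflowFile` shape and the name `parseWorkflowFile` are assumptions for illustration; both file handlers could call it and then `await loadWorkflow(result.workflow)` on success.

```typescript
interface WorkflowFile {
  version: number;
  name?: string;
  nodes: unknown[];
  edges: unknown[];
}

// Hypothetical shared helper: centralizes JSON parsing and shape validation
// so the upload and open paths cannot drift apart.
function parseWorkflowFile(
  text: string,
): { workflow: WorkflowFile; error?: never } | { workflow?: never; error: string } {
  let parsed: unknown;
  try {
    parsed = JSON.parse(text);
  } catch {
    return { error: "File is not valid JSON" };
  }
  const wf = parsed as Partial<WorkflowFile>;
  if (!wf || typeof wf !== "object" || !wf.version || !Array.isArray(wf.nodes) || !Array.isArray(wf.edges)) {
    return { error: "Invalid workflow file format" };
  }
  return { workflow: wf as WorkflowFile };
}
```

Callers keep their existing alert/error handling and still clear `e.target.value` after processing.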
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d35de475-6ccb-4e1f-9700-44c9db8bed88
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (1)
src/components/Header.tsx
```tsx
const handleUploadFileChange = (e: React.ChangeEvent<HTMLInputElement>) => {
  const file = e.target.files?.[0];
  if (!file) return;

  const reader = new FileReader();
  reader.onload = async (event) => {
    try {
      const workflow = JSON.parse(event.target?.result as string) as WorkflowFile;
      if (workflow.version && workflow.nodes && workflow.edges) {
        await loadWorkflow(workflow);
      } else {
```
Strengthen JSON schema checks before calling loadWorkflow.
At Line 141, truthy checks allow malformed payloads through (e.g., non-array nodes, edges missing source/target/id), which can crash inside loadWorkflow.
🔧 Proposed fix

```diff
+ const isValidWorkflowImport = (value: unknown): value is WorkflowFile => {
+   if (!value || typeof value !== "object") return false;
+   const wf = value as Partial<WorkflowFile> & { nodes?: unknown; edges?: unknown };
+   if (wf.version !== 1) return false;
+   if (typeof wf.name !== "string" || !Array.isArray(wf.nodes) || !Array.isArray(wf.edges)) return false;
+
+   const nodesValid = wf.nodes.every((node) => {
+     if (!node || typeof node !== "object") return false;
+     const n = node as { id?: unknown; type?: unknown };
+     return typeof n.id === "string" && typeof n.type === "string";
+   });
+
+   const edgesValid = wf.edges.every((edge) => {
+     if (!edge || typeof edge !== "object") return false;
+     const e = edge as { id?: unknown; source?: unknown; target?: unknown };
+     return (
+       typeof e.id === "string" &&
+       typeof e.source === "string" &&
+       typeof e.target === "string"
+     );
+   });
+
+   return nodesValid && edgesValid;
+ };
+
  const handleUploadFileChange = (e: React.ChangeEvent<HTMLInputElement>) => {
    const file = e.target.files?.[0];
    if (!file) return;
@@
-       const workflow = JSON.parse(event.target?.result as string) as WorkflowFile;
-       if (workflow.version && workflow.nodes && workflow.edges) {
-         await loadWorkflow(workflow);
+       const parsed = JSON.parse(event.target?.result as string);
+       if (isValidWorkflowImport(parsed)) {
+         await loadWorkflow(parsed);
        } else {
          alert("Invalid workflow file format");
        }
```

Based on learnings: Validate workflows using validateWorkflow() in workflowStore.ts before execution.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/Header.tsx` around lines 133 - 143, The uploaded JSON is only
loosely checked before calling loadWorkflow, so replace the current
post-JSON-parse branch in handleUploadFileChange with a validation step: after
parsing into WorkflowFile, call the exported validateWorkflow(...) from
workflowStore.ts and only await loadWorkflow(workflow) if validateWorkflow
returns success; also catch JSON.parse errors and log/report them before
returning, and ensure validateWorkflow verifies nodes is an array and each edge
has source, target, and id so malformed payloads never reach loadWorkflow.
ImageInput nodes can now have a variableName (like Prompt nodes) to be referenced as @varname in PromptConstructor templates and prompt text. When the prompt contains image variable references, the system builds interleaved multimodal parts (text + inline images) sent to the Gemini API, enabling precise control over where images appear in the prompt.

Key changes:
- ImageInputNodeData.variableName field + variable naming dialog UI
- PromptConstructorNode accepts image inputs, shows image vars in green
- ConnectedInputs.namedImages tracks variable-name-to-image mappings
- resolveImageVars utility splits text at @varname into PromptPart[]
- PromptConstructor executor outputs outputParts for downstream nodes
- nanoBanana/llmGenerate executors detect and forward multimodal parts
- Gemini API routes accept interleaved parts for native multimodal requests

https://claude.ai/code/session_01BkAUwgYckP3bTQVqw3g4Mm
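The splitting behavior described above can be sketched as follows. This is a minimal illustration, not the project's actual `src/utils/resolveImageVars.ts`, which may differ in details such as unknown-variable handling and whitespace trimming (see the review comments below).

```typescript
type PromptPart =
  | { type: "text"; text: string }
  | { type: "image"; image: string }; // image: base64 data URL

// Split a prompt at @varname references into interleaved text/image parts.
// Unknown variables are left in place as literal text.
function resolveImageVars(
  prompt: string,
  namedImages: Record<string, string>,
): PromptPart[] {
  const parts: PromptPart[] = [];
  const re = /@(\w+)/g; // match @ followed by word characters
  let last = 0;
  let m: RegExpExecArray | null;
  while ((m = re.exec(prompt)) !== null) {
    const image = namedImages[m[1]];
    if (!image) continue; // not a known image variable: keep as literal text
    const before = prompt.slice(last, m.index);
    if (before.trim()) parts.push({ type: "text", text: before.trim() });
    parts.push({ type: "image", image });
    last = m.index + m[0].length;
  }
  const after = prompt.slice(last);
  if (after.trim()) parts.push({ type: "text", text: after.trim() });
  return parts;
}
```

For example, `"Describe @photo in detail"` with `{ photo: <data URL> }` yields a text part, an image part, and a trailing text part, which map directly onto Gemini's interleaved content parts.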
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/store/execution/llmGenerateExecutor.ts (1)
32-32: ⚠️ Potential issue | 🟠 Major

Bug: `inputs.namedImages` is not destructured from the `getConnectedInputs` result.

The code at line 77 accesses `inputs.namedImages`, but line 32 only stores the result of `getConnectedInputs(node.id)` in `inputs` without destructuring `namedImages`. While the `ConnectedInputs` interface includes `namedImages`, the code needs to access it properly. Looking at `nanoBananaExecutor.ts` (line 46) for comparison, it correctly destructures `namedImages` from `getConnectedInputs`:

```ts
const { images: connectedImages, text: connectedText, namedImages, dynamicInputs } = getConnectedInputs(node.id);
```

🐛 Proposed fix

```diff
-  const inputs = getConnectedInputs(node.id);
+  const inputs = getConnectedInputs(node.id);
+  const { namedImages } = inputs;
```

Or alternatively, access it directly as `inputs.namedImages`, which should work since `inputs` is typed as `ConnectedInputs`. However, ensure this is consistent with how other executors access it.

Also applies to: 77-79
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/store/execution/llmGenerateExecutor.ts` at line 32, The result of getConnectedInputs(node.id) is stored in inputs but namedImages isn't destructured, causing later access to inputs.namedImages to be inconsistent with other executors; update the call in llmGenerateExecutor (where getConnectedInputs is used) to destructure namedImages (e.g., const { images: connectedImages, text: connectedText, namedImages, dynamicInputs } = getConnectedInputs(node.id)) or consistently access inputs.namedImages wherever used, and ensure types still align with the ConnectedInputs interface and usages in the LLM generation code paths (references: getConnectedInputs, ConnectedInputs, inputs.namedImages).

src/store/execution/simpleNodeExecutors.ts (1)
121-147: ⚠️ Potential issue | 🟠 Major

Resolve named images via `getConnectedInputs()`.

This only sees immediate `imageInput` parents. Image vars routed through `router`/`switch`/other passthrough nodes will be missed, and bypassing the store helper also skips its filtering rules. Pull `namedImages` from `ctx.getConnectedInputs(node.id)` instead of rebuilding it from raw `edges`/`nodes` here.

Minimal fix

```diff
- // Find all connected image nodes
- const connectedImageNodes = edges
-   .filter((e) => e.target === node.id && e.targetHandle === "image")
-   .map((e) => nodes.find((n) => n.id === e.source))
-   .filter((n): n is WorkflowNode => n !== undefined);
-
  // Build variable map: named variables from Prompt nodes take precedence
  const variableMap: Record<string, string> = {};
  connectedTextNodes.forEach((srcNode) => {
    if (srcNode.type === "prompt") {
      const promptData = srcNode.data as PromptNodeData;
@@
- // Build named images map from connected ImageInput nodes with variableName
- const namedImages: Record<string, string> = {};
- connectedImageNodes.forEach((srcNode) => {
-   if (srcNode.type === "imageInput") {
-     const imgData = srcNode.data as ImageInputNodeData;
-     if (imgData.variableName && imgData.image) {
-       namedImages[imgData.variableName] = imgData.image;
-     }
-   }
- });
+ const { namedImages } = ctx.getConnectedInputs(node.id);
```

As per coding guidelines, "Retrieve upstream data for a node using `getConnectedInputs(nodeId)` to get `{ images: string[], text: string | null }`."

🤖 Prompt for AI Agents
getConnectedInputs(nodeId)to get{ images: string[], text: string | null }."🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/store/execution/simpleNodeExecutors.ts` around lines 121 - 147, The current block rebuilds namedImages from edges/nodes (connectedImageNodes) which misses images routed through passthroughs and bypasses store helper logic; instead call ctx.getConnectedInputs(node.id) (the provided helper) and extract its images list to populate namedImages (or use its returned structure directly) so upstream routing and filtering are respected; remove the manual connectedImageNodes/forEach logic and use getConnectedInputs(node.id) where 'namedImages' is needed, keeping existing variableMap logic for prompt variables intact.
🧹 Nitpick comments (4)
src/components/nodes/ImageInputNode.tsx (2)
21-23: State may become stale if `nodeData.variableName` changes externally.

`varNameInput` is initialized from `nodeData.variableName` but doesn't update when the node data changes (e.g., via undo/redo or external updates). Consider syncing state when the dialog opens.

♻️ Proposed fix

```diff
+  // Sync input state when dialog opens
+  const handleOpenDialog = useCallback(() => {
+    setVarNameInput(nodeData.variableName || "");
+    setShowVarDialog(true);
+  }, [nodeData.variableName]);

   // In the badge button:
-  onClick={() => setShowVarDialog(true)}
+  onClick={handleOpenDialog}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/nodes/ImageInputNode.tsx` around lines 21 - 23, The local state varNameInput is initialized from nodeData.variableName but can become stale; update varNameInput whenever the variable name may have changed (e.g., when the rename dialog opens or when nodeData.variableName changes) by adding a useEffect that watches showVarDialog and/or nodeData.variableName and calls setVarNameInput(nodeData.variableName || ""); target the ImageInputNode component and the variables varNameInput, setVarNameInput, showVarDialog, and nodeData.variableName when implementing this sync.
192-247: Consider adding keyboard accessibility for the modal dialog.

The modal dialog lacks a focus trap and `role="dialog"`/`aria-modal="true"` attributes. Users navigating via keyboard may tab out of the modal unexpectedly.

♻️ Suggested improvements

```diff
-  <div className="fixed inset-0 bg-black/60 flex items-center justify-center z-[9999]">
-    <div className="bg-neutral-800 border border-neutral-600 rounded-lg shadow-xl p-4 w-96">
+  <div
+    className="fixed inset-0 bg-black/60 flex items-center justify-center z-[9999]"
+    onClick={(e) => e.target === e.currentTarget && setShowVarDialog(false)}
+  >
+    <div
+      className="bg-neutral-800 border border-neutral-600 rounded-lg shadow-xl p-4 w-96"
+      role="dialog"
+      aria-modal="true"
+      aria-labelledby="var-dialog-title"
+    >
+      <h3 id="var-dialog-title" className="text-sm font-semibold text-neutral-100 mb-3">
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/nodes/ImageInputNode.tsx` around lines 192 - 247, The modal created when showVarDialog is true (via createPortal) lacks dialog semantics and focus trapping; update the modal container (the element returned by createPortal) to include role="dialog" and aria-modal="true", set an accessible aria-label or aria-labelledby, and implement a simple keyboard/focus management around the existing input (autoFocus) and buttons so Tab/Shift+Tab are trapped inside until setShowVarDialog(false) is called (e.g., manage focusable elements and listen for Escape to call handleSaveVariableName or setShowVarDialog(false)); ensure handlers like handleSaveVariableName, handleClearVariableName, and the Cancel button correctly restore focus to the opener when the dialog closes.

src/utils/resolveImageVars.ts (1)
50-56: Trimming text segments may inadvertently remove meaningful whitespace.

The `.trim()` calls on lines 52 and 65 could remove intentional whitespace around image variables. For example, `"Hello @photo world"` becomes `["Hello", image, "world"]` instead of preserving the spacing that would appear as `"Hello "` before and `" world"` after.

However, this may be intentional since the Gemini API likely doesn't require exact whitespace preservation between parts. If this is by design, consider adding a brief comment explaining the rationale.

Also applies to: 63-69
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/resolveImageVars.ts` around lines 50 - 56, The code currently trims text segments when slicing out textBefore and textAfter (the .trim() calls) which strips intentional whitespace around image variables; either remove the .trim() calls so parts.push uses the raw slices (preserving surrounding spaces) or, if trimming is intentional for Gemini API output, replace the .trim() calls with a concise comment explaining that exact whitespace is intentionally discarded; locate the logic handling textBefore/textAfter and the parts array in resolveImageVars (where text.slice(...).trim() is used) and apply one of these two fixes consistently for both the pre- and post-image handling blocks.

src/store/execution/llmGenerateExecutor.ts (1)
62-79: Consider extracting shared upstream PromptConstructor lookup logic.

The logic to find an upstream `promptConstructor` node and extract its `outputParts` is duplicated here and in `nanoBananaExecutor.ts` (lines 100-118). This could be extracted into a shared helper function.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/store/execution/llmGenerateExecutor.ts` around lines 62 - 79, Duplicate logic that finds an upstream PromptConstructor node and reads its outputParts should be extracted into a shared helper; create a function (e.g., findUpstreamPromptConstructorParts(ctx, node): PromptPart[] | undefined) that uses ctx.getEdges and ctx.getNodes, locates a node with type "promptConstructor", casts its .data to PromptConstructorNodeData and returns .outputParts if present, then replace the duplicated block in llmGenerateExecutor.ts (the upstreamPcNode lookup and pcData.outputParts assignment) and the similar block in nanoBananaExecutor.ts with calls to this helper so both files reuse the same implementation.
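The suggested shared helper could be sketched as below. The `Edge`/`FlowNode` shapes are simplified stand-ins; in the real code the executors would obtain edges and nodes from the executor context (`ctx.getEdges` / `ctx.getNodes`) rather than take them as parameters.

```typescript
type PromptPart =
  | { type: "text"; text: string }
  | { type: "image"; image: string };

interface Edge { source: string; target: string }
interface FlowNode { id: string; type: string; data: { outputParts?: PromptPart[] } }

// Hypothetical shared helper: returns the outputParts of the first upstream
// promptConstructor node feeding into nodeId, if any.
function findUpstreamPromptConstructorParts(
  nodeId: string,
  edges: Edge[],
  nodes: FlowNode[],
): PromptPart[] | undefined {
  const upstream = edges
    .filter((e) => e.target === nodeId)
    .map((e) => nodes.find((n) => n.id === e.source))
    .find((n) => n !== undefined && n.type === "promptConstructor");
  return upstream?.data.outputParts;
}
```

Both `llmGenerateExecutor.ts` and `nanoBananaExecutor.ts` could then call this instead of repeating the lookup.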
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 582c222b-bb97-450b-8c74-b5f14d070ecf
📒 Files selected for processing (21)
src/app/api/generate/providers/gemini.ts
src/app/api/generate/route.ts
src/app/api/llm/route.ts
src/components/ConnectionDropMenu.tsx
src/components/WorkflowCanvas.tsx
src/components/__tests__/ConnectionDropMenu.test.tsx
src/components/modals/PromptConstructorEditorModal.tsx
src/components/nodes/ImageInputNode.tsx
src/components/nodes/PromptConstructorNode.tsx
src/store/execution/__tests__/llmGenerateExecutor.test.ts
src/store/execution/__tests__/nanoBananaExecutor.test.ts
src/store/execution/__tests__/simpleNodeExecutors.test.ts
src/store/execution/llmGenerateExecutor.ts
src/store/execution/nanoBananaExecutor.ts
src/store/execution/simpleNodeExecutors.ts
src/store/utils/__tests__/connectedInputs.test.ts
src/store/utils/connectedInputs.ts
src/store/utils/nodeDefaults.ts
src/store/workflowStore.ts
src/types/nodes.ts
src/utils/resolveImageVars.ts
```tsx
{
  type: "promptConstructor",
  label: "Prompt Constructor",
  icon: (
    <svg className="w-4 h-4" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={1.5}>
      <path strokeLinecap="round" strokeLinejoin="round" d="M14.25 6.087c0-.355.186-.676.401-.959.221-.29.349-.634.349-1.003 0-1.036-1.007-1.875-2.25-1.875s-2.25.84-2.25 1.875c0 .369.128.713.349 1.003.215.283.401.604.401.959v0a.64.64 0 01-.657.643 48.39 48.39 0 01-4.163-.3c.186 1.613.293 3.25.315 4.907a.656.656 0 01-.658.663v0c-.355 0-.676-.186-.959-.401a1.647 1.647 0 00-1.003-.349c-1.036 0-1.875 1.007-1.875 2.25s.84 2.25 1.875 2.25c.369 0 .713-.128 1.003-.349.283-.215.604-.401.959-.401v0c.31 0 .555.26.532.57a48.039 48.039 0 01-.642 5.056c1.518.19 3.058.309 4.616.354a.64.64 0 00.657-.643v0c0-.355-.186-.676-.401-.959a1.647 1.647 0 01-.349-1.003c0-1.035 1.008-1.875 2.25-1.875 1.243 0 2.25.84 2.25 1.875 0 .369-.128.713-.349 1.003-.215.283-.4.604-.4.959v0c0 .333.277.599.61.58a48.1 48.1 0 005.427-.63 48.05 48.05 0 00.582-4.717.532.532 0 00-.533-.57v0c-.355 0-.676.186-.959.401-.29.221-.634.349-1.003.349-1.035 0-1.875-1.007-1.875-2.25s.84-2.25 1.875-2.25c.37 0 .713.128 1.003.349.283.215.604.401.96.401v0a.656.656 0 00.658-.663 48.422 48.422 0 00-.37-5.36c-1.886.342-3.81.574-5.766.689a.578.578 0 01-.61-.58v0z" />
    </svg>
  ),
},
```
Image→Prompt Constructor menu path currently creates an unconnected node
Line 112 exposes promptConstructor in the image-target menu, but handleMenuSelect (in src/components/WorkflowCanvas.tsx) does not map handleType === "image" + nodeType === "promptConstructor" to a target handle. The node is created, but the dropped connection is not created.
💡 Suggested fix (in src/components/WorkflowCanvas.tsx)
```diff
-      } else if (handleType === "image") {
+      } else if (handleType === "image") {
         if (nodeType === "annotation" || nodeType === "output" || nodeType === "splitGrid" || nodeType === "outputGallery" || nodeType === "imageCompare") {
           targetHandleId = "image";
           // annotation also has an image output
           if (nodeType === "annotation") {
             sourceHandleIdForNewNode = "image";
           }
-        } else if (nodeType === "nanoBanana" || nodeType === "generateVideo") {
+        } else if (nodeType === "nanoBanana" || nodeType === "generateVideo" || nodeType === "promptConstructor") {
           targetHandleId = "image";
         } else if (nodeType === "imageInput") {
           sourceHandleIdForNewNode = "image";
         }
       }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```tsx
{
  type: "promptConstructor",
  label: "Prompt Constructor",
  icon: (
    <svg className="w-4 h-4" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={1.5}>
      <path strokeLinecap="round" strokeLinejoin="round" d="M14.25 6.087c0-.355.186-.676.401-.959.221-.29.349-.634.349-1.003 0-1.036-1.007-1.875-2.25-1.875s-2.25.84-2.25 1.875c0 .369.128.713.349 1.003.215.283.401.604.401.959v0a.64.64 0 01-.657.643 48.39 48.39 0 01-4.163-.3c.186 1.613.293 3.25.315 4.907a.656.656 0 01-.658.663v0c-.355 0-.676-.186-.959-.401a1.647 1.647 0 00-1.003-.349c-1.036 0-1.875 1.007-1.875 2.25s.84 2.25 1.875 2.25c.369 0 .713-.128 1.003-.349.283-.215.604-.401.959-.401v0c.31 0 .555.26.532.57a48.039 48.039 0 01-.642 5.056c1.518.19 3.058.309 4.616.354a.64.64 0 00.657-.643v0c0-.355-.186-.676-.401-.959a1.647 1.647 0 01-.349-1.003c0-1.035 1.008-1.875 2.25-1.875 1.243 0 2.25.84 2.25 1.875 0 .369-.128.713-.349 1.003-.215.283-.4.604-.4.959v0c0 .333.277.599.61.58a48.1 48.1 0 005.427-.63 48.05 48.05 0 00.582-4.717.532.532 0 00-.533-.57v0c-.355 0-.676.186-.959.401-.29.221-.634.349-1.003.349-1.035 0-1.875-1.007-1.875-2.25s.84-2.25 1.875-2.25c.37 0 .713.128 1.003.349.283.215.604.401.96.401v0a.656.656 0 00.658-.663 48.422 48.422 0 00-.37-5.36c-1.886.342-3.81.574-5.766.689a.578.578 0 01-.61-.58v0z" />
    </svg>
  ),
},
```

```tsx
} else if (handleType === "image") {
  if (nodeType === "annotation" || nodeType === "output" || nodeType === "splitGrid" || nodeType === "outputGallery" || nodeType === "imageCompare") {
    targetHandleId = "image";
    // annotation also has an image output
    if (nodeType === "annotation") {
      sourceHandleIdForNewNode = "image";
    }
  } else if (nodeType === "nanoBanana" || nodeType === "generateVideo" || nodeType === "promptConstructor") {
    targetHandleId = "image";
  } else if (nodeType === "imageInput") {
    sourceHandleIdForNewNode = "image";
  }
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/ConnectionDropMenu.tsx` around lines 112 - 120, The menu item
with type "promptConstructor" is being added from the image-target menu but
handleMenuSelect in WorkflowCanvas.tsx doesn't map the combination handleType
=== "image" and nodeType === "promptConstructor" to a target handle, so the
dropped node is created without a connection; update handleMenuSelect (or the
mapping it uses) to detect nodeType "promptConstructor" when handleType is
"image" and assign the correct target handle id (the node's input/target handle
name used by the Prompt Constructor node), then create the edge/connection using
the same code path used for other image→node mappings so the connection is
established on drop.
```tsx
// Image variables from connected ImageInput nodes with variableName
connectedImageNodes.forEach((node) => {
  if (node.type === "imageInput") {
    const imgData = node.data as ImageInputNodeData;
    if (imgData.variableName && !usedNames.has(imgData.variableName)) {
      vars.push({
        name: imgData.variableName,
        value: imgData.image ? "(image)" : "(no image)",
        nodeId: node.id,
        variableType: "image",
      });
      usedNames.add(imgData.variableName);
    }
  }
});
```
Keep image display labels out of `AvailableVariable.value`.

`value` feeds the substitution maps below. Writing `"(image)"` / `"(no image)"` here turns `@foo` into literal marker text, so the sync effect on lines 153-159 can overwrite executor-produced `outputText` and hide unresolved image vars. Keep the UI label separate and exclude image vars from text replacement.
One way to separate display from resolution:

-          if (imgData.variableName && !usedNames.has(imgData.variableName)) {
+          if (imgData.variableName && imgData.image && !usedNames.has(imgData.variableName)) {
             vars.push({
               name: imgData.variableName,
-              value: imgData.image ? "(image)" : "(no image)",
+              value: "",
               nodeId: node.id,
               variableType: "image",
             });
             usedNames.add(imgData.variableName);
           }

// Build preview/outputText maps from text vars only.
const textValueMap = new Map(
  availableVariables
    .filter((v) => v.variableType === "text")
    .map((v) => [v.name, v.value])
);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/nodes/PromptConstructorNode.tsx` around lines 64 - 78, The
code currently inserts "(image)" / "(no image)" into the vars/availableVariables
value for image variables (constructed in the connectedImageNodes loop), which
then pollutes text substitution maps and causes literal markers to appear;
instead stop putting a text value on image vars (remove setting value for image
variable entries or add a separate displayLabel field) and ensure the
substitution logic only builds maps from text variables (use
availableVariables.filter(v => v.variableType === "text") when creating the
textValueMap used for replacement). Update the connectedImageNodes handling that
pushes into vars and the code that builds the substitution map so image vars are
excluded from text replacement while still providing a UI-only label for
display.
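The text-only substitution described above can be sketched as a small pure helper. This is a sketch under the review's assumptions: the `AvailableVariable` shape (`name`, `value`, `variableType`) mirrors the one discussed in the comments, and the `@name` token syntax matches the template regex shown in the diffs; the real component may differ in detail.

```typescript
// Minimal model of an available variable as described in the review.
interface AvailableVariable {
  name: string;
  value: string;
  variableType: "text" | "image";
}

// Resolve @name references from text variables only, leaving image-variable
// references intact for downstream multimodal resolution.
function resolveTemplate(template: string, vars: AvailableVariable[]): string {
  const textValueMap = new Map(
    vars.filter((v) => v.variableType === "text").map((v) => [v.name, v.value])
  );
  // Single-pass replace avoids prefix collisions between variable names.
  return template.replace(/@(\w+)/g, (match, name) => textValueMap.get(name) ?? match);
}
```

With this shape, an `@subject` text variable gets substituted while an `@photo` image variable survives verbatim in `outputText`, so the executors can still detect and resolve it against `namedImages`.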
1. Add nodrag/nopan/nowheel classes and stopPropagation to the autocomplete dropdown in PromptConstructorNode and PromptConstructorEditorModal so React Flow doesn't intercept mouse clicks on variable suggestions.
2. Change the inlineParametersEnabled default from false to true so model selection and options appear directly in the node without needing to toggle a localStorage flag.

https://claude.ai/code/session_01BkAUwgYckP3bTQVqw3g4Mm
♻️ Duplicate comments (1)
src/components/nodes/PromptConstructorNode.tsx (1)
64-78: ⚠️ Potential issue | 🔴 Critical

Image variable values still corrupt text substitution.

The "(image)" / "(no image)" literal strings stored in value at lines 69-74 flow into the substitution maps at lines 148-149 and 154-155. When @myImage appears in the template and an image variable named myImage exists, the outputText becomes "...(image)..." instead of preserving the @myImage reference for downstream multimodal resolution.

Filter image variables out of the text replacement logic:
Proposed fix
 // Compute resolved text client-side for preview (single-pass to avoid prefix collisions)
 const resolvedPreview = useMemo(() => {
-  const valueMap = new Map(availableVariables.map(v => [v.name, v.value]));
+  const valueMap = new Map(
+    availableVariables
+      .filter(v => v.variableType === "text")
+      .map(v => [v.name, v.value])
+  );
   return localTemplate.replace(/@(\w+)/g, (match, name) => valueMap.get(name) ?? match);
 }, [localTemplate, availableVariables]);

 // Sync resolved text to outputText so downstream nodes can read it before execution
 useEffect(() => {
-  const valueMap = new Map(availableVariables.map(v => [v.name, v.value]));
+  const valueMap = new Map(
+    availableVariables
+      .filter(v => v.variableType === "text")
+      .map(v => [v.name, v.value])
+  );
   const resolved = nodeData.template.replace(/@(\w+)/g, (match, name) => valueMap.get(name) ?? match);
Also applies to: 153-159
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/nodes/PromptConstructorNode.tsx` around lines 64 - 78, The image variables are being stored with value "(image)" and then included in the text substitution maps, causing `@myImage` tokens to be replaced with "(image)"; update PromptConstructorNode so image-type variables (those pushed from connectedImageNodes with variableType === "image") are excluded from the text replacement flow: keep building vars and usedNames as before but filter out entries where variableType === "image" when constructing the text substitution maps and when producing outputText (i.e., do not add image vars into the maps used for string replacements), ensuring `@variable` placeholders for images remain intact for downstream multimodal resolution.
🧹 Nitpick comments (1)
src/components/modals/PromptConstructorEditorModal.tsx (1)
215-224: Consider: Toolbar pills could also differentiate image variables.

The autocomplete dropdown now visually distinguishes image variables (emerald) from text variables (blue), but the toolbar pills at lines 215-224 still use a uniform text-blue-400 for all variable types. This inconsistency may confuse users who expect the same visual language across both UI elements.

♻️ Optional: Apply consistent styling to toolbar pills
 {/* Variable pills */}
 {availableVariables.map((v) => (
   <button
     key={v.nodeId}
     onClick={() => handleVariablePillClick(v.name)}
-    className="px-2 py-0.5 text-[11px] text-blue-400 bg-blue-900/20 border border-blue-700/40 rounded hover:bg-blue-900/40 transition-colors"
-    title={v.value || "(empty)"}
+    className={`px-2 py-0.5 text-[11px] rounded transition-colors ${
+      v.variableType === "image"
+        ? "text-emerald-400 bg-emerald-900/20 border border-emerald-700/40 hover:bg-emerald-900/40"
+        : "text-blue-400 bg-blue-900/20 border border-blue-700/40 hover:bg-blue-900/40"
+    }`}
+    title={v.variableType === "image" ? "(image)" : v.value || "(empty)"}
   >
     @{v.name}
   </button>
 ))}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/modals/PromptConstructorEditorModal.tsx` around lines 215 - 224, The toolbar pill rendering for availableVariables currently always uses blue styling; update the mapping in the JSX where availableVariables is rendered (the button inside the map that calls handleVariablePillClick) to apply a conditional CSS class based on the variable type (e.g., image vs text) — detect the variable discriminator (e.g., v.type or v.kind or v.format) and switch the classes from "text-blue-400 bg-blue-900/20 border-blue-700/40" to the corresponding emerald variants for image variables so the pills match the autocomplete dropdown's color language while preserving hover/transition behavior and title/@{v.name} text.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src/components/nodes/PromptConstructorNode.tsx`:
- Around line 64-78: The image variables are being stored with value "(image)"
and then included in the text substitution maps, causing `@myImage` tokens to be
replaced with "(image)"; update PromptConstructorNode so image-type variables
(those pushed from connectedImageNodes with variableType === "image") are
excluded from the text replacement flow: keep building vars and usedNames as
before but filter out entries where variableType === "image" when constructing
the text substitution maps and when producing outputText (i.e., do not add image
vars into the maps used for string replacements), ensuring `@variable`
placeholders for images remain intact for downstream multimodal resolution.
---
Nitpick comments:
In `@src/components/modals/PromptConstructorEditorModal.tsx`:
- Around line 215-224: The toolbar pill rendering for availableVariables
currently always uses blue styling; update the mapping in the JSX where
availableVariables is rendered (the button inside the map that calls
handleVariablePillClick) to apply a conditional CSS class based on the variable
type (e.g., image vs text) — detect the variable discriminator (e.g., v.type or
v.kind or v.format) and switch the classes from "text-blue-400 bg-blue-900/20
border-blue-700/40" to the corresponding emerald variants for image variables so
the pills match the autocomplete dropdown's color language while preserving
hover/transition behavior and title/@{v.name} text.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7f3c1a18-a50d-4ca2-9f2f-d689175a02dd
📒 Files selected for processing (3)
- src/components/modals/PromptConstructorEditorModal.tsx
- src/components/nodes/PromptConstructorNode.tsx
- src/hooks/useInlineParameters.ts
1. Split Grid: Expanded presets from 7 to 12 layouts (added 1x2, 1x3, 1x4, 2x1, 3x4, 4x3, 4x4). Added custom rows/cols number inputs so users can specify any grid configuration (up to 10x10).
2. Stale images: Both nanoBanana and LLM executors now resolve image variables from fresh namedImages first, rather than trusting cached outputParts on upstream PromptConstructor nodes. The cached parts could be stale when only a single node is re-run via "Run node".
3. Drag selection: Fixed selectionOnDrag to work on all platforms (was macOS-only). Fixed panOnDrag to be false when panMode is "space" (was incorrectly true on non-macOS, blocking selection).

https://claude.ai/code/session_01BkAUwgYckP3bTQVqw3g4Mm
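The "resolve image variables from fresh namedImages" step from point 2 can be sketched as a small utility. This is a sketch under stated assumptions: the `PromptPart` shape (`{ type, value }`) is taken from the diffs in this review, and `namedImages` is assumed to map variable names to image data URLs; the real `resolveImageVars` signature in the PR may differ.

```typescript
// PromptPart shape as shown elsewhere in this review.
type PromptPart = { type: "text" | "image"; value: string };

// Interleave text and image parts by splitting the prompt on @name
// references that match a known named image. Unknown names stay as text.
function resolveImageVars(
  prompt: string,
  namedImages: Record<string, string>
): PromptPart[] {
  const parts: PromptPart[] = [];
  let lastIndex = 0;
  for (const match of prompt.matchAll(/@(\w+)/g)) {
    const image = namedImages[match[1]];
    if (image === undefined) continue; // not a named image; leave in the text
    const idx = match.index ?? 0;
    const before = prompt.slice(lastIndex, idx);
    if (before) parts.push({ type: "text", value: before });
    parts.push({ type: "image", value: image });
    lastIndex = idx + match[0].length;
  }
  const rest = prompt.slice(lastIndex);
  if (rest) parts.push({ type: "text", value: rest });
  return parts;
}
```

Because this reads `namedImages` at execution time, a single-node rerun picks up the current upstream images instead of a cached `outputParts` array.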
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/store/execution/llmGenerateExecutor.ts (1)
69-96: ⚠️ Potential issue | 🟠 Major

Don't let parts suppress other connected images.

parts here only includes named/referenced images, but the Google/Gemini path in src/app/api/llm/route.ts switches to parts mode and ignores images entirely. If this node also has plain image inputs, they never reach the provider once parts is populated.

Possible fix
 let parts: PromptPart[] | undefined;
 const { getEdges, getNodes } = ctx;
@@
 if (!parts) {
   const upstreamPcNode = getEdges()
@@
   }
 }
+
+ const requestParts = parts
+   ? [
+       ...images
+         .filter((img) => !parts.some((part) => part.type === "image" && part.value === img))
+         .map((value) => ({ type: "image", value } as PromptPart)),
+       ...parts,
+     ]
+   : undefined;
@@
 body: JSON.stringify({
   prompt: text,
-  ...(images.length > 0 && { images }),
-  ...(parts && { parts }),
+  ...(requestParts
+    ? { parts: requestParts }
+    : images.length > 0
+      ? { images }
+      : {}),
   provider: nodeData.provider,
   model: nodeData.model,
Verify each finding against the current code and only fix it if needed. In `@src/store/execution/llmGenerateExecutor.ts` around lines 69 - 96, The code currently sets parts (from resolveImageVars or upstream PromptConstructor output) which causes the API to ignore plain image inputs; before calling fetch("/api/llm") ensure plain images are not dropped by merging them into parts when parts is populated: detect images.length > 0 and if parts is truthy, convert the plain images into the same part format (or append them as additional part entries) and concat them onto parts (use the existing images and parts variables), then send the merged parts in the request body; update the body construction around parts/images so that when parts exists it contains both referenced (named) and plain images (keep resolveImageVars, hasImageVarReferences, upstreamPcNode/pcData.outputParts references to locate the code).
🧹 Nitpick comments (1)
src/components/__tests__/SplitGridSettingsModal.test.tsx (1)
465-489: Add a regression test for non-preset saved layouts (custom fallback).

Given the new -1 preset fallback logic, add a case like gridRows=1, gridCols=5 to assert custom mode initializes correctly and displays "= 5 images".

Example test case
+ it("should initialize custom mode for non-preset saved dimensions", () => {
+   const nodeData = createDefaultNodeData();
+   nodeData.gridRows = 1;
+   nodeData.gridCols = 5; // not in preset list
+
+   render(
+     <SplitGridSettingsModal
+       nodeId="test-node"
+       nodeData={nodeData}
+       onClose={vi.fn()}
+     />
+   );
+
+   expect(screen.getByText(/= 5 images/)).toBeInTheDocument();
+ });
Verify each finding against the current code and only fix it if needed. In `@src/components/__tests__/SplitGridSettingsModal.test.tsx` around lines 465 - 489, The test suite for SplitGridSettingsModal needs a regression case for the new -1 preset fallback: in the "Initial Values" describe block, add a test that uses createDefaultNodeData() and sets nodeData.gridRows = 1, nodeData.gridCols = 5 and nodeData.targetCount = 5, render <SplitGridSettingsModal nodeId="test-node" nodeData={nodeData} .../> and assert the UI shows "= 5 images" and that the modal initializes in custom/non-preset mode (e.g., by checking the custom layout indicator or absence of a preset label). Target the SplitGridSettingsModal component and the gridRows/gridCols/targetCount fields to locate where to add this case.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/components/SplitGridSettingsModal.tsx`:
- Around line 50-55: Persisted grid dimensions used to initialize
initialPresetIdx/customRows/customCols can be out of range; clamp
nodeData.gridRows and nodeData.gridCols to allowed min/max before using them
(e.g., when calling findLayoutIndex, initializing selectedLayoutIndex, setting
customRows/customCols, and before computing targetCount and generating
clusters). Update the SplitGridSettingsModal initialization and any places that
compute targetCount or create node clusters (references: findLayoutIndex,
initialPresetIdx, selectedLayoutIndex, customRows, customCols, targetCount) to
coerce values into the valid range (min 1 and the component's max) so
invalid/huge counts are never produced.
In `@src/store/execution/nanoBananaExecutor.ts`:
- Around line 110-122: The fallback lookup that finds an upstream
PromptConstructor only checks edges targeting text handles, so PromptConstructor
nodes connected via the executor's dynamicInputs.prompt are missed; update the
edge filter in the getEdges() chain to also include edges whose targetHandle
equals "prompt" or startsWith("prompt" ) (in addition to the existing "text"
checks) so that the code that reads pcData.outputParts from
PromptConstructorNodeData will include multimodal outputParts coming in via
dynamicInputs.prompt.
---
Outside diff comments:
In `@src/store/execution/llmGenerateExecutor.ts`:
- Around line 69-96: The code currently sets parts (from resolveImageVars or
upstream PromptConstructor output) which causes the API to ignore plain image
inputs; before calling fetch("/api/llm") ensure plain images are not dropped by
merging them into parts when parts is populated: detect images.length > 0 and if
parts is truthy, convert the plain images into the same part format (or append
them as additional part entries) and concat them onto parts (use the existing
images and parts variables), then send the merged parts in the request body;
update the body construction around parts/images so that when parts exists it
contains both referenced (named) and plain images (keep resolveImageVars,
hasImageVarReferences, upstreamPcNode/pcData.outputParts references to locate
the code).
---
Nitpick comments:
In `@src/components/__tests__/SplitGridSettingsModal.test.tsx`:
- Around line 465-489: The test suite for SplitGridSettingsModal needs a
regression case for the new -1 preset fallback: in the "Initial Values" describe
block, add a test that uses createDefaultNodeData() and sets nodeData.gridRows =
1, nodeData.gridCols = 5 and nodeData.targetCount = 5, render
<SplitGridSettingsModal nodeId="test-node" nodeData={nodeData} .../> and assert
the UI shows "= 5 images" and that the modal initializes in custom/non-preset
mode (e.g., by checking the custom layout indicator or absence of a preset
label). Target the SplitGridSettingsModal component and the
gridRows/gridCols/targetCount fields to locate where to add this case.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: fde13fba-0d1d-41c5-8716-2a61047b2e87
📒 Files selected for processing (5)
- src/components/SplitGridSettingsModal.tsx
- src/components/WorkflowCanvas.tsx
- src/components/__tests__/SplitGridSettingsModal.test.tsx
- src/store/execution/llmGenerateExecutor.ts
- src/store/execution/nanoBananaExecutor.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/components/WorkflowCanvas.tsx
   const initialPresetIdx = findLayoutIndex(nodeData.gridRows, nodeData.gridCols);
   const [selectedLayoutIndex, setSelectedLayoutIndex] = useState(
-    findLayoutIndex(nodeData.gridRows, nodeData.gridCols)
+    initialPresetIdx >= 0 ? initialPresetIdx : -1
   );
   const [customRows, setCustomRows] = useState(nodeData.gridRows);
   const [customCols, setCustomCols] = useState(nodeData.gridCols);
Clamp persisted grid dimensions before computing targetCount.
Line 54 and Line 55 trust nodeData.gridRows/gridCols as-is. If imported/saved data is out of range (e.g., 0, negative, or very large), Line 65 can produce invalid/huge counts and Line 97 will create the wrong number of node clusters.
Proposed fix
+const clampGridSize = (value: number) =>
+ Number.isFinite(value) ? Math.max(1, Math.min(10, Math.floor(value))) : 1;
+
- const initialPresetIdx = findLayoutIndex(nodeData.gridRows, nodeData.gridCols);
+ const initialRows = clampGridSize(nodeData.gridRows);
+ const initialCols = clampGridSize(nodeData.gridCols);
+ const initialPresetIdx = findLayoutIndex(initialRows, initialCols);
- const [customRows, setCustomRows] = useState(nodeData.gridRows);
- const [customCols, setCustomCols] = useState(nodeData.gridCols);
+ const [customRows, setCustomRows] = useState(initialRows);
+ const [customCols, setCustomCols] = useState(initialCols);
- const rows = selectedLayoutIndex >= 0 ? LAYOUT_OPTIONS[selectedLayoutIndex].rows : customRows;
- const cols = selectedLayoutIndex >= 0 ? LAYOUT_OPTIONS[selectedLayoutIndex].cols : customCols;
+ const rows = selectedLayoutIndex >= 0
+ ? LAYOUT_OPTIONS[selectedLayoutIndex].rows
+ : clampGridSize(customRows);
+ const cols = selectedLayoutIndex >= 0
+ ? LAYOUT_OPTIONS[selectedLayoutIndex].cols
+   : clampGridSize(customCols);

Also applies to: 63-66, 97-99
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/SplitGridSettingsModal.tsx` around lines 50 - 55, Persisted
grid dimensions used to initialize initialPresetIdx/customRows/customCols can be
out of range; clamp nodeData.gridRows and nodeData.gridCols to allowed min/max
before using them (e.g., when calling findLayoutIndex, initializing
selectedLayoutIndex, setting customRows/customCols, and before computing
targetCount and generating clusters). Update the SplitGridSettingsModal
initialization and any places that compute targetCount or create node clusters
(references: findLayoutIndex, initialPresetIdx, selectedLayoutIndex, customRows,
customCols, targetCount) to coerce values into the valid range (min 1 and the
component's max) so invalid/huge counts are never produced.
// Fallback: check upstream PromptConstructor's outputParts if we couldn't resolve locally
if (!parts) {
  const upstreamPcNode = getEdges()
    .filter((e) => e.target === node.id && (e.targetHandle === "text" || e.targetHandle?.startsWith("text")))
    .map((e) => getNodes().find((n) => n.id === e.source))
    .find((n) => n?.type === "promptConstructor");
  if (upstreamPcNode) {
    const pcData = upstreamPcNode.data as PromptConstructorNodeData;
    if (pcData.outputParts && pcData.outputParts.length > 0) {
      parts = pcData.outputParts;
    }
  }
}
Include the dynamic prompt input in the outputParts fallback lookup.
This executor can take its prompt from dynamicInputs.prompt, but the new upstream search only traverses text* edges. A promptConstructor connected through the dynamic prompt handle will therefore lose its multimodal outputParts and fall back to plain text/image mode.
Possible fix
if (!parts) {
const upstreamPcNode = getEdges()
- .filter((e) => e.target === node.id && (e.targetHandle === "text" || e.targetHandle?.startsWith("text")))
+ .filter(
+ (e) =>
+ e.target === node.id &&
+ (
+ e.targetHandle === "text" ||
+ e.targetHandle?.startsWith("text") ||
+ e.targetHandle === "prompt" ||
+ e.targetHandle?.startsWith("prompt")
+ )
+ )
.map((e) => getNodes().find((n) => n.id === e.source))
.find((n) => n?.type === "promptConstructor");🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/store/execution/nanoBananaExecutor.ts` around lines 110 - 122, The
fallback lookup that finds an upstream PromptConstructor only checks edges
targeting text handles, so PromptConstructor nodes connected via the executor's
dynamicInputs.prompt are missed; update the edge filter in the getEdges() chain
to also include edges whose targetHandle equals "prompt" or startsWith("prompt"
) (in addition to the existing "text" checks) so that the code that reads
pcData.outputParts from PromptConstructorNodeData will include multimodal
outputParts coming in via dynamicInputs.prompt.
…and executor destructure bugs

- Restore middle mouse button panning by using panOnDrag=[1,2] instead of false
- Prevent stale stored inputs when nodes are fully disconnected (check hasIncomingEdges)
- Add cell aspect ratio selection to Split Grid settings modal
- Fix getEdges destructure ordering in llmGenerateExecutor and generateVideoExecutor
- Fix video executor test edge target ID mismatch

https://claude.ai/code/session_01BkAUwgYckP3bTQVqw3g4Mm
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/store/execution/nanoBananaExecutor.ts (1)
127-138: ⚠️ Potential issue | 🟠 Major

parts do not reach /api/generate yet.

This payload now includes parts, but src/app/api/generate/route.ts currently only reads that field and does not forward it to any provider. The new image-variable path will therefore be ignored for Nano Banana requests until the backend is wired through.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/store/execution/nanoBananaExecutor.ts` around lines 127 - 138, The frontend is now sending a parts field in requestPayload (see requestPayload and parts in nanoBananaExecutor.ts) but the backend handler (src/app/api/generate/route.ts) only reads parts and does not forward it to the provider, so the image-variable path is ignored; update route.ts to extract parts from the incoming body and pass that parts value into the provider invocation (the same argument slot where images, prompt, model, parameters, etc. are forwarded) and then update the provider-facing function/signature that builds Nano Banana requests to accept and use parts when constructing the image-variable path so Nano Banana requests receive the parts data end-to-end.
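The forwarding step described in the prompt above can be sketched as a normalization helper at the route boundary. This is a sketch, not the PR's actual route code: `GenerateRequest` and `buildProviderInput` are illustrative names, and the assumption (from this review) is that providers can consume a uniform parts array.

```typescript
// PromptPart shape as used elsewhere in this review.
type PromptPart =
  | { type: "text"; value: string }
  | { type: "image"; value: string };

// Illustrative request body for /api/generate.
interface GenerateRequest {
  prompt: string;
  images?: string[];
  parts?: PromptPart[];
}

// Normalize the incoming body into one multimodal payload for the provider:
// prefer explicit parts, otherwise synthesize them from prompt + images.
function buildProviderInput(body: GenerateRequest): PromptPart[] {
  if (body.parts && body.parts.length > 0) return body.parts;
  return [
    { type: "text", value: body.prompt },
    ...(body.images ?? []).map((value): PromptPart => ({ type: "image", value })),
  ];
}
```

Normalizing once at the route keeps every provider call on a single code path, so a newly added `parts` field cannot be silently dropped by one branch.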
♻️ Duplicate comments (1)
src/store/execution/nanoBananaExecutor.ts (1)
115-118: ⚠️ Potential issue | 🟠 Major

Include prompt* handles in the Prompt Constructor fallback lookup.

This still only scans text* target handles, but this executor also accepts prompt content through dynamicInputs.prompt. A Prompt Constructor connected through prompt / prompt-* will lose outputParts on rerun and fall back to plain text.

Possible fix
-    .filter((e) => e.target === node.id && (e.targetHandle === "text" || e.targetHandle?.startsWith("text")))
+    .filter(
+      (e) =>
+        e.target === node.id &&
+        (
+          e.targetHandle === "text" ||
+          e.targetHandle?.startsWith("text") ||
+          e.targetHandle === "prompt" ||
+          e.targetHandle?.startsWith("prompt")
+        )
+    )
Verify each finding against the current code and only fix it if needed. In `@src/store/execution/nanoBananaExecutor.ts` around lines 115 - 118, The filter that finds upstream Prompt Constructor nodes (used in upstreamPcNode) currently only checks targetHandle values starting with "text"; update that filter to also accept "prompt" and handles starting with "prompt" (e.g., "prompt" or "prompt-*") so connections supplying dynamicInputs.prompt are recognized; locate the upstreamPcNode declaration using getEdges(), getNodes(), node.id and modify the targetHandle check to include both "text*" and "prompt*" forms while preserving the subsequent map/find that checks n?.type === "promptConstructor".
🧹 Nitpick comments (1)
src/components/SplitGridSettingsModal.tsx (1)
46-71: Consider adding defensive guards against zero dimensions.

If sourceHeight is zero (e.g., from corrupted data), Line 52 produces Infinity, causing bestError comparisons to behave unexpectedly. This is unlikely with valid images, but a guard would make the function more robust.

🛡️ Proposed defensive guard
 function findBestGrid(
   sourceWidth: number,
   sourceHeight: number,
   cellAR: number,
   maxDim: number = 10
 ): { rows: number; cols: number } {
+  if (sourceHeight <= 0 || sourceWidth <= 0 || cellAR <= 0) {
+    return { rows: 1, cols: 1 };
+  }
   const sourceAR = sourceWidth / sourceHeight;
Verify each finding against the current code and only fix it if needed. In `@src/components/SplitGridSettingsModal.tsx` around lines 46 - 71, The function findBestGrid can divide by zero when sourceHeight (or sourceWidth) is zero or negative; add a defensive guard at the start of findBestGrid to validate sourceWidth and sourceHeight (e.g., if sourceWidth <= 0 || sourceHeight <= 0) and handle that case deterministically (return a safe default like { rows: 1, cols: 1 } or throw a clear error) so the subsequent arithmetic (sourceAR = sourceWidth / sourceHeight and actualCellAR calculation) never produces Infinity/NaN and breaks the bestError comparisons.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/components/SplitGridSettingsModal.tsx`:
- Around line 105-110: The useEffect that loads an Image when
nodeData.sourceImage changes should add error handling, timeout/abort and
cleanup to avoid setting state after unmount; replace or wrap the manual Image()
logic in SplitGridSettingsModal's useEffect with the existing getImageDimensions
utility from nodeDimensions.ts (or replicate its async pattern): call
getImageDimensions(nodeData.sourceImage) inside an async function, await it
inside try/catch to handle errors, guard setSourceDims behind a mounted/aborted
flag or AbortSignal so updates are not applied after unmount, and ensure any
timers or listeners are cleared in the effect cleanup to prevent stale
sourceDims or state updates on unmounted components.
In `@src/components/WorkflowCanvas.tsx`:
- Line 147: The new node type returns an "image" input but handleMenuSelect
still omits auto-connecting the Prompt Constructor for image drops; update
WorkflowCanvas.handleMenuSelect so that in the branch where handleType ===
"image" you include the promptConstructor case and programmatically create the
edge from the source node to the new node's "image" input (use the same
connect/edge-creation helper you use for text drops), ensuring the target handle
id or name matches the Prompt Constructor's "image" input name; this will
auto-wire the Prompt Constructor when selected from the drop menu.
In `@src/store/execution/generate3dExecutor.ts`:
- Around line 47-49: The fallback logic uses hasIncomingEdges (any modality)
which causes stale text or images to be reused; change it to detect incoming
image vs text edges separately (e.g., compute hasIncomingImageEdge and
hasIncomingTextEdge by inspecting getEdges().some(e => e.target === node.id &&
e.data?.type === 'image' / 'text' or the project’s edge metadata key like
e.data?.outputType) and then set images = connectedImages.length > 0 ?
connectedImages : (hasIncomingImageEdge ? nodeData.inputImages : []), and
promptText = connectedText ?? (hasIncomingTextEdge ? nodeData.inputPrompt :
null) so each modality only falls back when its specific incoming edge is
present.
In `@src/store/execution/generateVideoExecutor.ts`:
- Around line 48-50: The fallback logic currently uses a single hasIncomingEdges
flag causing stale reuse when one modality is disconnected; update the code to
compute modality-specific incoming-edge flags (e.g., hasIncomingImageEdges and
hasIncomingTextEdges) by inspecting getEdges().some(e => e.target === node.id &&
<edge indicates image/text>) and then use those flags separately: set images =
connectedImages.length > 0 ? connectedImages : (hasIncomingImageEdges ?
nodeData.inputImages : []), and set text = connectedText ??
(hasIncomingTextEdges ? nodeData.inputPrompt : null). Ensure you reference
getEdges(), node.id, connectedImages, connectedText, nodeData.inputImages and
nodeData.inputPrompt when implementing the checks.
In `@src/store/execution/llmGenerateExecutor.ts`:
- Around line 95-98: The request body is including `parts` for every LLM call
but only `generateWithGoogle` supports them; update the code in
llmGenerateExecutor.ts to only include `parts` when the downstream handler
supports it (e.g., when provider === 'google' or a new supportsParts flag is
true), or alternatively ensure route.ts forwards `parts` to other handlers;
specifically gate the spread ...(parts && { parts }) behind a provider check (or
propagate a supportsParts boolean from src/app/api/llm/route.ts) so
OpenAI/Anthropic calls do not receive `parts` and drop Prompt Constructor image
variables.
- Around line 42-44: The code currently uses a single hasIncomingEdges flag to
decide fallbacks for both images and text, which causes stale images or text to
be reused when only one input is disconnected; update getEdges usage to detect
connectivity per-port (e.g., compute hasImageIncomingEdges and
hasTextIncomingEdges by inspecting getEdges() for edges targeting node.id and
the specific port/name for images vs text) and then set images =
inputs.images.length > 0 ? inputs.images : (hasImageIncomingEdges ?
nodeData.inputImages : []), and text = inputs.text ?? (hasTextIncomingEdges ?
nodeData.inputPrompt : null), so each input uses its own incoming-edge check
(referencing hasIncomingEdges, images, inputs.images, nodeData.inputImages,
text, inputs.text, nodeData.inputPrompt, and getEdges()/node.id).
In `@src/store/execution/nanoBananaExecutor.ts`:
- Around line 59-61: The current fallback unconditionally restores
nodeData.inputPrompt when any incoming edge exists, which bypasses the live
dynamic prompt path; update the logic in the block that sets images and
promptText (the expressions using hasIncomingEdges, connectedImages,
connectedText, and nodeData.inputPrompt) to only fall back to
nodeData.inputPrompt if there is no dynamicInputs.prompt (or
connectedText/dynamic prompt source) present — i.e., check for the presence of
dynamicInputs.prompt / connectedText before using nodeData.inputPrompt so
sanitizedDynamicInputs.prompt is not overwritten and dynamic prompt reruns
remain live; adjust the promptText assignment accordingly and ensure
sanitizedDynamicInputs handling still preserves a present dynamic prompt.
---
Outside diff comments:
In `@src/store/execution/nanoBananaExecutor.ts`:
- Around line 127-138: The frontend is now sending a parts field in
requestPayload (see requestPayload and parts in nanoBananaExecutor.ts) but the
backend handler (src/app/api/generate/route.ts) only reads parts and does not
forward it to the provider, so the image-variable path is ignored; update
route.ts to extract parts from the incoming body and pass that parts value into
the provider invocation (the same argument slot where images, prompt, model,
parameters, etc. are forwarded) and then update the provider-facing
function/signature that builds Nano Banana requests to accept and use parts when
constructing the image-variable path so Nano Banana requests receive the parts
data end-to-end.
---
Duplicate comments:
In `@src/store/execution/nanoBananaExecutor.ts`:
- Around line 115-118: The filter that finds upstream Prompt Constructor nodes
(used in upstreamPcNode) currently only checks targetHandle values starting with
"text"; update that filter to also accept "prompt" and handles starting with
"prompt" (e.g., "prompt" or "prompt-*") so connections supplying
dynamicInputs.prompt are recognized; locate the upstreamPcNode declaration using
getEdges(), getNodes(), node.id and modify the targetHandle check to include
both "text*" and "prompt*" forms while preserving the subsequent map/find that
checks n?.type === "promptConstructor".
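A sketch of the widened filter in TypeScript; the `Edge`/`Node` shapes are simplified, and `findUpstreamPromptConstructor` is a hypothetical name for the logic around the `upstreamPcNode` declaration:

```typescript
type Edge = { source: string; target: string; targetHandle?: string | null };
type Node = { id: string; type: string };

// Accept both "text*" and "prompt*" target handles, preserving the
// subsequent map/find over promptConstructor nodes.
function findUpstreamPromptConstructor(
  edges: Edge[],
  nodes: Node[],
  nodeId: string,
): Node | undefined {
  return edges
    .filter(
      (e) =>
        e.target === nodeId &&
        (e.targetHandle?.startsWith("text") ||
          e.targetHandle?.startsWith("prompt")),
    )
    .map((e) => nodes.find((n) => n.id === e.source))
    .find((n) => n?.type === "promptConstructor");
}
```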
---
Nitpick comments:
In `@src/components/SplitGridSettingsModal.tsx`:
- Around line 46-71: The function findBestGrid can divide by zero when
sourceHeight (or sourceWidth) is zero or negative; add a defensive guard at the
start of findBestGrid to validate sourceWidth and sourceHeight (e.g., if
sourceWidth <= 0 || sourceHeight <= 0) and handle that case deterministically
(return a safe default like { rows: 1, cols: 1 } or throw a clear error) so the
subsequent arithmetic (sourceAR = sourceWidth / sourceHeight and actualCellAR
calculation) never produces Infinity/NaN and breaks the bestError comparisons.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0fe75efa-151b-46c1-b43a-40fd97b7d26f
📒 Files selected for processing (10)
- src/components/SplitGridSettingsModal.tsx
- src/components/WorkflowCanvas.tsx
- src/store/execution/__tests__/generateVideoExecutor.test.ts
- src/store/execution/__tests__/llmGenerateExecutor.test.ts
- src/store/execution/__tests__/nanoBananaExecutor.test.ts
- src/store/execution/generate3dExecutor.ts
- src/store/execution/generateAudioExecutor.ts
- src/store/execution/generateVideoExecutor.ts
- src/store/execution/llmGenerateExecutor.ts
- src/store/execution/nanoBananaExecutor.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/store/execution/__tests__/llmGenerateExecutor.test.ts
useEffect(() => {
  if (!nodeData.sourceImage) return;
  const img = new Image();
  img.onload = () => setSourceDims({ width: img.width, height: img.height });
  img.src = nodeData.sourceImage;
}, [nodeData.sourceImage]);
Add error handling and cleanup to image loading.
This effect lacks error handling, cleanup on unmount, and timeout protection. If the image fails to load or the component unmounts mid-load, sourceDims may remain stale or trigger a state update on an unmounted component.
The codebase has getImageDimensions in nodeDimensions.ts with proper error/timeout handling—consider reusing it or adding similar safeguards here.
🛠️ Proposed fix using async utility pattern
+import { getImageDimensions } from "@/utils/nodeDimensions";
// Load source image dimensions
useEffect(() => {
if (!nodeData.sourceImage) return;
- const img = new Image();
- img.onload = () => setSourceDims({ width: img.width, height: img.height });
- img.src = nodeData.sourceImage;
+ let cancelled = false;
+ getImageDimensions(nodeData.sourceImage).then((dims) => {
+ if (!cancelled && dims) {
+ setSourceDims(dims);
+ }
+ });
+ return () => { cancelled = true; };
}, [nodeData.sourceImage]);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/SplitGridSettingsModal.tsx` around lines 105 - 110, The
useEffect that loads an Image when nodeData.sourceImage changes should add error
handling, timeout/abort and cleanup to avoid setting state after unmount;
replace or wrap the manual Image() logic in SplitGridSettingsModal's useEffect
with the existing getImageDimensions utility from nodeDimensions.ts (or
replicate its async pattern): call getImageDimensions(nodeData.sourceImage)
inside an async function, await it inside try/catch to handle errors, guard
setSourceDims behind a mounted/aborted flag or AbortSignal so updates are not
applied after unmount, and ensure any timers or listeners are cleared in the
effect cleanup to prevent stale sourceDims or state updates on unmounted
components.
  return { inputs: ["text"], outputs: ["text"] };
case "promptConstructor":
-  return { inputs: ["text"], outputs: ["text"] };
+  return { inputs: ["text", "image"], outputs: ["text"] };
Auto-wiring is missing for Prompt Constructor image drops.
Now that this change adds an image input to promptConstructor, handleMenuSelect() still omits it from the handleType === "image" branch, so choosing Prompt Constructor from the connection drop menu creates the node but leaves it disconnected.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/WorkflowCanvas.tsx` at line 147, The new node type returns an
"image" input but handleMenuSelect still omits auto-connecting the Prompt
Constructor for image drops; update WorkflowCanvas.handleMenuSelect so that in
the branch where handleType === "image" you include the promptConstructor case
and programmatically create the edge from the source node to the new node's
"image" input (use the same connect/edge-creation helper you use for text
drops), ensuring the target handle id or name matches the Prompt Constructor's
"image" input name; this will auto-wire the Prompt Constructor when selected
from the drop menu.
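A TypeScript sketch of the missing branch; the node-type names besides promptConstructor, the handle ids, and the `wireDroppedNode` helper are assumptions standing in for the real edge-creation logic in WorkflowCanvas.handleMenuSelect:

```typescript
type NewEdge = {
  source: string;
  sourceHandle: string;
  target: string;
  targetHandle: string;
};

// Auto-wire a node created from the connection drop menu. promptConstructor
// now belongs in the image-consuming set since it exposes an "image" input.
function wireDroppedNode(
  handleType: "image" | "text",
  sourceNodeId: string,
  newNodeId: string,
  newNodeType: string,
): NewEdge | null {
  if (handleType === "image") {
    const imageTargets = new Set(["nanoBanana", "generateVideo", "promptConstructor"]);
    if (imageTargets.has(newNodeType)) {
      return {
        source: sourceNodeId,
        sourceHandle: "image",
        target: newNodeId,
        targetHandle: "image", // must match the Prompt Constructor's input name
      };
    }
  }
  return null;
}
```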
const hasIncomingEdges = getEdges().some((e) => e.target === node.id);
images = connectedImages.length > 0 ? connectedImages : (hasIncomingEdges ? nodeData.inputImages : []);
promptText = connectedText ?? (hasIncomingEdges ? nodeData.inputPrompt : null);
Fallback is too coarse for multimodal inputs.
Lines 47-49 reuse both stored images and stored prompt whenever any edge still targets the node. Disconnecting just the text edge while keeping an image connected will still resend the stale prompt, and vice versa.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/store/execution/generate3dExecutor.ts` around lines 47 - 49, The fallback
logic uses hasIncomingEdges (any modality) which causes stale text or images to
be reused; change it to detect incoming image vs text edges separately (e.g.,
compute hasIncomingImageEdge and hasIncomingTextEdge by inspecting
getEdges().some(e => e.target === node.id && e.data?.type === 'image' / 'text'
or the project’s edge metadata key like e.data?.outputType) and then set images
= connectedImages.length > 0 ? connectedImages : (hasIncomingImageEdge ?
nodeData.inputImages : []), and promptText = connectedText ??
(hasIncomingTextEdge ? nodeData.inputPrompt : null) so each modality only falls
back when its specific incoming edge is present.
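The per-modality check described above can be sketched as follows; the `Edge` shape and the `"image"`/`"text"` handle prefixes are assumptions (the real executors may discriminate via edge metadata instead), and the same pattern applies to the generateVideoExecutor and llmGenerateExecutor findings below:

```typescript
type Edge = { target: string; targetHandle?: string | null };

// Each modality falls back to stored data only when ITS OWN edge is present,
// so disconnecting the text edge cannot resurrect a stale prompt, and
// disconnecting the image edge cannot resurrect stale images.
function resolveInputs(
  edges: Edge[],
  nodeId: string,
  connectedImages: string[],
  connectedText: string | null,
  stored: { inputImages: string[]; inputPrompt: string | null },
) {
  const hasIncomingImageEdge = edges.some(
    (e) => e.target === nodeId && e.targetHandle?.startsWith("image"),
  );
  const hasIncomingTextEdge = edges.some(
    (e) => e.target === nodeId && e.targetHandle?.startsWith("text"),
  );
  const images =
    connectedImages.length > 0
      ? connectedImages
      : hasIncomingImageEdge
        ? stored.inputImages
        : [];
  const promptText =
    connectedText ?? (hasIncomingTextEdge ? stored.inputPrompt : null);
  return { images, promptText };
}
```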
const hasIncomingEdges = getEdges().some((e) => e.target === node.id);
images = connectedImages.length > 0 ? connectedImages : (hasIncomingEdges ? nodeData.inputImages : []);
text = connectedText ?? (hasIncomingEdges ? nodeData.inputPrompt : null);
Fallback can reuse stale prompt/images after partial disconnects.
Lines 48-50 use hasIncomingEdges for both modalities. If an image edge remains after the prompt edge is removed, regenerate will still reuse nodeData.inputPrompt, and vice versa for stale images.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/store/execution/generateVideoExecutor.ts` around lines 48 - 50, The
fallback logic currently uses a single hasIncomingEdges flag causing stale reuse
when one modality is disconnected; update the code to compute modality-specific
incoming-edge flags (e.g., hasIncomingImageEdges and hasIncomingTextEdges) by
inspecting getEdges().some(e => e.target === node.id && <edge indicates
image/text>) and then use those flags separately: set images =
connectedImages.length > 0 ? connectedImages : (hasIncomingImageEdges ?
nodeData.inputImages : []), and set text = connectedText ??
(hasIncomingTextEdges ? nodeData.inputPrompt : null). Ensure you reference
getEdges(), node.id, connectedImages, connectedText, nodeData.inputImages and
nodeData.inputPrompt when implementing the checks.
const hasIncomingEdges = getEdges().some((e) => e.target === node.id);
images = inputs.images.length > 0 ? inputs.images : (hasIncomingEdges ? nodeData.inputImages : []);
text = inputs.text ?? (hasIncomingEdges ? nodeData.inputPrompt : null);
Stored fallback can resurrect disconnected inputs.
Lines 42-44 gate both images and inputPrompt on one hasIncomingEdges flag. If a user disconnects only the image edge but keeps text connected, reruns will still reuse stale images from the previous execution, and the inverse happens for stale text.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/store/execution/llmGenerateExecutor.ts` around lines 42 - 44, The code
currently uses a single hasIncomingEdges flag to decide fallbacks for both
images and text, which causes stale images or text to be reused when only one
input is disconnected; update getEdges usage to detect connectivity per-port
(e.g., compute hasImageIncomingEdges and hasTextIncomingEdges by inspecting
getEdges() for edges targeting node.id and the specific port/name for images vs
text) and then set images = inputs.images.length > 0 ? inputs.images :
(hasImageIncomingEdges ? nodeData.inputImages : []), and text = inputs.text ??
(hasTextIncomingEdges ? nodeData.inputPrompt : null), so each input uses its own
incoming-edge check (referencing hasIncomingEdges, images, inputs.images,
nodeData.inputImages, text, inputs.text, nodeData.inputPrompt, and
getEdges()/node.id).
body: JSON.stringify({
  prompt: text,
  ...(images.length > 0 && { images }),
  ...(parts && { parts }),
parts are still only supported by one downstream provider path.
This now sends parts for every LLM request, but src/app/api/llm/route.ts only forwards them to generateWithGoogle. OpenAI/Anthropic runs will silently drop Prompt Constructor image variables unless this is gated or the other handlers are wired through too.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/store/execution/llmGenerateExecutor.ts` around lines 95 - 98, The request
body is including `parts` for every LLM call but only `generateWithGoogle`
supports them; update the code in llmGenerateExecutor.ts to only include `parts`
when the downstream handler supports it (e.g., when provider === 'google' or a
new supportsParts flag is true), or alternatively ensure route.ts forwards
`parts` to other handlers; specifically gate the spread ...(parts && { parts })
behind a provider check (or propagate a supportsParts boolean from
src/app/api/llm/route.ts) so OpenAI/Anthropic calls do not receive `parts` and
drop Prompt Constructor image variables.
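A TypeScript sketch of the gating option; the `PARTS_CAPABLE_PROVIDERS` set, the `PromptPart` shape, and `buildLlmBody` are illustrative assumptions, not the executor's actual request builder:

```typescript
type PromptPart =
  | { type: "text"; text: string }
  | { type: "image"; data: string };

// Only Google currently consumes `parts` in src/app/api/llm/route.ts;
// extend this set as other handlers are wired through.
const PARTS_CAPABLE_PROVIDERS = new Set(["google"]);

function buildLlmBody(opts: {
  provider: string;
  prompt: string;
  images: string[];
  parts?: PromptPart[];
}) {
  const { provider, prompt, images, parts } = opts;
  return {
    prompt,
    provider,
    ...(images.length > 0 && { images }),
    // Gate parts on provider support so OpenAI/Anthropic requests don't
    // silently drop Prompt Constructor image variables.
    ...(PARTS_CAPABLE_PROVIDERS.has(provider) && parts && { parts }),
  };
}
```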
Three fixes to prevent old data from being used:

1. Expand clearStaleInputData (renamed from clearStaleInputImages) to also clear inputPrompt when text-source edges are removed, and clear outputParts on PromptConstructor nodes when their image inputs are disconnected.
2. In nanoBananaExecutor and llmGenerateExecutor, instead of blindly trusting cached outputParts from upstream PromptConstructor nodes, walk up to the PC's image input connections and re-resolve parts from fresh imageInput data. Only fall back to cached outputParts when no image inputs exist.
3. Add TEXT_SOURCE_NODE_TYPES set for proper text edge cleanup detection.

https://claude.ai/code/session_01BkAUwgYckP3bTQVqw3g4Mm
Never use cached outputParts from PromptConstructor when there are no fresh image inputs to re-resolve from. This eliminates the last path where stale embedded images could leak into generation requests. https://claude.ai/code/session_01BkAUwgYckP3bTQVqw3g4Mm
…cution

During full workflow execution, PromptConstructor has just re-executed and its outputParts are fresh — use them directly (restoring original behavior). Only re-resolve from upstream imageInput data during single-node regeneration (useStoredFallback=true), where PC hasn't re-executed and cached outputParts may be stale. https://claude.ai/code/session_01BkAUwgYckP3bTQVqw3g4Mm
…eration

Instead of complex re-resolution logic inside each executor, regenerateNode now re-executes lightweight upstream nodes (prompt, promptConstructor) before running the target node. This ensures outputParts, outputText, and resolved variables are always fresh — both for full workflow and single-node runs. Simplified nanoBananaExecutor and llmGenerateExecutor by removing the useStoredFallback branching for outputParts resolution. https://claude.ai/code/session_01BkAUwgYckP3bTQVqw3g4Mm
New node type for masked image regeneration. Users paint a mask on a source image, provide a text prompt, and only the masked region gets regenerated. Gemini uses pseudo-inpaint via multimodal prompt; WaveSpeed uses real mask-based inpainting via the standard generation pipeline. https://claude.ai/code/session_01BkAUwgYckP3bTQVqw3g4Mm
The node was only reading data.inputImage which is set during execution. Now uses getConnectedInputsPure to reactively read the upstream image and text from connected nodes, syncing them into node data for the mask modal and executor. https://claude.ai/code/session_01BkAUwgYckP3bTQVqw3g4Mm
- Drawing now only occurs while mouse button is held (use ref instead of state for isDrawing to avoid stale closure issues)
- Added touch support for mobile/tablet
- Added keyboard shortcuts: B=brush, E=eraser, [/]=size, Esc=cancel
- Fixed custom cursor offset alignment
- Made prompt optional — uses sensible default when no text is connected
- Added status hints showing what's missing (connect image / draw mask)
- "Save Mask" button disabled when no strokes drawn
- Renamed "Done" to "Save Mask" for clarity

https://claude.ai/code/session_01BkAUwgYckP3bTQVqw3g4Mm
Summary by CodeRabbit