
fix: increase generate API timeout to 15 minutes for video generation#100

Closed
paperplancallum wants to merge 8 commits into shrimbly:master from paperplancallum:master

Conversation


@paperplancallum paperplancallum commented Mar 12, 2026

Summary

  • Increases maxDuration from 300s (5 min) to 900s (15 min) in /api/generate route
  • Fixes 504 FUNCTION_INVOCATION_TIMEOUT errors during video generation on Vercel Pro

Test plan

  • Deploy to Vercel Pro
  • Generate a video that previously timed out
  • Verify generation completes without 504 error

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Client-side image compression for uploads to reduce payload size and improve media generation reliability.
    • Web-deployment mode improvements: adjusted UI and save behavior to better support browser-based deployments.
  • Chores

    • Increased serverless function timeout to ~13 minutes and updated deployment configuration.
    • Bumped several package dependencies.
  • Documentation

    • Added guidance recommending the preferred web crawling approach for fetching site content.

Vercel Pro allows maxDuration up to 900 seconds. This prevents
504 FUNCTION_INVOCATION_TIMEOUT errors during video generation.

Co-Authored-By: Claude Opus 4.5 <[email protected]>

coderabbitai bot commented Mar 12, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough


Added client-side image compression and integrated it into the video generation executor; increased the generate API function timeout to 800s (and updated Vercel config); bumped two AI-related dependencies; added a short "Web Crawling" guidance in CLAUDE.md; ProjectSetupModal adjusted for web deployments.

Changes

  • API Route (src/app/api/generate/route.ts): updated the exported maxDuration from 300 to 800 seconds and adjusted the comment to match the Vercel Pro limit.
  • Deployment Config (vercel.json): added/updated the functions mapping for src/app/api/generate/route.ts with maxDuration: 800.
  • Image compression util (src/utils/imageCompression.ts): new utility exporting compressImageForUpload and compressImagesForUpload, with MAX_PAYLOAD_SIZE (4MB), MAX_DIMENSION (2048), resizing, progressive JPEG quality reduction, and batch compression.
  • Executor integration (src/store/execution/generateVideoExecutor.ts): integrated the compression workflow; compresses images and dynamicInputs containing data URLs before building the request payload and sending the API request (replaces images/dynamicInputs with compressed variants).
  • UI: Project setup (src/components/ProjectSetupModal.tsx): added web-deployment detection (isWebDeployment) to alter validation and path behavior and force embedded image storage for web deployments; shows a web-mode banner and adjusts the save flow.
  • Dependencies (package.json): bumped @ai-sdk/react ^3.0.51 → ^3.0.118 and ai ^6.0.49 → ^6.0.116.
  • Docs / Guidance (CLAUDE.md): added "Web Crawling" guidance recommending use of cf crawl for crawling/fetching website content.
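The executor integration summarized above can be sketched as a standalone helper. This is an illustrative reconstruction, not the project's actual code: the `compressDynamicInputs` name is hypothetical, and the `compress` parameter stands in for the PR's `compressImagesForUpload`.

```typescript
// Hypothetical sketch of the dynamicInputs compression pass the walkthrough
// describes; the real logic lives in src/store/execution/generateVideoExecutor.ts.
type DynamicInputs = Record<string, string | string[]>;

const isImageDataUrl = (v: unknown): v is string =>
  typeof v === "string" && v.startsWith("data:image");

// `compress` stands in for compressImagesForUpload from src/utils/imageCompression.ts.
export async function compressDynamicInputs(
  inputs: DynamicInputs,
  compress: (images: string[]) => Promise<string[]>,
): Promise<DynamicInputs> {
  const out: DynamicInputs = {};
  for (const [key, value] of Object.entries(inputs)) {
    if (isImageDataUrl(value)) {
      // Single data-URL string: compress it alone.
      const [compressed] = await compress([value]);
      out[key] = compressed;
    } else if (Array.isArray(value) && value.some(isImageDataUrl)) {
      // Array containing at least one image: compress the whole array.
      out[key] = await compress(value);
    } else {
      // Non-image values pass through untouched.
      out[key] = value;
    }
  }
  return out;
}
```

Keeping the compression behind an injected `compress` function makes the payload-shaping logic testable without a browser canvas.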

Sequence Diagram(s)

sequenceDiagram
  participant UI as Client/UI
  participant Exec as generateVideoExecutor
  participant Util as imageCompression
  participant API as /api/generate (Server)
  participant Vercel as Vercel Function

  UI->>Exec: submit generate request (images, dynamicInputs)
  Exec->>Util: compressImagesForUpload(images)
  Exec->>Util: compressImagesForUpload(dynamicInputs with data URLs)
  Util-->>Exec: compressed images & inputs
  Exec->>API: POST compressed payload
  API->>Vercel: run function (maxDuration=800s)
  API-->>Exec: respond with job/result
  Exec-->>UI: update with response

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Poem

🐰 I nibbled bytes and shrank each frame,
I lengthened clocks so long jobs came,
I bumped some deps and changed the scene —
a lighter feed for videos, keen. 🥕✨

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Title check ⚠️ Warning: the PR title claims a 15-minute increase, but the actual change is from 300 to 800 seconds (~13 minutes), not 900 seconds (15 minutes), so the title is misleading about the exact timeout achieved. Resolution: update the title to 'fix: increase generate API timeout to ~13 minutes for video generation', or verify whether the intended change is 900 seconds instead of 800.
  • Docstring Coverage ⚠️ Warning: docstring coverage is 50.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (1 passed)
  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
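The Docstring Coverage warning can be addressed by adding TSDoc blocks to the new exports. A hypothetical sketch (the signature matches the utility described in this PR, but the placeholder body is illustrative only; the real implementation lives in src/utils/imageCompression.ts):

```typescript
/**
 * Compresses a base64 data-URL image until it fits under the upload payload limit.
 *
 * @param base64DataUrl - Image encoded as a `data:image/...;base64,` URL.
 * @returns The original string if already small enough, otherwise a re-encoded,
 *          progressively quality-reduced JPEG data URL.
 */
export async function compressImageForUpload(base64DataUrl: string): Promise<string> {
  // Placeholder body for illustration only; see src/utils/imageCompression.ts.
  return base64DataUrl;
}
```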


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/app/api/generate/route.ts`:
- Line 25: The maxDuration constant in src/app/api/generate/route.ts was
increased to 900, violating the repository rule that the shared generate route
must use a 5-minute timeout; change the exported constant maxDuration back to
300 (5 minutes) so the generate route honors the file-specific contract, and if
longer video jobs are required, move them to a separate endpoint or workflow
instead of modifying maxDuration here.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 88dcd29e-5314-4192-89d3-b88ae5f25211

📥 Commits

Reviewing files that changed from the base of the PR and between de9dd02 and 8fb3066.

📒 Files selected for processing (1)
  • src/app/api/generate/route.ts

export const clearFalInputMappingCache = _clearFalInputMappingCache;

-export const maxDuration = 300; // 5 minute timeout (Vercel hobby plan limit)
+export const maxDuration = 900; // 15 minute timeout (Vercel Pro limit)

⚠️ Potential issue | 🟠 Major

Keep this route at the repository’s 5-minute timeout.

Changing maxDuration to 900 breaks the file-specific contract for src/app/api/generate/route.ts. If video jobs need 15 minutes, they should move to a separate endpoint or workflow instead of changing the shared generate route’s timeout here.

As per coding guidelines, "**/api/generate/route.ts: Image generation API route must have a 5 minute timeout in src/app/api/generate."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/api/generate/route.ts` at line 25, The maxDuration constant in
src/app/api/generate/route.ts was increased to 900, violating the repository
rule that the shared generate route must use a 5-minute timeout; change the
exported constant maxDuration back to 300 (5 minutes) so the generate route
honors the file-specific contract, and if longer video jobs are required, move
them to a separate endpoint or workflow instead of modifying maxDuration here.

Ensures the generate route has 900s timeout at the Vercel project level.

Co-Authored-By: Claude Opus 4.5 <[email protected]>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@vercel.json`:
- Around line 1-7: The vercel.json timeouts are inconsistent with CLAUDE.md:
change the maxDuration for generate/route.ts from 900 to 300 seconds (or, if 15
minutes is intentional, update CLAUDE.md to reflect 900s), and add a new entry
for llm/route.ts with "maxDuration": 60 to set the 1-minute timeout; update the
vercel.json functions object accordingly so it contains both generate/route.ts
with the corrected value and llm/route.ts with 60.
- Around line 3-5: Replace the specific function entry
"src/app/api/generate/route.ts" and its maxDuration value of 900 with a globbed
function path and 300-second limit: update the functions key to use
"src/app/api/**/route.ts" and set maxDuration to 300; locate the JSON entry
referencing "src/app/api/generate/route.ts" and change the key to the glob
pattern and the "maxDuration" property to 300 to comply with Vercel Next.js App
Router requirements.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6be64d9e-faf1-4306-8d71-7c4a79f54fba

📥 Commits

Reviewing files that changed from the base of the PR and between 8fb3066 and ce7b5c0.

📒 Files selected for processing (1)
  • vercel.json

Comment on lines +1 to +7
{
  "functions": {
    "src/app/api/generate/route.ts": {
      "maxDuration": 900
    }
  }
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Check the documented timeout requirements in CLAUDE.md

# Search for timeout-related documentation
rg -n -C3 'timeout|maxDuration|generate' CLAUDE.md

Repository: shrimbly/node-banana

Length of output: 3509


🏁 Script executed:

cat -n vercel.json

Repository: shrimbly/node-banana

Length of output: 207


Configure API route timeouts per project documentation.

The vercel.json configuration has two issues:

  1. The src/app/api/generate/route.ts timeout is set to 900 seconds (15 minutes), but CLAUDE.md specifies it should be 5 minutes (300 seconds). If the use case has changed to require 15 minutes, update CLAUDE.md to document the new requirement.

  2. The src/app/api/llm/route.ts route is missing from the configuration. Per CLAUDE.md, it requires a 1-minute timeout. Add:

    "src/app/api/llm/route.ts": {
      "maxDuration": 60
    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@vercel.json` around lines 1 - 7, The vercel.json timeouts are inconsistent
with CLAUDE.md: change the maxDuration for generate/route.ts from 900 to 300
seconds (or, if 15 minutes is intentional, update CLAUDE.md to reflect 900s),
and add a new entry for llm/route.ts with "maxDuration": 60 to set the 1-minute
timeout; update the vercel.json functions object accordingly so it contains both
generate/route.ts with the corrected value and llm/route.ts with 60.

Comment on lines +3 to +5
"src/app/api/generate/route.ts": {
  "maxDuration": 900
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

vercel.json functions maxDuration Next.js App Router configuration

💡 Result:

To set Vercel Function timeouts for Next.js App Router route handlers (app/api/**/route.ts), use the functions map in vercel.json and match the built function entrypoints with a glob pattern. Vercel applies the config to every function file that matches the pattern (including App Router Route Handlers). [1] [2]

Example (app/ at repo root)

{
  "functions": {
    "app/api/**/route.ts": { "maxDuration": 60 }
  }
}

This works for App Router APIs like app/api/hello/route.ts. [3]

If you use a src/ directory

Prefix the pattern with src/:

{
  "functions": {
    "src/app/api/**/route.ts": { "maxDuration": 60 }
  }
}

Vercel notes the /src/ prefix is required for detection when your project uses a src directory. [1]

Alternative (per route/page/segment, in code)

Next.js App Router also supports:

export const maxDuration = 60;

directly from a Route Handler / Page / Layout via Route Segment Config. [4]

Notes

  • maxDuration is in seconds and must be within your plan’s allowed maximum. [2]

Sources
[1] https://vercel.com/docs/functions/configuring-functions/duration
[2] https://vercel.com/docs/project-configuration/vercel-json
[3] vercel/community#4391
[4] https://nextjs.org/docs/13/app/api-reference/file-conventions/route-segment-config


🏁 Script executed:

git log --oneline -n 20 | head -20

Repository: shrimbly/node-banana

Length of output: 130


Update maxDuration to 300 seconds (5 minutes) and use glob pattern for function path.

The configuration has two issues:

  1. maxDuration is set to 900 seconds (15 minutes) but the requirement specifies 5 minutes (300 seconds).
  2. Per Vercel documentation for Next.js App Router in src/ projects, use the glob pattern src/app/api/**/route.ts instead of an exact file path.
Suggested fix
{
  "functions": {
    "src/app/api/**/route.ts": {
      "maxDuration": 300
    }
  }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@vercel.json` around lines 3 - 5, Replace the specific function entry
"src/app/api/generate/route.ts" and its maxDuration value of 900 with a globbed
function path and 300-second limit: update the functions key to use
"src/app/api/**/route.ts" and set maxDuration to 300; locate the JSON entry
referencing "src/app/api/generate/route.ts" and change the key to the glob
pattern and the "maxDuration" property to 300 to comply with Vercel Next.js App
Router requirements.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@package.json`:
- Around line 16-22: Revert the unrelated dependency version changes in
package.json (the bumps to "@ai-sdk/react" and "ai" shown in the diff) so this
PR only contains the /api/generate timeout change; restore those entries to the
versions from main (or remove the two updated entries) and regenerate the
lockfile (npm/yarn install) to match, then create a separate branch/PR that
contains the dependency bumps and related lockfile changes for review; ensure
tests/build pass after restoring the original package.json before merging the
hotfix.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0668f8c1-d12a-4a26-88b4-4e2c894b9701

📥 Commits

Reviewing files that changed from the base of the PR and between ce7b5c0 and 3b8ddc6.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (4)
  • CLAUDE.md
  • package.json
  • src/app/api/generate/route.ts
  • vercel.json
✅ Files skipped from review due to trivial changes (1)
  • CLAUDE.md
🚧 Files skipped from review as they are similar to previous changes (2)
  • vercel.json
  • src/app/api/generate/route.ts

Comment on lines +16 to +22
 "@ai-sdk/react": "^3.0.118",
 "@google/genai": "^1.30.0",
 "@react-three/drei": "^10.7.7",
 "@react-three/fiber": "^9.5.0",
 "@tailwindcss/postcss": "^4.1.17",
 "@xyflow/react": "^12.9.3",
-"ai": "^6.0.49",
+"ai": "^6.0.116",

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== npm registry metadata =="
npm view [email protected] version peerDependencies dependencies --json
npm view @ai-sdk/[email protected] version peerDependencies dependencies --json
npm view @ai-sdk/[email protected] version peerDependencies dependencies --json

echo
echo "== lockfile entries =="
fd -HI '^(package-lock\.json|pnpm-lock\.yaml|yarn\.lock)$' . -x sh -c '
  echo ">>> {}"
  rg -n -C2 "\"ai\"|@ai-sdk/react|@ai-sdk/google" "{}" || true
'

Repository: shrimbly/node-banana

Length of output: 2523


Keep the timeout hotfix isolated; these SDK bumps appear unrelated.

This PR is scoped as a /api/generate timeout change, but these two dependency bumps widen the runtime surface area without clear justification. Since the bumps are not required for the timeout fix, split them into a separate PR. The lockfile does resolve cleanly, though it brings multiple versions of internal provider packages (@ai-sdk/[email protected] from Google, 3.0.8 from core AI SDK). This managed duplication is not a blocker, but the scope creep itself is worth avoiding in a hotfix.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@package.json` around lines 16 - 22, Revert the unrelated dependency version
changes in package.json (the bumps to "@ai-sdk/react" and "ai" shown in the
diff) so this PR only contains the /api/generate timeout change; restore those
entries to the versions from main (or remove the two updated entries) and
regenerate the lockfile (npm/yarn install) to match, then create a separate
branch/PR that contains the dependency bumps and related lockfile changes for
review; ensure tests/build pass after restoring the original package.json before
merging the hotfix.

Vercel has a 4.5MB body size limit for serverless functions.
Large images are now compressed on the client before sending.

Co-Authored-By: Claude Opus 4.5 <[email protected]>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/store/execution/generateVideoExecutor.ts (1)

111-124: ⚠️ Potential issue | 🟠 Major

Check the final serialized body size before POSTing.

This only compresses each field independently. Two “safe” images plus dynamicInputs, selectedModel, and parameters can still exceed Vercel’s body cap once combined into the JSON request.

Proposed fix
   const requestPayload = {
     images: compressedImages,
     prompt: text,
     selectedModel: nodeData.selectedModel,
     parameters: nodeData.parameters,
     dynamicInputs: compressedDynamicInputs,
     mediaType: "video" as const,
   };
+
+  const requestBody = JSON.stringify(requestPayload);
+  const requestSize = new TextEncoder().encode(requestBody).length;
+  if (requestSize > 4 * 1024 * 1024) {
+    updateNodeData(node.id, {
+      status: "error",
+      error: "Request payload is still too large after compression",
+    });
+    throw new Error("Request payload is still too large after compression");
+  }

   const response = await fetch("/api/generate", {
     method: "POST",
     headers,
-    body: JSON.stringify(requestPayload),
+    body: requestBody,
     ...(signal ? { signal } : {}),
   });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/store/execution/generateVideoExecutor.ts` around lines 111 - 124, Before
calling fetch("/api/generate"), compute the final serialized body size (e.g.
const body = JSON.stringify(requestPayload); const size = new
TextEncoder().encode(body).length) and compare it against a defined
MAX_BODY_BYTES (Vercel limit); if the size exceeds the limit, avoid sending the
oversized payload from generateVideoExecutor by either uploading large binaries
(compressedImages and compressedDynamicInputs) to external storage and replacing
them in requestPayload with URLs, or reject/return a clear error to the caller;
update the code paths around requestPayload, compressedImages,
compressedDynamicInputs, and the fetch call to perform this size check and
fallback behavior before POSTing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/store/execution/generateVideoExecutor.ts`:
- Around line 91-109: Move the image-compression awaits into the existing
try/error path or wrap them in their own try/catch so failures set the node to
"error" rather than leaving it stuck in "loading"; specifically, perform the
calls to compressImagesForUpload for images and for entries in dynamicInputs
(producing compressedImages and compressedDynamicInputs) inside the same try
block that handles the node lifecycle, or catch errors from
compressImagesForUpload, call the node error handler (set node status to "error"
or invoke the existing error path), and rethrow or return appropriately so the
outer logic can clean up; update code around compressImagesForUpload,
compressedImages, compressedDynamicInputs, images, and dynamicInputs to follow
this pattern.

In `@src/utils/imageCompression.ts`:
- Around line 17-19: The code currently computes estimatedSize as the decoded
binary size; instead measure the encoded HTTP payload size by using the byte
length of the base64 data URL (including the "data:...;base64," prefix and any
JSON wrapping/margin) instead of decoding to binary – replace the estimatedSize
calculation with Buffer.byteLength(base64DataUrl, 'utf8') (and/or add a small
safety margin for JSON overhead) wherever estimatedSize is computed (e.g., the
initial check using base64DataUrl/estimatedSize and the loop at lines
referencing the same logic) so the pre-check and compression loop compare the
actual encoded request bytes against MAX_PAYLOAD_SIZE.
- Around line 43-51: The current loop always re-encodes into JPEG (using
canvas.toDataURL) which silently drops alpha; detect transparent inputs before
converting by inspecting the drawn canvas pixels (use ctx.drawImage then
ctx.getImageData and check any alpha < 255) and branch: if no alpha proceed with
the existing JPEG quality loop (variables: ctx.drawImage, canvas.toDataURL,
quality, MAX_PAYLOAD_SIZE), but if alpha is present either (a) preserve
transparency by encoding as PNG (use canvas.toDataURL("image/png") and apply
resizing or PNG compression if still over MAX_PAYLOAD_SIZE) or (b) explicitly
flatten onto a configurable background color (draw a filled rect with
white/selected color behind the image before calling
canvas.toDataURL("image/jpeg", quality)) so the alpha handling is explicit and
not silently lost.

---

Outside diff comments:
In `@src/store/execution/generateVideoExecutor.ts`:
- Around line 111-124: Before calling fetch("/api/generate"), compute the final
serialized body size (e.g. const body = JSON.stringify(requestPayload); const
size = new TextEncoder().encode(body).length) and compare it against a defined
MAX_BODY_BYTES (Vercel limit); if the size exceeds the limit, avoid sending the
oversized payload from generateVideoExecutor by either uploading large binaries
(compressedImages and compressedDynamicInputs) to external storage and replacing
them in requestPayload with URLs, or reject/return a clear error to the caller;
update the code paths around requestPayload, compressedImages,
compressedDynamicInputs, and the fetch call to perform this size check and
fallback behavior before POSTing.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 55760bcd-534c-48dc-8679-eee4718d90bc

📥 Commits

Reviewing files that changed from the base of the PR and between 3b8ddc6 and 889f2f2.

📒 Files selected for processing (2)
  • src/store/execution/generateVideoExecutor.ts
  • src/utils/imageCompression.ts

Comment on lines +91 to +109
// Compress images to fit within Vercel's 4.5MB payload limit
const compressedImages = images.length > 0 ? await compressImagesForUpload(images) : [];

// Also compress any images in dynamicInputs
const compressedDynamicInputs: Record<string, string | string[]> = {};
for (const [key, value] of Object.entries(dynamicInputs)) {
  if (typeof value === "string" && value.startsWith("data:image")) {
    compressedDynamicInputs[key] = await compressImagesForUpload([value]).then(arr => arr[0]);
  } else if (Array.isArray(value)) {
    const hasImages = value.some(v => typeof v === "string" && v.startsWith("data:image"));
    if (hasImages) {
      compressedDynamicInputs[key] = await compressImagesForUpload(value);
    } else {
      compressedDynamicInputs[key] = value;
    }
  } else {
    compressedDynamicInputs[key] = value;
  }
}

⚠️ Potential issue | 🟠 Major

Wrap compression in the existing error path.

These awaits run after the node is set to "loading" but before Line 120 enters the try. If compression rejects, the node never gets reset to "error" and can stay stuck in loading.

Proposed fix
-  // Compress images to fit within Vercel's 4.5MB payload limit
-  const compressedImages = images.length > 0 ? await compressImagesForUpload(images) : [];
-
-  // Also compress any images in dynamicInputs
-  const compressedDynamicInputs: Record<string, string | string[]> = {};
-  for (const [key, value] of Object.entries(dynamicInputs)) {
-    if (typeof value === "string" && value.startsWith("data:image")) {
-      compressedDynamicInputs[key] = await compressImagesForUpload([value]).then(arr => arr[0]);
-    } else if (Array.isArray(value)) {
-      const hasImages = value.some(v => typeof v === "string" && v.startsWith("data:image"));
-      if (hasImages) {
-        compressedDynamicInputs[key] = await compressImagesForUpload(value);
-      } else {
-        compressedDynamicInputs[key] = value;
-      }
-    } else {
-      compressedDynamicInputs[key] = value;
-    }
-  }
-
-  const requestPayload = {
-    images: compressedImages,
-    prompt: text,
-    selectedModel: nodeData.selectedModel,
-    parameters: nodeData.parameters,
-    dynamicInputs: compressedDynamicInputs,
-    mediaType: "video" as const,
-  };
-
   try {
+    // Compress images to fit within Vercel's 4.5MB payload limit
+    const compressedImages = images.length > 0 ? await compressImagesForUpload(images) : [];
+
+    // Also compress any images in dynamicInputs
+    const compressedDynamicInputs: Record<string, string | string[]> = {};
+    for (const [key, value] of Object.entries(dynamicInputs)) {
+      if (typeof value === "string" && value.startsWith("data:image")) {
+        compressedDynamicInputs[key] = await compressImagesForUpload([value]).then(arr => arr[0]);
+      } else if (Array.isArray(value)) {
+        const hasImages = value.some(v => typeof v === "string" && v.startsWith("data:image"));
+        compressedDynamicInputs[key] = hasImages ? await compressImagesForUpload(value) : value;
+      } else {
+        compressedDynamicInputs[key] = value;
+      }
+    }
+
+    const requestPayload = {
+      images: compressedImages,
+      prompt: text,
+      selectedModel: nodeData.selectedModel,
+      parameters: nodeData.parameters,
+      dynamicInputs: compressedDynamicInputs,
+      mediaType: "video" as const,
+    };
+
     const response = await fetch("/api/generate", {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/store/execution/generateVideoExecutor.ts` around lines 91 - 109, Move the
image-compression awaits into the existing try/error path or wrap them in their
own try/catch so failures set the node to "error" rather than leaving it stuck
in "loading"; specifically, perform the calls to compressImagesForUpload for
images and for entries in dynamicInputs (producing compressedImages and
compressedDynamicInputs) inside the same try block that handles the node
lifecycle, or catch errors from compressImagesForUpload, call the node error
handler (set node status to "error" or invoke the existing error path), and
rethrow or return appropriately so the outer logic can clean up; update code
around compressImagesForUpload, compressedImages, compressedDynamicInputs,
images, and dynamicInputs to follow this pattern.

Comment on lines +17 to +19
// Check if already small enough
const estimatedSize = Math.ceil((base64DataUrl.length - base64DataUrl.indexOf(",") - 1) * 3 / 4);
if (estimatedSize < MAX_PAYLOAD_SIZE) return base64DataUrl;

⚠️ Potential issue | 🟠 Major

Measure the encoded request bytes, not the decoded image bytes.

estimatedSize is the decoded binary size, but Vercel limits the encoded HTTP body. A ~4 MB image becomes ~5.3 MB once base64-encoded, and this loop still resolves anything below that threshold. Single images near this cutoff can still breach the body limit before JSON overhead is added.
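The overhead the comment describes follows directly from base64 arithmetic: every 3 decoded bytes become 4 encoded ASCII characters. A standalone illustration (not project code):

```typescript
// Base64 encodes each group of 3 decoded bytes as 4 ASCII characters,
// padding the final group, so encoded length = 4 * ceil(decoded / 3).
export function base64EncodedLength(decodedBytes: number): number {
  return 4 * Math.ceil(decodedBytes / 3);
}

const fourMiB = 4 * 1024 * 1024;
// A 4 MiB decoded image becomes roughly 5.33 MiB of base64 text, which by
// itself exceeds Vercel's ~4.5 MB request body limit before JSON overhead.
console.log((base64EncodedLength(fourMiB) / (1024 * 1024)).toFixed(2)); // ≈ 5.33
```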

Proposed fix
-const MAX_PAYLOAD_SIZE = 4 * 1024 * 1024; // 4MB to leave room for other request data
+const MAX_PAYLOAD_SIZE = 4 * 1024 * 1024; // encoded bytes, leaving room for other request data

 export async function compressImageForUpload(base64DataUrl: string): Promise<string> {
   // Not a data URL, return as-is
   if (!base64DataUrl.startsWith("data:")) return base64DataUrl;

   // Check if already small enough
-  const estimatedSize = Math.ceil((base64DataUrl.length - base64DataUrl.indexOf(",") - 1) * 3 / 4);
-  if (estimatedSize < MAX_PAYLOAD_SIZE) return base64DataUrl;
+  const encodedSize = new TextEncoder().encode(base64DataUrl).length;
+  if (encodedSize < MAX_PAYLOAD_SIZE) return base64DataUrl;
@@
-        while (result.length > MAX_PAYLOAD_SIZE * 1.33 && quality > 0.1) { // 1.33 accounts for base64 overhead
+        while (new TextEncoder().encode(result).length > MAX_PAYLOAD_SIZE && quality > 0.1) {
           quality -= 0.1;
           result = canvas.toDataURL("image/jpeg", quality);
         }
+
+        if (new TextEncoder().encode(result).length > MAX_PAYLOAD_SIZE) {
+          reject(new Error("Image is still too large after compression"));
+          return;
+        }

-        console.log(`[ImageCompression] Compressed from ${(estimatedSize / 1024 / 1024).toFixed(2)}MB to ${(result.length / 1024 / 1024).toFixed(2)}MB (quality: ${quality.toFixed(1)})`);
+        console.log(`[ImageCompression] Compressed from ${(encodedSize / 1024 / 1024).toFixed(2)}MB to ${(new TextEncoder().encode(result).length / 1024 / 1024).toFixed(2)}MB (quality: ${quality.toFixed(1)})`);

Also applies to: 49-55

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/imageCompression.ts` around lines 17 - 19, The code currently
computes estimatedSize as the decoded binary size; instead measure the encoded
HTTP payload size by using the byte length of the base64 data URL (including the
"data:...;base64," prefix and any JSON wrapping/margin) instead of decoding to
binary – replace the estimatedSize calculation with
Buffer.byteLength(base64DataUrl, 'utf8') (and/or add a small safety margin for
JSON overhead) wherever estimatedSize is computed (e.g., the initial check using
base64DataUrl/estimatedSize and the loop at lines referencing the same logic) so
the pre-check and compression loop compare the actual encoded request bytes
against MAX_PAYLOAD_SIZE.
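
The 4/3 overhead the comment describes follows directly from how base64 packs bytes; a small illustrative helper (not part of the PR) makes the arithmetic concrete:

```typescript
// Base64 encodes every 3 input bytes as 4 output characters, padding the
// final group, so the encoded length is always 4 * ceil(n / 3).
function base64EncodedLength(decodedBytes: number): number {
  return 4 * Math.ceil(decodedBytes / 3);
}

// A 4 MB decoded image therefore needs ~5.33 MB on the wire:
const fourMb = 4 * 1024 * 1024;
console.log(base64EncodedLength(fourMb)); // 5592408 characters (~5.33 MB)
```

Since a data URL is pure ASCII, its `.length` already equals its encoded byte count, which is why comparing the encoded string directly against MAX_PAYLOAD_SIZE (as the proposed fix does) avoids the 1.33 fudge factor entirely.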

Comment on lines +43 to +51
ctx.drawImage(img, 0, 0, width, height);

// Try progressively lower quality until under limit
let quality = 0.9;
let result = canvas.toDataURL("image/jpeg", quality);

while (result.length > MAX_PAYLOAD_SIZE * 1.33 && quality > 0.1) { // 1.33 accounts for base64 overhead
quality -= 0.1;
result = canvas.toDataURL("image/jpeg", quality);

⚠️ Potential issue | 🟠 Major

Handle transparent inputs explicitly before JPEG conversion.

Oversized PNG/WebP inputs are always re-encoded as JPEG here. That silently drops alpha and changes the prompt image content, which can alter generation results.

Proposed fix
-        ctx.drawImage(img, 0, 0, width, height);
+        const sourceMime = base64DataUrl.slice(5, base64DataUrl.indexOf(";"));
+        if (sourceMime === "image/png" || sourceMime === "image/webp") {
+          ctx.fillStyle = "#fff";
+          ctx.fillRect(0, 0, width, height);
+        }
+        ctx.drawImage(img, 0, 0, width, height);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-        ctx.drawImage(img, 0, 0, width, height);
+        const sourceMime = base64DataUrl.slice(5, base64DataUrl.indexOf(";"));
+        if (sourceMime === "image/png" || sourceMime === "image/webp") {
+          ctx.fillStyle = "#fff";
+          ctx.fillRect(0, 0, width, height);
+        }
+        ctx.drawImage(img, 0, 0, width, height);

         // Try progressively lower quality until under limit
         let quality = 0.9;
         let result = canvas.toDataURL("image/jpeg", quality);

         while (result.length > MAX_PAYLOAD_SIZE * 1.33 && quality > 0.1) { // 1.33 accounts for base64 overhead
           quality -= 0.1;
           result = canvas.toDataURL("image/jpeg", quality);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/imageCompression.ts` around lines 43 - 51, The current loop always
re-encodes into JPEG (using canvas.toDataURL) which silently drops alpha; detect
transparent inputs before converting by inspecting the drawn canvas pixels (use
ctx.drawImage then ctx.getImageData and check any alpha < 255) and branch: if no
alpha proceed with the existing JPEG quality loop (variables: ctx.drawImage,
canvas.toDataURL, quality, MAX_PAYLOAD_SIZE), but if alpha is present either (a)
preserve transparency by encoding as PNG (use canvas.toDataURL("image/png") and
apply resizing or PNG compression if still over MAX_PAYLOAD_SIZE) or (b)
explicitly flatten onto a configurable background color (draw a filled rect with
white/selected color behind the image before calling
canvas.toDataURL("image/jpeg", quality)) so the alpha handling is explicit and
not silently lost.
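
The prompt above suggests inspecting the drawn pixels for alpha before choosing JPEG. The core check can be sketched without a canvas, since `ctx.getImageData(...).data` is just an RGBA byte buffer (the helper name here is hypothetical, not from the PR):

```typescript
// Sketch (assumed helper): scan an RGBA buffer for any pixel whose alpha
// channel is below 255. This is the same data layout that
// ctx.getImageData(0, 0, width, height).data returns in a browser.
function hasTransparency(rgba: Uint8ClampedArray): boolean {
  // Alpha is every 4th byte (offset 3) in RGBA order.
  for (let i = 3; i < rgba.length; i += 4) {
    if (rgba[i] < 255) return true;
  }
  return false;
}

// Opaque red pixel vs. half-transparent red pixel:
const opaque = new Uint8ClampedArray([255, 0, 0, 255]);
const translucent = new Uint8ClampedArray([255, 0, 0, 128]);
console.log(hasTransparency(opaque), hasTransparency(translucent)); // false true
```

With that check in place, option (a) keeps PNG for transparent inputs and option (b) flattens onto an explicit background, so neither path drops alpha silently.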

- Detect web deployments (non-localhost) and skip directory requirement
- Force base64 image embedding for web mode
- Show informative message explaining web mode limitations

Co-Authored-By: Claude Opus 4.5 <[email protected]>
@coderabbitai coderabbitai bot left a comment
🧹 Nitpick comments (2)
src/components/ProjectSetupModal.tsx (2)

510-510: Simplify the toggle expression.

The ternary can be replaced with a simple negation for cleaner code.

✨ Proposed simplification
-                     onClick={() => setExternalStorage(externalStorage ? false : true)}
+                     onClick={() => setExternalStorage(!externalStorage)}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/ProjectSetupModal.tsx` at line 510, The onClick handler in
ProjectSetupModal.tsx uses a verbose ternary to toggle state; replace the
expression onClick={() => setExternalStorage(externalStorage ? false : true)}
with a simple negation and, preferably, the functional updater to avoid stale
closures: call setExternalStorage(prev => !prev) so the externalStorage state is
toggled concisely and safely.

147-151: Consider additional local development scenarios.

The detection logic handles localhost and 127.0.0.1 but may incorrectly classify other local development scenarios as web deployments:

  • IPv6 localhost (::1 or [::1])
  • Local network IPs (192.168.x.x, 10.x.x.x)
  • Custom local domains (*.local, *.localhost)

This could cause unexpected behavior when accessing the app from a local network IP during development.

🔧 Proposed fix to handle additional local scenarios
  // Detect web deployment (not localhost)
- const isWebDeployment = typeof window !== "undefined" &&
-   !window.location.hostname.includes("localhost") &&
-   !window.location.hostname.includes("127.0.0.1");
+ const isWebDeployment = typeof window !== "undefined" && (() => {
+   const hostname = window.location.hostname;
+   // Local development checks
+   if (hostname.includes("localhost") || hostname === "127.0.0.1" || hostname === "::1") {
+     return false;
+   }
+   // Local network IPs (192.168.x.x, 10.x.x.x, 172.16-31.x.x)
+   if (/^(192\.168\.|10\.|172\.(1[6-9]|2\d|3[01])\.)/.test(hostname)) {
+     return false;
+   }
+   return true;
+ })();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/ProjectSetupModal.tsx` around lines 147 - 151, The current
isWebDeployment detection in ProjectSetupModal.tsx incorrectly treats some local
development hosts as production; update the isWebDeployment logic (the const
isWebDeployment using window.location.hostname) to treat IPv6 localhost (::1),
IPv4 local ranges (10., 192.168., 172.16–172.31), 0.0.0.0, and local dev domains
(hosts ending with .local or .localhost or *.localhost) as local—i.e., return
false for these cases; implement this via a single normalized hostname check
using a regex/CIDR helper or utility function to match those patterns and only
mark as web deployment when none of them match. Ensure the check still guards
for typeof window !== "undefined" and references window.location.hostname
exactly as before.
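
Pulling the review's suggested checks into a single helper might look like this sketch (hypothetical function, covering the localhost, IPv6, private-IPv4, and .local/.localhost cases the comment lists):

```typescript
// Sketch (assumed helper, not the PR's code): classify a hostname as
// local development so web-deployment mode only kicks in for real deploys.
function isLocalHostname(hostname: string): boolean {
  const h = hostname.toLowerCase();
  // Localhost names and local dev domains
  if (h === "localhost" || h.endsWith(".localhost") || h.endsWith(".local")) return true;
  // Loopback / unspecified addresses, including IPv6 localhost
  if (h === "127.0.0.1" || h === "0.0.0.0" || h === "::1" || h === "[::1]") return true;
  // Private IPv4 ranges: 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12
  return /^(10\.|192\.168\.|172\.(1[6-9]|2\d|3[01])\.)/.test(h);
}

console.log(isLocalHostname("localhost"));        // true
console.log(isLocalHostname("192.168.1.20"));     // true
console.log(isLocalHostname("172.32.0.1"));       // false (outside 172.16-31)
console.log(isLocalHostname("myapp.vercel.app")); // false
```

The component would then compute `isWebDeployment` as `typeof window !== "undefined" && !isLocalHostname(window.location.hostname)`, keeping the original guard intact.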

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b4e89553-5135-49ec-9daa-39babb447974

📥 Commits

Reviewing files that changed from the base of the PR and between 889f2f2 and 55908d7.

📒 Files selected for processing (1)
  • src/components/ProjectSetupModal.tsx

- Reduce max image size to 1.5MB per image
- Lower max dimension to 1280px
- Add fallback dimension reduction if quality isn't enough
- Add localStorage workflow persistence for web mode

Co-Authored-By: Claude Opus 4.5 <[email protected]>
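
The localStorage workflow persistence this commit message mentions isn't shown in the excerpt; a minimal sketch under assumed names (`WORKFLOW_KEY`, `saveWorkflow`, and `loadWorkflow` are illustrative, with the storage object injected so the helpers work outside a browser):

```typescript
// Sketch (assumed key and shape, not taken from the diff): persist the
// in-progress workflow so web-mode sessions survive a page reload.
// `storage` is window.localStorage in the browser; injected for testability.
type StorageLike = {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
};

const WORKFLOW_KEY = "workflow-state"; // hypothetical key

function saveWorkflow(storage: StorageLike, state: unknown): void {
  try {
    storage.setItem(WORKFLOW_KEY, JSON.stringify(state));
  } catch {
    // Quota exceeded or storage disabled: fall back to in-memory only.
  }
}

function loadWorkflow<T>(storage: StorageLike, fallback: T): T {
  try {
    const raw = storage.getItem(WORKFLOW_KEY);
    return raw !== null ? (JSON.parse(raw) as T) : fallback;
  } catch {
    return fallback;
  }
}
```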
@shrimbly
Owner

These look like personal changes. Fork the repo or change upstream.

@shrimbly shrimbly closed this Mar 22, 2026