Closed
Changes from 6 commits
3 changes: 3 additions & 0 deletions CLAUDE.md
@@ -231,3 +231,6 @@ All routes in `src/app/api/`:
- Each commit should be atomic and self-contained: one task = one commit.
- The .planning directory is untracked, do not attempt to commit any changes to the files in this directory.

## Web Crawling
- Always use `cf crawl` when needing to crawl or fetch website content.

127 changes: 107 additions & 20 deletions package-lock.json

Some generated files are not rendered by default.

4 changes: 2 additions & 2 deletions package.json
@@ -13,13 +13,13 @@
   },
   "dependencies": {
     "@ai-sdk/google": "^3.0.13",
-    "@ai-sdk/react": "^3.0.51",
+    "@ai-sdk/react": "^3.0.118",
     "@google/genai": "^1.30.0",
     "@react-three/drei": "^10.7.7",
     "@react-three/fiber": "^9.5.0",
     "@tailwindcss/postcss": "^4.1.17",
     "@xyflow/react": "^12.9.3",
-    "ai": "^6.0.49",
+    "ai": "^6.0.116",
Comment on lines +16 to +22
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== npm registry metadata =="
npm view ai@6.0.116 version peerDependencies dependencies --json
npm view @ai-sdk/react@3.0.118 version peerDependencies dependencies --json
npm view @ai-sdk/google@3.0.13 version peerDependencies dependencies --json

echo
echo "== lockfile entries =="
fd -HI '^(package-lock\.json|pnpm-lock\.yaml|yarn\.lock)$' . -x sh -c '
  echo ">>> {}"
  rg -n -C2 "\"ai\"|@ai-sdk/react|@ai-sdk/google" "{}" || true
'

Repository: shrimbly/node-banana

Length of output: 2523


Keep the timeout hotfix isolated; these SDK bumps appear unrelated.

This PR is scoped as a /api/generate timeout change, but these two dependency bumps widen the runtime surface area without clear justification. Since the bumps are not required for the timeout fix, split them into a separate PR. The lockfile does resolve cleanly, though it brings multiple versions of internal provider packages (@ai-sdk/[email protected] from Google, 3.0.8 from core AI SDK). This managed duplication is not a blocker, but the scope creep itself is worth avoiding in a hotfix.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@package.json` around lines 16 - 22: revert the unrelated dependency version
changes in package.json (the bumps to "@ai-sdk/react" and "ai" shown in the
diff) so this PR only contains the /api/generate timeout change; restore those
entries to the versions from main (or remove the two updated entries) and
regenerate the lockfile (npm/yarn install) to match, then create a separate
branch/PR that contains the dependency bumps and related lockfile changes for
review; ensure tests/build pass after restoring the original package.json before
merging the hotfix.

     "autoprefixer": "^10.4.22",
     "jszip": "^3.10.1",
     "konva": "^10.0.12",
2 changes: 1 addition & 1 deletion src/app/api/generate/route.ts
@@ -22,7 +22,7 @@ import { generateWithWaveSpeed } from "./providers/wavespeed";
 // Re-export for backward compatibility (test file imports from route)
 export const clearFalInputMappingCache = _clearFalInputMappingCache;
 
-export const maxDuration = 300; // 5 minute timeout (Vercel hobby plan limit)
+export const maxDuration = 800; // ~13 minute timeout (Vercel Pro limit)
 export const dynamic = 'force-dynamic'; // Ensure this route is always dynamic


25 changes: 23 additions & 2 deletions src/store/execution/generateVideoExecutor.ts
@@ -8,6 +8,7 @@
 import type { GenerateVideoNodeData } from "@/types";
 import { buildGenerateHeaders } from "@/store/utils/buildApiHeaders";
 import type { NodeExecutionContext } from "./types";
+import { compressImagesForUpload } from "@/utils/imageCompression";
 
 export interface GenerateVideoOptions {
   /** When true, falls back to stored inputImages/inputPrompt if no connections provide them. */
@@ -87,12 +88,32 @@ export async function executeGenerateVideo(
   const provider = nodeData.selectedModel.provider;
   const headers = buildGenerateHeaders(provider, providerSettings);
 
+  // Compress images to fit within Vercel's 4.5MB payload limit
+  const compressedImages = images.length > 0 ? await compressImagesForUpload(images) : [];
+
+  // Also compress any images in dynamicInputs
+  const compressedDynamicInputs: Record<string, string | string[]> = {};
+  for (const [key, value] of Object.entries(dynamicInputs)) {
+    if (typeof value === "string" && value.startsWith("data:image")) {
+      compressedDynamicInputs[key] = await compressImagesForUpload([value]).then(arr => arr[0]);
+    } else if (Array.isArray(value)) {
+      const hasImages = value.some(v => typeof v === "string" && v.startsWith("data:image"));
+      if (hasImages) {
+        compressedDynamicInputs[key] = await compressImagesForUpload(value);
+      } else {
+        compressedDynamicInputs[key] = value;
+      }
+    } else {
+      compressedDynamicInputs[key] = value;
+    }
+  }
Comment on lines +91 to +109
⚠️ Potential issue | 🟠 Major

Wrap compression in the existing error path.

These awaits run after the node is set to "loading" but before Line 120 enters the try. If compression rejects, the node never gets reset to "error" and can stay stuck in loading.

Proposed fix
-  // Compress images to fit within Vercel's 4.5MB payload limit
-  const compressedImages = images.length > 0 ? await compressImagesForUpload(images) : [];
-
-  // Also compress any images in dynamicInputs
-  const compressedDynamicInputs: Record<string, string | string[]> = {};
-  for (const [key, value] of Object.entries(dynamicInputs)) {
-    if (typeof value === "string" && value.startsWith("data:image")) {
-      compressedDynamicInputs[key] = await compressImagesForUpload([value]).then(arr => arr[0]);
-    } else if (Array.isArray(value)) {
-      const hasImages = value.some(v => typeof v === "string" && v.startsWith("data:image"));
-      if (hasImages) {
-        compressedDynamicInputs[key] = await compressImagesForUpload(value);
-      } else {
-        compressedDynamicInputs[key] = value;
-      }
-    } else {
-      compressedDynamicInputs[key] = value;
-    }
-  }
-
-  const requestPayload = {
-    images: compressedImages,
-    prompt: text,
-    selectedModel: nodeData.selectedModel,
-    parameters: nodeData.parameters,
-    dynamicInputs: compressedDynamicInputs,
-    mediaType: "video" as const,
-  };
-
   try {
+    // Compress images to fit within Vercel's 4.5MB payload limit
+    const compressedImages = images.length > 0 ? await compressImagesForUpload(images) : [];
+
+    // Also compress any images in dynamicInputs
+    const compressedDynamicInputs: Record<string, string | string[]> = {};
+    for (const [key, value] of Object.entries(dynamicInputs)) {
+      if (typeof value === "string" && value.startsWith("data:image")) {
+        compressedDynamicInputs[key] = await compressImagesForUpload([value]).then(arr => arr[0]);
+      } else if (Array.isArray(value)) {
+        const hasImages = value.some(v => typeof v === "string" && v.startsWith("data:image"));
+        compressedDynamicInputs[key] = hasImages ? await compressImagesForUpload(value) : value;
+      } else {
+        compressedDynamicInputs[key] = value;
+      }
+    }
+
+    const requestPayload = {
+      images: compressedImages,
+      prompt: text,
+      selectedModel: nodeData.selectedModel,
+      parameters: nodeData.parameters,
+      dynamicInputs: compressedDynamicInputs,
+      mediaType: "video" as const,
+    };
+
     const response = await fetch("/api/generate", {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/store/execution/generateVideoExecutor.ts` around lines 91 - 109: move the
image-compression awaits into the existing try/error path or wrap them in their
own try/catch so failures set the node to "error" rather than leaving it stuck
in "loading"; specifically, perform the calls to compressImagesForUpload for
images and for entries in dynamicInputs (producing compressedImages and
compressedDynamicInputs) inside the same try block that handles the node
lifecycle, or catch errors from compressImagesForUpload, call the node error
handler (set node status to "error" or invoke the existing error path), and
rethrow or return appropriately so the outer logic can clean up; update code
around compressImagesForUpload, compressedImages, compressedDynamicInputs,
images, and dynamicInputs to follow this pattern.


   const requestPayload = {
-    images,
+    images: compressedImages,
     prompt: text,
     selectedModel: nodeData.selectedModel,
     parameters: nodeData.parameters,
-    dynamicInputs,
+    dynamicInputs: compressedDynamicInputs,
     mediaType: "video" as const,
   };

70 changes: 70 additions & 0 deletions src/utils/imageCompression.ts
@@ -0,0 +1,70 @@
/**
* Image compression utility to ensure images fit within API payload limits.
* Vercel serverless functions have a 4.5MB body size limit.
*/

const MAX_PAYLOAD_SIZE = 4 * 1024 * 1024; // 4MB to leave room for other request data
const MAX_DIMENSION = 2048; // Max width/height

/**
* Compress a base64 image to fit within payload limits.
* Returns the original if already small enough, otherwise compresses.
*/
export async function compressImageForUpload(base64DataUrl: string): Promise<string> {
// Not a data URL, return as-is
if (!base64DataUrl.startsWith("data:")) return base64DataUrl;

// Check if already small enough
const estimatedSize = Math.ceil((base64DataUrl.length - base64DataUrl.indexOf(",") - 1) * 3 / 4);
if (estimatedSize < MAX_PAYLOAD_SIZE) return base64DataUrl;
⚠️ Potential issue | 🟠 Major

Measure the encoded request bytes, not the decoded image bytes.

estimatedSize is the decoded binary size, but Vercel limits the encoded HTTP body. A ~4 MB image becomes ~5.3 MB once base64-encoded, and this loop still resolves anything below that threshold. Single images near this cutoff can still breach the body limit before JSON overhead is added.
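The 4/3 growth is easy to verify with plain arithmetic. This is a standalone sketch; `decodedSizeOf` and `encodedSizeOf` are illustrative names, not the PR's helpers:

```typescript
// Base64 packs 3 bytes of binary into 4 ASCII characters, so the
// on-the-wire data URL is ~4/3 the decoded image size.
function decodedSizeOf(dataUrl: string): number {
  const b64 = dataUrl.slice(dataUrl.indexOf(",") + 1);
  return Math.ceil((b64.length * 3) / 4); // bytes after decoding
}

function encodedSizeOf(dataUrl: string): number {
  return dataUrl.length; // bytes actually sent (data URLs are ASCII)
}

// A ~4 MiB image passes a decoded-size check yet ships ~5.3 MiB of base64.
const fourMiB = 4 * 1024 * 1024;
const base64Chars = Math.ceil(fourMiB / 3) * 4;
const dataUrl = "data:image/jpeg;base64," + "A".repeat(base64Chars);
console.log((decodedSizeOf(dataUrl) / 1024 / 1024).toFixed(2)); // "4.00"
console.log((encodedSizeOf(dataUrl) / 1024 / 1024).toFixed(2)); // "5.33"
```

Comparing `dataUrl.length` against the limit, as the proposed fix does, measures the quantity Vercel actually enforces.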

Proposed fix
-const MAX_PAYLOAD_SIZE = 4 * 1024 * 1024; // 4MB to leave room for other request data
+const MAX_PAYLOAD_SIZE = 4 * 1024 * 1024; // encoded bytes, leaving room for other request data

 export async function compressImageForUpload(base64DataUrl: string): Promise<string> {
   // Not a data URL, return as-is
   if (!base64DataUrl.startsWith("data:")) return base64DataUrl;

   // Check if already small enough
-  const estimatedSize = Math.ceil((base64DataUrl.length - base64DataUrl.indexOf(",") - 1) * 3 / 4);
-  if (estimatedSize < MAX_PAYLOAD_SIZE) return base64DataUrl;
+  const encodedSize = new TextEncoder().encode(base64DataUrl).length;
+  if (encodedSize < MAX_PAYLOAD_SIZE) return base64DataUrl;
@@
-        while (result.length > MAX_PAYLOAD_SIZE * 1.33 && quality > 0.1) { // 1.33 accounts for base64 overhead
+        while (new TextEncoder().encode(result).length > MAX_PAYLOAD_SIZE && quality > 0.1) {
           quality -= 0.1;
           result = canvas.toDataURL("image/jpeg", quality);
         }
+
+        if (new TextEncoder().encode(result).length > MAX_PAYLOAD_SIZE) {
+          reject(new Error("Image is still too large after compression"));
+          return;
+        }

-        console.log(`[ImageCompression] Compressed from ${(estimatedSize / 1024 / 1024).toFixed(2)}MB to ${(result.length / 1024 / 1024).toFixed(2)}MB (quality: ${quality.toFixed(1)})`);
+        console.log(`[ImageCompression] Compressed from ${(encodedSize / 1024 / 1024).toFixed(2)}MB to ${(new TextEncoder().encode(result).length / 1024 / 1024).toFixed(2)}MB (quality: ${quality.toFixed(1)})`);

Also applies to: 49-55

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/imageCompression.ts` around lines 17 - 19: the code currently
computes estimatedSize as the decoded binary size; instead measure the encoded
HTTP payload size by using the byte length of the base64 data URL (including the
"data:...;base64," prefix and any JSON wrapping/margin) instead of decoding to
binary – replace the estimatedSize calculation with
Buffer.byteLength(base64DataUrl, 'utf8') (and/or add a small safety margin for
JSON overhead) wherever estimatedSize is computed (e.g., the initial check using
base64DataUrl/estimatedSize and the loop at lines referencing the same logic) so
the pre-check and compression loop compare the actual encoded request bytes
against MAX_PAYLOAD_SIZE.


  // Need to compress - use canvas
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => {
      try {
        // Calculate new dimensions
        let { width, height } = img;
        if (width > MAX_DIMENSION || height > MAX_DIMENSION) {
          const scale = Math.min(MAX_DIMENSION / width, MAX_DIMENSION / height);
          width = Math.round(width * scale);
          height = Math.round(height * scale);
        }

        // Draw to canvas
        const canvas = document.createElement("canvas");
        canvas.width = width;
        canvas.height = height;
        const ctx = canvas.getContext("2d");
        if (!ctx) {
          reject(new Error("Failed to get canvas context"));
          return;
        }
        ctx.drawImage(img, 0, 0, width, height);

        // Try progressively lower quality until under limit
        let quality = 0.9;
        let result = canvas.toDataURL("image/jpeg", quality);

        while (result.length > MAX_PAYLOAD_SIZE * 1.33 && quality > 0.1) { // 1.33 accounts for base64 overhead
          quality -= 0.1;
          result = canvas.toDataURL("image/jpeg", quality);
Comment on lines +42 to +51
⚠️ Potential issue | 🟠 Major

Handle transparent inputs explicitly before JPEG conversion.

Oversized PNG/WebP inputs are always re-encoded as JPEG here. That silently drops alpha and changes the prompt image content, which can alter generation results.
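The per-pixel math behind the flattening option is plain alpha-over-white compositing. A hedged sketch; `flattenOverWhite` is an illustrative helper, not part of the PR:

```typescript
// Composite one RGBA pixel over an opaque white background. Filling the
// canvas with #fff before drawImage applies this blend at every pixel.
function flattenOverWhite(r: number, g: number, b: number, a: number): [number, number, number] {
  const alpha = a / 255;
  const over = (c: number) => Math.round(c * alpha + 255 * (1 - alpha));
  return [over(r), over(g), over(b)];
}

console.log(flattenOverWhite(0, 0, 0, 0));   // fully transparent -> [255, 255, 255]
console.log(flattenOverWhite(0, 0, 0, 255)); // fully opaque black -> [0, 0, 0]
```

Flattening explicitly makes the background color a deliberate choice rather than whatever the JPEG encoder substitutes for the dropped alpha channel.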

Proposed fix
-        ctx.drawImage(img, 0, 0, width, height);
+        const sourceMime = base64DataUrl.slice(5, base64DataUrl.indexOf(";"));
+        if (sourceMime === "image/png" || sourceMime === "image/webp") {
+          ctx.fillStyle = "#fff";
+          ctx.fillRect(0, 0, width, height);
+        }
+        ctx.drawImage(img, 0, 0, width, height);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/imageCompression.ts` around lines 43 - 51: the current loop always
re-encodes into JPEG (using canvas.toDataURL) which silently drops alpha; detect
transparent inputs before converting by inspecting the drawn canvas pixels (use
ctx.drawImage then ctx.getImageData and check any alpha < 255) and branch: if no
alpha proceed with the existing JPEG quality loop (variables: ctx.drawImage,
canvas.toDataURL, quality, MAX_PAYLOAD_SIZE), but if alpha is present either (a)
preserve transparency by encoding as PNG (use canvas.toDataURL("image/png") and
apply resizing or PNG compression if still over MAX_PAYLOAD_SIZE) or (b)
explicitly flatten onto a configurable background color (draw a filled rect with
white/selected color behind the image before calling
canvas.toDataURL("image/jpeg", quality)) so the alpha handling is explicit and
not silently lost.

        }

        console.log(`[ImageCompression] Compressed from ${(estimatedSize / 1024 / 1024).toFixed(2)}MB to ${(result.length / 1024 / 1024).toFixed(2)}MB (quality: ${quality.toFixed(1)})`);
        resolve(result);
      } catch (error) {
        reject(error);
      }
    };
    img.onerror = () => reject(new Error("Failed to load image for compression"));
    img.src = base64DataUrl;
  });
}

/**
 * Compress multiple images
 */
export async function compressImagesForUpload(images: string[]): Promise<string[]> {
  return Promise.all(images.map(compressImageForUpload));
}
7 changes: 7 additions & 0 deletions vercel.json
@@ -0,0 +1,7 @@
{
  "functions": {
    "src/app/api/generate/route.ts": {
      "maxDuration": 800
    }
  }
}