fix: increase generate API timeout to 15 minutes for video generation #100
paperplancallum wants to merge 8 commits into shrimbly:master from
Conversation
Vercel Pro allows maxDuration up to 900 seconds. This prevents 504 FUNCTION_INVOCATION_TIMEOUT errors during video generation. Co-Authored-By: Claude Opus 4.5 <[email protected]>
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 Walkthrough
Added client-side image compression and integrated it into the video generation executor; increased the generate API function timeout to 900s (and updated the Vercel config); bumped two AI-related dependencies; added short "Web Crawling" guidance in CLAUDE.md; and adjusted ProjectSetupModal for web deployments.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant UI as Client/UI
    participant Exec as generateVideoExecutor
    participant Util as imageCompression
    participant API as /api/generate (Server)
    participant Vercel as Vercel Function
    UI->>Exec: submit generate request (images, dynamicInputs)
    Exec->>Util: compressImagesForUpload(images)
    Exec->>Util: compressImagesForUpload(dynamicInputs with data URLs)
    Util-->>Exec: compressed images & inputs
    Exec->>API: POST compressed payload
    API->>Vercel: run function (maxDuration=900s)
    API-->>Exec: respond with job/result
    Exec-->>UI: update with response
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 1 | ❌ 2
❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches: 🧪 Generate unit tests (beta)
📝 Coding Plan for PR comments
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/app/api/generate/route.ts`:
- Line 25: The maxDuration constant in src/app/api/generate/route.ts was
increased to 900, violating the repository rule that the shared generate route
must use a 5-minute timeout; change the exported constant maxDuration back to
300 (5 minutes) so the generate route honors the file-specific contract, and if
longer video jobs are required, move them to a separate endpoint or workflow
instead of modifying maxDuration here.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 88dcd29e-5314-4192-89d3-b88ae5f25211
📒 Files selected for processing (1)
src/app/api/generate/route.ts

src/app/api/generate/route.ts (outdated)
```diff
 export const clearFalInputMappingCache = _clearFalInputMappingCache;

-export const maxDuration = 300; // 5 minute timeout (Vercel hobby plan limit)
+export const maxDuration = 900; // 15 minute timeout (Vercel Pro limit)
```
Keep this route at the repository’s 5-minute timeout.
Changing maxDuration to 900 breaks the file-specific contract for src/app/api/generate/route.ts. If video jobs need 15 minutes, they should move to a separate endpoint or workflow instead of changing the shared generate route’s timeout here.
As per coding guidelines, "**/api/generate/route.ts: Image generation API route must have a 5 minute timeout in src/app/api/generate."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/app/api/generate/route.ts` at line 25, The maxDuration constant in
src/app/api/generate/route.ts was increased to 900, violating the repository
rule that the shared generate route must use a 5-minute timeout; change the
exported constant maxDuration back to 300 (5 minutes) so the generate route
honors the file-specific contract, and if longer video jobs are required, move
them to a separate endpoint or workflow instead of modifying maxDuration here.
Ensures the generate route has 900s timeout at the Vercel project level. Co-Authored-By: Claude Opus 4.5 <[email protected]>
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@vercel.json`:
- Around line 1-7: The vercel.json timeouts are inconsistent with CLAUDE.md:
change the maxDuration for generate/route.ts from 900 to 300 seconds (or, if 15
minutes is intentional, update CLAUDE.md to reflect 900s), and add a new entry
for llm/route.ts with "maxDuration": 60 to set the 1-minute timeout; update the
vercel.json functions object accordingly so it contains both generate/route.ts
with the corrected value and llm/route.ts with 60.
- Around line 3-5: Replace the specific function entry
"src/app/api/generate/route.ts" and its maxDuration value of 900 with a globbed
function path and 300-second limit: update the functions key to use
"src/app/api/**/route.ts" and set maxDuration to 300; locate the JSON entry
referencing "src/app/api/generate/route.ts" and change the key to the glob
pattern and the "maxDuration" property to 300 to comply with Vercel Next.js App
Router requirements.
```json
{
  "functions": {
    "src/app/api/generate/route.ts": {
      "maxDuration": 900
    }
  }
}
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Description: Check the documented timeout requirements in CLAUDE.md
# Search for timeout-related documentation
rg -n -C3 'timeout|maxDuration|generate' CLAUDE.md
```

Repository: shrimbly/node-banana
Length of output: 3509
🏁 Script executed:
```shell
cat -n vercel.json
```

Repository: shrimbly/node-banana
Length of output: 207
Configure API route timeouts per project documentation.
The vercel.json configuration has two issues:
- The `src/app/api/generate/route.ts` timeout is set to 900 seconds (15 minutes), but `CLAUDE.md` specifies it should be 5 minutes (300 seconds). If the use case has changed to require 15 minutes, update `CLAUDE.md` to document the new requirement.
- The `src/app/api/llm/route.ts` route is missing from the configuration. Per `CLAUDE.md`, it requires a 1-minute timeout. Add: `"src/app/api/llm/route.ts": { "maxDuration": 60 }`
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@vercel.json` around lines 1 - 7, The vercel.json timeouts are inconsistent
with CLAUDE.md: change the maxDuration for generate/route.ts from 900 to 300
seconds (or, if 15 minutes is intentional, update CLAUDE.md to reflect 900s),
and add a new entry for llm/route.ts with "maxDuration": 60 to set the 1-minute
timeout; update the vercel.json functions object accordingly so it contains both
generate/route.ts with the corrected value and llm/route.ts with 60.
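For reference, a vercel.json that follows the documented timeouts would look roughly like the following. This is a sketch only: the generate value stays at 300 unless CLAUDE.md is updated to sanction 900.

```json
{
  "functions": {
    "src/app/api/generate/route.ts": { "maxDuration": 300 },
    "src/app/api/llm/route.ts": { "maxDuration": 60 }
  }
}
```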
```json
    "src/app/api/generate/route.ts": {
      "maxDuration": 900
    }
```
🧩 Analysis chain
🌐 Web query:
vercel.json functions maxDuration Next.js App Router configuration
💡 Result:
To set Vercel Function timeouts for Next.js App Router route handlers (app/api/**/route.ts), use the functions map in vercel.json and match the built function entrypoints with a glob pattern. Vercel applies the config to every function file that matches the pattern (including App Router Route Handlers). [1] [2]
Example (app/ at repo root)
```json
{
  "functions": {
    "app/api/**/route.ts": { "maxDuration": 60 }
  }
}
```

This works for App Router APIs like app/api/hello/route.ts. [3]
If you use a src/ directory
Prefix the pattern with src/:
```json
{
  "functions": {
    "src/app/api/**/route.ts": { "maxDuration": 60 }
  }
}
```

Vercel notes the /src/ prefix is required for detection when your project uses a src directory. [1]
Alternative (per route/page/segment, in code)
Next.js App Router also supports:
`export const maxDuration = 60;` directly from a Route Handler / Page / Layout via Route Segment Config. [4]
Notes
`maxDuration` is in seconds and must be within your plan's allowed maximum. [2]
Sources
[1] https://vercel.com/docs/functions/configuring-functions/duration
[2] https://vercel.com/docs/project-configuration/vercel-json
[3] vercel/community#4391
[4] https://nextjs.org/docs/13/app/api-reference/file-conventions/route-segment-config
🏁 Script executed:
```shell
git log --oneline -n 20 | head -20
```

Repository: shrimbly/node-banana
Length of output: 130
Update maxDuration to 300 seconds (5 minutes) and use a glob pattern for the function path.
The configuration has two issues:
- `maxDuration` is set to 900 seconds (15 minutes) but the requirement specifies 5 minutes (300 seconds).
- Per Vercel documentation for Next.js App Router in `src/` projects, use the glob pattern `src/app/api/**/route.ts` instead of an exact file path.
Suggested fix
```json
{
  "functions": {
    "src/app/api/**/route.ts": {
      "maxDuration": 300
    }
  }
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@vercel.json` around lines 3 - 5, Replace the specific function entry
"src/app/api/generate/route.ts" and its maxDuration value of 900 with a globbed
function path and 300-second limit: update the functions key to use
"src/app/api/**/route.ts" and set maxDuration to 300; locate the JSON entry
referencing "src/app/api/generate/route.ts" and change the key to the glob
pattern and the "maxDuration" property to 300 to comply with Vercel Next.js App
Router requirements.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@package.json`:
- Around line 16-22: Revert the unrelated dependency version changes in
package.json (the bumps to "@ai-sdk/react" and "ai" shown in the diff) so this
PR only contains the /api/generate timeout change; restore those entries to the
versions from main (or remove the two updated entries) and regenerate the
lockfile (npm/yarn install) to match, then create a separate branch/PR that
contains the dependency bumps and related lockfile changes for review; ensure
tests/build pass after restoring the original package.json before merging the
hotfix.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0668f8c1-d12a-4a26-88b4-4e2c894b9701
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (4)
CLAUDE.md
package.json
src/app/api/generate/route.ts
vercel.json
✅ Files skipped from review due to trivial changes (1)
- CLAUDE.md
🚧 Files skipped from review as they are similar to previous changes (2)
- vercel.json
- src/app/api/generate/route.ts
```diff
     "@ai-sdk/react": "^3.0.118",
     "@google/genai": "^1.30.0",
     "@react-three/drei": "^10.7.7",
     "@react-three/fiber": "^9.5.0",
     "@tailwindcss/postcss": "^4.1.17",
     "@xyflow/react": "^12.9.3",
-    "ai": "^6.0.49",
+    "ai": "^6.0.116",
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== npm registry metadata =="
npm view [email protected] version peerDependencies dependencies --json
npm view `@ai-sdk/react`@3.0.118 version peerDependencies dependencies --json
npm view `@ai-sdk/google`@3.0.13 version peerDependencies dependencies --json
echo
echo "== lockfile entries =="
fd -HI '^(package-lock\.json|pnpm-lock\.yaml|yarn\.lock)$' . -x sh -c '
echo ">>> {}"
rg -n -C2 "\"ai\"|@ai-sdk/react|@ai-sdk/google" "{}" || true
'Repository: shrimbly/node-banana
Length of output: 2523
Keep the timeout hotfix isolated; these SDK bumps appear unrelated.
This PR is scoped as a /api/generate timeout change, but these two dependency bumps widen the runtime surface area without clear justification. Since the bumps are not required for the timeout fix, split them into a separate PR. The lockfile does resolve cleanly, though it brings multiple versions of internal provider packages (@ai-sdk/[email protected] from Google, 3.0.8 from core AI SDK). This managed duplication is not a blocker, but the scope creep itself is worth avoiding in a hotfix.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@package.json` around lines 16 - 22, Revert the unrelated dependency version
changes in package.json (the bumps to "@ai-sdk/react" and "ai" shown in the
diff) so this PR only contains the /api/generate timeout change; restore those
entries to the versions from main (or remove the two updated entries) and
regenerate the lockfile (npm/yarn install) to match, then create a separate
branch/PR that contains the dependency bumps and related lockfile changes for
review; ensure tests/build pass after restoring the original package.json before
merging the hotfix.
Vercel has a 4.5MB body size limit for serverless functions. Large images are now compressed on the client before sending. Co-Authored-By: Claude Opus 4.5 <[email protected]>
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/store/execution/generateVideoExecutor.ts (1)
Lines 111-124: ⚠️ Potential issue | 🟠 Major
Check the final serialized body size before POSTing.
This only compresses each field independently. Two "safe" images plus `dynamicInputs`, `selectedModel`, and `parameters` can still exceed Vercel's body cap once combined into the JSON request.
Proposed fix
```diff
   const requestPayload = {
     images: compressedImages,
     prompt: text,
     selectedModel: nodeData.selectedModel,
     parameters: nodeData.parameters,
     dynamicInputs: compressedDynamicInputs,
     mediaType: "video" as const,
   };
+
+  const requestBody = JSON.stringify(requestPayload);
+  const requestSize = new TextEncoder().encode(requestBody).length;
+  if (requestSize > 4 * 1024 * 1024) {
+    updateNodeData(node.id, {
+      status: "error",
+      error: "Request payload is still too large after compression",
+    });
+    throw new Error("Request payload is still too large after compression");
+  }
   const response = await fetch("/api/generate", {
     method: "POST",
     headers,
-    body: JSON.stringify(requestPayload),
+    body: requestBody,
     ...(signal ? { signal } : {}),
   });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/store/execution/generateVideoExecutor.ts` around lines 111 - 124, Before calling fetch("/api/generate"), compute the final serialized body size (e.g. const body = JSON.stringify(requestPayload); const size = new TextEncoder().encode(body).length) and compare it against a defined MAX_BODY_BYTES (Vercel limit); if the size exceeds the limit, avoid sending the oversized payload from generateVideoExecutor by either uploading large binaries (compressedImages and compressedDynamicInputs) to external storage and replacing them in requestPayload with URLs, or reject/return a clear error to the caller; update the code paths around requestPayload, compressedImages, compressedDynamicInputs, and the fetch call to perform this size check and fallback behavior before POSTing.
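As a standalone illustration of the size gate described above, here is a minimal sketch. The names `MAX_BODY_BYTES` and `serializedBodySize` are ours, not the project's; the 4 MiB constant is an assumed headroom under Vercel's documented 4.5 MB body cap.

```typescript
// Hypothetical helper, not from the PR: measure the bytes a JSON payload
// will occupy on the wire. TextEncoder returns the UTF-8 byte length, which
// is what counts against a serverless body-size limit, not the JS string length.
const MAX_BODY_BYTES = 4 * 1024 * 1024; // assumed headroom under Vercel's 4.5 MB cap

function serializedBodySize(payload: unknown): number {
  return new TextEncoder().encode(JSON.stringify(payload)).length;
}

// Multibyte characters make byte length exceed string length:
const body = { prompt: "café" };
console.log(JSON.stringify(body).length); // 17 characters
console.log(serializedBodySize(body)); // 18 bytes
console.log(serializedBodySize(body) < MAX_BODY_BYTES);
```

Checking the fully assembled payload this way catches the case where each field is individually under the limit but the combined JSON is not.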
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/store/execution/generateVideoExecutor.ts`:
- Around line 91-109: Move the image-compression awaits into the existing
try/error path or wrap them in their own try/catch so failures set the node to
"error" rather than leaving it stuck in "loading"; specifically, perform the
calls to compressImagesForUpload for images and for entries in dynamicInputs
(producing compressedImages and compressedDynamicInputs) inside the same try
block that handles the node lifecycle, or catch errors from
compressImagesForUpload, call the node error handler (set node status to "error"
or invoke the existing error path), and rethrow or return appropriately so the
outer logic can clean up; update code around compressImagesForUpload,
compressedImages, compressedDynamicInputs, images, and dynamicInputs to follow
this pattern.
In `@src/utils/imageCompression.ts`:
- Around line 17-19: The code currently computes estimatedSize as the decoded
binary size; instead measure the encoded HTTP payload size by using the byte
length of the base64 data URL (including the "data:...;base64," prefix and any
JSON wrapping/margin) instead of decoding to binary – replace the estimatedSize
calculation with Buffer.byteLength(base64DataUrl, 'utf8') (and/or add a small
safety margin for JSON overhead) wherever estimatedSize is computed (e.g., the
initial check using base64DataUrl/estimatedSize and the loop at lines
referencing the same logic) so the pre-check and compression loop compare the
actual encoded request bytes against MAX_PAYLOAD_SIZE.
- Around line 43-51: The current loop always re-encodes into JPEG (using
canvas.toDataURL) which silently drops alpha; detect transparent inputs before
converting by inspecting the drawn canvas pixels (use ctx.drawImage then
ctx.getImageData and check any alpha < 255) and branch: if no alpha proceed with
the existing JPEG quality loop (variables: ctx.drawImage, canvas.toDataURL,
quality, MAX_PAYLOAD_SIZE), but if alpha is present either (a) preserve
transparency by encoding as PNG (use canvas.toDataURL("image/png") and apply
resizing or PNG compression if still over MAX_PAYLOAD_SIZE) or (b) explicitly
flatten onto a configurable background color (draw a filled rect with
white/selected color behind the image before calling
canvas.toDataURL("image/jpeg", quality)) so the alpha handling is explicit and
not silently lost.
---
Outside diff comments:
In `@src/store/execution/generateVideoExecutor.ts`:
- Around line 111-124: Before calling fetch("/api/generate"), compute the final
serialized body size (e.g. const body = JSON.stringify(requestPayload); const
size = new TextEncoder().encode(body).length) and compare it against a defined
MAX_BODY_BYTES (Vercel limit); if the size exceeds the limit, avoid sending the
oversized payload from generateVideoExecutor by either uploading large binaries
(compressedImages and compressedDynamicInputs) to external storage and replacing
them in requestPayload with URLs, or reject/return a clear error to the caller;
update the code paths around requestPayload, compressedImages,
compressedDynamicInputs, and the fetch call to perform this size check and
fallback behavior before POSTing.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 55760bcd-534c-48dc-8679-eee4718d90bc
📒 Files selected for processing (2)
src/store/execution/generateVideoExecutor.ts
src/utils/imageCompression.ts
```ts
// Compress images to fit within Vercel's 4.5MB payload limit
const compressedImages = images.length > 0 ? await compressImagesForUpload(images) : [];

// Also compress any images in dynamicInputs
const compressedDynamicInputs: Record<string, string | string[]> = {};
for (const [key, value] of Object.entries(dynamicInputs)) {
  if (typeof value === "string" && value.startsWith("data:image")) {
    compressedDynamicInputs[key] = await compressImagesForUpload([value]).then(arr => arr[0]);
  } else if (Array.isArray(value)) {
    const hasImages = value.some(v => typeof v === "string" && v.startsWith("data:image"));
    if (hasImages) {
      compressedDynamicInputs[key] = await compressImagesForUpload(value);
    } else {
      compressedDynamicInputs[key] = value;
    }
  } else {
    compressedDynamicInputs[key] = value;
  }
}
```
Wrap compression in the existing error path.
These awaits run after the node is set to "loading" but before Line 120 enters the try. If compression rejects, the node never gets reset to "error" and can stay stuck in loading.
Proposed fix

```diff
-  // Compress images to fit within Vercel's 4.5MB payload limit
-  const compressedImages = images.length > 0 ? await compressImagesForUpload(images) : [];
-
-  // Also compress any images in dynamicInputs
-  const compressedDynamicInputs: Record<string, string | string[]> = {};
-  for (const [key, value] of Object.entries(dynamicInputs)) {
-    if (typeof value === "string" && value.startsWith("data:image")) {
-      compressedDynamicInputs[key] = await compressImagesForUpload([value]).then(arr => arr[0]);
-    } else if (Array.isArray(value)) {
-      const hasImages = value.some(v => typeof v === "string" && v.startsWith("data:image"));
-      if (hasImages) {
-        compressedDynamicInputs[key] = await compressImagesForUpload(value);
-      } else {
-        compressedDynamicInputs[key] = value;
-      }
-    } else {
-      compressedDynamicInputs[key] = value;
-    }
-  }
-
-  const requestPayload = {
-    images: compressedImages,
-    prompt: text,
-    selectedModel: nodeData.selectedModel,
-    parameters: nodeData.parameters,
-    dynamicInputs: compressedDynamicInputs,
-    mediaType: "video" as const,
-  };
-
   try {
+    // Compress images to fit within Vercel's 4.5MB payload limit
+    const compressedImages = images.length > 0 ? await compressImagesForUpload(images) : [];
+
+    // Also compress any images in dynamicInputs
+    const compressedDynamicInputs: Record<string, string | string[]> = {};
+    for (const [key, value] of Object.entries(dynamicInputs)) {
+      if (typeof value === "string" && value.startsWith("data:image")) {
+        compressedDynamicInputs[key] = await compressImagesForUpload([value]).then(arr => arr[0]);
+      } else if (Array.isArray(value)) {
+        const hasImages = value.some(v => typeof v === "string" && v.startsWith("data:image"));
+        compressedDynamicInputs[key] = hasImages ? await compressImagesForUpload(value) : value;
+      } else {
+        compressedDynamicInputs[key] = value;
+      }
+    }
+
+    const requestPayload = {
+      images: compressedImages,
+      prompt: text,
+      selectedModel: nodeData.selectedModel,
+      parameters: nodeData.parameters,
+      dynamicInputs: compressedDynamicInputs,
+      mediaType: "video" as const,
+    };
+
     const response = await fetch("/api/generate", {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/store/execution/generateVideoExecutor.ts` around lines 91 - 109, Move the
image-compression awaits into the existing try/error path or wrap them in their
own try/catch so failures set the node to "error" rather than leaving it stuck
in "loading"; specifically, perform the calls to compressImagesForUpload for
images and for entries in dynamicInputs (producing compressedImages and
compressedDynamicInputs) inside the same try block that handles the node
lifecycle, or catch errors from compressImagesForUpload, call the node error
handler (set node status to "error" or invoke the existing error path), and
rethrow or return appropriately so the outer logic can clean up; update code
around compressImagesForUpload, compressedImages, compressedDynamicInputs,
images, and dynamicInputs to follow this pattern.
src/utils/imageCompression.ts (outdated)

```ts
// Check if already small enough
const estimatedSize = Math.ceil((base64DataUrl.length - base64DataUrl.indexOf(",") - 1) * 3 / 4);
if (estimatedSize < MAX_PAYLOAD_SIZE) return base64DataUrl;
```
Measure the encoded request bytes, not the decoded image bytes.
estimatedSize is the decoded binary size, but Vercel limits the encoded HTTP body. A ~4 MB image becomes ~5.3 MB once base64-encoded, and this loop still resolves anything below that threshold. Single images near this cutoff can still breach the body limit before JSON overhead is added.
Proposed fix

```diff
-const MAX_PAYLOAD_SIZE = 4 * 1024 * 1024; // 4MB to leave room for other request data
+const MAX_PAYLOAD_SIZE = 4 * 1024 * 1024; // encoded bytes, leaving room for other request data
 export async function compressImageForUpload(base64DataUrl: string): Promise<string> {
   // Not a data URL, return as-is
   if (!base64DataUrl.startsWith("data:")) return base64DataUrl;
   // Check if already small enough
-  const estimatedSize = Math.ceil((base64DataUrl.length - base64DataUrl.indexOf(",") - 1) * 3 / 4);
-  if (estimatedSize < MAX_PAYLOAD_SIZE) return base64DataUrl;
+  const encodedSize = new TextEncoder().encode(base64DataUrl).length;
+  if (encodedSize < MAX_PAYLOAD_SIZE) return base64DataUrl;
@@
-  while (result.length > MAX_PAYLOAD_SIZE * 1.33 && quality > 0.1) { // 1.33 accounts for base64 overhead
+  while (new TextEncoder().encode(result).length > MAX_PAYLOAD_SIZE && quality > 0.1) {
     quality -= 0.1;
     result = canvas.toDataURL("image/jpeg", quality);
   }
+
+  if (new TextEncoder().encode(result).length > MAX_PAYLOAD_SIZE) {
+    reject(new Error("Image is still too large after compression"));
+    return;
+  }
-  console.log(`[ImageCompression] Compressed from ${(estimatedSize / 1024 / 1024).toFixed(2)}MB to ${(result.length / 1024 / 1024).toFixed(2)}MB (quality: ${quality.toFixed(1)})`);
+  console.log(`[ImageCompression] Compressed from ${(encodedSize / 1024 / 1024).toFixed(2)}MB to ${(new TextEncoder().encode(result).length / 1024 / 1024).toFixed(2)}MB (quality: ${quality.toFixed(1)})`);
```

Also applies to: 49-55
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/imageCompression.ts` around lines 17 - 19, The code currently
computes estimatedSize as the decoded binary size; instead measure the encoded
HTTP payload size by using the byte length of the base64 data URL (including the
"data:...;base64," prefix and any JSON wrapping/margin) instead of decoding to
binary – replace the estimatedSize calculation with
Buffer.byteLength(base64DataUrl, 'utf8') (and/or add a small safety margin for
JSON overhead) wherever estimatedSize is computed (e.g., the initial check using
base64DataUrl/estimatedSize and the loop at lines referencing the same logic) so
the pre-check and compression loop compare the actual encoded request bytes
against MAX_PAYLOAD_SIZE.
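To make the size mismatch above concrete, here is a small arithmetic sketch (the helper name is ours, not the project's): base64 encodes every 3-byte group of the decoded image as 4 ASCII characters, so a payload near 4 MB decoded lands well past 4 MiB once encoded.

```typescript
// Illustrative only (not the project's code): estimate the encoded size of a
// data URL from its decoded byte count. Every 3 decoded bytes become 4
// base64 characters, plus the "data:<mime>;base64," prefix.
function encodedDataUrlBytes(decodedBytes: number, prefix = "data:image/jpeg;base64,"): number {
  return prefix.length + Math.ceil(decodedBytes / 3) * 4;
}

const fourMiB = 4 * 1024 * 1024;
console.log(encodedDataUrlBytes(fourMiB)); // 5592431 — roughly 5.3 MB once encoded
console.log(encodedDataUrlBytes(fourMiB) > fourMiB); // the decoded-size check would have passed this
```

This is why the review compares the encoded data URL length, not the decoded binary size, against the payload limit.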
```ts
ctx.drawImage(img, 0, 0, width, height);

// Try progressively lower quality until under limit
let quality = 0.9;
let result = canvas.toDataURL("image/jpeg", quality);

while (result.length > MAX_PAYLOAD_SIZE * 1.33 && quality > 0.1) { // 1.33 accounts for base64 overhead
  quality -= 0.1;
  result = canvas.toDataURL("image/jpeg", quality);
```
Handle transparent inputs explicitly before JPEG conversion.
Oversized PNG/WebP inputs are always re-encoded as JPEG here. That silently drops alpha and changes the prompt image content, which can alter generation results.
Proposed fix

```diff
-  ctx.drawImage(img, 0, 0, width, height);
+  const sourceMime = base64DataUrl.slice(5, base64DataUrl.indexOf(";"));
+  if (sourceMime === "image/png" || sourceMime === "image/webp") {
+    ctx.fillStyle = "#fff";
+    ctx.fillRect(0, 0, width, height);
+  }
+  ctx.drawImage(img, 0, 0, width, height);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const sourceMime = base64DataUrl.slice(5, base64DataUrl.indexOf(";"));
if (sourceMime === "image/png" || sourceMime === "image/webp") {
  ctx.fillStyle = "#fff";
  ctx.fillRect(0, 0, width, height);
}
ctx.drawImage(img, 0, 0, width, height);

// Try progressively lower quality until under limit
let quality = 0.9;
let result = canvas.toDataURL("image/jpeg", quality);

while (result.length > MAX_PAYLOAD_SIZE * 1.33 && quality > 0.1) { // 1.33 accounts for base64 overhead
  quality -= 0.1;
  result = canvas.toDataURL("image/jpeg", quality);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/imageCompression.ts` around lines 43 - 51, The current loop always
re-encodes into JPEG (using canvas.toDataURL) which silently drops alpha; detect
transparent inputs before converting by inspecting the drawn canvas pixels (use
ctx.drawImage then ctx.getImageData and check any alpha < 255) and branch: if no
alpha proceed with the existing JPEG quality loop (variables: ctx.drawImage,
canvas.toDataURL, quality, MAX_PAYLOAD_SIZE), but if alpha is present either (a)
preserve transparency by encoding as PNG (use canvas.toDataURL("image/png") and
apply resizing or PNG compression if still over MAX_PAYLOAD_SIZE) or (b)
explicitly flatten onto a configurable background color (draw a filled rect with
white/selected color behind the image before calling
canvas.toDataURL("image/jpeg", quality)) so the alpha handling is explicit and
not silently lost.
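The alpha pre-check the prompt above describes can be factored into a pure helper, which makes it testable outside the browser; this is a sketch under our own naming, and in the real code the input would come from `ctx.getImageData(0, 0, width, height).data`.

```typescript
// Hypothetical helper: scan an RGBA buffer for any non-opaque pixel.
// Alpha is every 4th byte (indices 3, 7, 11, ...); any value below 255
// means JPEG re-encoding would silently flatten the image.
function hasTransparency(rgba: Uint8ClampedArray): boolean {
  for (let i = 3; i < rgba.length; i += 4) {
    if (rgba[i] < 255) return true;
  }
  return false;
}

// Two 2-pixel "images": fully opaque vs. one semi-transparent pixel.
console.log(hasTransparency(new Uint8ClampedArray([0, 0, 0, 255, 9, 9, 9, 255]))); // false
console.log(hasTransparency(new Uint8ClampedArray([0, 0, 0, 255, 9, 9, 9, 128]))); // true
```

Note that scanning pixels costs one pass over the image; checking the source MIME type (as in the committable suggestion) is cheaper but treats every PNG/WebP as potentially transparent.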
- Detect web deployments (non-localhost) and skip directory requirement
- Force base64 image embedding for web mode
- Show informative message explaining web mode limitations

Co-Authored-By: Claude Opus 4.5 <[email protected]>
🧹 Nitpick comments (2)
src/components/ProjectSetupModal.tsx (2)
Line 510: Simplify the toggle expression.
The ternary can be replaced with a simple negation for cleaner code.

✨ Proposed simplification

```diff
-  onClick={() => setExternalStorage(externalStorage ? false : true)}
+  onClick={() => setExternalStorage(!externalStorage)}
```
Verify each finding against the current code and only fix it if needed. In `@src/components/ProjectSetupModal.tsx` at line 510, The onClick handler in ProjectSetupModal.tsx uses a verbose ternary to toggle state; replace the expression onClick={() => setExternalStorage(externalStorage ? false : true)} with a simple negation and, preferably, the functional updater to avoid stale closures: call setExternalStorage(prev => !prev) so the externalStorage state is toggled concisely and safely.
Lines 147-151: Consider additional local development scenarios.
The detection logic handles `localhost` and `127.0.0.1` but may incorrectly classify other local development scenarios as web deployments:
- IPv6 localhost (`::1` or `[::1]`)
- Local network IPs (`192.168.x.x`, `10.x.x.x`)
- Custom local domains (`*.local`, `*.localhost`)

This could cause unexpected behavior when accessing the app from a local network IP during development.
🔧 Proposed fix to handle additional local scenarios

```diff
 // Detect web deployment (not localhost)
-const isWebDeployment = typeof window !== "undefined" &&
-  !window.location.hostname.includes("localhost") &&
-  !window.location.hostname.includes("127.0.0.1");
+const isWebDeployment = typeof window !== "undefined" && (() => {
+  const hostname = window.location.hostname;
+  // Local development checks
+  if (hostname.includes("localhost") || hostname === "127.0.0.1" || hostname === "::1") {
+    return false;
+  }
+  // Local network IPs (192.168.x.x, 10.x.x.x, 172.16-31.x.x)
+  if (/^(192\.168\.|10\.|172\.(1[6-9]|2\d|3[01])\.)/.test(hostname)) {
+    return false;
+  }
+  return true;
+})();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/ProjectSetupModal.tsx` around lines 147 - 151, The current isWebDeployment detection in ProjectSetupModal.tsx incorrectly treats some local development hosts as production; update the isWebDeployment logic (the const isWebDeployment using window.location.hostname) to treat IPv6 localhost (::1), IPv4 local ranges (10., 192.168., 172.16–172.31), 0.0.0.0, and local dev domains (hosts ending with .local or .localhost or *.localhost) as local—i.e., return false for these cases; implement this via a single normalized hostname check using a regex/CIDR helper or utility function to match those patterns and only mark as web deployment when none of them match. Ensure the check still guards for typeof window !== "undefined" and references window.location.hostname exactly as before.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/components/ProjectSetupModal.tsx`:
- Line 510: The onClick handler in ProjectSetupModal.tsx uses a verbose ternary
to toggle state; replace the expression onClick={() =>
setExternalStorage(externalStorage ? false : true)} with a simple negation and,
preferably, the functional updater to avoid stale closures: call
setExternalStorage(prev => !prev) so the externalStorage state is toggled
concisely and safely.
- Around line 147-151: The current isWebDeployment detection in
ProjectSetupModal.tsx incorrectly treats some local development hosts as
production; update the isWebDeployment logic (the const isWebDeployment using
window.location.hostname) to treat IPv6 localhost (::1), IPv4 local ranges (10.,
192.168., 172.16–172.31), 0.0.0.0, and local dev domains (hosts ending with
.local or .localhost or *.localhost) as local—i.e., return false for these
cases; implement this via a single normalized hostname check using a regex/CIDR
helper or utility function to match those patterns and only mark as web
deployment when none of them match. Ensure the check still guards for typeof
window !== "undefined" and references window.location.hostname exactly as
before.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b4e89553-5135-49ec-9daa-39babb447974
📒 Files selected for processing (1)
src/components/ProjectSetupModal.tsx
- Reduce max image size to 1.5MB per image
- Lower max dimension to 1280px
- Add fallback dimension reduction if quality isn't enough
- Add localStorage workflow persistence for web mode

Co-Authored-By: Claude Opus 4.5 <[email protected]>
These look like personal changes. Fork the repo or change upstream.
Summary
- Increased `maxDuration` from 300s (5 min) to 900s (15 min) in the `/api/generate` route

Test plan
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Chores
Documentation