
feat: add unique field constraints with MongoDB indexes and duplicate… #50

Open
Special7ka wants to merge 1 commit into yash-pouranik:main from Special7ka:feat/unique-field-constraints

Conversation


@Special7ka Special7ka commented Mar 21, 2026

🚀 Add Unique Field Constraints (Backend)

✅ Implemented

  • Added unique property to schema fields
  • Created MongoDB unique indexes on schema creation
  • Duplicate detection before index creation
  • Proper error handling for duplicate key errors (11000 → 409)
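The 11000 → 409 mapping above can be sketched as a small pure helper. The payload shape follows the summary later in this PR; the helper name and the `keyValue` parsing are assumptions about how MongoDB reports the offending field, not the PR's actual code:

```javascript
// Hypothetical sketch: translate a MongoDB E11000 duplicate-key error into a
// 409 payload. The keyValue field is how the MongoDB driver typically exposes
// the conflicting { field: value } pair; details here are illustrative.
function duplicateKeyToResponse(err) {
  if (err.code !== 11000) return null; // not a duplicate-key error
  const [field, value] = Object.entries(err.keyValue || {})[0] || [];
  return {
    status: 409,
    body: {
      success: false,
      error: `Value '${value}' already exists for field '${field}'`,
      code: "DUPLICATE_VALUE",
    },
  };
}
```

A controller catch block would call this first and fall through to a generic 500 handler when it returns null.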

⚠️ Scope

  • Backend only
  • Supports top-level primitive fields (String, Number, Boolean, Date)
  • No composite or case-insensitive constraints (future work)

💡 Notes

  • Uses sparse indexes for non-required fields
  • Includes rollback if index creation fails
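The duplicate pre-check is described elsewhere in this PR as an aggregation pipeline. A plausible shape is sketched below; the actual stages in `indexManager.js` may differ, and the non-null filter here only mirrors the sparse-index note above:

```javascript
// Hypothetical sketch of a duplicate pre-check pipeline: group documents by
// the candidate field and keep groups with more than one member.
function buildDuplicatePipeline(fieldKey) {
  return [
    // Ignore documents missing the field (matches sparse-index semantics)
    { $match: { [fieldKey]: { $ne: null } } },
    // Count how many documents share each value
    { $group: { _id: `$${fieldKey}`, count: { $sum: 1 } } },
    // Keep only values that occur more than once
    { $match: { count: { $gt: 1 } } },
  ];
}

// Usage sketch: const duplicates = await Model.aggregate(buildDuplicatePipeline("email"));
```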

Summary by CodeRabbit

Release Notes

  • New Features

    • Added ability to mark fields as unique when creating or updating collection schemas.
    • Enabled automatic database-level unique index creation with pre-validation to prevent duplicates.
  • Bug Fixes

    • Improved error handling for duplicate entries—now returns a clearer 409 response instead of a generic server error.

@vercel

vercel bot commented Mar 21, 2026

@Special7ka is attempting to deploy a commit to Yash Pouranik's projects Team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai bot commented Mar 21, 2026

📝 Walkthrough

Walkthrough

Added support for per-field unique constraints across the backend. Implemented duplicate detection and MongoDB unique index creation via a new indexManager utility, integrated unique field validation into schemas, and added specific error handling for duplicate-key violations returning HTTP 409 responses.

Changes

Cohort / File(s) Summary
Duplicate Key Error Handling
backend/controllers/data.controller.js
Added handleDuplicateKeyError helper function to intercept MongoDB 11000 errors and return HTTP 409 with { success: false, error, code: "DUPLICATE_VALUE" } payload. Integrated into insertData and updateSingleData catch blocks.
Schema Unique Field Support
backend/controllers/schema.controller.js
Extended createSchema to: map unique field attribute during transformation, compile model post-persistence, create unique indexes via createUniqueIndexes, and handle index creation failures with rollback (cache clearing and collection removal).
Unique Index Creation
backend/utils/indexManager.js
New module exporting createUniqueIndexes(Model, fields) that pre-checks for duplicates via aggregation pipeline before creating sparse/unique MongoDB indexes, throwing error if duplicates detected.
Field Schema Validation
backend/utils/input.validation.js
Added optional unique: boolean property to dashboard and API field schemas. Reformatted validation definitions with multi-line chaining and explicit refine callbacks.
Storage Formatting
backend/controllers/storage.controller.js
Reformatted code with improved indentation and clearer expression bodies; no functional logic changes to upload, delete, or quota handling.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant SchemaController as Schema Controller
    participant Model as Mongoose Model
    participant IndexManager as Index Manager
    participant MongoDB as MongoDB
    participant Cache as Compiled Model Cache
    
    Client->>SchemaController: POST /schema (with unique fields)
    SchemaController->>SchemaController: Transform fields + unique attr
    SchemaController->>SchemaController: Save to fullProject.collections
    SchemaController->>Model: getConnection + getCompiledModel
    SchemaController->>IndexManager: createUniqueIndexes(Model, fields)
    
    IndexManager->>MongoDB: Aggregation pipeline (find duplicates)
    alt Duplicates Detected
        MongoDB-->>IndexManager: Duplicate records found
        IndexManager-->>SchemaController: Error with duplicate count
        SchemaController->>Cache: clearCompiledModel(collectionName)
        SchemaController->>SchemaController: Remove collection from fullProject
        SchemaController->>SchemaController: Save rollback state
        SchemaController-->>Client: 400 Error Response
    else No Duplicates
        MongoDB-->>IndexManager: No duplicates
        IndexManager->>MongoDB: createIndex(field, {unique, sparse})
        MongoDB-->>IndexManager: Index created
        IndexManager-->>SchemaController: Success
        SchemaController->>Cache: Invalidate Redis cache
        SchemaController-->>Client: 201 Schema Created
    end
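The save-then-rollback flow in the diagram can be condensed to a short sketch. Dependencies are injected so the control flow reads without Mongoose; every name except `createUniqueIndexes` is illustrative, not the PR's actual code:

```javascript
// Hypothetical condensed sketch of the rollback path shown in the diagram.
async function createCollectionWithRollback(project, collection, deps) {
  project.collections.push(collection); // persist schema first, as the PR does
  await deps.save(project);
  try {
    const Model = deps.compileModel(collection);
    await deps.createUniqueIndexes(Model, collection.model);
    return { status: 201 };
  } catch (err) {
    deps.clearCompiledModel(collection.name); // drop the cached compiled model
    project.collections = project.collections.filter(
      (c) => c.name !== collection.name, // undo the earlier push
    );
    await deps.save(project); // persist the rollback state
    return { status: 400, error: err.message };
  }
}
```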

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related issues

  • Issue #49: This PR directly implements the backend support for per-field unique flags, duplicate detection via createUniqueIndexes, and MongoDB 11000 error handling as requested.

Possibly related PRs

  • PR #30: Both PRs modify error handling in the same data.controller.js catch blocks (insertData and updateSingleData), with potential interaction in error-handling patterns.

Poem

🐰 A field marked unique, so shiny and bright,
Duplicates detected with MongoDB's might,
Indexes created, no chaos in sight,
HTTP 409 when values collide—oof!
Constraints enforced, the database's proof! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 25.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check: ✅ Passed. Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title check: ✅ Passed. The title accurately summarizes the main change: adding unique field constraints with MongoDB indexes and duplicate detection/handling.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

Tip

CodeRabbit can enforce grammar and style rules using `languagetool`.

Configure the reviews.tools.languagetool setting to enable/disable rules and categories. Refer to the LanguageTool Community to learn more.

@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces unique field constraints to the backend, leveraging MongoDB indexes to ensure data integrity. It includes duplicate detection, index creation, and proper error handling for duplicate key violations. The changes are limited to backend functionality and support top-level primitive fields.

Highlights

  • Unique Field Constraints: Implemented the ability to enforce unique constraints on schema fields using MongoDB unique indexes.
  • Duplicate Detection: Added duplicate detection before creating unique indexes to prevent errors.
  • Error Handling: Improved error handling for duplicate key errors, converting MongoDB error code 11000 to HTTP 409.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a robust mechanism for enforcing unique field constraints in MongoDB by creating unique indexes. The implementation is well-rounded, including pre-validation for duplicate data before index creation, proper use of sparse indexes for optional fields, and a rollback mechanism if index creation fails. Additionally, it improves error handling by catching MongoDB duplicate key errors (11000) and returning a user-friendly 409 Conflict response instead of a generic 500 error. The changes are clean and significantly enhance data integrity. I've added one suggestion to improve the error message when duplicate values are found, which will aid in debugging.

Comment on lines +30 to +34
if (duplicates.length > 0) {
  throw new Error(
    `Cannot add unique constraint: ${duplicates.length} duplicate values found for field '${field.key}'`,
  );
}

medium

For better developer experience and easier debugging, consider enhancing the error message to include a few examples of the duplicate values found. This will help users quickly identify and correct the data that violates the new unique constraint.

Suggested change
if (duplicates.length > 0) {
  throw new Error(
    `Cannot add unique constraint: ${duplicates.length} duplicate values found for field '${field.key}'`,
  );
}
if (duplicates.length > 0) {
  const examples = duplicates.slice(0, 3).map(d => JSON.stringify(d._id)).join(', ');
  throw new Error(
    `Cannot create unique index on '${field.key}'. ${duplicates.length} duplicate values exist. Examples: ${examples}`
  );
}


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (2)
backend/utils/input.validation.js (1)

164-164: Consider adding validation to prevent unique on unsupported types.

While indexManager.js silently skips unsupported types, it might be more user-friendly to surface a validation error upfront when unique: true is specified on Object, Array, or Ref types.

💡 Optional enhancement in refine block
 .refine(
   (data) => {
     const normalType =
       data.type.charAt(0).toUpperCase() + data.type.slice(1).toLowerCase();
+    // unique is only supported on primitive types
+    if (data.unique && ['Object', 'Array', 'Ref'].includes(normalType)) {
+      return false;
+    }
     if (
       normalType === "Object" &&

And update the error message to mention the unique constraint limitation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/utils/input.validation.js` at line 164, Add validation to the Zod
schema that checks when unique is true that the field type is not one of the
unsupported types (Object, Array, Ref): modify the schema that declares the
unique property (the line with "unique: z.boolean().optional()") to include a
refine (or a transform + refine) that throws a validation error if unique ===
true and the field's "type" value is "Object", "Array" or "Ref"; update the
refine error message to explicitly state that unique constraints are not
supported for Object, Array or Ref types (and mention indexManager.js behavior
if you want consistency), so users get an upfront, descriptive validation error
instead of silent skipping.
backend/controllers/data.controller.js (1)

14-18: Consider whether exposing the duplicate value poses a minor information disclosure risk.

The error response includes the actual duplicate value (Value '${value}' already exists). While useful for debugging, this reveals data from other documents. For most use cases this is acceptable, but for sensitive fields it could be a concern.

💡 Alternative without value disclosure
     return res.status(409).json({
       success: false,
-      error: `Value '${value}' already exists for field '${field}'`,
+      error: `A duplicate value already exists for field '${field}'`,
       code: "DUPLICATE_VALUE",
+      field: field,
     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/controllers/data.controller.js` around lines 14 - 18, The current 409
response exposes the duplicate value via the template string using variables
value and field; change the response to avoid leaking the actual value by
returning a generic message (e.g. "Duplicate value for field '<field>'") or by
redacting the value when building the response in the controller code that sends
the res.status(409).json(...), keep the error code "DUPLICATE_VALUE" for clients
to handle, and optionally add a toggle or whitelist (based on field name) so
only non‑sensitive fields include the raw value if explicitly allowed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/controllers/schema.controller.js`:
- Around line 90-121: Persisting fullProject.collections before creating unique
indexes can leave the DB in an inconsistent state if index creation fails;
instead, build a local collectionConfig object (matching the shape used by
getCompiledModel), call getConnection(...) and getCompiledModel(...) with that
transient config, run createUniqueIndexes(Model, collectionConfig.model), and
only after success push the collection into fullProject.collections and await
fullProject.save(); on error still call clearCompiledModel(...) using the
computed compiledCollectionName and return the error without persisting;
alternatively implement a "pending" flag on the collection and only mark it
active after createUniqueIndexes succeeds — update the logic around
fullProject.collections, getCompiledModel, createUniqueIndexes,
clearCompiledModel and getConnection accordingly.

In `@backend/controllers/storage.controller.js`:
- Around line 29-49: The quota reservation using Project.updateOne (the atomic
increment of storageUsed in the storage reserve branch) needs a compensating
rollback when subsequent steps fail (e.g., getStorage(), upload, or other
errors) so storageUsed isn't left inflated; modify the flow around the
reservation to track when a reservation was made (e.g., a boolean reserved=true
after successful update) and wrap subsequent operations (getStorage, the upload
logic around uploadError handling, etc.) in try/catch/finally so that on any
error you call Project.updateOne({ _id: project._id }, { $inc: { storageUsed:
-file.size } }) if reserved is true (and only if the original update matched),
then rethrow or return the error response—ensure the rollback logic mirrors the
reservation and is used in both the getStorage failure path and the existing
uploadError path (affecting the same code paths around Project.updateOne and the
upload functions).
- Around line 112-121: Replace the use of
supabase.storage.from(bucket).list(..., { search }) that returns partial matches
with the exact-metadata call supabase.storage.from(bucket).info(path) when
determining fileSize; specifically, in the block using fileSize (and the similar
block at lines referenced in the review), call .info(path), check for errors,
and set fileSize = fileInfo.metadata.size (or 0 if absent) before decrementing
storageUsed so the exact file size is subtracted. Ensure you update the variable
lookup that currently reads data[0].metadata.size to use the returned
fileInfo.metadata.size and keep existing error handling (throw on error).

In `@backend/utils/indexManager.js`:
- Around line 20-46: createUniqueIndexes can leave orphaned indexes if a later
index creation fails; modify createUniqueIndexes to track each created index
(e.g., push returned index name from Model.collection.createIndex into a local
array) and wrap the loop in try/catch so that on any error you iterate the
tracked index names and call Model.collection.dropIndex for each to roll back
partial progress before rethrowing the error; reference createUniqueIndexes,
findDuplicates, Model.collection.createIndex and Model.collection.dropIndex when
implementing the tracking and rollback (alternatively, if the MongoDB deployment
supports multi-document transactions and collection-level index operations in a
session, create indexes in a single transactional/batch operation instead).

---

Nitpick comments:
In `@backend/controllers/data.controller.js`:
- Around line 14-18: The current 409 response exposes the duplicate value via
the template string using variables value and field; change the response to
avoid leaking the actual value by returning a generic message (e.g. "Duplicate
value for field '<field>'") or by redacting the value when building the response
in the controller code that sends the res.status(409).json(...), keep the error
code "DUPLICATE_VALUE" for clients to handle, and optionally add a toggle or
whitelist (based on field name) so only non‑sensitive fields include the raw
value if explicitly allowed.

In `@backend/utils/input.validation.js`:
- Line 164: Add validation to the Zod schema that checks when unique is true
that the field type is not one of the unsupported types (Object, Array, Ref):
modify the schema that declares the unique property (the line with "unique:
z.boolean().optional()") to include a refine (or a transform + refine) that
throws a validation error if unique === true and the field's "type" value is
"Object", "Array" or "Ref"; update the refine error message to explicitly state
that unique constraints are not supported for Object, Array or Ref types (and
mention indexManager.js behavior if you want consistency), so users get an
upfront, descriptive validation error instead of silent skipping.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 2be05914-0aa1-4072-ba1a-293d9fa510d1

📥 Commits

Reviewing files that changed from the base of the PR and between 1377930 and 4221ed3.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (5)
  • backend/controllers/data.controller.js
  • backend/controllers/schema.controller.js
  • backend/controllers/storage.controller.js
  • backend/utils/indexManager.js
  • backend/utils/input.validation.js

Comment on lines +90 to +121
fullProject.collections.push({ name: name, model: transformedFields });
await fullProject.save();

const projectObj = fullProject.toObject();
delete projectObj.publishableKey;
delete projectObj.secretKey;
delete projectObj.jwtSecret;
try {
  const collectionConfig = fullProject.collections.find(
    (c) => c.name === name,
  );

  const connection = await getConnection(fullProject._id);
  const Model = getCompiledModel(
    connection,
    collectionConfig,
    fullProject._id,
    fullProject.resources.db.isExternal,
  );

  await createUniqueIndexes(Model, collectionConfig.model);
} catch (error) {
  const compiledCollectionName = fullProject.resources.db.isExternal
    ? name
    : `${fullProject._id}_${name}`;

  const connection = await getConnection(fullProject._id);
  clearCompiledModel(connection, compiledCollectionName);

  fullProject.collections = fullProject.collections.filter(
    (c) => c.name !== name,
  );
  await fullProject.save();

  return res.status(400).json({ error: error.message });
}

⚠️ Potential issue | 🟡 Minor

Schema is persisted before index creation, risking inconsistent state.

The collection config is saved to the database (line 91) before attempting to create unique indexes (line 106). If the process crashes between these operations, the schema exists without its unique indexes.

Consider deferring the save until after successful index creation, or marking the schema as "pending" until indexes are confirmed.

💡 Suggested reordering to improve consistency
-    fullProject.collections.push({ name: name, model: transformedFields });
-    await fullProject.save();
-
     try {
-      const collectionConfig = fullProject.collections.find(
-        (c) => c.name === name,
-      );
+      const collectionConfig = { name: name, model: transformedFields };

       const connection = await getConnection(fullProject._id);
       const Model = getCompiledModel(
         connection,
         collectionConfig,
         fullProject._id,
         fullProject.resources.db.isExternal,
       );

       await createUniqueIndexes(Model, collectionConfig.model);
+
+      // Only persist after successful index creation
+      fullProject.collections.push(collectionConfig);
+      await fullProject.save();
     } catch (error) {
       const compiledCollectionName = fullProject.resources.db.isExternal
         ? name
         : `${fullProject._id}_${name}`;

       const connection = await getConnection(fullProject._id);
       clearCompiledModel(connection, compiledCollectionName);

-      fullProject.collections = fullProject.collections.filter(
-        (c) => c.name !== name,
-      );
-      await fullProject.save();
-
       return res.status(400).json({ error: error.message });
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/controllers/schema.controller.js` around lines 90 - 121, Persisting
fullProject.collections before creating unique indexes can leave the DB in an
inconsistent state if index creation fails; instead, build a local
collectionConfig object (matching the shape used by getCompiledModel), call
getConnection(...) and getCompiledModel(...) with that transient config, run
createUniqueIndexes(Model, collectionConfig.model), and only after success push
the collection into fullProject.collections and await fullProject.save(); on
error still call clearCompiledModel(...) using the computed
compiledCollectionName and return the error without persisting; alternatively
implement a "pending" flag on the collection and only mark it active after
createUniqueIndexes succeeds — update the logic around fullProject.collections,
getCompiledModel, createUniqueIndexes, clearCompiledModel and getConnection
accordingly.

Comment on lines +29 to +49
// ATOMIC QUOTA RESERVATION
if (!external) {
  const result = await Project.updateOne(
    {
      _id: project._id,
      $expr: {
        $lte: [{ $add: ["$storageUsed", file.size] }, "$storageLimit"],
      },
    },
    { $inc: { storageUsed: file.size } },
  );

  if (result.matchedCount === 0) {
    return res
      .status(403)
      .json({ error: "Internal storage limit exceeded." });
  }
}

const supabase = await getStorage(project);


⚠️ Potential issue | 🔴 Critical

Rollback is missing for failures after quota reservation but before upload completes.

Line 31 reserves quota, but only uploadError triggers rollback. If Line 48 (getStorage) fails, storageUsed remains inflated.

Proposed fix
 module.exports.uploadFile = async (req, res) => {
+  let quotaReserved = false;
+  let reservedSize = 0;
+  let project;
+  let external = false;
   try {
     const file = req.file;
@@
-    const project = req.project;
-    const external = isProjectStorageExternal(project);
+    project = req.project;
+    external = isProjectStorageExternal(project);
@@
     if (!external) {
       const result = await Project.updateOne(
@@
       if (result.matchedCount === 0) {
         return res
           .status(403)
           .json({ error: "Internal storage limit exceeded." });
       }
+      quotaReserved = true;
+      reservedSize = file.size;
     }
@@
-    if (uploadError) {
-      // ROLLBACK QUOTA
-      if (!external) {
-        await Project.updateOne(
-          { _id: project._id },
-          { $inc: { storageUsed: -file.size } },
-        );
-      }
-      throw uploadError;
-    }
+    if (uploadError) throw uploadError;
@@
   } catch (err) {
+    if (quotaReserved && !external) {
+      await Project.updateOne(
+        { _id: project._id },
+        { $inc: { storageUsed: -reservedSize } },
+      );
+    }
     return res.status(500).json({
       error: "File upload failed",
       details: process.env.NODE_ENV === "development" ? err.message : undefined,
     });
   }
 };

Also applies to: 81-86

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/controllers/storage.controller.js` around lines 29 - 49, The quota
reservation using Project.updateOne (the atomic increment of storageUsed in the
storage reserve branch) needs a compensating rollback when subsequent steps fail
(e.g., getStorage(), upload, or other errors) so storageUsed isn't left
inflated; modify the flow around the reservation to track when a reservation was
made (e.g., a boolean reserved=true after successful update) and wrap subsequent
operations (getStorage, the upload logic around uploadError handling, etc.) in
try/catch/finally so that on any error you call Project.updateOne({ _id:
project._id }, { $inc: { storageUsed: -file.size } }) if reserved is true (and
only if the original update matched), then rethrow or return the error
response—ensure the rollback logic mirrors the reservation and is used in both
the getStorage failure path and the existing uploadError path (affecting the
same code paths around Project.updateOne and the upload functions).

Comment on lines +112 to +121
const { data, error } = await supabase.storage
  .from(bucket)
  .list(path.split("/")[0], {
    search: path.split("/").slice(1).join("/"),
  });

if (error) throw error;
if (data?.length) {
  fileSize = data[0].metadata?.size || 0;
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

In Supabase JavaScript Storage API, does storage.from(bucket).list(path, { search }) return exact filename matches or partial matches? What is the recommended way to fetch exact object metadata/size before delete?

💡 Result:

In the Supabase JavaScript Storage API, storage.from(bucket).list(path, { search }) returns partial matches for the search parameter. The search option filters objects whose names contain the search string (substring match), not exact filename matches. This is evidenced by the API implementation added in 2022 (storage-js PR #56 and storage-api PR #127), example usage searching for 'jon', and backend API docs describing search as an optional string for filtering within a prefix.

The recommended way to fetch exact object metadata/size before delete is storage.from(bucket).info(path). This method retrieves detailed object information including id, metadata (with size), created_at, updated_at, etc., without downloading the file content. It uses the /object/info/{bucket}/{path} endpoint (added in storage-js PR #207, released in v2.7.0). The returned metadata includes file size for confirmation before calling remove(['path']). Example usage:

const { data: fileInfo } = await supabase.storage.from('bucket').info('path/to/file');
console.log(fileInfo.metadata.size); // e.g., 12345
if (confirmDelete(fileInfo)) {
  await supabase.storage.from('bucket').remove(['path/to/file']);
}

For exact listing without partial search, use list(path) without search or with search: '', combined with client-side filtering on name for exact matches.

Citations:


Use .info() instead of .list() with search for exact file metadata before quota decrement.

The current implementation uses list() with a search parameter, which returns partial (substring) matches. This can cause the wrong file size to be subtracted from storageUsed. Use supabase.storage.from(bucket).info(path) to retrieve exact metadata:

const { data: fileInfo, error } = await supabase.storage
  .from(bucket)
  .info(path);

if (error) throw error;
if (fileInfo?.metadata?.size) {
  fileSize = fileInfo.metadata.size;
}

Also applies to: 130-134

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/controllers/storage.controller.js` around lines 112 - 121, Replace
the use of supabase.storage.from(bucket).list(..., { search }) that returns
partial matches with the exact-metadata call
supabase.storage.from(bucket).info(path) when determining fileSize;
specifically, in the block using fileSize (and the similar block at lines
referenced in the review), call .info(path), check for errors, and set fileSize
= fileInfo.metadata.size (or 0 if absent) before decrementing storageUsed so the
exact file size is subtracted. Ensure you update the variable lookup that
currently reads data[0].metadata.size to use the returned fileInfo.metadata.size
and keep existing error handling (throw on error).

Comment on lines +20 to +46
async function createUniqueIndexes(Model, fields = []) {
  const supportedTypes = new Set(["String", "Number", "Boolean", "Date"]);

  for (const field of fields) {
    if (!field.unique) continue;
    if (!supportedTypes.has(field.type)) continue;

    // Check for duplicate values before creating the index
    const duplicates = await findDuplicates(Model, field.key);

    if (duplicates.length > 0) {
      throw new Error(
        `Cannot add unique constraint: ${duplicates.length} duplicate values found for field '${field.key}'`,
      );
    }

    // Create MongoDB unique index
    await Model.collection.createIndex(
      { [field.key]: 1 },
      {
        unique: true,
        sparse: !field.required,
        name: `${field.key}_1`,
      },
    );
  }
}

⚠️ Potential issue | 🟡 Minor

Partial index creation failure leaves database in inconsistent state.

If createUniqueIndexes fails on the second or later field, indexes created for earlier fields remain in MongoDB, but the rollback in schema.controller.js removes the collection config. This leaves orphaned indexes.

Consider either:

  1. Collecting all indexes to create and using a transaction/batch approach
  2. Dropping successfully created indexes on failure
🛡️ Proposed fix to track and rollback created indexes

```diff
 async function createUniqueIndexes(Model, fields = []) {
   const supportedTypes = new Set(["String", "Number", "Boolean", "Date"]);
+  const createdIndexes = [];

   for (const field of fields) {
     if (!field.unique) continue;
     if (!supportedTypes.has(field.type)) continue;

     // Check for duplicate values before creating the index
     const duplicates = await findDuplicates(Model, field.key);

     if (duplicates.length > 0) {
+      // Rollback previously created indexes
+      for (const indexName of createdIndexes) {
+        try {
+          await Model.collection.dropIndex(indexName);
+        } catch (e) {
+          console.error(`Failed to rollback index ${indexName}:`, e.message);
+        }
+      }
       throw new Error(
         `Cannot add unique constraint: ${duplicates.length} duplicate values found for field '${field.key}'`,
       );
     }

     // Create MongoDB unique index
-    await Model.collection.createIndex(
+    const indexName = `${field.key}_1`;
+    await Model.collection.createIndex(
       { [field.key]: 1 },
       {
         unique: true,
         sparse: !field.required,
-        name: `${field.key}_1`,
+        name: indexName,
       },
     );
+    createdIndexes.push(indexName);
   }
 }
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Current code:

```js
async function createUniqueIndexes(Model, fields = []) {
  const supportedTypes = new Set(["String", "Number", "Boolean", "Date"]);

  for (const field of fields) {
    if (!field.unique) continue;
    if (!supportedTypes.has(field.type)) continue;

    // Check for duplicate values before creating the index
    const duplicates = await findDuplicates(Model, field.key);

    if (duplicates.length > 0) {
      throw new Error(
        `Cannot add unique constraint: ${duplicates.length} duplicate values found for field '${field.key}'`,
      );
    }

    // Create MongoDB unique index
    await Model.collection.createIndex(
      { [field.key]: 1 },
      {
        unique: true,
        sparse: !field.required,
        name: `${field.key}_1`,
      },
    );
  }
}
```

Suggested replacement:

```js
async function createUniqueIndexes(Model, fields = []) {
  const supportedTypes = new Set(["String", "Number", "Boolean", "Date"]);
  const createdIndexes = [];

  for (const field of fields) {
    if (!field.unique) continue;
    if (!supportedTypes.has(field.type)) continue;

    // Check for duplicate values before creating the index
    const duplicates = await findDuplicates(Model, field.key);

    if (duplicates.length > 0) {
      // Rollback previously created indexes
      for (const indexName of createdIndexes) {
        try {
          await Model.collection.dropIndex(indexName);
        } catch (e) {
          console.error(`Failed to rollback index ${indexName}:`, e.message);
        }
      }
      throw new Error(
        `Cannot add unique constraint: ${duplicates.length} duplicate values found for field '${field.key}'`,
      );
    }

    // Create MongoDB unique index
    const indexName = `${field.key}_1`;
    await Model.collection.createIndex(
      { [field.key]: 1 },
      {
        unique: true,
        sparse: !field.required,
        name: indexName,
      },
    );
    createdIndexes.push(indexName);
  }
}
```
🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.

In `@backend/utils/indexManager.js` around lines 20 - 46, createUniqueIndexes can
leave orphaned indexes if a later index creation fails; modify
createUniqueIndexes to track each created index (e.g., push returned index name
from Model.collection.createIndex into a local array) and wrap the loop in
try/catch so that on any error you iterate the tracked index names and call
Model.collection.dropIndex for each to roll back partial progress before
rethrowing the error; reference createUniqueIndexes, findDuplicates,
Model.collection.createIndex and Model.collection.dropIndex when implementing
the tracking and rollback (alternatively, if the MongoDB deployment supports
multi-document transactions and collection-level index operations in a session,
create indexes in a single transactional/batch operation instead).
```
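The try/catch approach described in the prompt above also covers failures of `createIndex` itself, not only the duplicate-detection check. A minimal sketch of that variant, assuming the same `findDuplicates` helper and Mongoose `Model` as in the diff (the function name here is hypothetical):

```javascript
// Sketch: wrap the whole loop so any failure rolls back earlier indexes.
async function createUniqueIndexesWithRollback(Model, fields = []) {
  const supportedTypes = new Set(["String", "Number", "Boolean", "Date"]);
  const createdIndexes = [];
  try {
    for (const field of fields) {
      if (!field.unique || !supportedTypes.has(field.type)) continue;

      // Refuse to add the constraint if duplicates already exist.
      const duplicates = await findDuplicates(Model, field.key);
      if (duplicates.length > 0) {
        throw new Error(
          `Cannot add unique constraint: ${duplicates.length} duplicate values found for field '${field.key}'`,
        );
      }

      const indexName = `${field.key}_1`;
      await Model.collection.createIndex(
        { [field.key]: 1 },
        { unique: true, sparse: !field.required, name: indexName },
      );
      createdIndexes.push(indexName);
    }
  } catch (err) {
    // Roll back any indexes created before the failure, then rethrow.
    for (const indexName of createdIndexes) {
      try {
        await Model.collection.dropIndex(indexName);
      } catch (e) {
        console.error(`Failed to rollback index ${indexName}:`, e.message);
      }
    }
    throw err;
  }
}
```

Because the rollback lives in a single catch block, new failure modes added inside the loop later (for example, a type check that throws) are covered automatically.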

@yash-pouranik yash-pouranik self-requested a review March 21, 2026 18:43
@yash-pouranik
Owner

Hey @Special7ka ! Thank you so much for putting in the effort for this massive update! 🙏 We really appreciate these huge improvements.

I have to give you a quick heads-up: just a few hours ago, our entire repository went through a structural leap (Release - v0.3.0). We completely migrated from the old monolithic backend/frontend architecture to a modern NPM Workspaces Microservices Monorepo architecture.

Since there are over 1000 lines of changes here, merging this directly would unfortunately break our new apps/ ecosystem due to heavy conflicts. Could you please fetch the latest main branch, move your updated code into the new workspaces (apps/web-dashboard, apps/dashboard-api, or apps/public-api), and resolve the conflicts?

🛠️ How to run the new architecture locally:
It's actually much easier now! You don't need to open multiple terminals.

1. Run npm install directly at the root of the repository (this will symlink all workspaces and shared @urbackend/common packages).
2. Ensure your .env is placed at the root (refer to our updated .env.example).
3. Simply run npm run dev at the root. This single command will concurrently spin up the React web-dashboard, the dashboard-api, and the public-api!
Once your branch is synced with our robust new workspaces, we would absolutely love to review and merge this brilliant work! 🚀 Let us know if you need any help navigating the new repo structure!
