feat: add unique field constraints with MongoDB indexes and duplicate…#50
Special7ka wants to merge 1 commit into yash-pouranik:main
Conversation
@Special7ka is attempting to deploy a commit to Yash Pouranik's projects Team on Vercel. A member of the Team first needs to authorize it.
📝 Walkthrough

Added support for per-field unique constraints.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant SchemaController as Schema Controller
    participant Model as Mongoose Model
    participant IndexManager as Index Manager
    participant MongoDB as MongoDB
    participant Cache as Compiled Model Cache
    Client->>SchemaController: POST /schema (with unique fields)
    SchemaController->>SchemaController: Transform fields + unique attr
    SchemaController->>SchemaController: Save to fullProject.collections
    SchemaController->>Model: getConnection + getCompiledModel
    SchemaController->>IndexManager: createUniqueIndexes(Model, fields)
    IndexManager->>MongoDB: Aggregation pipeline (find duplicates)
    alt Duplicates Detected
        MongoDB-->>IndexManager: Duplicate records found
        IndexManager-->>SchemaController: Error with duplicate count
        SchemaController->>Cache: clearCompiledModel(collectionName)
        SchemaController->>SchemaController: Remove collection from fullProject
        SchemaController->>SchemaController: Save rollback state
        SchemaController-->>Client: 400 Error Response
    else No Duplicates
        MongoDB-->>IndexManager: No duplicates
        IndexManager->>MongoDB: createIndex(field, {unique, sparse})
        MongoDB-->>IndexManager: Index created
        IndexManager-->>SchemaController: Success
        SchemaController->>Cache: Invalidate Redis cache
        SchemaController-->>Client: 201 Schema Created
    end
```
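The "Aggregation pipeline (find duplicates)" step in the diagram could be sketched roughly as follows. This is an illustrative assumption, not the PR's actual `findDuplicates` implementation; the function name and the null-exclusion behavior (mirroring sparse-index semantics) are hypothetical:

```javascript
// Hypothetical sketch of the duplicate-detection pipeline the Index Manager
// might run before creating a unique index.
function buildDuplicatePipeline(fieldKey) {
  return [
    // Skip documents where the field is missing or null, matching the
    // sparse-index behavior used for optional fields.
    { $match: { [fieldKey]: { $ne: null } } },
    // Group by the field's value and count how often each value occurs.
    { $group: { _id: `$${fieldKey}`, count: { $sum: 1 } } },
    // Keep only values that appear more than once, i.e. duplicates.
    { $match: { count: { $gt: 1 } } },
  ];
}
```

The resulting array would be passed to `Model.aggregate(...)`; each returned document's `_id` is a duplicated value and `count` is its occurrence count.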
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related issues
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces unique field constraints to the backend, leveraging MongoDB indexes to ensure data integrity. It includes duplicate detection, index creation, and proper error handling for duplicate key violations. The changes are limited to backend functionality and support top-level primitive fields.

Highlights
Code Review
This pull request introduces a robust mechanism for enforcing unique field constraints in MongoDB by creating unique indexes. The implementation is well-rounded, including pre-validation for duplicate data before index creation, proper use of sparse indexes for optional fields, and a rollback mechanism if index creation fails. Additionally, it improves error handling by catching MongoDB duplicate key errors (11000) and returning a user-friendly 409 Conflict response instead of a generic 500 error. The changes are clean and significantly enhance data integrity. I've added one suggestion to improve the error message when duplicate values are found, which will aid in debugging.
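The review above mentions catching MongoDB duplicate key errors (11000) and returning a 409 Conflict. A minimal sketch of how such handling might look is shown below; this is an illustrative assumption, not the PR's actual controller code (the helper names are hypothetical, but error code `11000` and the driver-populated `keyValue` property are real MongoDB behavior):

```javascript
// Hedged sketch: map a MongoDB duplicate-key error to a 409 Conflict payload.
function isDuplicateKeyError(err) {
  // 11000 (and the legacy 11001) are MongoDB's duplicate-key error codes.
  return Boolean(err) && (err.code === 11000 || err.code === 11001);
}

function duplicateKeyResponse(err) {
  // The Node MongoDB driver attaches the offending key/value pair as
  // `keyValue` on duplicate-key errors, e.g. { email: "a@b.c" }.
  const field = err.keyValue ? Object.keys(err.keyValue)[0] : "unknown";
  return {
    status: 409,
    body: {
      success: false,
      error: `Duplicate value for field '${field}'`,
      code: "DUPLICATE_VALUE",
    },
  };
}
```

A controller's catch block could then call `isDuplicateKeyError(err)` first and fall through to a generic 500 only for other errors.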
```js
if (duplicates.length > 0) {
  throw new Error(
    `Cannot add unique constraint: ${duplicates.length} duplicate values found for field '${field.key}'`,
  );
}
```
For better developer experience and easier debugging, consider enhancing the error message to include a few examples of the duplicate values found. This will help users quickly identify and correct the data that violates the new unique constraint.
```diff
 if (duplicates.length > 0) {
+  const examples = duplicates.slice(0, 3).map(d => JSON.stringify(d._id)).join(', ');
   throw new Error(
-    `Cannot add unique constraint: ${duplicates.length} duplicate values found for field '${field.key}'`,
+    `Cannot create unique index on '${field.key}'. ${duplicates.length} duplicate values exist. Examples: ${examples}`
   );
 }
```
Actionable comments posted: 4
🧹 Nitpick comments (2)
backend/utils/input.validation.js (1)
**Line 164:** Consider adding validation to prevent `unique` on unsupported types.

While `indexManager.js` silently skips unsupported types, it might be more user-friendly to surface a validation error upfront when `unique: true` is specified on Object, Array, or Ref types.

💡 Optional enhancement in refine block

```diff
 .refine(
   (data) => {
     const normalType =
       data.type.charAt(0).toUpperCase() + data.type.slice(1).toLowerCase();
+    // unique is only supported on primitive types
+    if (data.unique && ['Object', 'Array', 'Ref'].includes(normalType)) {
+      return false;
+    }
     if (
       normalType === "Object" &&
```

And update the error message to mention the unique constraint limitation.
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `backend/utils/input.validation.js` at line 164, add validation to the Zod schema so that when `unique` is true, the field type is not one of the unsupported types (Object, Array, Ref): extend the schema that declares the `unique` property (the line with `unique: z.boolean().optional()`) with a refine (or transform + refine) that fails validation when `unique === true` and the field's `type` is "Object", "Array", or "Ref". Update the refine error message to state explicitly that unique constraints are not supported for Object, Array, or Ref types (mentioning the `indexManager.js` behavior for consistency), so users get an upfront, descriptive validation error instead of silent skipping.

backend/controllers/data.controller.js (1)
**Lines 14-18:** Consider whether exposing the duplicate value poses a minor information disclosure risk.

The error response includes the actual duplicate value (`Value '${value}' already exists`). While useful for debugging, this reveals data from other documents. For most use cases this is acceptable, but for sensitive fields it could be a concern.

💡 Alternative without value disclosure

```diff
 return res.status(409).json({
   success: false,
-  error: `Value '${value}' already exists for field '${field}'`,
+  error: `A duplicate value already exists for field '${field}'`,
   code: "DUPLICATE_VALUE",
+  field: field,
 });
```

🤖 Prompt for AI Agents
return res.status(409).json({ success: false, - error: `Value '${value}' already exists for field '${field}'`, + error: `A duplicate value already exists for field '${field}'`, code: "DUPLICATE_VALUE", + field: field, });🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/controllers/data.controller.js` around lines 14 - 18, The current 409 response exposes the duplicate value via the template string using variables value and field; change the response to avoid leaking the actual value by returning a generic message (e.g. "Duplicate value for field '<field>'") or by redacting the value when building the response in the controller code that sends the res.status(409).json(...), keep the error code "DUPLICATE_VALUE" for clients to handle, and optionally add a toggle or whitelist (based on field name) so only non‑sensitive fields include the raw value if explicitly allowed.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 2be05914-0aa1-4072-ba1a-293d9fa510d1
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (5)
- backend/controllers/data.controller.js
- backend/controllers/schema.controller.js
- backend/controllers/storage.controller.js
- backend/utils/indexManager.js
- backend/utils/input.validation.js
```js
fullProject.collections.push({ name: name, model: transformedFields });
await fullProject.save();

const projectObj = fullProject.toObject();
delete projectObj.publishableKey;
delete projectObj.secretKey;
delete projectObj.jwtSecret;
try {
  const collectionConfig = fullProject.collections.find(
    (c) => c.name === name,
  );

  const connection = await getConnection(fullProject._id);
  const Model = getCompiledModel(
    connection,
    collectionConfig,
    fullProject._id,
    fullProject.resources.db.isExternal,
  );

  await createUniqueIndexes(Model, collectionConfig.model);
} catch (error) {
  const compiledCollectionName = fullProject.resources.db.isExternal
    ? name
    : `${fullProject._id}_${name}`;

  const connection = await getConnection(fullProject._id);
  clearCompiledModel(connection, compiledCollectionName);

  fullProject.collections = fullProject.collections.filter(
    (c) => c.name !== name,
  );
  await fullProject.save();

  return res.status(400).json({ error: error.message });
}
```
Schema is persisted before index creation, risking inconsistent state.
The collection config is saved to the database (line 91) before attempting to create unique indexes (line 106). If the process crashes between these operations, the schema exists without its unique indexes.
Consider deferring the save until after successful index creation, or marking the schema as "pending" until indexes are confirmed.
💡 Suggested reordering to improve consistency
```diff
-  fullProject.collections.push({ name: name, model: transformedFields });
-  await fullProject.save();
-
   try {
-    const collectionConfig = fullProject.collections.find(
-      (c) => c.name === name,
-    );
+    const collectionConfig = { name: name, model: transformedFields };
     const connection = await getConnection(fullProject._id);
     const Model = getCompiledModel(
       connection,
       collectionConfig,
       fullProject._id,
       fullProject.resources.db.isExternal,
     );
     await createUniqueIndexes(Model, collectionConfig.model);
+
+    // Only persist after successful index creation
+    fullProject.collections.push(collectionConfig);
+    await fullProject.save();
   } catch (error) {
     const compiledCollectionName = fullProject.resources.db.isExternal
       ? name
       : `${fullProject._id}_${name}`;
     const connection = await getConnection(fullProject._id);
     clearCompiledModel(connection, compiledCollectionName);
-    fullProject.collections = fullProject.collections.filter(
-      (c) => c.name !== name,
-    );
-    await fullProject.save();
-
     return res.status(400).json({ error: error.message });
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/controllers/schema.controller.js` around lines 90 - 121, Persisting
fullProject.collections before creating unique indexes can leave the DB in an
inconsistent state if index creation fails; instead, build a local
collectionConfig object (matching the shape used by getCompiledModel), call
getConnection(...) and getCompiledModel(...) with that transient config, run
createUniqueIndexes(Model, collectionConfig.model), and only after success push
the collection into fullProject.collections and await fullProject.save(); on
error still call clearCompiledModel(...) using the computed
compiledCollectionName and return the error without persisting; alternatively
implement a "pending" flag on the collection and only mark it active after
createUniqueIndexes succeeds — update the logic around fullProject.collections,
getCompiledModel, createUniqueIndexes, clearCompiledModel and getConnection
accordingly.
```js
// ATOMIC QUOTA RESERVATION
if (!external) {
  const result = await Project.updateOne(
    {
      _id: project._id,
      $expr: {
        $lte: [{ $add: ["$storageUsed", file.size] }, "$storageLimit"],
      },
    },
    { $inc: { storageUsed: file.size } },
  );

  if (result.matchedCount === 0) {
    return res
      .status(403)
      .json({ error: "Internal storage limit exceeded." });
  }
}

const supabase = await getStorage(project);
```
Rollback is missing for failures after quota reservation but before upload completes.
Line 31 reserves quota, but only uploadError triggers rollback. If Line 48 (getStorage) fails, storageUsed remains inflated.
Proposed fix
```diff
 module.exports.uploadFile = async (req, res) => {
+  let quotaReserved = false;
+  let reservedSize = 0;
+  let project;
+  let external = false;
   try {
     const file = req.file;
@@
-    const project = req.project;
-    const external = isProjectStorageExternal(project);
+    project = req.project;
+    external = isProjectStorageExternal(project);
@@
     if (!external) {
       const result = await Project.updateOne(
@@
       if (result.matchedCount === 0) {
         return res
           .status(403)
           .json({ error: "Internal storage limit exceeded." });
       }
+      quotaReserved = true;
+      reservedSize = file.size;
     }
@@
-    if (uploadError) {
-      // ROLLBACK QUOTA
-      if (!external) {
-        await Project.updateOne(
-          { _id: project._id },
-          { $inc: { storageUsed: -file.size } },
-        );
-      }
-      throw uploadError;
-    }
+    if (uploadError) throw uploadError;
@@
   } catch (err) {
+    if (quotaReserved && !external) {
+      await Project.updateOne(
+        { _id: project._id },
+        { $inc: { storageUsed: -reservedSize } },
+      );
+    }
     return res.status(500).json({
       error: "File upload failed",
       details: process.env.NODE_ENV === "development" ? err.message : undefined,
     });
   }
 };
```
};Also applies to: 81-86
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/controllers/storage.controller.js` around lines 29 - 49, The quota
reservation using Project.updateOne (the atomic increment of storageUsed in the
storage reserve branch) needs a compensating rollback when subsequent steps fail
(e.g., getStorage(), upload, or other errors) so storageUsed isn't left
inflated; modify the flow around the reservation to track when a reservation was
made (e.g., a boolean reserved=true after successful update) and wrap subsequent
operations (getStorage, the upload logic around uploadError handling, etc.) in
try/catch/finally so that on any error you call Project.updateOne({ _id:
project._id }, { $inc: { storageUsed: -file.size } }) if reserved is true (and
only if the original update matched), then rethrow or return the error
response—ensure the rollback logic mirrors the reservation and is used in both
the getStorage failure path and the existing uploadError path (affecting the
same code paths around Project.updateOne and the upload functions).
```js
const { data, error } = await supabase.storage
  .from(bucket)
  .list(path.split("/")[0], {
    search: path.split("/").slice(1).join("/"),
  });

if (error) throw error;
if (data?.length) {
  fileSize = data[0].metadata?.size || 0;
}
```
🧩 Analysis chain
🌐 Web query:
In Supabase JavaScript Storage API, does storage.from(bucket).list(path, { search }) return exact filename matches or partial matches? What is the recommended way to fetch exact object metadata/size before delete?
💡 Result:
In the Supabase JavaScript Storage API, storage.from(bucket).list(path, { search }) returns partial matches for the search parameter. The search option filters objects whose names contain the search string (substring match), not exact filename matches. This is evidenced by the API implementation added in 2022 (storage-js PR #56 and storage-api PR #127), example usage searching for 'jon', and backend API docs describing search as an optional string for filtering within a prefix. The recommended way to fetch exact object metadata/size before delete is storage.from(bucket).info(path). This method retrieves detailed object information including id, metadata (with size), created_at, updated_at, etc., without downloading the file content. It uses the /object/info/{bucket}/{path} endpoint (added in storage-js PR #207, released in v2.7.0). The returned metadata includes file size for confirmation before calling remove(['path']). Example usage: const { data: fileInfo } = await supabase.storage.from('bucket').info('path/to/file'); console.log(fileInfo.metadata.size); // e.g., 12345 if (confirmDelete(fileInfo)) { await supabase.storage.from('bucket').remove(['path/to/file']); } For exact listing without partial search, use list(path) without search or with empty search: '', combined with client-side filtering on name for exact matches.
Citations:
- 1: https://zone-www-dot-ge97hnogn-supabase.vercel.app/docs/reference/javascript/storage-from-list
- 2: https://supabase.com/docs/reference/self-hosting-storage/search-for-objects-under-a-prefix
- 3: feat: add ability to search files and folders by prefix supabase/storage#127
- 4: feat: custom-metadata, exists, info methods supabase/storage-js#207
- 5: https://supabase.com/docs/reference/javascript/storage-from-info
- 6: https://supabase.com/docs/reference/self-hosting-storage/retrieve-object-info
- 7: file size supabase/storage#68
Use .info() instead of .list() with search for exact file metadata before quota decrement.
The current implementation uses list() with a search parameter, which returns partial (substring) matches. This can cause the wrong file size to be subtracted from storageUsed. Use supabase.storage.from(bucket).info(path) to retrieve exact metadata:
```js
const { data: fileInfo, error } = await supabase.storage
  .from(bucket)
  .info(path);

if (error) throw error;
if (fileInfo?.metadata?.size) {
  fileSize = fileInfo.metadata.size;
}
```

Also applies to: 130-134
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/controllers/storage.controller.js` around lines 112 - 121, Replace
the use of supabase.storage.from(bucket).list(..., { search }) that returns
partial matches with the exact-metadata call
supabase.storage.from(bucket).info(path) when determining fileSize;
specifically, in the block using fileSize (and the similar block at lines
referenced in the review), call .info(path), check for errors, and set fileSize
= fileInfo.metadata.size (or 0 if absent) before decrementing storageUsed so the
exact file size is subtracted. Ensure you update the variable lookup that
currently reads data[0].metadata.size to use the returned fileInfo.metadata.size
and keep existing error handling (throw on error).
```js
async function createUniqueIndexes(Model, fields = []) {
  const supportedTypes = new Set(["String", "Number", "Boolean", "Date"]);

  for (const field of fields) {
    if (!field.unique) continue;
    if (!supportedTypes.has(field.type)) continue;

    // Check for duplicate values before creating the index
    const duplicates = await findDuplicates(Model, field.key);

    if (duplicates.length > 0) {
      throw new Error(
        `Cannot add unique constraint: ${duplicates.length} duplicate values found for field '${field.key}'`,
      );
    }

    // Create MongoDB unique index
    await Model.collection.createIndex(
      { [field.key]: 1 },
      {
        unique: true,
        sparse: !field.required,
        name: `${field.key}_1`,
      },
    );
  }
}
```
Partial index creation failure leaves database in inconsistent state.
If createUniqueIndexes fails on the second or later field, indexes created for earlier fields remain in MongoDB, but the rollback in schema.controller.js removes the collection config. This leaves orphaned indexes.
Consider either:
- Collecting all indexes to create and using a transaction/batch approach
- Dropping successfully created indexes on failure
🛡️ Proposed fix to track and rollback created indexes
```diff
 async function createUniqueIndexes(Model, fields = []) {
   const supportedTypes = new Set(["String", "Number", "Boolean", "Date"]);
+  const createdIndexes = [];
   for (const field of fields) {
     if (!field.unique) continue;
     if (!supportedTypes.has(field.type)) continue;
     // Check for duplicate values before creating the index
     const duplicates = await findDuplicates(Model, field.key);
     if (duplicates.length > 0) {
+      // Rollback previously created indexes
+      for (const indexName of createdIndexes) {
+        try {
+          await Model.collection.dropIndex(indexName);
+        } catch (e) {
+          console.error(`Failed to rollback index ${indexName}:`, e.message);
+        }
+      }
       throw new Error(
         `Cannot add unique constraint: ${duplicates.length} duplicate values found for field '${field.key}'`,
       );
     }
     // Create MongoDB unique index
-    await Model.collection.createIndex(
+    const indexName = `${field.key}_1`;
+    await Model.collection.createIndex(
       { [field.key]: 1 },
       {
         unique: true,
         sparse: !field.required,
-        name: `${field.key}_1`,
+        name: indexName,
       },
     );
+    createdIndexes.push(indexName);
   }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/utils/indexManager.js` around lines 20 - 46, createUniqueIndexes can
leave orphaned indexes if a later index creation fails; modify
createUniqueIndexes to track each created index (e.g., push returned index name
from Model.collection.createIndex into a local array) and wrap the loop in
try/catch so that on any error you iterate the tracked index names and call
Model.collection.dropIndex for each to roll back partial progress before
rethrowing the error; reference createUniqueIndexes, findDuplicates,
Model.collection.createIndex and Model.collection.dropIndex when implementing
the tracking and rollback (alternatively, if the MongoDB deployment supports
multi-document transactions and collection-level index operations in a session,
create indexes in a single transactional/batch operation instead).
Hey @Special7ka! Thank you so much for putting in the effort for this massive update! 🙏 We really appreciate these huge improvements.

I have to give you a quick heads-up: just a few hours ago, our entire repository went through a structural leap (Release v0.3.0). We completely migrated from the old monolithic backend/frontend architecture to a modern NPM Workspaces Microservices Monorepo architecture. Since there are over 1000 lines of changes here, merging this directly would unfortunately break our new apps/ ecosystem due to heavy conflicts.

Could you please fetch the latest main branch, move your updated code into the new workspaces (apps/web-dashboard, apps/dashboard-api, or apps/public-api), and resolve the conflicts?

🛠️ How to run the new architecture locally: run npm install directly at the root of the repository (this will symlink all workspaces and shared @urbackend/common packages).
🚀 Add Unique Field Constraints (Backend)
✅ Implemented

- `unique` property on schema fields

💡 Notes
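As a rough illustration of the implemented behavior, a schema field carrying `unique: true` maps to a MongoDB unique index that is sparse when the field is optional. The field shape (`{ key, type, required, unique }`) and the helper name below are assumptions for illustration, not the PR's actual code:

```javascript
// Sketch: translate a schema field into MongoDB createIndex arguments,
// mirroring the PR's unique + sparse behavior for optional fields.
function buildIndexSpec(field) {
  return {
    keys: { [field.key]: 1 }, // ascending single-field index
    options: {
      unique: true,
      sparse: !field.required, // optional fields skip missing/null values
      name: `${field.key}_1`,  // conventional MongoDB index name
    },
  };
}
```

The `sparse: true` option matters for optional fields: without it, two documents both missing the field would collide on the indexed `null` value.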
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes