feat: user delete backend #8878

Open
csiyang wants to merge 20 commits into main from
siyangcao/nes-1454-deleteuser-backend

Conversation

@csiyang
Contributor

@csiyang csiyang commented Mar 19, 2026

Add user deletion endpoints with interop pattern:

  • api-users: userDeleteCheck mutation, userDeleteConfirm subscription
  • api-journeys-modern: interop mutations for journeys data cleanup
  • Firebase-only account deletion support
  • Audit logging for all deletions
  • Comprehensive test coverage

Summary by CodeRabbit

  • New Features

    • User deletion workflow: pre-check summary, confirm execution, and real-time progress updates with streamed operation logs.
    • Automatic handling of associated journeys and teams with ownership transfer where applicable.
    • Delete users by database ID or email, with audit records persisted for traceability.
    • Safeguards and detailed logs to surface issues during deletion.
  • Chores

    • Adjusted supported messaging platform options.

Add user deletion endpoints with interop pattern:
- api-users: userDeleteCheck mutation, userDeleteConfirm subscription
- api-journeys-modern: interop mutations for journeys data cleanup
- Firebase-only account deletion support
- Audit logging for all deletions
- Comprehensive test coverage

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@csiyang csiyang self-assigned this Mar 19, 2026
@linear

linear bot commented Mar 19, 2026

@coderabbitai
Contributor

coderabbitai bot commented Mar 19, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds a cross-service user-deletion workflow: gateway schema additions, journeys-side evaluation & deletion services, users-side lookup/firebase/db deletion and subscription orchestration, Prisma audit-log model + migration, inter-service Apollo calls, and unit tests across services.

Changes

Cohort / File(s) Summary
API Gateway Schema
apis/api-gateway/schema.graphql
Added user-delete mutations and subscription fields, new result/log types, and UserDeleteIdType.
API Journeys Modern — schema & registration
apis/api-journeys-modern/schema.graphql, apis/api-journeys-modern/src/schema/schema.ts, apis/api-journeys-modern/src/schema/userDelete/index.ts
Registered new user-delete schema modules; removed discord, signal, weChat from MessagePlatform enum.
API Journeys Modern — services & types
apis/api-journeys-modern/src/schema/userDelete/service/*, .../userDelete/types.ts
Added checkJourneysData and deleteJourneysData with Log types, transaction ownership-transfer logic, cascading deletes, and unit tests.
API Journeys Modern — GraphQL modules
apis/api-journeys-modern/src/schema/userDelete/userDeleteJourneysCheck.ts, .../userDeleteJourneysConfirm.ts
Added GraphQL object types and mutations userDeleteJourneysCheck and userDeleteJourneysConfirm wired to services and auth rules.
API Users — schema, builder & registration
apis/api-users/schema.graphql, apis/api-users/src/schema/builder.ts, apis/api-users/src/schema/schema.ts, apis/api-users/src/schema/userDelete/index.ts
Enabled subscription support, added userDeleteCheck mutation and userDeleteConfirm subscription, and registered userDelete modules.
API Users — services & types
apis/api-users/src/schema/userDelete/service/*
Added lookupUser, deleteFirebaseUser, deleteUserData, journeysInterop (Apollo interop with timeout), logging utilities, barrels, and unit tests.
API Users — GraphQL resolvers & subscription
apis/api-users/src/schema/userDelete/userDeleteCheck.ts, apis/api-users/src/schema/userDelete/userDeleteConfirm.ts
Added userDeleteCheck mutation and userDeleteConfirm subscription that stream progress logs and orchestrate lookup → journeys confirm → Firebase/db deletion → audit-log updates.
Database — Prisma model & migration
libs/prisma/users/db/schema.prisma, libs/prisma/users/db/migrations/.../migration.sql, libs/prisma/users/db/migrations/migration_lock.toml
Added UserDeleteAuditLog Prisma model and migration SQL; minor migration_lock formatting tweak.
Infrastructure
apis/api-users/infrastructure/locals.tf
Added GATEWAY_URL to environment variables list.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant APIUsers as API Users
    participant APIJourneys as API Journeys Modern
    participant PrismaDB as Prisma Database
    participant FirebaseAuth as Firebase Auth

    Client->>APIUsers: userDeleteCheck(idType,id)
    APIUsers->>PrismaDB: lookup user by id/email
    PrismaDB-->>APIUsers: user record or null
    APIUsers->>FirebaseAuth: check Firebase existence/uid
    FirebaseAuth-->>APIUsers: firebase metadata
    APIUsers->>APIJourneys: callJourneysCheck(userId)
    APIJourneys->>PrismaDB: evaluate journeys/teams (counts)
    PrismaDB-->>APIJourneys: counts & logs
    APIJourneys-->>APIUsers: check result + logs
    APIUsers-->>Client: UserDeleteCheckResult + logs

    Client->>APIUsers: subscribe userDeleteConfirm(idType,id)
    APIUsers->>PrismaDB: lookup user & get caller
    APIUsers->>APIJourneys: callJourneysConfirm(userId)
    APIJourneys->>PrismaDB: transfer/delete journeys & cleanup
    APIJourneys-->>APIUsers: confirm result + logs
    APIUsers->>FirebaseAuth: deleteFirebaseUser(uid)
    FirebaseAuth-->>APIUsers: deletion logs/result
    APIUsers->>PrismaDB: create/update UserDeleteAuditLog and delete user row
    APIUsers-->>Client: streamed progress entries (log, done, success)
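Per the diagram, `userDeleteConfirm` streams progress entries until a terminal event. A client could consume such a stream roughly as sketched below; the entry shape, field names, and the stand-in generator are assumptions drawn from the walkthrough, not the actual resolver code.

```typescript
// Hypothetical shape of one streamed entry (log, done, success), per the
// walkthrough; names are illustrative, not the real schema.
type LogLevel = 'info' | 'warn' | 'error'

interface ProgressEntry {
  log: { message: string; level: LogLevel }
  done: boolean
  success: boolean
}

// Toy generator standing in for the real subscription resolver.
async function* fakeUserDeleteConfirm(): AsyncGenerator<ProgressEntry> {
  yield { log: { message: 'Looking up user', level: 'info' }, done: false, success: false }
  yield { log: { message: 'Journeys cleanup complete', level: 'info' }, done: false, success: false }
  yield { log: { message: 'Done', level: 'info' }, done: true, success: true }
}

// Client-side consumption: collect log messages until a terminal entry arrives.
export async function collectProgress(
  stream: AsyncGenerator<ProgressEntry>
): Promise<{ messages: string[]; success: boolean }> {
  const messages: string[] = []
  let success = false
  for await (const entry of stream) {
    messages.push(entry.log.message)
    if (entry.done) {
      success = entry.success
      break
    }
  }
  return { messages, success }
}
```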

Estimated Code Review Effort

🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2 passed)

  • Description Check: ✅ Passed (skipped because CodeRabbit's high-level summary is enabled).
  • Title Check: ✅ Passed. The pull request title 'feat: user delete backend' clearly and concisely summarizes the main change: adding backend infrastructure for user deletion across multiple subgraphs with mutations, subscriptions, and supporting services.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@github-actions
Contributor

github-actions bot commented Mar 19, 2026

Warnings
⚠️ ❗ Big PR (2434 changes)

(change count: 2434) This pull request is relatively large. If it contains multiple independent changes, splitting them into separate PRs makes review faster and easier.

Generated by 🚫 dangerJS against 2b875d5

@nx-cloud

nx-cloud bot commented Mar 19, 2026

View your CI Pipeline Execution ↗ for commit 2b875d5

Command Status Duration Result
nx run journeys-admin-e2e:e2e ✅ Succeeded 29s View ↗
nx run watch-e2e:e2e ✅ Succeeded 23s View ↗
nx run resources-e2e:e2e ✅ Succeeded 22s View ↗
nx run journeys-e2e:e2e ✅ Succeeded 21s View ↗
nx run player-e2e:e2e ✅ Succeeded 4s View ↗
nx run short-links-e2e:e2e ✅ Succeeded 4s View ↗
nx run videos-admin-e2e:e2e ✅ Succeeded 4s View ↗
nx run-many --target=vercel-alias --projects=jo... ✅ Succeeded 2s View ↗
Additional runs (20) ✅ Succeeded ... View ↗

☁️ Nx Cloud last updated this comment at 2026-03-30 21:50:39 UTC

@github-actions github-actions bot temporarily deployed to Preview - videos-admin March 19, 2026 01:52 Inactive
@github-actions github-actions bot temporarily deployed to Preview - short-links March 19, 2026 01:52 Inactive
@github-actions github-actions bot temporarily deployed to Preview - player March 19, 2026 01:52 Inactive
@github-actions github-actions bot temporarily deployed to Preview - watch March 19, 2026 01:52 Inactive
@github-actions github-actions bot requested a deployment to Preview - journeys-admin March 19, 2026 01:52 Pending
@github-actions github-actions bot temporarily deployed to Preview - journeys March 19, 2026 01:52 Inactive
@github-actions github-actions bot temporarily deployed to Preview - resources March 19, 2026 01:52 Inactive
@github-actions
Contributor

github-actions bot commented Mar 19, 2026

The latest updates on your projects.

Name Status Preview Updated (UTC)
journeys ✅ Ready journeys preview Tue Mar 31 10:44:59 NZDT 2026

@github-actions
Contributor

github-actions bot commented Mar 19, 2026

The latest updates on your projects.

Name Status Preview Updated (UTC)
short-links ✅ Ready short-links preview Tue Mar 31 10:44:35 NZDT 2026

@github-actions
Contributor

github-actions bot commented Mar 19, 2026

The latest updates on your projects.

Name Status Preview Updated (UTC)
player ✅ Ready player preview Tue Mar 31 10:44:35 NZDT 2026

@github-actions
Contributor

github-actions bot commented Mar 19, 2026

The latest updates on your projects.

Name Status Preview Updated (UTC)
videos-admin ✅ Ready videos-admin preview Tue Mar 31 10:44:50 NZDT 2026

@github-actions
Contributor

github-actions bot commented Mar 19, 2026

The latest updates on your projects.

Name Status Preview Updated (UTC)
watch ✅ Ready watch preview Tue Mar 31 10:45:04 NZDT 2026

@github-actions
Contributor

github-actions bot commented Mar 19, 2026

The latest updates on your projects.

Name Status Preview Updated (UTC)
resources ✅ Ready resources preview Tue Mar 31 10:44:55 NZDT 2026

@github-actions github-actions bot temporarily deployed to Preview - journeys March 19, 2026 01:59 Inactive
@github-actions github-actions bot temporarily deployed to Preview - journeys-admin March 19, 2026 01:59 Inactive
@github-actions github-actions bot temporarily deployed to Preview - videos-admin March 19, 2026 01:59 Inactive
@github-actions github-actions bot temporarily deployed to Preview - short-links March 19, 2026 01:59 Inactive
@github-actions github-actions bot temporarily deployed to Preview - resources March 19, 2026 01:59 Inactive
@github-actions github-actions bot temporarily deployed to Preview - watch March 19, 2026 01:59 Inactive
@github-actions github-actions bot temporarily deployed to Preview - player March 19, 2026 01:59 Inactive
@github-actions
Contributor

github-actions bot commented Mar 19, 2026

The latest updates on your projects.

Name Status Preview Updated (UTC)
journeys-admin ✅ Ready journeys-admin preview Tue Mar 31 10:47:21 NZDT 2026

- Parallelize Phase 3 pre-deletions and Phase 5 cleanup with Promise.all
- Add 120s timeout on interop Apollo calls via AbortSignal
- Sanitize error messages in client-facing logs (log details server-side)
- Extract isFirebaseNotFound to shared types utility
- Eliminate redundant Phase 2 re-queries (IDs collected in Phase 1)
- Wrap subscription generator in try-catch for clean error terminal events
- Add self-deletion guard (prevent superAdmin deleting own account)
- Replace caller NOT_FOUND throw with yielded error (no unhandled throws)
- Use deleteMany for bulk journey deletion instead of loop

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
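The "120s timeout on interop Apollo calls via AbortSignal" bullet in the commit message above can be sketched as a generic wrapper; the helper name and call shape are illustrative, not the PR's actual Apollo interop wiring.

```typescript
// Bound an async call with an AbortSignal that fires after timeoutMs.
// In the PR's real code the timeout is reportedly 120_000 ms; this sketch
// is a generic, testable stand-in.
export async function withAbortTimeout<T>(
  run: (signal: AbortSignal) => Promise<T>,
  timeoutMs: number
): Promise<T> {
  const controller = new AbortController()
  // Abort the in-flight call once the deadline passes.
  const timer = setTimeout(
    () => controller.abort(new Error('interop call timed out')),
    timeoutMs
  )
  try {
    return await run(controller.signal)
  } finally {
    clearTimeout(timer) // avoid a dangling timer on the fast path
  }
}
```

A fetch-based interop call would pass the provided `signal` straight through as its `signal` option.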
@github-actions github-actions bot temporarily deployed to Preview - journeys-admin March 25, 2026 23:17 Inactive

@jesus-film-bot

Ran Plan for dir: infrastructure workspace: default

Plan Failed: This project is currently locked by an unapplied plan from pull #8875. To continue, delete the lock from #8875 or apply that plan and merge the pull request.

Once the lock is released, comment atlantis plan here to re-plan.

@github-actions github-actions bot temporarily deployed to Preview - journeys-admin March 26, 2026 00:04 Inactive
@github-actions github-actions bot temporarily deployed to Preview - videos-admin March 26, 2026 00:04 Inactive
@github-actions github-actions bot temporarily deployed to Preview - resources March 26, 2026 00:04 Inactive
@github-actions github-actions bot temporarily deployed to Preview - short-links March 26, 2026 00:04 Inactive
@github-actions github-actions bot temporarily deployed to Preview - journeys March 26, 2026 00:04 Inactive


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
apis/api-users/src/schema/userDelete/userDeleteConfirm.ts (1)

236-244: Consider sanitizing the error message before exposing it to clients.

The error.message is directly included in the client-facing log entry. If the underlying error originates from Prisma, Firebase, or other services, it may contain connection strings, internal identifiers, or query details that shouldn't be exposed to clients.

🛡️ Suggested improvement
       } catch (error) {
         const message = error instanceof Error ? error.message : 'Unknown error'
         console.error('userDeleteConfirm subscription error:', error)
         yield {
-          log: createLog(`❌ Unexpected error: ${message}`, 'error'),
+          log: createLog('❌ An unexpected error occurred during deletion', 'error'),
           done: true,
           success: false
         }
       }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apis/api-users/src/schema/userDelete/userDeleteConfirm.ts` around lines 236 -
244, In the catch block inside userDeleteConfirm replace direct inclusion of
error.message in the client-facing createLog entry with a sanitized/generic
message (e.g., "Unexpected server error while deleting user") and optionally a
short safe identifier (e.g., error code or truncated hash) to aid support; keep
full error details (error object and message) in server logs via console.error
or processLogger for debugging. Specifically, edit the catch in
userDeleteConfirm where createLog is called so the client gets a non-sensitive
message while full error details remain logged server-side.
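The sanitization this comment asks for can be sketched as a small helper; the function name, correlation-id scheme, and log wording are illustrative, not the PR's actual code.

```typescript
import { createHash } from 'node:crypto'

// Return a generic client-facing message plus a short, safe correlation id,
// while logging the full error server-side, as the review suggests.
export function sanitizeError(error: unknown): {
  clientMessage: string
  ref: string
} {
  const detail = error instanceof Error ? error.message : String(error)
  // A truncated hash gives support a handle without leaking the raw message.
  const ref = createHash('sha256').update(detail).digest('hex').slice(0, 8)
  // Full detail stays in server logs only.
  console.error(`userDeleteConfirm error [${ref}]:`, error)
  return { clientMessage: 'An unexpected error occurred during deletion', ref }
}
```

The client log entry would then carry `clientMessage` and `ref` instead of `error.message`.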
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apis/api-journeys-modern/src/schema/userDelete/service/deleteJourneysData.ts`:
- Around line 55-72: The logs produced inside the Prisma transaction (calls to
logs.push with createLog in the branch around tx.userJourney.updateMany and
similar blocks at lines ~101-118) escape rollback; instead collect them into a
local buffer (e.g., transactionLogs) while inside the transaction and do NOT
mutate the outer logs array there. After the tx completes successfully, append
transactionLogs to the outer logs array (or call logs.push for each buffered
entry). Update all occurrences where createLog + logs.push are used inside the
transaction (including the ownership transfer branch referencing uj, nextOwner,
and any manager-transfer code around 101-118) to use this buffering pattern.
- Around line 52-72: The selection logic for nextOwner in deleteJourneysData
incorrectly prefers an editor because const nextOwner = others.find((o) =>
o.role === 'editor') ?? others[0] should first check for an existing owner;
change it to prefer an owner then an editor (e.g., find owner first, then
editor, then fallback to others[0]) and handle the case where others[0] may be
undefined; update the subsequent role check and tx.userJourney.updateMany call
to use this corrected nextOwner variable and only promote when there is no
existing owner.
- Around line 140-181: Revalidate the deletion sets immediately before Phase 3
(pre-delete heavy child records) and Phase 4 by re-querying current journeys and
teams using the same "accepted-membership" predicate you used in Phase 1:
refresh journeyIdsToDelete and teamIdsToDelete in deleteJourneysData.ts (the
variables journeyIdsToDelete and teamIdsToDelete) and short-circuit if the
refreshed arrays are empty; use the refreshed journeyIdsToDelete when deleting
events, journeyVisitor, action, block and journeys (not just before
prisma.journey.deleteMany) so you never operate on a stale snapshot, and use the
refreshed teamIdsToDelete before prisma.team.deleteMany.

---

Nitpick comments:
In `@apis/api-users/src/schema/userDelete/userDeleteConfirm.ts`:
- Around line 236-244: In the catch block inside userDeleteConfirm replace
direct inclusion of error.message in the client-facing createLog entry with a
sanitized/generic message (e.g., "Unexpected server error while deleting user")
and optionally a short safe identifier (e.g., error code or truncated hash) to
aid support; keep full error details (error object and message) in server logs
via console.error or processLogger for debugging. Specifically, edit the catch
in userDeleteConfirm where createLog is called so the client gets a
non-sensitive message while full error details remain logged server-side.
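The corrected next-owner preference described in the second inline comment above (owner first, then editor, then any remaining member, possibly none) can be sketched as a pure function; the role names and member shape are assumptions, not the actual `UserJourney` model.

```typescript
// Hypothetical member shape for illustration only.
interface Member {
  userId: string
  role: 'owner' | 'editor' | 'viewer'
}

// Prefer an existing owner, then an editor, then any remaining member.
// Callers should only promote the result to owner when no owner existed.
export function pickNextOwner(others: Member[]): Member | undefined {
  return (
    others.find((o) => o.role === 'owner') ??
    others.find((o) => o.role === 'editor') ??
    others[0] // undefined when no other members remain
  )
}
```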

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 82a4485a-db3c-46d1-85b0-0f07d9697e0d

📥 Commits

Reviewing files that changed from the base of the PR and between 2359f5a and 4d366d8.

📒 Files selected for processing (5)
  • apis/api-journeys-modern/src/schema/userDelete/service/deleteJourneysData.ts
  • apis/api-users/src/schema/userDelete/service/deleteUserData.ts
  • apis/api-users/src/schema/userDelete/service/journeysInterop.ts
  • apis/api-users/src/schema/userDelete/service/lookupUser.ts
  • apis/api-users/src/schema/userDelete/userDeleteConfirm.ts
✅ Files skipped from review due to trivial changes (1)
  • apis/api-users/src/schema/userDelete/service/deleteUserData.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • apis/api-users/src/schema/userDelete/service/journeysInterop.ts

Comment on lines +140 to +181
if (journeyIdsToDelete.length > 0) {
  // Pre-delete heavy child records in parallel
  const [eventCount, journeyVisitorCount, actionCount] = await Promise.all([
    prisma.event.deleteMany({
      where: { journeyId: { in: journeyIdsToDelete } }
    }),
    prisma.journeyVisitor.deleteMany({
      where: { journeyId: { in: journeyIdsToDelete } }
    }),
    prisma.action.deleteMany({
      where: { journeyId: { in: journeyIdsToDelete } }
    })
  ])
  // Blocks after actions (actions reference blocks)
  const blockCount = await prisma.block.deleteMany({
    where: { journeyId: { in: journeyIdsToDelete } }
  })

  if (eventCount.count > 0)
    logs.push(createLog(`🗑️ Deleted ${eventCount.count} events`))
  if (journeyVisitorCount.count > 0)
    logs.push(
      createLog(`🗑️ Deleted ${journeyVisitorCount.count} journey visitors`)
    )
  if (actionCount.count > 0)
    logs.push(createLog(`🗑️ Deleted ${actionCount.count} actions`))
  if (blockCount.count > 0)
    logs.push(createLog(`🗑️ Deleted ${blockCount.count} blocks`))

  // Now delete the journeys (cascades are already cleared)
  await prisma.journey.deleteMany({
    where: { id: { in: journeyIdsToDelete } }
  })
  logs.push(createLog(`🗑️ Deleted ${journeyIdsToDelete.length} journeys`))
}

// Phase 4: Delete sole-member teams
if (teamIdsToDelete.length > 0) {
  await prisma.team.deleteMany({
    where: { id: { in: teamIdsToDelete } }
  })
  logs.push(createLog(`🗑️ Deleted ${teamIdsToDelete.length} teams`))


⚠️ Potential issue | 🔴 Critical

Revalidate the delete set before Phase 3/4.

journeyIdsToDelete and teamIdsToDelete are decided in Phase 1, then reused later outside that transaction. If someone accepts a pending journey invite or is added to a team in the gap, this code will still delete their journey/team from the stale snapshot. Re-check the current membership state immediately before Phase 3/4, using the same accepted-membership predicate for journeys that you used in Phase 1. Also do that before deleting events/actions/blocks; guarding only prisma.journey.deleteMany would be too late.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apis/api-journeys-modern/src/schema/userDelete/service/deleteJourneysData.ts`
around lines 140 - 181, Revalidate the deletion sets immediately before Phase 3
(pre-delete heavy child records) and Phase 4 by re-querying current journeys and
teams using the same "accepted-membership" predicate you used in Phase 1:
refresh journeyIdsToDelete and teamIdsToDelete in deleteJourneysData.ts (the
variables journeyIdsToDelete and teamIdsToDelete) and short-circuit if the
refreshed arrays are empty; use the refreshed journeyIdsToDelete when deleting
events, journeyVisitor, action, block and journeys (not just before
prisma.journey.deleteMany) so you never operate on a stale snapshot, and use the
refreshed teamIdsToDelete before prisma.team.deleteMany.
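The revalidation this comment asks for can be sketched as follows; the `Db` interface is a stand-in for the real Prisma client, and the predicate and method names are assumptions, not the PR's actual code.

```typescript
// Stand-in for the subset of the Prisma client this sketch needs.
interface Db {
  // Re-runs the same "accepted-membership" predicate used in Phase 1.
  acceptedJourneyIds(userId: string): Promise<string[]>
  deleteJourneys(ids: string[]): Promise<number>
}

// Re-check the stale Phase 1 snapshot immediately before the destructive
// phase, so journeys whose membership changed in the gap are not deleted.
export async function deleteJourneysRevalidated(
  db: Db,
  userId: string,
  staleIds: string[]
): Promise<number> {
  const current = new Set(await db.acceptedJourneyIds(userId))
  const fresh = staleIds.filter((id) => current.has(id))
  if (fresh.length === 0) return 0 // short-circuit: nothing left to delete
  // In the real code, the same refreshed ids would also gate the
  // event/journeyVisitor/action/block pre-deletes, not just the journeys.
  return db.deleteJourneys(fresh)
}
```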


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
apis/api-users/src/schema/userDelete/userDeleteConfirm.ts (1)

146-148: Consider logging the swallowed error for debugging.

The best-effort rationale is sound, but silently swallowing the error makes it difficult to diagnose audit-log update failures in production.

💡 Optional: log at warn level
           } catch {
-            // best-effort
+            // best-effort — log but don't fail the operation
+            console.warn('Failed to update audit log after Firebase deletion')
           }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apis/api-users/src/schema/userDelete/userDeleteConfirm.ts` around lines 146 -
148, The catch block in userDeleteConfirm.ts currently swallows errors ("catch {
// best-effort }"); change it to capture the thrown error (e.g., "catch (err)")
and log it at warn level with contextual info to aid debugging—use the existing
logger/processLogger (or the module's audit logger) and include a descriptive
message like "Failed to update audit log for userDeleteConfirm" plus the error
object so stack/metadata are preserved.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apis/api-users/src/schema/userDelete/userDeleteConfirm.ts`:
- Around line 150-158: The final yield currently creates a log via
createLog(...) but does not set the log level to 'error' when the Firebase
deletion failed (hasError === true); update that call in the yield returned by
the generator in userDeleteConfirm (the block that sets log: createLog(...),
done: true, success: !hasError) so that createLog receives 'error' as the second
argument when hasError is true (and retains the normal/default level when
hasError is false) to match other error paths.
- Around line 226-234: The emitted log always uses the default level; when user
deletion fails we must set the log level to "error". Update the yield that
builds the log (the createLog call in the userDeleteConfirm flow) to pass an
explicit level parameter based on userResult.success (e.g., level:
userResult.success ? 'info' : 'error' or similar) so that failures use 'error'
level while successes keep the normal level; locate the createLog invocation
around the yield returning done: true, success: userResult.success and adjust
its arguments accordingly.

---

Nitpick comments:
In `@apis/api-users/src/schema/userDelete/userDeleteConfirm.ts`:
- Around line 146-148: The catch block in userDeleteConfirm.ts currently
swallows errors ("catch { // best-effort }"); change it to capture the thrown
error (e.g., "catch (err)") and log it at warn level with contextual info to aid
debugging—use the existing logger/processLogger (or the module's audit logger)
and include a descriptive message like "Failed to update audit log for
userDeleteConfirm" plus the error object so stack/metadata are preserved.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 1dda5dad-b9ac-4721-95a9-22a4232204ff

📥 Commits

Reviewing files that changed from the base of the PR and between 4d366d8 and 826d5ee.

📒 Files selected for processing (1)
  • apis/api-users/src/schema/userDelete/userDeleteConfirm.ts


…olver

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

