📝 Walkthrough

Consolidates CI workflows, adds a Redis-backed custom throttler and storage, applies soft-delete fields and entity relation updates, extends notification enums, adds Stellar multi-recipient payments, removes PWA/service worker code, and performs widespread string-quoting/formatting and test adjustments across backend, frontend, and contracts.
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant Client
  participant App as NestApp
  participant Guard as CustomThrottlerGuard
  participant Storage as CustomThrottlerRedisStorage
  participant Redis
  Client->>App: HTTP request
  App->>Guard: handleRequest(ThrottlerRequest)
  Guard->>Storage: increment(key, ttl, limit, blockDuration, name)
  Storage->>Redis: INCR key
  Redis-->>Storage: newCount
  Storage->>Redis: PEXPIRE key (on first hit)
  Redis-->>Storage: pttl
  Storage-->>Guard: { totalHits, isBlocked, timeToExpire, blockedUntil }
  Guard-->>App: allow or reject (429)
  App-->>Client: HTTP 200 or 429 (with X-RateLimit headers)
```
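As a rough illustration of the diagram's storage step, here is a dependency-free sketch. The `RedisLike` interface, record shape, and class names are assumptions standing in for the real Redis client and the project's actual types:

```typescript
// Sketch of the increment flow from the diagram: INCR, PEXPIRE on the
// first hit, then PTTL to report time-to-expire. RedisLike stands in
// for a real Redis client such as ioredis.
interface RedisLike {
  incr(key: string): Promise<number>;
  pexpire(key: string, ttlMs: number): Promise<void>;
  pttl(key: string): Promise<number>;
}

interface ThrottlerRecord {
  totalHits: number;
  isBlocked: boolean;
  timeToExpire: number;
}

class ThrottlerStorageSketch {
  constructor(private readonly redis: RedisLike) {}

  async increment(key: string, ttlMs: number, limit: number): Promise<ThrottlerRecord> {
    const totalHits = await this.redis.incr(key);
    if (totalHits === 1) {
      // Only the first hit in a window sets the expiry.
      await this.redis.pexpire(key, ttlMs);
    }
    const timeToExpire = await this.redis.pttl(key);
    return { totalHits, isBlocked: totalHits > limit, timeToExpire };
  }
}

// In-memory fake so the sketch can run without a Redis server.
class FakeRedis implements RedisLike {
  private counts = new Map<string, number>();
  private ttls = new Map<string, number>();
  async incr(key: string): Promise<number> {
    const n = (this.counts.get(key) ?? 0) + 1;
    this.counts.set(key, n);
    return n;
  }
  async pexpire(key: string, ttlMs: number): Promise<void> {
    this.ttls.set(key, ttlMs);
  }
  async pttl(key: string): Promise<number> {
    return this.ttls.get(key) ?? -1;
  }
}
```

With a limit of 2, the third `increment` on the same key reports `isBlocked: true`, which the guard would translate into a 429.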
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: 2 passed, 1 failed (inconclusive)
Actionable comments posted: 6
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (8)
backend/src/track-play-count/track-play-count.spec.ts (1)
52-56: ⚠️ Potential issue | 🔴 Critical

`recordPlay` calls use an outdated signature and are missing required fields. All four test calls (lines 52-56, 66-70, 83-87, 98-102) pass 3 arguments, but the current signature accepts only 2: `recordPlay(dto: RecordPlayDto, ip: string)`. Additionally, all DTOs are missing the required `sessionId` field, and `userId` is being passed as a separate argument instead of inside the `dto`. Update each call to match the current signature and include `sessionId` in the DTO.

Suggested update pattern:

```diff
-      const result = await service.recordPlay(
-        { trackId: "track-1", listenDuration: 60, completedFull: true, source: PlaySource.DIRECT },
-        "user-1",
-        "127.0.0.1",
-      );
+      const result = await service.recordPlay(
+        {
+          trackId: "track-1",
+          userId: "user-1",
+          sessionId: "sess-1",
+          listenDuration: 60,
+          completedFull: true,
+          source: PlaySource.DIRECT,
+        },
+        "127.0.0.1",
+      );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `backend/src/track-play-count/track-play-count.spec.ts` around lines 52-56: the test calls to recordPlay use an old 3-argument signature and omit required fields; update each invocation of service.recordPlay to the current signature recordPlay(dto: RecordPlayDto, ip: string) by moving userId into the dto object and adding a sessionId property to the dto (e.g., include sessionId: 'some-session-id' and userId: 'user-1' inside the DTO), and remove the separate userId third argument so each call passes exactly (dto, ip).

backend/src/scheduled-releases/presaves.service.ts (1)
284-289: ⚠️ Potential issue | 🟠 Major

`recentPreSaves` is counting the wrong side of the cutoff. Line 288 uses `LessThanOrEqual(...)`, which returns pre-saves from 7+ days ago. That inverts the metric and undercounts genuinely recent activity.

💡 Minimal fix:

```diff
-        createdAt: LessThanOrEqual(
+        createdAt: MoreThanOrEqual(
           new Date(Date.now() - 7 * 24 * 60 * 60 * 1000),
         ),
```

Update the TypeORM import to include `MoreThanOrEqual`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `backend/src/scheduled-releases/presaves.service.ts` around lines 284-289: the count for recent pre-saves is using the wrong comparator; replace the LessThanOrEqual cutoff in the preSaveRepository.count call (variable recentPreSaves) with MoreThanOrEqual so it selects items created within the last 7 days, and update the TypeORM import to include MoreThanOrEqual instead of (or in addition to) LessThanOrEqual so the symbol is available in presaves.service.ts.

backend/src/scheduled-releases/scheduled-releases.service.ts (1)
284-289: ⚠️ Potential issue | 🟠 Major

`recentPreSaves` is counting the wrong side of the cutoff. Line 288 uses `LessThanOrEqual(...)`, which returns pre-saves from 7+ days ago. That inverts the metric and undercounts genuinely recent activity.

💡 Minimal fix:

```diff
-        createdAt: LessThanOrEqual(
+        createdAt: MoreThanOrEqual(
           new Date(Date.now() - 7 * 24 * 60 * 60 * 1000),
         ),
```

Update the TypeORM import to include `MoreThanOrEqual`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `backend/src/scheduled-releases/scheduled-releases.service.ts` around lines 284-289: the count for recentPreSaves is using LessThanOrEqual, which selects pre-saves older than the 7-day cutoff; change the query to use MoreThanOrEqual from TypeORM and update imports accordingly so preSaveRepository.count({ where: { trackId, createdAt: MoreThanOrEqual(new Date(Date.now() - 7 * 24 * 60 * 60 * 1000)) } }) returns items within the last 7 days (ensure MoreThanOrEqual is imported alongside LessThanOrEqual).

backend/src/common/guards/throttler.guard.spec.ts (1)
38-75: ⚠️ Potential issue | 🟠 Major

Test coverage for CustomThrottlerGuard behavior is missing. The `throttler.guard.spec.ts` file only tests the `getClientIp` utility method. However, the implementation includes significant functionality that lacks test coverage:

- IP whitelisting logic that bypasses rate limiting (lines 34-41 in the implementation)
- Rate limit response headers (`X-RateLimit-*`, `Retry-After`) added on success and failure (lines 55-56, 63-64)
- Auth-aware limit overrides for authenticated requests (lines 47-52)
- `ThrottlerException` thrown when limits are exceeded (line 65)

The `rate-limit.utils.spec.ts` file tests utility helper functions but does not cover the guard's `handleRequest` behavior. Add tests to `throttler.guard.spec.ts` for:

- Requests exceeding rate limits throw `ThrottlerException`
- The response includes `X-RateLimit-Limit`, `X-RateLimit-Remaining`, and `X-RateLimit-Reset` headers
- Whitelisted IPs bypass throttling without consuming the limit count
- Authenticated requests with the `THROTTLE_AUTH_AWARE` decorator use adjusted limits

backend/src/tips/tips.service.ts (1)
253-271: ⚠️ Potential issue | 🔴 Critical

Missing User relationship definition causes a broken query join. The `fromUser` field on line 66 of the Tip entity is a plain string column, not a foreign key relationship. The query at line 250 attempts `.leftJoinAndSelect("tip.fromUser", "user")` on this string column, which is invalid and cannot execute the soft-delete filter at line 253.

The `as any` cast at line 266 is necessary because `fromUser` is typed as `string`, not a `User` object. The map check itself is not redundant; it is a defensive fallback because the broken join never populates `fromUser` with actual User data.

Required fixes:

- Add a `@ManyToOne(() => User)` relationship with a `fromUserId` foreign key column to the Tip entity
- Update the join to use the proper relationship: `.leftJoinAndSelect("tip.user", "user")` (or similar, based on the relationship property name)
- Remove the `as any` cast once the relationship is properly typed
- Remove the defensive null check in the map if the query filter is sufficient, or document why it's needed if the relationship is optional
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `backend/src/tips/tips.service.ts` around lines 253-271: Tip.fromUser is currently a string column, so the .leftJoinAndSelect("tip.fromUser", "user") join fails; change the Tip entity to define a proper relation: add a fromUserId column and a relation property like @ManyToOne(() => User, { nullable: true }) @JoinColumn({ name: "fromUserId" }) fromUser: User | null; (keep fromUserId as the FK column if you need the raw id). Update the query to join the relation now that fromUser is a relation, remove the as any cast on fromUser, and either drop the defensive null check in the map or keep it with a comment explaining optional relation handling. Ensure imports and types for User, ManyToOne, and JoinColumn are added/updated.

backend/src/common/guards/throttler.guard.ts (1)
68-79: ⚠️ Potential issue | 🟠 Major

Don't turn storage failures into fake 429s. This catches every error from `super.handleRequest()` and rethrows `ThrottlerException`. If Redis/storage is unhealthy, clients will see rate-limit errors instead of a 5xx, and the real outage gets masked.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `backend/src/common/guards/throttler.guard.ts` around lines 68-79: in the catch block (override of handleRequest), don't swallow all errors and convert them to ThrottlerException; only treat genuine rate-limit errors that already indicate 429. Change the logic in the catch to check the caught error (e.g., if error instanceof ThrottlerException or error?.status === 429) and only then call addRateLimitHeaders, set "Retry-After", and throw new ThrottlerException; otherwise simply rethrow the original error so storage/Redis failures surface as 5xx. Ensure references to addRateLimitHeaders and ThrottlerException remain, and avoid setting rate-limit headers for non-rate-limit errors.

backend/src/events-live-show/events.service.ts (1)
145-160: ⚠️ Potential issue | 🟠 Major

Keep RSVP membership checks inside the transaction. Both preflight `findOne()` calls race with concurrent RSVP/un-RSVP requests. Two requests can pass the check and then double-adjust `rsvpCount`, or one can surface a raw DB conflict/delete miss instead of a domain exception. Do the existence check and counter mutation in the same transaction, and translate uniqueness/delete conflicts to `ConflictException`/`NotFoundException`.

Also applies to: 171-176
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `backend/src/events-live-show/events.service.ts` around lines 145-160: move the preflight existence check into the same transaction that creates the RSVP and increments the counter: inside dataSource.transaction, use the provided transaction manager to query for an existing EventRSVP (replace rsvpRepo.findOne with manager.findOne or manager.getRepository(EventRSVP).findOne), throw ConflictException if found, then manager.create/save the new EventRSVP and manager.increment the ArtistEvent.rsvpCount; also wrap save/increment in try/catch to translate DB uniqueness or missing-row errors into ConflictException or NotFoundException respectively so concurrent RSVP/un-RSVP races produce domain exceptions instead of raw DB errors (update both the block that uses EventRSVP/ArtistEvent and the similar code at lines ~171-176).

backend/src/artiste-payout/payout-processor.service.ts (1)
20-24: ⚠️ Potential issue | 🟠 Major

Don't eagerly construct `USDC` with an empty issuer. `USDC_ISSUER_ADDRESS` defaults to `""` on lines 55-56, causing `new Asset("USDC", "")` to throw "Issuer cannot be null" at module initialization. This breaks the entire service startup even when only XLM payouts are enabled. Since the asset is only used when `payout.assetCode !== "XLM"` (lines 150-151), defer validation until a non-XLM payout is actually processed. Make the asset lazy/nullable and validate it only then.
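One way to sketch the deferred construction; `makeAsset` stands in for stellar-sdk's `new Asset("USDC", issuer)`, and all names here are illustrative:

```typescript
// Sketch: defer USDC asset construction so a missing issuer only fails
// when a non-XLM payout is actually processed, not at module init.
class LazyAsset<A> {
  private asset: A | null = null;

  constructor(
    private readonly issuer: string | undefined,
    private readonly makeAsset: (code: string, issuer: string) => A,
  ) {}

  // Throws only on first non-XLM use, never at construction time.
  get(): A {
    if (this.asset === null) {
      if (!this.issuer) {
        throw new Error("USDC_ISSUER_ADDRESS is not configured");
      }
      this.asset = this.makeAsset("USDC", this.issuer);
    }
    return this.asset;
  }
}

// Constructing with an empty/missing issuer is safe; only .get() validates.
const usdc = new LazyAsset(process.env.USDC_ISSUER_ADDRESS, (code, issuer) => ({ code, issuer }));
```

XLM-only deployments never call `usdc.get()`, so service startup no longer depends on the env var.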
🟠 Major comments (22)
backend/src/metrics/metrics.e2e.spec.ts-6-7 (1)
6-7: ⚠️ Potential issue | 🟠 Major

Avoid disabling the entire metrics e2e suite. Line 7 skips every test in this spec, so CI no longer validates `/metrics` and `/metrics/health` behavior. Please keep the suite active and fix the test wiring (e.g., the test DataSource or provider overrides) instead of skipping the whole block.

Minimal unblock diff:

```diff
-// Skipped: MetricsModule pulls TypeORM repositories that need a full DataSource in tests.
-describe.skip('Metrics Integration (e2e)', () => {
+describe('Metrics Integration (e2e)', () => {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `backend/src/metrics/metrics.e2e.spec.ts` around lines 6-7: the tests were disabled via describe.skip, which silences the whole Metrics Integration (e2e) suite; re-enable the suite by removing the skip on the describe and instead fix test wiring for MetricsModule: in the test setup for the describe block that references MetricsModule, provide a test DataSource or override the TypeORM repository providers (or mock the DataSource provider) so the module's repository injections resolve during test instantiation; specifically, update the test bootstrap that constructs the TestingModule (or app) to register a test TypeORM DataSource or call module.overrideProvider(DataSource).useValue(...) / overrideProvider(getRepositoryToken(...)).useValue(...) so the /metrics and /metrics/health endpoints can be exercised without skipping the suite.

backend/src/activities/activities.service.ts-104-112 (1)
104-112: ⚠️ Potential issue | 🟠 Major

Consolidate pagination response construction. All three methods (`getFeed()`, `getUserActivities()`, and `getPlaylistActivities()`) declare the same return type `PaginatedResponse<Activity>`, but `getFeed()` uses the constructor while `getUserActivities()` and `getPlaylistActivities()` return plain object literals. Standardize on using `new PaginatedResponse(data, {...})` for consistency across the service.

♻️ Proposed fix for getUserActivities:

```diff
-    return {
-      data,
-      meta: {
-        total,
-        page,
-        limit,
-        totalPages,
-        hasNextPage: page < totalPages,
-        hasPreviousPage: page > 1,
-        unseenCount,
-      },
-    };
+    return new PaginatedResponse(data, {
+      total,
+      page,
+      limit,
+      totalPages,
+      hasNextPage: page < totalPages,
+      hasPreviousPage: page > 1,
+      unseenCount,
+    });
```

♻️ Proposed fix for getPlaylistActivities:

```diff
-    return {
-      data,
-      meta: {
-        total,
-        page,
-        limit,
-        totalPages,
-        hasNextPage: page < totalPages,
-        hasPreviousPage: page > 1,
-        unseenCount: 0,
-      },
-    };
+    return new PaginatedResponse(data, {
+      total,
+      page,
+      limit,
+      totalPages,
+      hasNextPage: page < totalPages,
+      hasPreviousPage: page > 1,
+      unseenCount: 0,
+    });
```

Also applies to: 148-159, 233-244
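One way to keep every method on the constructor path is a static helper that derives the meta fields in one place. This is a sketch; the real class's meta shape is assumed from the review, and the `build` helper is hypothetical:

```typescript
// Sketch of a PaginatedResponse-style wrapper with a single place that
// computes totalPages / hasNextPage / hasPreviousPage.
interface PaginationMeta {
  total: number;
  page: number;
  limit: number;
  totalPages: number;
  hasNextPage: boolean;
  hasPreviousPage: boolean;
  unseenCount: number;
}

class PaginatedResponse<T> {
  constructor(
    public readonly data: T[],
    public readonly meta: PaginationMeta,
  ) {}

  // Derives the meta from raw counts so callers can't get the
  // next/previous flags out of sync.
  static build<T>(data: T[], total: number, page: number, limit: number, unseenCount = 0): PaginatedResponse<T> {
    const totalPages = Math.max(1, Math.ceil(total / limit));
    return new PaginatedResponse(data, {
      total,
      page,
      limit,
      totalPages,
      hasNextPage: page < totalPages,
      hasPreviousPage: page > 1,
      unseenCount,
    });
  }
}
```

With such a helper, all three service methods reduce to one `PaginatedResponse.build(...)` call.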
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/src/activities/activities.service.ts` around lines 104 - 112, Three methods (getFeed, getUserActivities, getPlaylistActivities) currently return the same PaginatedResponse<Activity> but are inconsistent: change the returns in getUserActivities and getPlaylistActivities to construct and return new PaginatedResponse(data, { total, page, limit, totalPages, hasNextPage: page < totalPages, hasPreviousPage: page > 1, unseenCount }) just like getFeed does; locate the return object literals in methods getUserActivities and getPlaylistActivities and replace them with the PaginatedResponse constructor, ensuring all fields (including unseenCount) and the hasNextPage/hasPreviousPage logic are preserved.frontend/src/hooks/useKeyboardShortcuts.ts-34-39 (1)
34-39: ⚠️ Potential issue | 🟠 Major

Simplify `options` typing and config resolution to avoid API friction and `enabled` fallback leaks. `options` currently reuses a type that requires `shortcuts` (already passed as the first arg), and the object merge order can let the `enabled` default be overridden unintentionally.

💡 Proposed fix:

```diff
 interface UseKeyboardShortcutsOptions {
   enabled?: boolean;
-  shortcuts: KeyboardShortcut[];
 }
+
+interface ResolvedKeyboardShortcutsConfig {
+  enabled: boolean;
+  shortcuts: KeyboardShortcut[];
+}

 export const useKeyboardShortcuts = (
   shortcuts: KeyboardShortcut[],
   options: boolean | UseKeyboardShortcutsOptions = true,
 ) => {
-  const config: UseKeyboardShortcutsOptions =
+  const config: ResolvedKeyboardShortcutsConfig =
     typeof options === 'boolean'
       ? { enabled: options, shortcuts }
-      : { enabled: options.enabled ?? true, ...options, shortcuts };
+      : { enabled: options.enabled ?? true, shortcuts };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `frontend/src/hooks/useKeyboardShortcuts.ts` around lines 34-39: the options parameter should not require shortcuts (they're already passed), and the enabled default must not leak via merge order; change the type of options to boolean | Omit<UseKeyboardShortcutsOptions, 'shortcuts'> (or a Partial without shortcuts) and build config so shortcuts is set first and enabled is applied using nullish coalescing after spreading options, e.g., for the object branch use { shortcuts, ...options, enabled: options.enabled ?? true }, and keep the boolean branch as { enabled: options, shortcuts } to ensure enabled defaults correctly and can't be unintentionally overridden.

backend/src/track-play-count/play-count.service.ts-147-149 (1)
147-149: ⚠️ Potential issue | 🟠 Major

Do not log user/session identifiers at info level. Line 148 logs `userId`/`sessionId`, which can leak user-identifying data into centralized logs. Prefer a non-identifying message or hashed/tokenized identifiers with stricter log-level controls.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `backend/src/track-play-count/play-count.service.ts` around lines 147-149: the info-level log in the PlayCountService uses sensitive identifiers in this.logger.log(`Counted play for track ${dto.trackId} by ${dto.userId ?? dto.sessionId}`); remove raw userId/sessionId from info logs: either log a non-identifying message (e.g. `Counted play for track ${dto.trackId}`) or hash/tokenize the identifier and lower the level to debug (e.g. use this.logger.debug with a hashed value produced by a helper like hashIdentifier(dto.userId ?? dto.sessionId)); ensure any changes reference the existing logger call and the dto field usage.

backend/src/track-play-count/play-count.service.ts-32-35 (1)
32-35: ⚠️ Potential issue | 🟠 Major

Avoid a hardcoded hashing salt in source code. Line 34 embeds a static salt (`"tip-tune-salt"`), which weakens pseudonymization and makes hashes predictable across environments. Read the salt from configuration/secret management instead.

Suggested fix:

```diff
   hashIp(ip: string): string {
+    const salt = process.env.IP_HASH_SALT;
+    if (!salt) {
+      throw new BadRequestException("IP hash salt is not configured");
+    }
     return crypto
       .createHash("sha256")
-      .update(ip + "tip-tune-salt")
+      .update(ip + salt)
       .digest("hex");
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `backend/src/track-play-count/play-count.service.ts` around lines 32-35: the hashing salt is hardcoded in the expression crypto.createHash("sha256").update(ip + "tip-tune-salt").digest("hex"); change this to read the salt from configuration/secret management (e.g., inject a ConfigService or use process.env.PLAY_COUNT_SALT) and use that value in the update call instead of the literal "tip-tune-salt"; also validate the salt is present at startup and fail fast or log an error if missing so pseudonymization remains environment-specific and secure.

backend/src/track-play-count/play-count.controller.ts-46-49 (1)
46-49: ⚠️ Potential issue | 🟠 Major

Configure `trust proxy` and safely parse the client IP from X-Forwarded-For. The application lacks `trust proxy` configuration in main.ts, making `x-forwarded-for` client-spoofable for dedup logic. Additionally, the code incorrectly assumes `x-forwarded-for` is always a string; Express can send it as `string | string[] | undefined`. Add `app.set('trust proxy', 1)` in main.ts (or the appropriate trust level for your deployment), then use `req.ip` with safe fallback parsing for headers.

Suggested fix:

```diff
-    const ip =
-      (req.headers["x-forwarded-for"] as string)?.split(",")[0].trim() ||
-      req.socket.remoteAddress ||
-      "0.0.0.0";
+    const xff = req.headers["x-forwarded-for"];
+    const forwardedIp = Array.isArray(xff)
+      ? xff[0]
+      : xff?.split(",")[0]?.trim();
+    const ip = req.ip || forwardedIp || req.socket.remoteAddress || "0.0.0.0";
```

Also add to main.ts (after `app` is created): `app.set('trust proxy', 1);`

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `backend/src/track-play-count/play-count.controller.ts` around lines 46-49: set the app trust proxy and stop parsing X-Forwarded-For manually: in main.ts, after the Express app is created, call app.set('trust proxy', 1) (or the appropriate trust level for your deployment); then in play-count.controller.ts stop assuming req.headers["x-forwarded-for"] is always a string and instead rely on req.ip as the primary client IP with a safe fallback to parsing headers if needed; update the logic around req.headers["x-forwarded-for"] / req.socket.remoteAddress to use req.ip and defensively handle string | string[] | undefined when falling back.

frontend/src/pages/TipReceiptPage.tsx-141-146 (1)
141-146: ⚠️ Potential issue | 🟠 Major

USDC balance always shows zero: implement USDC balance fetching in the wallet provider or remove USDC UI support. The hardcoded `usdc: 0` creates a broken UX: the TipModal UI supports an XLM/USDC currency toggle (AssetToggle, AmountSelector, and TipConfirmation all reference `walletBalance.usdc`), but the `WalletContext` only fetches the XLM balance from the Stellar account. Users will always see 0 USDC regardless of their actual holdings.

Either extend the wallet provider's `fetchBalance` to retrieve USDC balances alongside XLM, or remove the USDC UI components and hardcode the currency to XLM until full multi-asset support is implemented.

(Note: `balance.balance` correctly accesses the wallet's shape; no change needed there.)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `frontend/src/pages/TipReceiptPage.tsx` around lines 141-146: the walletBalance currently hardcodes usdc: 0 in TipReceiptPage, causing USDC to always show zero; update the WalletContext/WalletProvider fetchBalance function to parse the Stellar account balances and return { xlm: <nativeBalance>, usdc: <USDC balance> } (matching the walletBalance shape used by TipModal, AssetToggle, AmountSelector, TipConfirmation), i.e., inspect the account.balances array for the USDC asset code/issuer and set walletBalance.usdc accordingly; if you prefer not to support USDC yet, remove or disable the USDC UI paths in TipModal/AssetToggle/AmountSelector/TipConfirmation and hardcode the UI to XLM instead.

backend/src/social-sharing/referral.controller.ts-95-97 (1)
95-97: ⚠️ Potential issue | 🟠 Major

Clamp `limit` to a positive range before calling the service. The current logic allows `0`/negative values through (`Math.min(limit, 50)`), which can yield invalid or unintended leaderboard queries. Enforce `1..50`.

Suggested fix:

```diff
   async getLeaderboard(
     @Query("limit", new DefaultValuePipe(10), ParseIntPipe) limit: number,
   ): Promise<LeaderboardEntryDto[]> {
-    return this.referralService.getLeaderboard(Math.min(limit, 50));
+    const safeLimit = Math.max(1, Math.min(limit, 50));
+    return this.referralService.getLeaderboard(safeLimit);
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `backend/src/social-sharing/referral.controller.ts` around lines 95-97: the controller currently passes the raw parsed "limit" (possibly 0 or negative) into referralService.getLeaderboard; clamp the value to the inclusive range 1..50 before calling the service (e.g., replace Math.min(limit, 50) with a clamp such as Math.max(1, Math.min(limit, 50)) or an equivalent guard) so the getLeaderboard(limit) call always receives a positive limit between 1 and 50; update the parameter handling in the controller method that accepts the Query("limit") value and ensure the adjusted variable is passed to referralService.getLeaderboard.

backend/src/notifications/notification.entity.ts-19-21 (1)
19-21: ⚠️ Potential issue | 🟠 Major

Fix the enum value mismatch and add a database migration for the new notification types. `NotificationType.LICENSE_REQUEST` at line 19 maps to `"LICENCE_REQUEST"` (British spelling), creating an inconsistency with the enum key and the other `LICENSE_*` values. This will cause mismatches between code and persisted data.

Additionally, the new enum values (`LICENSE_REQUEST`, `TRACK_RELEASED`, `LICENSE_RESPONSE`) are added to the TypeScript entity but lack corresponding database migrations. The migration file `1769303000000-CreateVerificationSystem.ts` shows `VERIFICATION_UPDATE` was added with an `ALTER TYPE "notification_type_enum" ADD VALUE` statement, but the new notification types are missing equivalent migrations. Without them, inserts using these new types will fail on existing databases.

Fixes required:

1. Correct the enum value spelling:

```diff
-  LICENSE_REQUEST = "LICENCE_REQUEST",
+  LICENSE_REQUEST = "LICENSE_REQUEST",
```

2. Add a new migration to update the database enum:

```sql
ALTER TYPE "notification_type_enum" ADD VALUE 'LICENSE_REQUEST';
ALTER TYPE "notification_type_enum" ADD VALUE 'TRACK_RELEASED';
ALTER TYPE "notification_type_enum" ADD VALUE 'LICENSE_RESPONSE';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `backend/src/notifications/notification.entity.ts` around lines 19-21: the NotificationType enum contains a misspelled value (LICENCE_REQUEST) that should match the key LICENSE_REQUEST, and the new types LICENSE_REQUEST, TRACK_RELEASED, and LICENSE_RESPONSE were added in the entity without corresponding DB migrations; update the enum entry in notification.entity.ts so the string value for NotificationType.LICENSE_REQUEST is "LICENSE_REQUEST" (matching the key), and add a new database migration that issues ALTER TYPE "notification_type_enum" ADD VALUE 'LICENSE_REQUEST'; ALTER TYPE "notification_type_enum" ADD VALUE 'TRACK_RELEASED'; ALTER TYPE "notification_type_enum" ADD VALUE 'LICENSE_RESPONSE'; so existing databases accept the new values (ensure the migration file references the same enum name used by the entity).

backend/src/scheduled-releases/scheduled-releases.service.ts-231-235 (1)
231-235: ⚠️ Potential issue | 🟠 Major

Page through every follower before fanout. Line 234 only fetches `{ page: 1, limit: 100 }`, so artists with more than 100 followers silently miss the release notification. Iterate until pagination is exhausted, or add a follower lookup that returns the full recipient set for this job.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `backend/src/scheduled-releases/scheduled-releases.service.ts` around lines 231-235: the notifyFollowers method currently calls followsService.getFollowers(release.track.artist.id, { page: 1, limit: 100 }), which only fetches the first 100 followers; update notifyFollowers to page through all results (e.g., loop with page++ and accumulate until an empty or less-than-limit page, or total pages exhausted) or replace the call with a follower lookup API that returns all recipients for the given release.track.artist.id; ensure you iterate over the full accumulated follower set before performing the fanout so no followers are skipped.

backend/src/scheduled-releases/presaves.service.ts-231-235 (1)
231-235: ⚠️ Potential issue | 🟠 Major

Page through every follower before fanout. Line 234 only fetches `{ page: 1, limit: 100 }`, so artists with more than 100 followers silently miss the release notification. Iterate until pagination is exhausted, or add a follower lookup that returns the full recipient set for this job.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `backend/src/scheduled-releases/presaves.service.ts` around lines 231-235: notifyFollowers currently calls followsService.getFollowers with a single page ({ page: 1, limit: 100 }), so artists with >100 followers are missed; change notifyFollowers to page through results (call followsService.getFollowers repeatedly for release.track.artist.id using increasing page numbers and the same limit, accumulating recipients until a page returns fewer than limit or no items) or replace the call with a follower lookup API that returns the full recipient set; ensure you update the code in notifyFollowers to aggregate follower IDs before performing the fanout and handle empty/undefined responses from getFollowers.

backend/src/verification/verification.service.ts-269-270 (1)
269-270: ⚠️ Potential issue | 🟠 Major

Target the notification at the artist's user account. Line 318 writes `request.artistId` into `Notification.userId`, but `artistId` belongs to the `artists` table. Since the artist relation is loaded on line 270, this should use the linked user's id, or the notification will miss the recipient and may violate the FK.

💡 Minimal fix:

```diff
-      userId: request.artistId,
+      userId: request.artist.userId,
```

Also applies to: 317-320
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `backend/src/verification/verification.service.ts` around lines 269-270: the notification is being targeted at request.artistId (an artists PK) instead of the artist's user account; in the block where you set Notification.userId (around the code that writes request.artistId and where relations include "artist"), change it to use the linked user's id from the loaded relation (e.g. request.artist.userId or request.artist.user.id, depending on the artist model) and add a null-check for request.artist and request.artist.user to throw/log and avoid FK violations; update the same logic for the other occurrence in the 317-320 region so notifications point to the user PK instead of the artist PK.

backend/src/verification/verification.service.ts-47-53 (1)
47-53: ⚠️ Potential issue | 🟠 Major

Do not fall back to a known document encryption key. Lines 50-53 silently encrypt uploaded verification documents with a shared default when `DOCUMENT_ENCRYPTION_KEY` is missing. That turns an env misconfiguration into predictable protection for sensitive identity documents; fail fast at startup instead.

🔐 Safer startup behavior:

```diff
-    const key =
-      this.configService.get<string>("DOCUMENT_ENCRYPTION_KEY") ||
-      "default-key-32-chars-long!!!!!!";
+    const key = this.configService.get<string>("DOCUMENT_ENCRYPTION_KEY");
+    if (!key) {
+      throw new Error("DOCUMENT_ENCRYPTION_KEY must be configured");
+    }
     this.encryptionKey = crypto.scryptSync(key, "salt", 32);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `backend/src/verification/verification.service.ts` around lines 47-53: the code currently falls back to a hard-coded DOCUMENT_ENCRYPTION_KEY when reading config for encryption, which makes encryption predictable; instead, stop using the default and fail fast: in the constructor (where uploadDir and encryptionKey are set), require configService.get<string>("DOCUMENT_ENCRYPTION_KEY") to be present and throw a clear startup error if it's missing or empty, then derive this.encryptionKey with crypto.scryptSync from that value; keep the uploadDir fallback if desired, but do not default DOCUMENT_ENCRYPTION_KEY.

backend/src/artists/entities/artist.entity.ts-50-54 (1)
50-54: 🛠️ Refactor suggestion | 🟠 Major

Remove the redundant soft-delete field; use a single source of truth. Both `deletedAt` and `isDeleted` represent deletion state. While currently kept in sync during delete and restore, maintaining two independent fields creates unnecessary maintenance burden. All queries already use only `isDeleted: false`, indicating `deletedAt` from `@DeleteDateColumn` is not leveraged. Remove one field and standardize on either:

- TypeORM's `@DeleteDateColumn` with the `withDeleted` query option, or
- A manual `isDeleted` boolean (if timestamps are not needed)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/src/artists/entities/artist.entity.ts` around lines 50 - 54, Remove the redundant soft-delete field deletedAt and standardize on the existing isDeleted boolean: delete the `@DeleteDateColumn-decorated` property (deletedAt) and its import from the Artist entity, update any delete/restore logic that currently sets deletedAt to only toggle isDeleted in methods that reference deletedAt/isDeleted (e.g., softDelete/restore handlers), update queries to continue filtering on isDeleted, and add a migration to drop the deleted_at column and any tests or code that reference deletedAt to use isDeleted instead.

backend/src/tips/entities/tip.entity.ts-146-147 (1)
146-147: ⚠️ Potential issue | 🟠 Major

Missing default value for `isDeleted` will cause insert failures.

The `isDeleted` column is defined as a required boolean without a default value. This will cause database insert operations to fail unless an explicit value is always provided.

🐛 Proposed fix: Add default value

```diff
-  @Column()
-  isDeleted: boolean;
+  @Column({ default: false })
+  isDeleted: boolean;
```
Verify each finding against the current code and only fix it if needed. In `@backend/src/tips/entities/tip.entity.ts` around lines 146 - 147, The isDeleted Column in the Tip entity is required but has no default, causing inserts to fail when not provided; update the `@Column` decorator for isDeleted (in the Tip entity class) to include a default value (e.g., `@Column`({ default: false })) so new rows default to not-deleted, and add/adjust a migration if your schema is already deployed.

backend/package.json-60-60 (1)
60-60: ⚠️ Potential issue | 🟠 Major

Consolidate duplicate Stellar SDK packages.

The project includes both `stellar-sdk@^13.3.0` and `@stellar/stellar-sdk@^14.4.3`. Code is importing from both packages:

- `stellar-sdk`: payout-processor.service.ts, payouts.integration.spec.ts
- `@stellar/stellar-sdk`: stellar.service.ts, auth.service.ts

Since `stellar-sdk` was renamed to `@stellar/stellar-sdk` starting in v11.x, having both causes unnecessary bundle bloat, potential version conflicts, and maintenance complexity. Remove `stellar-sdk` from package.json and update the two affected files to import from `@stellar/stellar-sdk` instead.
Verify each finding against the current code and only fix it if needed. In `@backend/package.json` at line 60, Remove the deprecated "stellar-sdk" dependency from package.json and replace all imports from "stellar-sdk" with "@stellar/stellar-sdk" in the two affected modules: update payout-processor.service.ts (look for imports like import {...} from 'stellar-sdk') and payouts.integration.spec.ts to import from '@stellar/stellar-sdk' instead, ensure import names (class/functions) match the `@stellar/stellar-sdk` export names used in stellar.service.ts and auth.service.ts, run npm install to consolidate to `@stellar/stellar-sdk`@^14.4.3 and run tests to confirm no API mismatches.

backend/src/tips/entities/tip.entity.ts-65-66 (1)
65-66: ⚠️ Potential issue | 🟠 Major

Add `fromUserId` foreign key column and `@ManyToOne` relation to User entity.

The `fromUser` column is currently defined as a plain `string`, but the service code treats it as a User entity relation: it accesses `tip.fromUser.isDeleted` (line 266) and uses `leftJoinAndSelect("tip.fromUser", "user")` in queries. Additionally, the service queries reference `tip.fromUserId` in WHERE clauses, confirming a foreign key exists in the database.

Follow the existing pattern for the `artist`, `track`, and `goal` relations:

```typescript
@Column({ type: "uuid" })
fromUserId: string;

@ManyToOne(() => User, (user) => user.tips, { onDelete: "SET NULL" })
@JoinColumn({ name: "fromUserId" })
fromUser?: User;
```

Add the User import and consider `@Index()` on `fromUserId` for query performance.
Verify each finding against the current code and only fix it if needed. In `@backend/src/tips/entities/tip.entity.ts` around lines 65 - 66, The Tip entity currently declares fromUser as a plain string but the service treats it as a relation; add a UUID foreign-key column and a ManyToOne relation to User: introduce a fromUserId column (type: "uuid") with an optional `@Index`(), import the User entity, and replace/augment the existing fromUser field with a `@ManyToOne`(() => User, user => user.tips, { onDelete: "SET NULL" }) plus `@JoinColumn`({ name: "fromUserId" }) so tip.fromUserId and tip.fromUser behave like the other relations (see artist, track, goal patterns).

backend/src/app.module.ts-49-67 (1)
49-67: ⚠️ Potential issue | 🟠 Major

Redis client lacks lifecycle management.

The Redis client is created inline in `useFactory` without:

- Graceful shutdown: no `onApplicationShutdown` hook to call `redisClient.quit()`, risking connection leaks.
- Connection error handling: no error listener to handle connection failures gracefully.
- Reusability: the client instance is scoped to this factory and cannot be shared with other modules that may need Redis.
Consider extracting Redis client creation to a dedicated provider with proper lifecycle hooks:
♻️ Suggested approach
```typescript
// redis.provider.ts
export const REDIS_CLIENT = 'REDIS_CLIENT';

export const RedisProvider = {
  provide: REDIS_CLIENT,
  useFactory: (configService: ConfigService) => {
    const client = new Redis({
      host: configService.get<string>("REDIS_HOST", "localhost"),
      port: configService.get<number>("REDIS_PORT", 6379),
      password: configService.get<string>("REDIS_PASSWORD"),
      db: configService.get<number>("REDIS_DB", 0),
    });
    client.on('error', (err) => console.error('Redis error:', err));
    return client;
  },
  inject: [ConfigService],
};
```

Then inject `REDIS_CLIENT` into both `ThrottlerModule` and any other Redis-dependent services.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/src/app.module.ts` around lines 49 - 67, The Redis client is created inline inside the useFactory for ThrottlerModule (Redis, redisClient, CustomThrottlerRedisStorage), which prevents reuse and lacks lifecycle/error handling; extract the creation into a dedicated provider (e.g., RedisProvider with token REDIS_CLIENT) that is built via a useFactory injected with ConfigService, attach an error listener (client.on('error', ...)), and implement graceful shutdown (onApplicationShutdown or a provider-specific close that calls redisClient.quit()) so the same client can be injected into the ThrottlerModule and other services instead of creating a scoped client in useFactory.

backend/src/stellar/stellar.service.ts-148-155 (1)
148-155: ⚠️ Potential issue | 🟠 Major

Validate the payout secret before calling `fromSecret()`.

The non-null assertion turns a missing `STELLAR_PAYOUT_SECRET_KEY` into an opaque SDK exception. Fail fast with a clear config error before building the transaction.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/src/stellar/stellar.service.ts` around lines 148 - 155, In sendMultiRecipientPayment, avoid using the non-null assertion when calling StellarSdk.Keypair.fromSecret; first read the secret via this.configService.get<string>("STELLAR_PAYOUT_SECRET_KEY"), validate it (check for undefined/empty), and throw a clear, descriptive error (e.g., "Missing STELLAR_PAYOUT_SECRET_KEY") if it's not present, then pass the validated secret into StellarSdk.Keypair.fromSecret; this ensures config validation happens before invoking the SDK.

backend/src/common/guards/throttler.guard.ts-53-58 (1)
53-58: ⚠️ Potential issue | 🟠 Major

The auth-aware branch never changes the applied throttle.

The inner `limit`/`ttl` locals shadow the outer values and are discarded before `super.handleRequest()`, so authenticated traffic still uses the base settings. Also, `authorization` header presence alone is not a trustworthy auth signal once this path is wired up.

Also applies to: 86-87
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/src/common/guards/throttler.guard.ts` around lines 53 - 58, The auth-aware branch currently creates local variables limit/ttl that shadow outer values and are never used, so authenticated requests fall back to base throttling; update the block in throttler.guard.ts (the isAuthAware && this.isAuthenticated(request) branch) to compute the increased rate limits and assign them back to requestProps (or build a new props object) before calling super.handleRequest(request, response, requestProps), rather than creating unused local variables; also replace any naive "authorization header present" check with a real authentication check by calling the existing isAuthenticated(request) helper (or wiring in the actual auth validation) so the branch only applies to truly authenticated users—apply the same fix for the similar code at the other occurrence referenced around the second block (lines 86-87).

backend/src/stellar/stellar.service.ts-44-67 (1)
44-67: ⚠️ Potential issue | 🟠 Major

Normalize amounts before comparing them.

`op.amount !== amount` compares raw strings. Horizon returns canonical 7-decimal amounts, so a valid payment like `1.0000000` will fail verification when callers pass `1`. Compare normalized decimal values or stroops instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/src/stellar/stellar.service.ts` around lines 44 - 67, The comparison op.amount !== amount is using raw strings; normalize both sides before comparing by converting amounts to a canonical numeric form (e.g., parse Decimal to a fixed 7-decimal representation or convert amounts to stroops by multiplying by 10^7 and comparing integers) so payments like "1" and "1.0000000" match; update the predicate inside the operations.records.find (referencing paymentOp, op.amount and the amount argument) to normalize/parse both op.amount and the incoming amount before the equality check.

backend/src/custom-throttler-storage-redis.ts-15-40 (1)
15-40: ⚠️ Potential issue | 🟠 Major

Separate block state must be persisted with its own TTL.

The current implementation only stores the hit counter in Redis. Once `count > limit`, the method computes `blockedUntil` but does not persist any block state. This means:

- On the next increment call, if the hit counter has expired but `blockDuration` has not, the block is lost
- Repeated blocked hits recalculate `blockedUntil = Date.now() + blockDuration`, allowing clients to be unblocked prematurely

To match the `@nestjs/throttler` contract, create a separate Redis key/field for block state with a TTL matching `blockDuration`. This allows the guard to correctly re-check `isBlocked` on subsequent requests.
Verify each finding against the current code and only fix it if needed. In `@backend/src/custom-throttler-storage-redis.ts` around lines 15 - 40, The increment method currently only stores the hit counter; persist a separate block state key so blocks survive counter expiry: in increment (method name: increment) when count > limit, set a distinct Redis key (e.g., `${key}:block` or include throttlerName) with value of blockedUntil and set its TTL to blockDuration (use psetex/pexpire or set with PX), and on entry check that block key first to derive isBlocked/blockedUntil (override computed values) so subsequent calls honor the original block until the block-key expires; ensure timeToBlockExpire is derived from the block-key TTL (pttl) when present.
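The separate-block-key approach described above can be sketched in isolation. This is a minimal in-memory model of the Redis commands involved (`INCR`, `SET ... PX`, `PTTL`), not the actual `CustomThrottlerRedisStorage` code; the `:block` key suffix and the `increment` signature are illustrative assumptions.

```typescript
// Minimal model of a throttler storage that persists block state separately.
// A Map with expiry timestamps stands in for Redis (SET key PX ttl / PTTL).
type Entry = { value: number; expiresAt: number };

class FakeRedis {
  private store = new Map<string, Entry>();
  private alive(e: Entry | undefined): e is Entry {
    return !!e && e.expiresAt > Date.now();
  }
  incr(key: string, ttlMs: number): number {
    const e = this.store.get(key);
    if (!this.alive(e)) {
      // First hit in the window: start the counter with its own TTL.
      this.store.set(key, { value: 1, expiresAt: Date.now() + ttlMs });
      return 1;
    }
    e.value += 1;
    return e.value;
  }
  setPx(key: string, value: number, ttlMs: number): void {
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
  }
  pttl(key: string): number {
    const e = this.store.get(key);
    return this.alive(e) ? e.expiresAt - Date.now() : -2;
  }
}

function increment(
  redis: FakeRedis,
  key: string,
  ttlMs: number,
  limit: number,
  blockDurationMs: number,
): { totalHits: number; isBlocked: boolean; timeToBlockExpire: number } {
  // Check the dedicated block key first so an existing block survives
  // expiry of the hit counter and is never recomputed per request.
  const blockTtl = redis.pttl(`${key}:block`);
  if (blockTtl > 0) {
    return { totalHits: limit, isBlocked: true, timeToBlockExpire: blockTtl };
  }
  const totalHits = redis.incr(key, ttlMs);
  if (totalHits > limit) {
    // Persist the block once, with its own TTL equal to blockDuration.
    redis.setPx(`${key}:block`, 1, blockDurationMs);
    return { totalHits, isBlocked: true, timeToBlockExpire: blockDurationMs };
  }
  return { totalHits, isBlocked: false, timeToBlockExpire: 0 };
}
```

In a real storage, the same shape maps onto `INCR`, `PEXPIRE`/`SET ... PX`, and `PTTL`, ideally wrapped in a Lua script or MULTI so the check-and-set stays atomic under concurrent requests.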
🟡 Minor comments (11)
backend/src/activities/activities.service.ts-19-19 (1)
19-19: ⚠️ Potential issue | 🟡 Minor

Remove unused import.

The `paginate` helper is imported but never used in this file.

🧹 Proposed fix

```diff
-import { PaginatedResponse } from "../common/dto/paginated-response.dto";
-import { paginate } from "../common/helpers/paginate.helper";
+import { PaginatedResponse } from "../common/dto/paginated-response.dto";
```
Verify each finding against the current code and only fix it if needed. In `@backend/src/activities/activities.service.ts` at line 19, Remove the unused import "paginate" from the import list in activities.service.ts; locate the import statement "import { paginate } from \"../common/helpers/paginate.helper\";" at the top of the file (near other imports for ActivitiesService) and delete it so the file no longer contains an unused symbol.

backend/src/common/dto/pagination.dto.ts-22-22 (1)
22-22: ⚠️ Potential issue | 🟡 Minor

Remove leftover development comment.

The comment `// ADD THIS:` appears to be a leftover instruction from development and should be removed before merging.

🧹 Proposed fix

```diff
-  // ADD THIS:
   @ApiProperty({ example: 5, required: false })
   unseenCount?: number;
```
Verify each finding against the current code and only fix it if needed. In `@backend/src/common/dto/pagination.dto.ts` at line 22, Remove the stray development comment "// ADD THIS:" from the pagination DTO file; locate the PaginationDto declaration (or the surrounding export/interface) in pagination.dto.ts and delete that leftover comment so the file contains only relevant TS code and no dev instruction comments.

frontend/vite.config.ts-31-63 (1)
31-63: ⚠️ Potential issue | 🟡 Minor

Large test exclusion list — ensure tracking for repair.

Excluding 30+ test files significantly reduces test coverage. Notable observations:

- `LivePerformanceMode.test.tsx` (line 39) is both modified in this PR and excluded from execution, so the test changes won't be validated in CI.
- Critical components like `TipModal`, `WalletBalanceWidget`, and receipt pages are excluded.

The TODO comment is helpful. Consider creating a tracking issue with this list so these don't become permanent exclusions.
Would you like me to help create an issue template to track repairing these test suites?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/vite.config.ts` around lines 31 - 63, The exclude list in vite.config.ts is hiding 30+ tests (e.g., LivePerformanceMode.test.tsx, TipModal.test.tsx, WalletBalanceWidget.test.tsx, TipReceiptPage.test.tsx) and the TODO comment notes they must be repaired; create a single tracking issue titled like "Repair excluded frontend tests" that includes the full file list (mark each as a checkbox), call out LivePerformanceMode.test.tsx as currently modified in this PR and requiring validation, assign owner(s), priority and a small acceptance criteria checklist; then update the TODO in vite.config.ts to reference the new issue number/URL so CI exclusions remain temporary and can be cleared when individual tests (the listed files) are fixed and re-enabled.

contracts/drip-tip-matching/Cargo.toml-10-13 (1)
10-13: ⚠️ Potential issue | 🟡 Minor

Consider upgrading soroban-sdk to the latest version (25.1.1) or document the rationale for using 21.7.0.

The manifest uses `soroban-sdk = "21.7.0"`, released in late 2024. While this version remains stable and supported, the latest version is 25.1.1 (released March 2026). For new contracts, upgrading to v25.1.1 provides current features and fixes. If staying on v21.7.0 is intentional (e.g., due to dependency constraints), document the reason. Note: upgrading to v22+ involves breaking changes (e.g., `Env::register` replaces `Env::register_contract`, test event snapshots change, `contractevent` replaces `Events::publish` in v23).
Verify each finding against the current code and only fix it if needed. In `@contracts/drip-tip-matching/Cargo.toml` around lines 10 - 13, Update the soroban-sdk dependency version or add a clear rationale comment: either bump both soroban-sdk entries in Cargo.toml from "21.7.0" to "25.1.1" (and run/build/tests to address breaking changes like Env::register → Env::register_contract, contractevent/test snapshot updates) or add a comment in the file or project README explaining why v21.7.0 is intentionally pinned (e.g., dependency compatibility, known breaking-change risks), ensuring both the normal and dev-dependencies (soroban-sdk) are handled consistently.

.github/workflows/contracts.yml-29-29 (1)
29-29: ⚠️ Potential issue | 🟡 Minor

Update `actions/cache` to v4.

The `actions/cache@v3` action is outdated. GitHub Actions runners have evolved and v4 is the current recommended version with improved performance and compatibility.

Proposed fix

```diff
       - name: Cache Cargo
-        uses: actions/cache@v3
+        uses: actions/cache@v4
         with:
```
Verify each finding against the current code and only fix it if needed. In @.github/workflows/contracts.yml at line 29, Update the GitHub Action usage string for the cache action: replace the occurrences of "uses: actions/cache@v3" with "uses: actions/cache@v4" so the workflow uses the current recommended version; ensure any inputs or keys remain compatible with the updated action and run the workflow lint/CI to confirm no breaking changes.

backend/src/social-sharing/referral.controller.ts-73-74 (1)
73-74: ⚠️ Potential issue | 🟡 Minor

`applyCode` response docs are missing the 404 case.

The service throws not-found for invalid/inactive codes, but this endpoint documents only 400/409 here. Please add a 404 `@ApiResponse` for accurate client integration.
Verify each finding against the current code and only fix it if needed. In `@backend/src/social-sharing/referral.controller.ts` around lines 73 - 74, The OpenAPI docs for the applyCode endpoint in referral.controller.ts are missing the 404 case; update the controller decorators for the applyCode handler to include an `@ApiResponse`({ status: 404, description: "Referral code not found or inactive" }) alongside the existing 400 and 409 responses so the endpoint accurately documents the service's not-found error (locate the applyCode method in the ReferralController and add the new `@ApiResponse` decorator near the existing ones).

backend/src/tracks/tracks.service.ts-8-9 (1)
8-9: ⚠️ Potential issue | 🟡 Minor

Remove unused imports on lines 8-9.

`PaginatedResponse` and `paginate` are imported but not used in this file. The service has been refactored to use `PaginatedResult` instead, making these dead imports.
Verify each finding against the current code and only fix it if needed. In `@backend/src/tracks/tracks.service.ts` around lines 8 - 9, Remove the dead imports PaginatedResponse and paginate from tracks.service.ts: locate the import statement that includes PaginatedResponse and paginate and delete those two specifiers so only the used PaginatedResult (and any other required imports) remain; ensure there are no other references to PaginatedResponse or paginate in the file after removal and run a quick compile/type-check to verify no unresolved symbols.

backend/src/users/users.service.spec.ts-284-303 (1)
284-303: ⚠️ Potential issue | 🟡 Minor

Please re-enable the username-conflict test.

Skipping this removes coverage for one of `update()`'s core invariants. Fix the mock setup instead of leaving the duplicate-username path untested.
Verify each finding against the current code and only fix it if needed. In `@backend/src/users/users.service.spec.ts` around lines 284 - 303, Re-enable the skipped test "should throw ConflictException if new username already exists" and fix the mockRepository.findOne implementation so it correctly distinguishes lookup by id vs by username: remove the .skip, keep/mockReset mockRepository.findOne, and update its mockImplementation to inspect opts.where for the presence of id (and isDeleted === false) to return mockUser, and for the presence of username (matching updateUserDto.username) to return otherUser; otherwise return null—this ensures service.update(mockUser.id, updateUserDto) exercises the duplicate-username branch and throws ConflictException.

backend/src/users/entities/user.entity.ts-52-68 (1)
52-68: ⚠️ Potential issue | 🟡 Minor

Type nullable columns as nullable in the entity.

`profileImage`, `bio`, and `deletedAt` are mapped as nullable columns, but the class still exposes them as always-present values. That mismatch is already leaking into callers/tests that assign `null`.

🛠️ Suggested type fix

```diff
-  profileImage: string;
+  profileImage: string | null;

-  bio: string;
+  bio: string | null;

-  deletedAt: Date;
+  deletedAt: Date | null;
```
Verify each finding against the current code and only fix it if needed. In `@backend/src/users/entities/user.entity.ts` around lines 52 - 68, The entity declares nullable columns but the TypeScript types are non-nullable; update the properties profileImage and bio to type string | null and deletedAt to Date | null on the User entity so the class signature matches the Column/DeleteDateColumn nullable configuration (keep the existing decorators profileImage, bio, deletedAt unchanged, only change the property types).

backend/src/users/users.service.spec.ts-320-329 (1)
320-329: ⚠️ Potential issue | 🟡 Minor

Assert the soft-delete mutation explicitly.

This only proves `save()` was called. It will still pass if `remove()` saves the untouched entity. Verify the saved payload flips `isDeleted`, stamps `deletedAt`, and that hard delete is not used.

🧪 Suggested assertion

```diff
-      expect(mockRepository.save).toHaveBeenCalled();
+      expect(mockRepository.save).toHaveBeenCalledWith(
+        expect.objectContaining({
+          id: mockUser.id,
+          isDeleted: true,
+          deletedAt: expect.any(Date),
+        }),
+      );
+      expect(mockRepository.remove).not.toHaveBeenCalled();
```
Verify each finding against the current code and only fix it if needed. In `@backend/src/users/users.service.spec.ts` around lines 320 - 329, The test for users.service.remove currently only checks that mockRepository.save was called; update it to assert the actual saved payload flips isDeleted to true and sets deletedAt (non-null/Date), and assert no hard-delete methods (e.g., mockRepository.delete or mockRepository.remove) were called. Locate the test for remove (service.remove) and either inspect the argument passed to mockRepository.save (using mockRepository.save.mock.calls[0][0]) or use expect(mockRepository.save).toHaveBeenCalledWith(expect.objectContaining({ id: mockUser.id, isDeleted: true, deletedAt: expect.any(Date) })); also add expect(mockRepository.delete).not.toHaveBeenCalled() and expect(mockRepository.remove).not.toHaveBeenCalled().

backend/src/search/search.service.ts-109-109 (1)
109-109: ⚠️ Potential issue | 🟡 Minor

Escape `_` and `\` in addition to `%` when building `ILIKE` patterns.

PostgreSQL `LIKE`/`ILIKE` treat `%` (sequence wildcard) and `_` (single-character wildcard) as special characters. The current code only escapes `%`, allowing `_` and backslashes in user input to behave as wildcards and distort search results. All three characters (`%`, `_`, and `\`) must be escaped with a backslash.

Occurs at lines 109, 207, and 281.

Proposed fix

```diff
+function escapeLikePattern(input: string): string {
+  return input.replace(/[\\%_]/g, "\\$&");
+}
 ...
-      { like: `%${q.replace(/%/g, "\\%")}%` },
+      { like: `%${escapeLikePattern(q)}%` },
 ...
-      { like: `%${q.replace(/%/g, "\\%")}%` },
+      { like: `%${escapeLikePattern(q)}%` },
 ...
-    const like = `%${sanitized.replace(/%/g, "\\%")}%`;
+    const like = `%${escapeLikePattern(sanitized)}%`;
```
Verify each finding against the current code and only fix it if needed. In `@backend/src/search/search.service.ts` at line 109, Update the ILIKE pattern construction to escape %, _, and backslash characters in the user query instead of only %; replace instances where the pattern is built using q.replace(/%/g, "\\%") (e.g., the places building { like: `%${q.replace(/%/g, "\\%")}%` }) with a single utility replacement like q.replace(/([%_\\])/g, "\\$1") and use the result when wrapping with `%...%` for the ILIKE clauses so all three characters are escaped consistently (apply to the occurrences around the q variable at the three noted spots in search.service.ts).
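The escaping rule above is easy to verify in isolation. Below is a standalone sketch of the helper named in the proposed fix (`escapeLikePattern`), plus a wrapper that builds a contains-style pattern; both are illustrations, not the project's actual code.

```typescript
// Escape LIKE/ILIKE metacharacters so user input matches literally.
// Backslash must be escaped too, since it is the escape character itself.
function escapeLikePattern(input: string): string {
  return input.replace(/[\\%_]/g, "\\$&");
}

// Wrapping with % afterwards gives a contains-style pattern whose inner
// wildcards are all literal.
function containsPattern(userInput: string): string {
  return `%${escapeLikePattern(userInput)}%`;
}
```

Note that the outer `%` wildcards are added only after escaping, so they remain the sole active wildcards in the final pattern.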
```typescript
  @Post(":artistId/restore")
  @ApiOperation({ summary: "Restore a soft-deleted artist (admin only)" })
  @ApiParam({ name: "artistId", description: "Artist UUID", type: "string" })
  @ApiResponse({ status: 200, description: "Artist restored successfully" })
  @ApiResponse({ status: 404, description: "Artist not found" })
  async restore(@Param("artistId") artistId: string) {
    // TODO: Add admin guard
    return this.artistsService.restore(artistId);
```
Protect the restore route with an admin guard.
Lines 82-89 expose POST /artists/:artistId/restore to any authenticated user because the controller only applies JwtAuthGuard. That lets non-admins restore arbitrary artists.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/src/artists/artists.controller.ts` around lines 82 - 89, The restore
endpoint (async restore in artists.controller.ts) is currently only protected by
JwtAuthGuard and allows any authenticated user to call
artistsService.restore(artistId); add an admin-only guard by applying the
appropriate guard/decorator (e.g., UseGuards(RolesGuard, JwtAuthGuard) plus
`@Roles`('admin') or a dedicated AdminGuard) to the restore method (or controller)
so only admin-role users can call restore; ensure the guard symbols used
(UseGuards, Roles, RolesGuard or AdminGuard) are imported and used on the
restore method before calling artistsService.restore.
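The role check such a guard performs can be modeled without the framework. The sketch below is plain TypeScript standing in for a NestJS `RolesGuard`'s `canActivate` decision; the `roles` array on the user object is an assumption about this project's JWT payload, and the real guard would read required roles from `Reflector` metadata set by `@Roles('admin')`.

```typescript
// Plain-TypeScript model of an admin-only guard's decision logic.
interface AuthenticatedUser {
  id: string;
  roles: string[];
}

function canActivate(
  requiredRoles: string[],
  user: AuthenticatedUser | undefined,
): boolean {
  if (requiredRoles.length === 0) return true; // no @Roles metadata: open route
  if (!user) return false; // unauthenticated request
  // Allow when the user holds at least one of the required roles.
  return requiredRoles.some((role) => user.roles.includes(role));
}
```

In the controller, the same outcome would come from decorating `restore()` with `@UseGuards(JwtAuthGuard, RolesGuard)` and `@Roles('admin')`, with the guard returning this boolean from `canActivate(context)`.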
```diff
     if (
       referralCode.maxUsages !== null &&
       referralCode.usageCount >= referralCode.maxUsages
     ) {
-      throw new BadRequestException('This referral code has reached its usage limit.');
+      throw new BadRequestException(
+        "This referral code has reached its usage limit.",
+      );
     }
```
🧩 Analysis chain
🏁 Script executed:
wc -l backend/src/social-sharing/referral.service.ts

Repository: OlufunbiIK/tip-tune
Length of output: 113
🏁 Script executed:
cat -n backend/src/social-sharing/referral.service.ts | head -200

Repository: OlufunbiIK/tip-tune
Length of output: 7404
maxUsages enforcement is race-prone under concurrent applyCode calls.
The usage-limit check at lines 116–123 is performed outside the transaction using pre-fetched data. When concurrent requests are processed, both can pass the check before either increments the counter, allowing usageCount to exceed maxUsages. Move the check inside the transaction or use a conditional increment that fails when no slot remains.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/src/social-sharing/referral.service.ts` around lines 116 - 123, The
pre-check that throws the BadRequestException when referralCode.usageCount >=
referralCode.maxUsages is race-prone; move this logic into the same
transactional update that increments usageCount inside applyCode (or use a
conditional DB update) so two concurrent requests cannot both pass.
Specifically, in applyCode replace the external check around
referralCode.maxUsages/usageCount with either (a) a SELECT ... FOR UPDATE (or
ORM equivalent) of the referral row inside the transaction and re-check
maxUsages before incrementing, or (b) perform an atomic UPDATE that increments
usageCount only when usageCount < maxUsages and then verify the affected row
count to decide whether to throw the BadRequestException. Ensure the code path
that currently throws the exception (the block referencing
referralCode.maxUsages, referralCode.usageCount) is removed or moved so the
final decision is made inside the transaction/conditional update.
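Option (b) from the prompt, the atomic conditional increment, can be modeled in isolation. The sketch below simulates the SQL `UPDATE ... SET usage_count = usage_count + 1 WHERE id = ? AND (max_usages IS NULL OR usage_count < max_usages)` against an in-memory row; field names are illustrative, and in TypeORM this would be a query-builder `update().where(...)` whose `affected` count drives the decision.

```typescript
interface ReferralCodeRow {
  id: string;
  usageCount: number;
  maxUsages: number | null; // null means unlimited
}

// In-memory stand-in for the conditional UPDATE: increments only when a
// usage slot remains, and reports how many rows were affected (0 or 1).
function tryConsumeUsage(rows: ReferralCodeRow[], id: string): number {
  let affected = 0;
  for (const row of rows) {
    if (
      row.id === id &&
      (row.maxUsages === null || row.usageCount < row.maxUsages)
    ) {
      row.usageCount += 1;
      affected += 1;
    }
  }
  return affected;
}

// The caller decides from the affected count, never from a pre-fetched
// snapshot, so two concurrent requests cannot both pass the limit check.
function applyCode(rows: ReferralCodeRow[], id: string): void {
  if (tryConsumeUsage(rows, id) === 0) {
    throw new Error("This referral code has reached its usage limit.");
  }
}
```

Because the database evaluates the `WHERE` predicate and the increment in a single statement, no lock-then-check window exists for a second request to slip through.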
```diff
   @Post("subscribe")
   async subscribe(
     @Body() dto: CreateArtistSubscriptionDto,
     @Request() req: any,
   ) {
-    return this.subscriptionsService.subscribe(req.user.id, dto);
+    return this.subscriptionsService.subscribe(req.user.id, dto.tierId);
   }
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
dto_file="$(fd 'subscriptions.dto.ts$' | head -n1)"
service_file="$(fd 'subscriptions.service.ts$' | head -n1)"
if [[ -z "${dto_file}" || -z "${service_file}" ]]; then
echo "Could not locate subscriptions DTO/service files" >&2
exit 1
fi
echo "DTO: $dto_file"
rg -n -C 10 '\b(class|interface)\s+CreateArtistSubscriptionDto\b' "$dto_file"
echo
echo "Service: $service_file"
rg -n -C 10 '\bsubscribe\s*\(' "$service_file"

Repository: OlufunbiIK/tip-tune
Length of output: 1438
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Check the SubscribeDto definition
rg -n -C 5 '\b(class|interface|type)\s+SubscribeDto\b' backend/src/subscription-tiers/
# Check if stellarTxHash is used in the subscribe method
rg -n 'stellarTxHash' backend/src/subscription-tiers/subscriptions.service.ts

Repository: OlufunbiIK/tip-tune
Length of output: 725
Fix critical parameter mismatch in subscribe() call.
The controller passes (req.user.id, dto.tierId) as arguments, but the service signature expects (dto: SubscribeDto, userId: string). The order and type are reversed—the service will receive a string where it expects a SubscribeDto object. Additionally, stellarTxHash from the incoming DTO is dropped. The service code at lines 289, 299, and 316 accesses dto.stellarTxHash, which will fail at runtime.
Pass the full DTO object instead: this.subscriptionsService.subscribe(dto, req.user.id) after validating that SubscribeDto structure matches CreateArtistSubscriptionDto.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/src/subscription-tiers/subscriptions.controller.ts` around lines 91 -
97, The controller is passing arguments to subscriptionsService.subscribe in the
wrong order and dropping stellarTxHash; change the call in the subscribe handler
to pass the DTO first and the user id second so the service signature
subscribe(dto: SubscribeDto, userId: string) is satisfied (i.e., call
subscriptionsService.subscribe(dto, req.user.id)); ensure
CreateArtistSubscriptionDto contains the stellarTxHash field or map it into a
SubscribeDto before calling so references to dto.stellarTxHash inside the
service (lines referencing dto.stellarTxHash) remain valid.
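The corrected call shape can be modeled without the framework. Below, `CreateArtistSubscriptionDto` and `SubscribeDto` are stand-ins carrying only the fields mentioned in this review (`tierId`, `stellarTxHash`); the real DTOs may have more fields.

```typescript
interface CreateArtistSubscriptionDto {
  tierId: string;
  stellarTxHash?: string;
}

// Shape the service expects: the DTO comes first, then the user id.
interface SubscribeDto {
  tierId: string;
  stellarTxHash?: string;
}

// Map the incoming body so stellarTxHash is not dropped, and return the
// arguments in the order subscribe(dto, userId) expects.
function buildSubscribeArgs(
  body: CreateArtistSubscriptionDto,
  userId: string,
): [SubscribeDto, string] {
  const dto: SubscribeDto = {
    tierId: body.tierId,
    stellarTxHash: body.stellarTxHash,
  };
  return [dto, userId];
}
```

The controller would then spread these into the service call, e.g. `this.subscriptionsService.subscribe(...buildSubscribeArgs(dto, req.user.id))`, keeping the argument order fixed in one place.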
```diff
   @Get("artist/:artistId/subscribers")
   async getArtistSubscribers(
-    @Param('artistId', ParseUUIDPipe) artistId: string,
+    @Param("artistId", ParseUUIDPipe) artistId: string,
     @Query() query: SubscriptionQueryDto,
   ) {
     return this.subscriptionsService.getArtistSubscribers(artistId, query);
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
service_file="$(fd 'subscriptions.service.ts$' | head -n1)"
if [[ -z "${service_file}" ]]; then
echo "Could not locate subscriptions.service.ts" >&2
exit 1
fi
echo "Service: $service_file"
rg -n -C 8 '\b(getArtistSubscribers|getSubscriptionRevenue)\s*\(' "$service_file"
echo
echo "Searching for auth/ownership checks near those methods:"
rg -n -C 3 '\b(ForbiddenException|UnauthorizedException|currentUser|actorId|request|req\.user|nestjs-cls|AsyncLocalStorage)\b' "$service_file"

Repository: OlufunbiIK/tip-tune
Length of output: 2314
🏁 Script executed:
#!/bin/bash
set -euo pipefail
service_file="backend/src/subscription-tiers/subscriptions.service.ts"
# Get full implementation of both methods
echo "=== Full getArtistSubscribers implementation ==="
rg -n -A 20 'async getArtistSubscribers' "$service_file"
echo
echo "=== Full getSubscriptionRevenue implementation ==="
rg -n -A 50 'async getSubscriptionRevenue' "$service_file"

Repository: OlufunbiIK/tip-tune
Length of output: 2587
🏁 Script executed:
#!/bin/bash
set -euo pipefail
controller_file="backend/src/subscription-tiers/subscriptions.controller.ts"
# Check how the controller calls these service methods and if it passes requester info
echo "=== Controller implementation around lines 161-177 ==="
sed -n '155,180p' "$controller_file"
echo
echo "=== Check for requester/user context in controller ==="
rg -n -B 5 -A 5 'getArtistSubscribers|getSubscriptionRevenue' "$controller_file"

Repository: OlufunbiIK/tip-tune
Length of output: 1694
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
service_file="backend/src/subscription-tiers/subscriptions.service.ts"
# Check entire service file for ClsService or request context usage
echo "=== Check for request context/ClsService in service ==="
rg -n 'ClsService|REQUEST|this\.request|req\.|getCurrentUser|getActorId' "$service_file"
echo
echo "=== Check service imports to see auth/context setup ==="
head -40 "$service_file"
```

Repository: OlufunbiIK/tip-tune
Length of output: 118
Implement ownership verification for artist analytics routes.
Both getArtistSubscribers and getSubscriptionRevenue lack authorization checks. Any authenticated user can query another artist's subscriber list or revenue by supplying an arbitrary artistId. The controller passes no requester identity to the service, and the service methods contain no ownership validation.
Add requester identity to the service method signatures and verify that the requester is the artist owner before returning results, using the same pattern already applied elsewhere in this service (e.g., ForbiddenException('You do not own this tier')).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/src/subscription-tiers/subscriptions.controller.ts` around lines
161-166: the artist analytics endpoints getArtistSubscribers and
getSubscriptionRevenue must verify the requester owns the artist before
returning data: modify the controller methods (getArtistSubscribers and
getSubscriptionRevenue) to accept the requester identity (e.g., inject the user
from the request and pass it into subscriptionsService.getArtistSubscribers and
subscriptionsService.getSubscriptionRevenue), then update those service methods
to validate ownership (lookup artist owner and throw ForbiddenException('You do
not own this tier') if the requester is not the owner) following the same
pattern used elsewhere in this service.
```ts
import { TrackPlayCountService } from './play-count.service';
import { TrackPlay, PlaySource } from './track-play.entity';
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Verify exported class name in the service file and its usage in this spec
rg -n 'export class .*Service' backend/src/track-play-count/play-count.service.ts
rg -n 'TrackPlayCountService|PlayCountService' backend/src/track-play-count/track-play-count.spec.ts -C 2
```

Repository: OlufunbiIK/tip-tune
Length of output: 848
Fix import symbol: PlayCountService is exported, not TrackPlayCountService.
The import on line 3 requests TrackPlayCountService, but backend/src/track-play-count/play-count.service.ts exports PlayCountService (line 20). This causes a compile/type-check failure. Change the import to import { PlayCountService } from './play-count.service'; and update all references in the spec file (lines 32, 33, 39, 44) from TrackPlayCountService to PlayCountService.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/src/track-play-count/track-play-count.spec.ts` around lines 3-4:
The spec imports the wrong symbol: replace the import of TrackPlayCountService
with PlayCountService (import { PlayCountService } from './play-count.service')
and update every usage of TrackPlayCountService in this test file to
PlayCountService (e.g., the mocked instance declarations, variable names, and
any new PlayCountService(...) or jest.mock references around the previous usages
on lines noted). Ensure constructors, type annotations, and describe/it setup
that reference TrackPlayCountService are renamed to PlayCountService so the spec
matches the real exported class.
```diff
  );

- let waveform = await this.waveformRepository.findOne({ where: { trackId } });
+ const waveform = await this.waveformRepository.findOne({ where: { trackId } });
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Description: Search for upsert followed by findOne without null checks in TypeORM services
# Find files with upsert operations
rg -l 'upsert\(' --type=ts -g 'backend/**/*.service.ts' | while read -r file; do
  echo "=== $file ==="
  # Show upsert followed by findOne patterns with context
  rg -A 10 'upsert\(' "$file" | rg -B 5 -A 5 'findOne'
done
```

Repository: OlufunbiIK/tip-tune
Length of output: 290
🏁 Script executed:
```bash
# Read the waveform.service.ts file to see the full method context
cat -n backend/src/waveform/waveform.service.ts | sed -n '15,55p'
```

Repository: OlufunbiIK/tip-tune
Length of output: 1889
🏁 Script executed:
```bash
# Check the entire generateForTrack method to understand control flow and null handling
rg -A 40 'generateForTrack' backend/src/waveform/waveform.service.ts -B 2
```

Repository: OlufunbiIK/tip-tune
Length of output: 2841
🏁 Script executed:
```bash
# Check if there's a null check or guard anywhere after the findOne call
rg -A 3 'findOne.*trackId' backend/src/waveform/waveform.service.ts
```

Repository: OlufunbiIK/tip-tune
Length of output: 611
Add a null guard after findOne: this is a critical null-safety bug, not an edge case.
The change from let to const is correct since waveform is never reassigned. However, the code lacks a null guard after findOne (line 30), causing a guaranteed TypeError at line 35+ if the record doesn't exist. While upsert should insert the record, findOne is a separate call and can return null due to race conditions, transaction isolation issues, or other DB failures.
Other methods in this service (getByTrackId and getStatus) correctly guard against this. Add a null check immediately after line 30:
```diff
 const waveform = await this.waveformRepository.findOne({ where: { trackId } });
+
+if (!waveform) {
+  throw new Error(`Failed to retrieve waveform record after upsert for track ${trackId}`);
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/src/waveform/waveform.service.ts` at line 30, After calling
this.waveformRepository.findOne({ where: { trackId } }) assign to const waveform
and immediately add a null guard (e.g., if (!waveform) { throw new
NotFoundException(...) } or return a handled value) to avoid a TypeError later;
update the code around the findOne call so that the waveform variable is
validated before any access (match the null-check behavior used in
getByTrackId/getStatus) and reference this.waveformRepository.findOne and the
waveform constant when implementing the guard.
Cleaning the repo
Summary by CodeRabbit

- New Features
- Improvements
- Removals