feat(auth): Google OAuth with domain-based access control #226
Add NextAuth.js v5 with Google OAuth to restrict dashboard access. Users with the @2weeks.co domain are auto-approved; external users see an access restriction notice directing them to contact [email protected]. This addresses excessive LLM API and DigitalOcean costs from unauthenticated pipeline executions.

Backend:
- Add users table (auto-created on pool init)
- Add /dashboard/auth/upsert-user and check-user endpoints

Frontend:
- NextAuth.js v5 with JWT session strategy
- Middleware route protection (public: /, /demo only)
- Custom sign-in, pending approval, and error pages
- SessionProvider + UserMenu component

Infrastructure:
- Route /api/auth → web component (before /api → api catch-all)
- Add Google OAuth + AUTH_SECRET env vars to app.yaml

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Walkthrough
Adds Google OAuth-based authentication and session integration: a guaranteed users table, NextAuth configuration and middleware, dashboard auth endpoints, background TTL cleanup logic for DigitalOcean apps, and the related deployment configuration changes.
Changes
Sequence Diagram(s)
```mermaid
sequenceDiagram
    actor User
    participant Web as Web App
    participant Middleware as Auth Middleware
    participant NextAuth as NextAuth Handler
    participant Backend as Dashboard API
    participant DB as Database
    User->>Web: Request protected route
    Web->>Middleware: Forward request
    alt Unauthenticated
        Middleware-->>Web: Redirect /auth/signin?callbackUrl=...
    else Authenticated, not approved
        Middleware-->>Web: Redirect /auth/pending
    else Authenticated, approved
        Middleware-->>Web: Continue handling request
    end
    Note over NextAuth,Backend: On sign-in
    NextAuth->>Backend: POST /dashboard/auth/upsert-user (email,name,image)
    Backend->>DB: INSERT ... ON CONFLICT UPDATE
    DB-->>Backend: Return stored user
    Backend-->>NextAuth: UpsertUserResponse
    NextAuth->>NextAuth: Create JWT (store domain, approved)
    NextAuth->>Web: Redirect to callback URL
    par Periodic approval re-check (every 5 minutes)
        NextAuth->>Backend: GET /dashboard/auth/check-user (email)
        Backend->>DB: Look up user
        DB-->>Backend: Return approval status
        Backend-->>NextAuth: CheckUserResponse
        NextAuth->>NextAuth: Update approved on token
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Code Review
This pull request implements a comprehensive authentication and authorization system using NextAuth and Google OAuth. It introduces backend support for user persistence and approval status in PostgreSQL, including automatic table creation and API endpoints for user management. On the frontend, it adds a middleware for route protection, custom authentication pages (sign-in, error, and pending access), and a user menu component. The review feedback identifies a logic error in the middleware regarding the handling of the pending page, suggests making domain extraction case-insensitive, points out a redundant database index, recommends better exception logging, and advises moving hardcoded domain configurations to environment variables.
```ts
function isPublicPath(pathname: string): boolean {
  if (publicPaths.includes(pathname)) return true;
  if (pathname.startsWith("/auth/")) return true;
```
The current logic makes /auth/pending a public path, which prevents the redirect logic for approved users (lines 35-37) from being executed because the middleware returns early. Additionally, /api/auth/ routes should be explicitly allowed to ensure the OAuth callback flow works correctly.
```diff
-  if (pathname.startsWith("/auth/")) return true;
+  if (pathname.startsWith("/api/auth") || (pathname.startsWith("/auth/") && pathname !== "/auth/pending")) return true;
```
```ts
    async jwt({ token, user, trigger }) {
      // On initial sign-in, extract domain and set approval
      if (trigger === "signIn" && user?.email) {
        const domain = user.email.split("@")[1] ?? "";
```
```sql
    created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    last_login_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);
```
```python
    except Exception:
        import logging

        logging.getLogger(__name__).warning(
            "Could not create users table — DB may not be ready yet"
        )
```
Swallowing the exception makes it difficult to diagnose why the table creation failed (e.g., permission issues or syntax errors). Using logging.exception will capture the full stack trace. Also, consider moving the import logging to the top of the file for better performance and style.
```diff
     except Exception:
         import logging
-        logging.getLogger(__name__).warning(
-            "Could not create users table — DB may not be ready yet"
-        )
+        logging.getLogger(__name__).exception("Could not create users table")
```
```python
        raise HTTPException(status_code=400, detail="invalid_email")

    domain = body.email.rsplit("@", 1)[1].lower()
    approved = domain == "2weeks.co"
```
Actionable comments posted: 7
🧹 Nitpick comments (2)
web/src/app/auth/signin/page.tsx (1)
39-40: Restrict callbackUrl to internal paths as a defensive measure.
Rather than passing the query-string value through as-is, allowing only internal relative paths makes redirect handling more robust.
Suggested diff
```diff
 function SignInContent() {
   const searchParams = useSearchParams();
-  const callbackUrl = searchParams.get("callbackUrl") || "/dashboard";
+  const rawCallbackUrl = searchParams.get("callbackUrl");
+  const callbackUrl =
+    rawCallbackUrl &&
+    rawCallbackUrl.startsWith("/") &&
+    !rawCallbackUrl.startsWith("//")
+      ? rawCallbackUrl
+      : "/dashboard";
@@
   <Button
     variant="outline"
     size="lg"
     className="w-full gap-3"
     onClick={() => signIn("google", { callbackUrl })}
   >
```
Also applies to: 56-56
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/src/app/auth/signin/page.tsx` around lines 39 - 40: Validate and restrict callbackUrl (from useSearchParams/get("callbackUrl")) to internal relative paths only: check that the value exists, starts with a single "/" (not "//"), does not contain a protocol or hostname (no "http", "https", or ":"), and resolves to a safe path; if validation fails, fall back to "/dashboard". Apply the same validation logic wherever callbackUrl is read (the other occurrence around line 56) so all redirects use the sanitized internal path.

web/package.json (1)
22-23: Use a stable next-auth release instead of the beta channel.
next-auth@^5.0.0-beta.30 is still on the beta channel as of this writing (April 2026), with no official stable v5 release yet. The currently available stable version is [email protected] (the v4 line). For production, pinning to a stable version is recommended to avoid unexpected runtime issues.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/package.json` around lines 22 - 23, Replace the beta dependency string "next-auth": "^5.0.0-beta.30" in package.json with the stable release "next-auth": "4.24.13", then run your package manager (npm install or yarn install) to lock the new version and update lockfiles; after installing, run the test suite and a quick manual smoke test of authentication flows to ensure no API mismatches, and if any v5-specific APIs were used, revert them to the v4 equivalents in authentication-related modules (search for imports/usages of next-auth in auth.js, [...nextauth].js or auth-related files).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@agent/db/connection.py`:
- Around line 89-98: The current code swallows exceptions from
ensure_users_table(_pool) and may mark the DB as ready, causing permanent
failure and race conditions; change the logic so _users_table_ensured is only
set to True after ensure_users_table returns successfully, and on exception do
not leave a usable _pool — instead log the exception details and clean up/reset
_pool (or re-raise) so subsequent calls will retry table creation; update the
block around _pool, _users_table_ensured and ensure_users_table() to close/reset
_pool on failure and include the caught exception in the warning log.
In `@agent/server.py`:
- Around line 1976-1987: UpsertUserRequest and UpsertUserResponse currently
force name and image to non-null str causing 422 when OAuth returns null; update
the models (UpsertUserRequest.name, UpsertUserRequest.image and
UpsertUserResponse.name, UpsertUserResponse.image) to allow nullable values by
using Optional[str] with a default of None so the /dashboard/auth/upsert-user
flow accepts and returns null values instead of failing validation.
- Around line 1996-2055: The new dashboard auth endpoints are only registered at
bare paths and must also be exposed under the /api prefix to keep parity; add
duplicate route registrations for "/api/dashboard/auth/upsert-user" and
"/api/dashboard/auth/check-user" that delegate to the existing handlers
(dashboard_auth_upsert_user and dashboard_auth_check_user) or call their logic
so both the bare routes and the /api/... equivalents return identical behavior
and responses.
In `@web/src/lib/auth.ts`:
- Around line 23-39: The signIn handler in auth.ts quietly ignores failures from
authenticatedFetch and always returns true; update signIn to inspect the fetch
result (or catch and rethrow) and return false when the upsert fails so
authentication is not treated as successful on 4xx/5xx errors. Specifically,
call authenticatedFetch in signIn and check the response's ok/status (or
propagate errors thrown by authenticatedFetch), log the failure with context,
and return false (or throw) instead of unconditional true to ensure login fails
when the upsert to /dashboard/auth/upsert-user does not succeed.
- Around line 54-67: The current logic leaves token.approved unchanged when the
fetch fails or returns non-OK and still updates token.approvedCheckedAt,
allowing stale approvals to persist; change the flow in the block that calls
authenticatedFetch(`${DASHBOARD_API_URL}/dashboard/auth/check-user?...`) so that
if res.ok you parse the JSON, set token.approved = Boolean(data.approved) and
then set token.approvedCheckedAt = now, but if res is not ok (e.g., 404/500) or
the fetch throws, explicitly set token.approved = false (or undefined if you
prefer a clear “unknown” state) and do NOT update token.approvedCheckedAt so the
check will be retried sooner; keep logging the error in the catch, and reference
the token object, authenticatedFetch, DASHBOARD_API_URL and the check-user
endpoint to locate the code to change.
In `@web/src/middleware.ts`:
- Around line 5-12: The isPublicPath function only allows "/auth/", so NextAuth's
"/api/auth/*" callback paths remain protected and the OAuth flow is blocked; in
isPublicPath(pathname: string), also treat pathname.startsWith("/api/auth/") as a
public path, and apply the same "/api/auth/" allowance anywhere the equivalent
check exists (e.g., the matcher or other auth-path checks).
- Around line 7-8: The middleware marks all /auth/* paths as public by returning
true for pathname.startsWith("/auth/"), which prevents the later logic that
redirects approved users from /auth/pending to /dashboard from ever running;
update the checks so the /auth/pending case is evaluated first (or exclude it
from the broad public /auth/ check) so the redirect logic that maps
"/auth/pending" to "/dashboard" can execute—in other words, reorder or narrow
the pathname.startsWith("/auth/") condition in middleware.ts to allow the
specific "/auth/pending" branch to run.
---
Nitpick comments:
In `@web/package.json`:
- Around line 22-23: Replace the beta dependency string "next-auth":
"^5.0.0-beta.30" in package.json with the stable release "next-auth": "4.24.13",
then run your package manager (npm install or yarn install) to lock the new
version and update lockfiles; after installing, run the test suite and a quick
manual smoke test of authentication flows to ensure no API mismatches, and if
any v5-specific APIs were used, revert them to the v4 equivalents in
authentication-related modules (search for imports/usages of next-auth in
auth.js, [...nextauth].js or auth-related files).
In `@web/src/app/auth/signin/page.tsx`:
- Around line 39-40: Validate and restrict callbackUrl (from
useSearchParams/get("callbackUrl")) to internal relative paths only: check that
the value exists, starts with a single "/" (not "//"), does not contain a
protocol or hostname (no "http", "https", or ":"), and resolve to a safe path;
if validation fails, fall back to "/dashboard". Apply the same validation logic
wherever callbackUrl is read (the other occurrence around line 56) so all
redirects use the sanitized internal path.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7abb7734-98a7-4817-9f1d-f5990e2ef58e
⛔ Files ignored due to path filters (1)
web/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (15)
- .do/app.yaml
- agent/db/connection.py
- agent/server.py
- web/next.config.ts
- web/package.json
- web/src/app/api/auth/[...nextauth]/route.ts
- web/src/app/auth/error/page.tsx
- web/src/app/auth/pending/page.tsx
- web/src/app/auth/signin/page.tsx
- web/src/app/layout.tsx
- web/src/components/providers/session-provider.tsx
- web/src/components/shared/user-menu.tsx
- web/src/lib/auth.ts
- web/src/middleware.ts
- web/src/types/next-auth.d.ts
```python
        if _pool is not None and not _users_table_ensured:
            try:
                await ensure_users_table(_pool)
                _users_table_ensured = True
            except Exception:
                import logging

                logging.getLogger(__name__).warning(
                    "Could not create users table — DB may not be ready yet"
                )
```
The users-table initialization can get permanently stuck after a failure.
If the exception is swallowed here and execution continues, Line 59 returns _pool directly on subsequent calls, so ensure_users_table() is never retried. Worse, while the first call is still creating the table, another coroutine can already receive _pool and query users, so /dashboard/auth/* can intermittently fail with relation "users" does not exist.
🔧 Suggested fix
```diff
 async def get_pool() -> asyncpg.Pool:
     global _pool, _pool_lock, _users_table_ensured
     if _pool_lock is None:
         _pool_lock = asyncio.Lock()
-    if _pool is not None:
+    if _pool is not None and _users_table_ensured:
         return _pool
     async with _pool_lock:
-        if _pool is not None:
+        if _pool is not None and _users_table_ensured:
             return _pool
         database_url = os.environ.get("DATABASE_URL", "")
         if not database_url:
             raise RuntimeError("DATABASE_URL environment variable is required")
@@
         if _pool is not None and not _users_table_ensured:
             try:
                 await ensure_users_table(_pool)
                 _users_table_ensured = True
             except Exception:
                 import logging
                 logging.getLogger(__name__).warning(
-                    "Could not create users table — DB may not be ready yet"
+                    "Could not create users table — DB may not be ready yet",
+                    exc_info=True,
                 )
         return _pool
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agent/db/connection.py` around lines 89 - 98, The current code swallows
exceptions from ensure_users_table(_pool) and may mark the DB as ready, causing
permanent failure and race conditions; change the logic so _users_table_ensured
is only set to True after ensure_users_table returns successfully, and on
exception do not leave a usable _pool — instead log the exception details and
clean up/reset _pool (or re-raise) so subsequent calls will retry table
creation; update the block around _pool, _users_table_ensured and
ensure_users_table() to close/reset _pool on failure and include the caught
exception in the warning log.
```python
class UpsertUserRequest(BaseModel):
    email: str
    name: str = ""
    image: str = ""


class UpsertUserResponse(BaseModel):
    id: str
    email: str
    name: str
    approved: bool
    domain: str
```
The API contract does not match the OAuth profile's nullable fields.
On the web side, user.name / user.image can be null, but here both are forced to str. In that case /dashboard/auth/upsert-user returns 422, and if name is stored as NULL, UpsertUserResponse.name fails validation again on the way out.
🔧 Suggested fix
```diff
 class UpsertUserRequest(BaseModel):
     email: str
-    name: str = ""
-    image: str = ""
+    name: str | None = None
+    image: str | None = None
@@
 class UpsertUserResponse(BaseModel):
     id: str
     email: str
-    name: str
+    name: str | None = None
     approved: bool
     domain: str
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agent/server.py` around lines 1976 - 1987, UpsertUserRequest and
UpsertUserResponse currently force name and image to non-null str causing 422
when OAuth returns null; update the models (UpsertUserRequest.name,
UpsertUserRequest.image and UpsertUserResponse.name, UpsertUserResponse.image)
to allow nullable values by using Optional[str] with a default of None so the
/dashboard/auth/upsert-user flow accepts and returns null values instead of
failing validation.
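The contract change the prompt describes — Optional fields defaulting to None — can be illustrated with a stdlib dataclass (a sketch only; the real models are Pydantic BaseModel subclasses, and `UpsertUserRequestSketch` is a hypothetical name):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UpsertUserRequestSketch:
    """Mirrors the suggested fix: name/image tolerate null OAuth fields."""
    email: str
    name: Optional[str] = None
    image: Optional[str] = None

# A Google profile with no display name or picture still constructs
# cleanly; a non-null `str` contract would instead reject it (the 422
# the review warns about, in the Pydantic case).
req = UpsertUserRequestSketch(email="[email protected]")
```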
```ts
    async signIn({ user }) {
      try {
        await authenticatedFetch(
          `${DASHBOARD_API_URL}/dashboard/auth/upsert-user`,
          {
            method: "POST",
            body: JSON.stringify({
              email: user.email,
              name: user.name,
              image: user.image,
            }),
          },
        );
      } catch (error) {
        console.error("Failed to upsert user:", error);
      }
      return true;
```
User upsert failures are currently allowed to pass silently.
authenticatedFetch() does not throw on 4xx/5xx, yet res.ok is never checked here and true is always returned. So even with a routing typo, a missing API key, or a DB error, sign-in succeeds and proceeds with the approval data simply left unsynchronized.
🔧 Suggested fix
```diff
     async signIn({ user }) {
       try {
-        await authenticatedFetch(
+        const res = await authenticatedFetch(
           `${DASHBOARD_API_URL}/dashboard/auth/upsert-user`,
           {
             method: "POST",
             body: JSON.stringify({
               email: user.email,
               name: user.name,
               image: user.image,
             }),
           },
         );
+        if (!res.ok) {
+          console.error("Failed to upsert user:", res.status, await res.text());
+          return false;
+        }
       } catch (error) {
         console.error("Failed to upsert user:", error);
+        return false;
       }
       return true;
     },
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@web/src/lib/auth.ts` around lines 23 - 39, The signIn handler in auth.ts
quietly ignores failures from authenticatedFetch and always returns true; update
signIn to inspect the fetch result (or catch and rethrow) and return false when
the upsert fails so authentication is not treated as successful on 4xx/5xx
errors. Specifically, call authenticatedFetch in signIn and check the response's
ok/status (or propagate errors thrown by authenticatedFetch), log the failure
with context, and return false (or throw) instead of unconditional true to
ensure login fails when the upsert to /dashboard/auth/upsert-user does not
succeed.
```ts
      if (token.email && now - lastChecked > FIVE_MINUTES_MS) {
        try {
          const res = await authenticatedFetch(
            `${DASHBOARD_API_URL}/dashboard/auth/check-user?email=${encodeURIComponent(token.email as string)}`,
          );
          if (res.ok) {
            const data = await res.json();
            token.approved = Boolean(data.approved);
          }
        } catch (error) {
          console.error("Failed to check user approval:", error);
        }
        token.approvedCheckedAt = now;
      }
```
When the approval-status lookup fails, the previous approval value survives untouched.
Even if check-user returns 404/500, token.approved is kept and only approvedCheckedAt is refreshed. A deleted or blocked user can therefore stay approved indefinitely, and during an outage re-validation keeps being deferred another five minutes at a time.
🔧 Suggested fix
```diff
       if (token.email && now - lastChecked > FIVE_MINUTES_MS) {
         try {
           const res = await authenticatedFetch(
             `${DASHBOARD_API_URL}/dashboard/auth/check-user?email=${encodeURIComponent(token.email as string)}`,
           );
           if (res.ok) {
             const data = await res.json();
             token.approved = Boolean(data.approved);
+            token.domain = typeof data.domain === "string" ? data.domain : token.domain;
+            token.approvedCheckedAt = now;
+          } else if (res.status === 404) {
+            token.approved = false;
+            token.approvedCheckedAt = now;
           }
         } catch (error) {
           console.error("Failed to check user approval:", error);
         }
-        token.approvedCheckedAt = now;
       }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
      if (token.email && now - lastChecked > FIVE_MINUTES_MS) {
        try {
          const res = await authenticatedFetch(
            `${DASHBOARD_API_URL}/dashboard/auth/check-user?email=${encodeURIComponent(token.email as string)}`,
          );
          if (res.ok) {
            const data = await res.json();
            token.approved = Boolean(data.approved);
            token.domain = typeof data.domain === "string" ? data.domain : token.domain;
            token.approvedCheckedAt = now;
          } else if (res.status === 404) {
            token.approved = false;
            token.approvedCheckedAt = now;
          }
        } catch (error) {
          console.error("Failed to check user approval:", error);
        }
      }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@web/src/lib/auth.ts` around lines 54 - 67, The current logic leaves
token.approved unchanged when the fetch fails or returns non-OK and still
updates token.approvedCheckedAt, allowing stale approvals to persist; change the
flow in the block that calls
authenticatedFetch(`${DASHBOARD_API_URL}/dashboard/auth/check-user?...`) so that
if res.ok you parse the JSON, set token.approved = Boolean(data.approved) and
then set token.approvedCheckedAt = now, but if res is not ok (e.g., 404/500) or
the fetch throws, explicitly set token.approved = false (or undefined if you
prefer a clear “unknown” state) and do NOT update token.approvedCheckedAt so the
check will be retried sooner; keep logging the error in the catch, and reference
the token object, authenticatedFetch, DASHBOARD_API_URL and the check-user
endpoint to locate the code to change.
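The fail-closed policy this prompt describes can be reduced to a small decision function. The real code is TypeScript in web/src/lib/auth.ts; this is a language-neutral Python sketch of the prompt's stricter variant (revoke on any failure, and only stamp the check time on a definitive answer), with `refresh_approval` as a hypothetical name:

```python
def refresh_approval(status: int, body_approved):
    """Decide (new_approved, stamp_checked_at) after one check-user call.

    status: HTTP status of the response (0 if the fetch itself threw).
    body_approved: parsed `approved` field from the JSON body, else None.
    """
    if 200 <= status < 300:
        return bool(body_approved), True   # definitive answer: trust the server
    if status == 404:
        return False, True                 # user deleted/blocked: revoke
    # Outage (5xx, network error): fail closed, but skip the timestamp so
    # the next request re-checks immediately instead of waiting 5 minutes.
    return False, False
```

Keeping the timestamp unset on transient failures is what makes the re-check happen sooner, which is the retry behavior the prompt asks for.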
```ts
  if (pathname.startsWith("/auth/")) return true;
  if (pathname.startsWith("/_next/")) return true;
```
The /auth/pending branch can currently never execute.
Line 7 returns every /auth/* path as public, so the /auth/pending → /dashboard redirect at Line 35 is unreachable. An approved user who lands directly on the pending page simply stays there.
🔧 Suggested fix
```diff
 function isPublicPath(pathname: string): boolean {
   if (publicPaths.includes(pathname)) return true;
-  if (pathname.startsWith("/auth/")) return true;
+  if (pathname === "/auth/signin") return true;
+  if (pathname === "/auth/error") return true;
   if (pathname.startsWith("/_next/")) return true;
   if (pathname === "/favicon.ico") return true;
   // Static assets (files with extensions)
   if (pathname.includes(".")) return true;
   return false;
 }
```
Also applies to: 18-37
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@web/src/middleware.ts` around lines 7 - 8, The middleware marks all /auth/*
paths as public by returning true for pathname.startsWith("/auth/"), which
prevents the later logic that redirects approved users from /auth/pending to
/dashboard from ever running; update the checks so the /auth/pending case is
evaluated first (or exclude it from the broad public /auth/ check) so the
redirect logic that maps "/auth/pending" to "/dashboard" can execute—in other
words, reorder or narrow the pathname.startsWith("/auth/") condition in
middleware.ts to allow the specific "/auth/pending" branch to run.
Generated apps (build-web-app-*, etc.) were accumulating on DO App Platform without cleanup, costing ~$66/month. This adds:
- Background TTL cleanup loop (hourly, default 2h TTL)
- Manual cleanup endpoint POST /dashboard/cleanup-apps
- DEPLOY_APP_TTL_HOURS env var to configure TTL (default: 2)
- Protected app list (vibedeploy never deleted)

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
- Fix app naming: use blueprint.app_name (LLM-generated proper name)
instead of always falling back to _build_repo_name(idea) which
produced generic names like "build-web-app-590240"
- Add GET /dashboard/apps — list all live DO apps with status/age
- Add DELETE /dashboard/apps/{app_id} — delete specific app (protects vibedeploy)
Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Actionable comments posted: 3
♻️ Duplicate comments (2)
agent/server.py (2)
2008-2076: ⚠️ Potential issue | 🔴 Critical
The new dashboard endpoints have no /api/... aliases.
Right now only /dashboard/auth/* and /dashboard/cleanup-apps are registered, so calls that arrive with the /api prefix in the deployed environment still 404. Per the existing route convention, the bare path and the /api/... path must both be bound to the same handler.
🔧 Suggested fix
```diff
+@app.post("/api/dashboard/auth/upsert-user")
 @app.post("/dashboard/auth/upsert-user")
 async def dashboard_auth_upsert_user(body: UpsertUserRequest):
@@
+@app.get("/api/dashboard/auth/check-user")
 @app.get("/dashboard/auth/check-user")
 async def dashboard_auth_check_user(email: str):
@@
+@app.post("/api/dashboard/cleanup-apps")
 @app.post("/dashboard/cleanup-apps")
 async def dashboard_cleanup_apps():
```
As per coding guidelines (agent/server.py): "The backend exposes both /api/* and bare routes (/run, /resume, /brainstorm, /dashboard/*) because App Platform ingress and local dev differ" and "Keep route parity between /api/... and local bare paths when changing request/response shapes".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent/server.py` around lines 2008 - 2076, Add API-prefixed route aliases so the handlers keep parity with bare routes: register duplicate FastAPI decorators for the three handlers (dashboard_auth_upsert_user, dashboard_auth_check_user, dashboard_cleanup_apps) using /api/dashboard/auth/upsert-user, /api/dashboard/auth/check-user, and /api/dashboard/cleanup-apps respectively (matching POST/GET methods), so both the bare paths and /api/... variants point to the same functions.
1988-2005: ⚠️ Potential issue | 🟠 Major
The OAuth nullable fields still do not match the model contract.
name/image can be null in the OAuth profile, but here they are pinned as non-null str. The /dashboard/auth/upsert-user request can then fail with 422, or response validation can fail again on the way out.
🔧 Suggested fix
```diff
 class UpsertUserRequest(BaseModel):
     email: str
-    name: str = ""
-    image: str = ""
+    name: str | None = None
+    image: str | None = None
@@
 class UpsertUserResponse(BaseModel):
     id: str
     email: str
-    name: str
+    name: str | None = None
     approved: bool
     domain: str
+    image: str | None = None
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent/server.py` around lines 1988 - 2005, The Pydantic models currently treat OAuth fields that can be null as non-null strings; update UpsertUserRequest.name and UpsertUserRequest.image to be Optional[str] with a default of None, and also make UpsertUserResponse.name Optional[str] (default None) so response validation accepts null profile fields; locate and change the type annotations on the UpsertUserRequest and UpsertUserResponse classes (names: UpsertUserRequest, UpsertUserResponse) accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@agent/server.py`:
- Around line 2008-2067: The route handlers dashboard_auth_upsert_user and
dashboard_auth_check_user currently call get_pool() and contain raw SQL; move
this DB logic into the ResultStore abstraction (e.g., add methods
ResultStore.upsert_user(email, name, image, approved, domain) returning the row
and ResultStore.get_user_by_email(email) returning the row or None), replace the
SQL in server.py with calls to those ResultStore methods, and inject/obtain
ResultStore in the handlers instead of get_pool(); keep request/response
validation (UpsertUserRequest, UpsertUserResponse, CheckUserResponse) in place
and update any tests to use the ResultStore mock/stub.
In `@agent/tools/digitalocean.py`:
- Around line 112-130: The TTL cleanup currently deletes any non-protected app
older than ttl_hours; restrict it to only remove "generated" apps by checking
for the deployer marker (e.g., a specific tag, label/metadata key, or name
prefix used when creating apps) before computing age or calling delete_app.
Update the loop that inspects apps (variables name, app_id, created_str) to
first detect the generated-app marker (for example inspect app.get("spec",
{}).get("tags") or app.get("spec", {}).get("metadata") or name prefix) and
continue if the marker is missing, then proceed with created parsing, TTL
comparison, logging and delete_app; keep _PROTECTED_APP_NAMES logic unchanged.
- Around line 128-130: The cleanup currently appends every deletion attempt to
deleted even when delete_app(app_id) returns an error; update the
_ttl_cleanup_loop logic around the logger.info / result = await
delete_app(app_id) block to inspect result.get("status") (or similar) and only
append to deleted when the status indicates success, and append a separate
failed list (e.g., failed.append({"name": name, "app_id": app_id, "age_hours":
..., "error": result}) for errors); update any summaries/logs to report both
deleted and failed counts accordingly.
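The TTL comparison at the core of this cleanup loop can be sketched with stdlib datetime (assumed shapes only: DO-style created_at strings ending in Z, such as 2024-01-01T00:00:00Z; `is_expired` is a hypothetical helper, not the actual code in agent/tools/digitalocean.py):

```python
from datetime import datetime, timezone

def is_expired(created_str: str, ttl_hours: float, now: datetime) -> bool:
    """True when an app created at `created_str` has outlived its TTL."""
    # fromisoformat before Python 3.11 rejects a trailing 'Z', so normalize.
    created = datetime.fromisoformat(created_str.replace("Z", "+00:00"))
    age_hours = (now - created).total_seconds() / 3600.0
    return age_hours > ttl_hours

now = datetime(2024, 1, 1, 5, 0, tzinfo=timezone.utc)
```

With the default DEPLOY_APP_TTL_HOURS of 2, an app five hours old would be deleted while one created half an hour ago is kept; the protected-name check would still run before any deletion.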
---
Duplicate comments:
In `@agent/server.py`:
- Around line 2008-2076: Add API-prefixed route aliases so the handlers keep
parity with bare routes: register duplicate FastAPI decorators for the three
handlers (dashboard_auth_upsert_user, dashboard_auth_check_user,
dashboard_cleanup_apps) using /api/dashboard/auth/upsert-user,
/api/dashboard/auth/check-user, and /api/dashboard/cleanup-apps respectively
(matching POST/GET methods), so both the bare paths and /api/... variants point
to the same functions.
- Around line 1988-2005: The Pydantic models currently treat OAuth fields that
can be null as non-null strings; update UpsertUserRequest.name and
UpsertUserRequest.image to be Optional[str] with a default of None, and also
make UpsertUserResponse.name Optional[str] (default None) so response validation
accepts null profile fields; locate and change the type annotations on the
UpsertUserRequest and UpsertUserResponse classes (names: UpsertUserRequest,
UpsertUserResponse) accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1779978d-bc1b-4cc8-be86-855788f5a164
📒 Files selected for processing (2)
- agent/server.py
- agent/tools/digitalocean.py
```python
@app.post("/dashboard/auth/upsert-user")
async def dashboard_auth_upsert_user(body: UpsertUserRequest):
    from .db.connection import get_pool

    if not body.email or "@" not in body.email:
        raise HTTPException(status_code=400, detail="invalid_email")

    domain = body.email.rsplit("@", 1)[1].lower()
    approved = domain == "2weeks.co"

    pool = await get_pool()
    async with pool.acquire() as conn:
        row = await conn.fetchrow(
            """
            INSERT INTO users (email, name, image, approved, domain)
            VALUES ($1, $2, $3, $4, $5)
            ON CONFLICT (email) DO UPDATE
                SET name = EXCLUDED.name,
                    image = EXCLUDED.image,
                    last_login_at = now()
            RETURNING id, email, name, approved, domain
            """,
            body.email,
            body.name,
            body.image,
            approved,
            domain,
        )

    return UpsertUserResponse(
        id=row["id"],
        email=row["email"],
        name=row["name"],
        approved=row["approved"],
        domain=row["domain"],
    )


@app.get("/dashboard/auth/check-user")
async def dashboard_auth_check_user(email: str):
    from .db.connection import get_pool

    if not email or "@" not in email:
        raise HTTPException(status_code=400, detail="invalid_email")

    pool = await get_pool()
    async with pool.acquire() as conn:
        row = await conn.fetchrow(
            "SELECT approved, domain, email FROM users WHERE email = $1",
            email,
        )

    if row is None:
        raise HTTPException(status_code=404, detail="user_not_found")

    return CheckUserResponse(
        approved=row["approved"],
        domain=row["domain"],
        email=row["email"],
    )
```
🛠️ Refactor suggestion | 🟠 Major
The user upsert/lookup SQL should be moved out of the route handlers.
This change puts get_pool() and raw SQL directly inside the HTTP handlers, tying the auth policy and persistence rules to the route layer. Moving them into a dedicated repository layer or the ResultStore abstraction is safer for future policy changes and for testing.
As per coding guidelines agent/server.py: "Use ResultStore abstractions; do not scatter direct DB logic through route handlers".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agent/server.py` around lines 2008 - 2067, The route handlers
dashboard_auth_upsert_user and dashboard_auth_check_user currently call
get_pool() and contain raw SQL; move this DB logic into the ResultStore
abstraction (e.g., add methods ResultStore.upsert_user(email, name, image,
approved, domain) returning the row and ResultStore.get_user_by_email(email)
returning the row or None), replace the SQL in server.py with calls to those
ResultStore methods, and inject/obtain ResultStore in the handlers instead of
get_pool(); keep request/response validation (UpsertUserRequest,
UpsertUserResponse, CheckUserResponse) in place and update any tests to use the
ResultStore mock/stub.
```python
for app in apps:
    name = app.get("spec", {}).get("name", "")
    app_id = app.get("id", "")
    if not app_id or name in _PROTECTED_APP_NAMES:
        continue
    created_str = app.get("created_at", "")
    if not created_str:
        continue
    try:
        created = datetime.fromisoformat(created_str.replace("Z", "+00:00"))
    except (ValueError, TypeError):
        continue
    age_hours = (now - created).total_seconds() / 3600
    if age_hours < ttl_hours:
        skipped.append({"name": name, "age_hours": round(age_hours, 1)})
        continue
    logger.info("[TTL] Deleting expired app %s (age=%.1fh, ttl=%dh)", name, age_hours, ttl_hours)
    result = await delete_app(app_id)
    deleted.append({"name": name, "app_id": app_id, "age_hours": round(age_hours, 1), "result": result.get("status")})
```
The TTL cleanup can delete apps other than generated ones.
The current condition only exempts vibedeploy, so any other manually created or production app in the same DigitalOcean account becomes a deletion target once it exceeds the TTL. If the PR's goal is to clean up "generated apps", the deployer should attach an identifier (prefix/tag/metadata) at creation time, and this loop should only delete apps carrying that marker.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agent/tools/digitalocean.py` around lines 112 - 130, The TTL cleanup
currently deletes any non-protected app older than ttl_hours; restrict it to
only remove "generated" apps by checking for the deployer marker (e.g., a
specific tag, label/metadata key, or name prefix used when creating apps) before
computing age or calling delete_app. Update the loop that inspects apps
(variables name, app_id, created_str) to first detect the generated-app marker
(for example inspect app.get("spec", {}).get("tags") or app.get("spec",
{}).get("metadata") or name prefix) and continue if the marker is missing, then
proceed with created parsing, TTL comparison, logging and delete_app; keep
_PROTECTED_APP_NAMES logic unchanged.
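A sketch of what the marker check could look like, assuming the deployer were to tag generated apps; the tag value and name prefix here are hypothetical:

```python
GENERATED_TAG = "vibedeploy-generated"  # hypothetical marker set at deploy time

def is_generated_app(app: dict) -> bool:
    """Return True only for apps carrying the deployer's marker."""
    spec = app.get("spec") or {}
    if GENERATED_TAG in (spec.get("tags") or []):
        return True
    # fallback: a name-prefix convention, also hypothetical
    return str(spec.get("name", "")).startswith("vibe-")
```

The cleanup loop would then `continue` for any app where `is_generated_app(app)` is false, before any TTL comparison.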
```python
logger.info("[TTL] Deleting expired app %s (age=%.1fh, ttl=%dh)", name, age_hours, ttl_hours)
result = await delete_app(app_id)
deleted.append({"name": name, "app_id": app_id, "age_hours": round(age_hours, 1), "result": result.get("status")})
```
Counting failed deletions as deleted skews operational judgment.
Even when delete_app() returns {"status": "error"}, the item is currently always appended to deleted, so manual responses and the _ttl_cleanup_loop() logs make it look like more apps were removed than actually were. Split failures into a separate failed list.
🔧 Suggested fix

```diff
 now = datetime.now(timezone.utc)
 deleted = []
+failed = []
 skipped = []
@@
 logger.info("[TTL] Deleting expired app %s (age=%.1fh, ttl=%dh)", name, age_hours, ttl_hours)
 result = await delete_app(app_id)
-deleted.append({"name": name, "app_id": app_id, "age_hours": round(age_hours, 1), "result": result.get("status")})
+item = {
+    "name": name,
+    "app_id": app_id,
+    "age_hours": round(age_hours, 1),
+    "result": result.get("status"),
+}
+if result.get("status") == "deleted":
+    deleted.append(item)
+else:
+    item["error"] = result.get("error", "")
+    failed.append(item)
-return {"deleted": deleted, "skipped": skipped, "ttl_hours": ttl_hours}
+return {"deleted": deleted, "failed": failed, "skipped": skipped, "ttl_hours": ttl_hours}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agent/tools/digitalocean.py` around lines 128 - 130, The cleanup currently
appends every deletion attempt to deleted even when delete_app(app_id) returns
an error; update the _ttl_cleanup_loop logic around the logger.info / result =
await delete_app(app_id) block to inspect result.get("status") (or similar) and
only append to deleted when the status indicates success, and append a separate
failed list (e.g., failed.append({"name": name, "app_id": app_id, "age_hours":
..., "error": result}) for errors); update any summaries/logs to report both
deleted and failed counts accordingly.
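The success-vs-failure split in the suggested fix can also be expressed as a small pure helper, which makes the accounting testable in isolation. The function name is an assumption; the "deleted" status string follows the suggested fix:

```python
def split_deletion_results(attempts: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split deletion attempts into (deleted, failed) by reported status."""
    deleted, failed = [], []
    for item in attempts:
        if item.get("result") == "deleted":
            deleted.append(item)
        else:
            # carry the error detail forward for the failure summary
            failed.append(dict(item, error=item.get("error", "")))
    return deleted, failed
```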
Replace basic health check with comprehensive 100-point verification: - Frontend quality: semantic HTML, interactive elements, meta tags, broken images - API validation: JSON structure check, not just status codes - Asset integrity: CSS/JS files referenced in HTML are accessible - Scoring: >= 80 verified, 50-79 partial, < 50 triggers auto-repair loop Failed deployments now get diagnostic summary fed to LLM repair. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@agent/nodes/deployer.py`:
- Line 80: The app_name assignment currently uses blueprint.get("app_name") raw
and bypasses the slugification in _build_repo_name, causing invalid characters
to flow into create_github_repo() and build_app_spec(); update the assignment of
app_name so that if blueprint provides a non-empty value you normalize it with
the same logic as _build_repo_name (either call
_build_repo_name(blueprint["app_name"]) or apply the identical regex to remove
[^a-zA-Z0-9\s-] and truncate to 45 chars) and fall back to
_build_repo_name(idea) only when the sanitized result is empty.
In `@agent/server.py`:
- Around line 2020-2058: The users table schema created by ensure_users_table()
is missing columns (name, image, approved, domain, last_login_at) that the
upsert-user flow and dashboard_auth_check_user expect; either update
ensure_users_table() to add these columns with appropriate types/defaults (e.g.,
name text, image text, approved boolean default false, domain text,
last_login_at timestamptz) or modify the upsert and check handlers to
persist/read those values inside the existing metadata JSON column instead of
direct columns; locate ensure_users_table in agent/db/connection.py and the
handlers used by /dashboard/auth/upsert-user and dashboard_auth_check_user to
apply the chosen consistent schema change.
- Around line 611-623: The TTL cleanup background task (_ttl_task) is only
cancelled but not awaited, causing potential pending-task warnings and
incomplete cleanup; after calling _ttl_task.cancel() you should await the task
inside a contextlib.suppress(asyncio.CancelledError) block to ensure the
_ttl_cleanup_loop finishes cleanup safely (i.e., wrap awaiting _ttl_task with
contextlib.suppress(asyncio.CancelledError) right after the cancel call, before
awaiting _store.close()).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7a5ed264-896a-494f-82b5-d4d1245c262d
📒 Files selected for processing (3)
- agent/nodes/deployer.py
- agent/server.py
- agent/tools/digitalocean.py
🚧 Files skipped from review as they are similar to previous changes (1)
- agent/tools/digitalocean.py
```diff
 build_validation = state.get("build_validation") or {}
 deploy_gate_result = state.get("deploy_gate_result") or {}
-app_name = _build_repo_name(idea)
+app_name = str(blueprint.get("app_name") or "").strip() or _build_repo_name(idea)
```
🧩 Analysis chain
🏁 Scripts executed (against Two-Weeks-Team/vibeDeploy):

```shell
# First, examine the code around line 80 in deployer.py
sed -n '70,90p' agent/nodes/deployer.py

# Find and examine the _build_repo_name function
rg -n "_build_repo_name" agent/nodes/deployer.py -A 10

# Check how app_name is used downstream
sed -n '140,160p' agent/nodes/deployer.py

# Check the build_app_spec usage around line 251
sed -n '240,260p' agent/nodes/deployer.py

# Look for GitHub/DO API calls to understand name validation
rg -n "create_github_repo|build_app_spec" agent/nodes/deployer.py -B 2 -A 5

# Find and examine create_github_repo implementation
rg -n "async def create_github_repo" agent/tools/github.py -A 30

# Find and examine build_app_spec implementation
rg -n "def build_app_spec" agent/tools/digitalocean.py -A 30

# Check if there are any name validation tests
fd "test.*deployer" agent/tests/ --type f

# Check test fixtures or mocked events related to blueprint app_name
rg "app_name" agent/tests/ -B 2 -A 2

# Check if there are error handlers for GitHub repo creation failures
sed -n '149,200p' agent/tools/github.py

# Check if build_app_spec validates app names or if DigitalOcean has constraints
sed -n '291,350p' agent/tools/digitalocean.py

# Search for any GitHub naming validation or error handling
rg "GithubException|name.*invalid|naming" agent/tools/github.py -B 2 -A 2

# Check DigitalOcean app naming constraints in code comments or validation
rg "app.*name|naming|constraint" agent/tools/digitalocean.py -i -B 1 -A 1
```
blueprint["app_name"] needs character validation.
At line 80, using blueprint["app_name"] after only a .strip() lets values containing spaces, slashes, Unicode, and other characters that violate GitHub/DigitalOcean naming constraints flow into create_github_repo() (line 149) and build_app_spec() (line 251).
_build_repo_name() performs slugification — stripping [^a-zA-Z0-9\s-] via regex and capping length at 45 characters — but blueprint["app_name"] bypasses that protection. As a result, code generation succeeds while deployment fails only at the GitHub/DO API name-validation step. Run blueprint["app_name"] through the same normalization before use.
🔧 Suggested fix

```diff
-app_name = str(blueprint.get("app_name") or "").strip() or _build_repo_name(idea)
+requested_app_name = str(blueprint.get("app_name") or "").strip()
+app_name = _build_repo_name({"name": requested_app_name}) if requested_app_name else _build_repo_name(idea)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agent/nodes/deployer.py` at line 80, The app_name assignment currently uses
blueprint.get("app_name") raw and bypasses the slugification in
_build_repo_name, causing invalid characters to flow into create_github_repo()
and build_app_spec(); update the assignment of app_name so that if blueprint
provides a non-empty value you normalize it with the same logic as
_build_repo_name (either call _build_repo_name(blueprint["app_name"]) or apply
the identical regex to remove [^a-zA-Z0-9\s-] and truncate to 45 chars) and fall
back to _build_repo_name(idea) only when the sanitized result is empty.
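The normalization the review describes — drop disallowed characters, collapse whitespace to hyphens, cap at 45 characters — can be sketched as a standalone helper. This mirrors the regex quoted in the comment; the exact behavior of `_build_repo_name` may differ:

```python
import re

def slugify_app_name(raw: str, max_len: int = 45) -> str:
    """Normalize a requested app name per the review's described rules."""
    cleaned = re.sub(r"[^a-zA-Z0-9\s-]", "", raw)           # drop invalid chars
    slug = re.sub(r"[\s-]+", "-", cleaned.strip()).lower()  # spaces -> hyphens
    return slug[:max_len].strip("-")                        # cap length
```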
```python
# Start TTL cleanup for deployed apps
_ttl_task = None
if os.environ.get("DIGITALOCEAN_API_TOKEN"):
    from .tools.digitalocean import _ttl_cleanup_loop

    _ttl_task = asyncio.create_task(_ttl_cleanup_loop())
    logger.info("[TTL] App cleanup loop started (TTL=%sh)", os.environ.get("DEPLOY_APP_TTL_HOURS", "2"))

yield

if _ttl_task:
    _ttl_task.cancel()
await _store.close()
```
Await the TTL cleanup task after cancel().
Right now line 621 only cancels the task and shuts down immediately, which can leave a pending-task warning at exit or prevent the cleanup loop from finalizing properly. It is safer to await _ttl_task wrapped in contextlib.suppress(asyncio.CancelledError).
🔧 Suggested fix

```diff
+from contextlib import asynccontextmanager, suppress
@@
     yield
     if _ttl_task:
         _ttl_task.cancel()
+        with suppress(asyncio.CancelledError):
+            await _ttl_task
     await _store.close()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agent/server.py` around lines 611 - 623, The TTL cleanup background task
(_ttl_task) is only cancelled but not awaited, causing potential pending-task
warnings and incomplete cleanup; after calling _ttl_task.cancel() you should
await the task inside a contextlib.suppress(asyncio.CancelledError) block to
ensure the _ttl_cleanup_loop finishes cleanup safely (i.e., wrap awaiting
_ttl_task with contextlib.suppress(asyncio.CancelledError) right after the
cancel call, before awaiting _store.close()).
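The difference is observable in a self-contained example: the loop's finalization is only guaranteed to complete once the cancelled task is awaited. This is a generic asyncio sketch, not the project's lifespan code:

```python
import asyncio
from contextlib import suppress

events = []

async def cleanup_loop():
    try:
        while True:
            await asyncio.sleep(3600)
    except asyncio.CancelledError:
        events.append("loop-finalized")  # cleanup that must finish on shutdown
        raise

async def lifespan():
    task = asyncio.create_task(cleanup_loop())
    await asyncio.sleep(0)  # give the loop a chance to start
    task.cancel()
    with suppress(asyncio.CancelledError):
        await task  # ensures finalization completes before shutdown continues
    events.append("shutdown-complete")

asyncio.run(lifespan())
```

Without the `await task`, `shutdown-complete` can be recorded before (or without) the loop ever finalizing.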
```python
row = await conn.fetchrow(
    """
    INSERT INTO users (email, name, image, approved, domain)
    VALUES ($1, $2, $3, $4, $5)
    ON CONFLICT (email) DO UPDATE
        SET name = EXCLUDED.name,
            image = EXCLUDED.image,
            last_login_at = now()
    RETURNING id, email, name, approved, domain
    """,
    body.email,
    body.name,
    body.image,
    approved,
    domain,
)

return UpsertUserResponse(
    id=row["id"],
    email=row["email"],
    name=row["name"],
    approved=row["approved"],
    domain=row["domain"],
)


@app.get("/dashboard/auth/check-user")
async def dashboard_auth_check_user(email: str):
    from .db.connection import get_pool

    if not email or "@" not in email:
        raise HTTPException(status_code=400, detail="invalid_email")

    pool = await get_pool()
    async with pool.acquire() as conn:
        row = await conn.fetchrow(
            "SELECT approved, domain, email FROM users WHERE email = $1",
            email,
        )
```
The users table schema and this SQL don't match, so the very first login returns a 500.
ensure_users_table() in agent/db/connection.py currently creates only id, email, created_at, and metadata, while this code reads and writes name, image, approved, domain, and last_login_at directly. As a result, /dashboard/auth/upsert-user and /dashboard/auth/check-user break with an undefined-column error right after deployment. Before release, either land the schema migration first or consistently store these values inside metadata.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agent/server.py` around lines 2020 - 2058, The users table schema created by
ensure_users_table() is missing columns (name, image, approved, domain,
last_login_at) that the upsert-user flow and dashboard_auth_check_user expect;
either update ensure_users_table() to add these columns with appropriate
types/defaults (e.g., name text, image text, approved boolean default false,
domain text, last_login_at timestamptz) or modify the upsert and check handlers
to persist/read those values inside the existing metadata JSON column instead of
direct columns; locate ensure_users_table in agent/db/connection.py and the
handlers used by /dashboard/auth/upsert-user and dashboard_auth_check_user to
apply the chosen consistent schema change.
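A sketch of what an aligned ensure_users_table() could create. The column names follow the upsert/RETURNING SQL in this PR; the types and defaults are assumptions:

```python
# Hypothetical DDL for the first option (real columns); run idempotently
# on pool init. Types/defaults are assumptions, not from the codebase.
USERS_TABLE_DDL = """
CREATE TABLE IF NOT EXISTS users (
    id            bigserial PRIMARY KEY,
    email         text UNIQUE NOT NULL,
    name          text,
    image         text,
    approved      boolean NOT NULL DEFAULT false,
    domain        text,
    created_at    timestamptz NOT NULL DEFAULT now(),
    last_login_at timestamptz
)
"""

async def ensure_users_table(conn) -> None:
    """Idempotent schema guard; `conn` is an asyncpg-style connection."""
    await conn.execute(USERS_TABLE_DDL)
```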
Replace fixed 5-domain presets with LLM-generated unique design systems: - design_system_generator: now calls LLM with app context (idea, domain, inspiration, experience contract) to generate bespoke color palette, typography, spacing, motion, layout, and visual identity per app - design_tokens: uses LLM-generated oklch() colors, falls back to presets - typography: uses LLM-generated font pairings from Google Fonts - motion_tokens: uses LLM-generated motion intensity/easing Each app now gets a unique design system tailored to its concept, audience, and emotional tone — no more generic SaaS look. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Fixes based on 3-expert review consensus:
- Add DELETE to CORS allow_methods (was blocking dashboard app deletion)
- Add /api/auth/ to middleware public paths (prevent OAuth flow breakage)
- Add refetchInterval={300} to SessionProvider (auto-refresh approval)
- Fix TTL log default "2" → "72" to match actual default
- Fix _extract_hue regex to handle fractional oklch hue values
- Cap verification score at 100 (was possible to exceed)
- Fix cubic-bezier parsing to use regex extraction (safer for LLM output)
- Tighten middleware dot-check to match file extensions only
Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
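On the cubic-bezier point, regex extraction tolerates the extra prose and punctuation LLMs tend to wrap around values. A sketch of the idea (not the project's actual parser):

```python
import re

# Match four signed decimal values inside cubic-bezier(...), ignoring
# surrounding text. Pattern and function name are illustrative.
_BEZIER_RE = re.compile(
    r"cubic-bezier\(\s*(-?\d*\.?\d+)\s*,\s*(-?\d*\.?\d+)\s*,"
    r"\s*(-?\d*\.?\d+)\s*,\s*(-?\d*\.?\d+)\s*\)"
)

def parse_cubic_bezier(value: str):
    """Extract the four control values, or None so the caller can fall back."""
    m = _BEZIER_RE.search(value)
    if not m:
        return None  # caller falls back to a preset easing
    return tuple(float(g) for g in m.groups())
```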
Summary
[email protected]Changes
Backend (agent/)
agent/db/connection.py: Auto-createuserstable on pool init (idempotent)agent/server.py: AddPOST /dashboard/auth/upsert-userandGET /dashboard/auth/check-userendpointsFrontend (web/)
web/src/lib/auth.ts: NextAuth.js v5 config — Google OAuth, JWT sessions, domain-based approvalweb/src/middleware.ts: Route protection — public (/,/demo), auth-required (everything else)web/src/app/auth/signin/page.tsx: Google sign-in pageweb/src/app/auth/pending/page.tsx: Access restriction notice with contact infoweb/src/app/auth/error/page.tsx: OAuth error handlingweb/src/components/providers/session-provider.tsx: SessionProvider wrapperweb/src/components/shared/user-menu.tsx: User avatar + sign-out menuweb/src/app/layout.tsx: Wrapped with AuthSessionProviderInfrastructure
.do/app.yaml: Route/api/auth→ web (before/api→ api catch-all), add OAuth env varsAccess Restriction Message
Environment Variables Required
GOOGLE_CLIENT_ID— Google Cloud Console OAuth client IDGOOGLE_CLIENT_SECRET— Google Cloud Console OAuth client secretAUTH_SECRET— NextAuth secret (generate withnpx auth secret)AUTH_URL— App URL (auto-set via${APP_URL}in app.yaml)Test plan
npm run buildpasses (verified locally ✅)/dashboardredirects to/auth/signinwhen unauthenticated/auth/pendingwith restriction message/and/demoaccessible without authentication/api/auth/*to web component correctly🤖 Generated with Claude Code
Summary by CodeRabbit
Release Notes
New Features
Chores
Database