feat(flow): add AI Flow (智能流) module for automated task execution #5
base: main
Conversation
This commit introduces the AI Flow module, which enables automated task execution with configurable triggers. Key features include:

Backend:
- Flow CRD schema with support for cron, interval, one-time, and event triggers
- FlowResource and FlowExecution database models with proper indexes
- Flow service layer for CRUD operations and execution management
- Flow scheduler for periodic execution of scheduled flows
- REST API endpoints for flow management and timeline view
- Webhook endpoint for external trigger support

Frontend:
- Flow types and API client
- FlowContext for state management
- FlowList component for configuration management
- FlowTimeline component for execution history display
- FlowForm component for creating/editing flows
- FlowPage main component with tab navigation
- i18n translations (zh-CN and en)
- Navigation entry in DesktopNavLinks

Database:
- flows table for storing flow configurations
- flow_executions table for execution records
- Proper indexes for scheduler queries

Co-Authored-By: Claude <[email protected]>
📝 Walkthrough

Introduces a complete AI Flow feature enabling users to create, manage, and execute automated workflows with cron/interval/one-time/event-based triggers, webhook support, and execution history tracking across backend database schema, ORM models, API endpoints, services with background scheduling, and comprehensive frontend UI components with localization.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Scheduler as Flow Scheduler<br/>(Worker)
    participant DB as Database
    participant FlowSvc as Flow Service
    participant TaskExec as Task Executor
    participant ChatTrigger as Chat Trigger

    Scheduler->>DB: Acquire distributed lock
    Scheduler->>DB: Query flows with next_execution_time <= now
    DB-->>Scheduler: Return due flows
    loop For each due flow
        Scheduler->>FlowSvc: execute_flow(flow)
        FlowSvc->>DB: Create FlowExecution record
        FlowSvc->>FlowSvc: Resolve prompt template
        DB-->>FlowSvc: Execution created
        FlowSvc->>TaskExec: _trigger_task_execution(flow, execution)
        TaskExec->>DB: Create TaskResource linked to execution
        TaskExec->>ChatTrigger: Trigger chat with flow context
        ChatTrigger->>DB: Update execution status to RUNNING
        ChatTrigger-->>TaskExec: Chat completes
        TaskExec->>FlowSvc: Update execution status
        FlowSvc->>DB: Update FlowExecution + FlowResource stats
    end
    Scheduler->>DB: Release distributed lock
```

```mermaid
sequenceDiagram
    participant Client as API Client
    participant FlowAPI as Flow API<br/>Endpoints
    participant FlowSvc as Flow Service
    participant DB as Database
    participant Webhook as External<br/>Webhook

    Client->>FlowAPI: POST /flows/webhook/{token}
    FlowAPI->>FlowSvc: get_flow_by_webhook_token(token)
    FlowSvc->>DB: Query FlowResource by webhook_token
    DB-->>FlowSvc: Flow found
    FlowAPI->>FlowSvc: trigger_flow_by_webhook(flow, payload)
    FlowSvc->>FlowSvc: Resolve prompt with payload variables
    FlowSvc->>DB: Create FlowExecution
    FlowSvc->>DB: Update FlowResource last_execution_time
    DB-->>FlowSvc: Execution created & recorded
    FlowSvc-->>FlowAPI: Execution details
    FlowAPI-->>Client: 200 OK + execution response
```

```mermaid
sequenceDiagram
    participant ChatSystem as Chat Trigger<br/>System
    participant Emitter as Event Emitter<br/>(FlowEventEmitter)
    participant FlowSvc as Flow Service
    participant DB as Database

    ChatSystem->>ChatSystem: Initialize with event_emitter=FlowEventEmitter(execution_id)
    ChatSystem->>ChatSystem: Stream chat response
    ChatSystem->>Emitter: emit_chat_done(results)
    Emitter->>FlowSvc: _update_execution_status(execution_id, COMPLETED)
    FlowSvc->>DB: Update FlowExecution status, result_summary
    FlowSvc->>DB: Update FlowResource success_count, last_execution_status
    DB-->>FlowSvc: Updated
    FlowSvc-->>Emitter: Status updated
    Emitter-->>ChatSystem: Done
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
frontend/src/features/layout/components/DesktopNavLinks.tsx (1)
82-82: Missing `isWikiEnabled` in useMemo dependencies.

The `navItems` array conditionally includes the wiki item based on `isWikiEnabled`, but this value is not in the dependency array. If the runtime config changes, the memoized value won't update.

Proposed fix:

```diff
- [t, router, startTransition]
+ [t, router, startTransition, isWikiEnabled]
```
🤖 Fix all issues with AI agents
In @backend/app/api/endpoints/adapter/flows.py:
- Around line 182-200: The endpoint trigger_flow_webhook uses a mutable default
for the payload parameter (payload: Dict[str, Any] = {}), which can be shared
across requests; change the signature to accept an optional payload (e.g.,
payload: Optional[Dict[str, Any]] = None) and inside trigger_flow_webhook
normalize it to an empty dict when None before calling
flow_service.trigger_flow_by_webhook so the payload passed is a fresh dict for
each request.
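The pitfall behind this fix can be sketched in plain Python: a `={}` default is evaluated once at function definition time, so every call shares the same dict. The simplified functions below are illustrative stand-ins for the endpoint, not the actual `trigger_flow_webhook` code.

```python
from typing import Any, Dict, Optional

def trigger_flow_webhook_bad(payload: Dict[str, Any] = {}) -> Dict[str, Any]:
    # BUG: the same dict object is reused across every call
    payload.setdefault("seen", 0)
    payload["seen"] += 1
    return payload

def trigger_flow_webhook_fixed(payload: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    # Normalize to a fresh dict per call, as the fix above recommends
    payload = {} if payload is None else payload
    payload.setdefault("seen", 0)
    payload["seen"] += 1
    return payload

# The buggy version accumulates state across "requests":
assert trigger_flow_webhook_bad()["seen"] == 1
assert trigger_flow_webhook_bad()["seen"] == 2
# The fixed version starts fresh each time:
assert trigger_flow_webhook_fixed()["seen"] == 1
assert trigger_flow_webhook_fixed()["seen"] == 1
```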
In @backend/app/schemas/flow.py:
- Around line 262-264: The Pydantic schema FlowExecutionBase declares
trigger_reason as required but the ORM model FlowExecution.trigger_reason is
nullable; make the schema match the model by changing
FlowExecutionBase.trigger_reason to an optional field (use typing.Optional[str]
and set Field default to None, e.g. trigger_reason: Optional[str] = Field(None,
description="Human-readable trigger reason")), add the Optional import if
missing, and adjust any validators/consumers that assume a non-null value to
handle None.
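The schema/model mismatch can be illustrated with a stdlib dataclass standing in for the Pydantic model (the field names mirror the ones above; the class body here is a sketch, not the real schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowExecutionBase:
    execution_id: str
    # Mirrors a nullable ORM column, so it must default to None
    trigger_reason: Optional[str] = None

# Rows with a NULL trigger_reason now validate without error:
row = FlowExecutionBase(execution_id="exec-1")
assert row.trigger_reason is None
```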
In @backend/app/services/flow_scheduler.py:
- Around line 26-29: The lock expiry (FLOW_SCHEDULER_LOCK_EXPIRY) is currently
computed as max(FLOW_SCHEDULER_INTERVAL_SECONDS - 10, 10), which can release the
lock before long-running processing finishes; change the calculation to account
for worst-case processing time (e.g., set FLOW_SCHEDULER_LOCK_EXPIRY to a larger
value such as FLOW_SCHEDULER_INTERVAL_SECONDS * 2 or
FLOW_SCHEDULER_INTERVAL_SECONDS + MAX_FLOW_PROCESSING_SECONDS) or implement a
lock renewal/heartbeat in the scheduler loop that extends the Redis lock while
work is ongoing; update references to FLOW_SCHEDULER_LOCK_EXPIRY and
add/configure a MAX_FLOW_PROCESSING_SECONDS constant or renewal logic in the
flow scheduler function that acquires FLOW_SCHEDULER_LOCK_KEY.
- Around line 63-87: Update get_due_flows to acquire row-level locks using
SQLAlchemy's with_for_update(skip_locked=True) on the query so selected
FlowResource rows are locked until commit (prevents concurrent schedulers from
picking the same rows); also change the truthiness checks to explicit
comparisons (e.g., FlowResource.is_active.is_(True) or FlowResource.is_active ==
True per your project style and FlowResource.next_execution_time.isnot(None) /
FlowResource.next_execution_time != None to use is not None style) and keep the
trigger_type filter using FlowTriggerType.CRON/INTERVAL/ONE_TIME values; apply
these changes to the
db.query(FlowResource).filter(...).with_for_update(skip_locked=True).all() call
in get_due_flows.
- Around line 289-352: The worker currently creates and closes a new asyncio
event loop each cycle (variables loop, asyncio.new_event_loop(),
asyncio.set_event_loop()) which is inefficient and can leak resources; modify
flow_scheduler_worker to create the event loop once before entering the while
not stop_event.is_set() loop, set it as the current loop, reuse that loop for
acquire_flow_scheduler_lock() and release_flow_scheduler_lock() calls inside
each iteration, and only close the loop after exiting the main loop (ensuring
any held distributed lock is released with
loop.run_until_complete(release_flow_scheduler_lock()) before closing); keep
existing exception handling but move loop creation/closure outside the per-cycle
try so the loop is reused safely.
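The loop-reuse pattern described above looks roughly like this; the lock coroutines are placeholders for `acquire_flow_scheduler_lock()` / `release_flow_scheduler_lock()`, and the worker body is a sketch rather than the real scheduler:

```python
import asyncio
import threading
import time

async def acquire_lock() -> bool:  # placeholder for acquire_flow_scheduler_lock()
    return True

async def release_lock() -> bool:  # placeholder for release_flow_scheduler_lock()
    return True

def scheduler_worker(stop_event: threading.Event, cycles: list) -> None:
    # Create the event loop ONCE, before entering the per-cycle loop
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        while not stop_event.is_set():
            if loop.run_until_complete(acquire_lock()):
                try:
                    cycles.append("worked")  # process due flows here
                finally:
                    loop.run_until_complete(release_lock())
            stop_event.wait(timeout=0.01)
    finally:
        # Close the loop only once, after the worker exits
        loop.close()

stop = threading.Event()
done: list = []
t = threading.Thread(target=scheduler_worker, args=(stop, done))
t.start()
time.sleep(0.05)
stop.set()
t.join()
assert len(done) >= 1
```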
- Around line 90-157: The transaction commits the flow update before attempting
task creation, risking inconsistent state if _trigger_task_execution fails; in
execute_flow move the db.commit() to after _trigger_task_execution(db, flow,
execution) completes successfully (and keep db.rollback() in the except), so the
next_execution_time or enabled flag is only persisted when the task is created,
and remove the now-obsolete TODO about triggering the task; references:
execute_flow, _trigger_task_execution,
flow_service._calculate_next_execution_time, db.commit, db.rollback.
- Around line 160-286: The code in _trigger_task_execution commits the DB after
creating TaskResource and again after linking execution, causing orphaned tasks
if flow_service.update_execution_status fails; wrap creation of TaskResource,
setting execution.task_id and updating the execution status in a single DB
transaction (use the session transaction context or db.begin()) and only commit
after flow_service.update_execution_status returns successfully, remove the
redundant direct assignment of execution.status before calling
flow_service.update_execution_status, and ensure exceptions roll back the
transaction and call flow_service.update_execution_status(...,
status=FlowExecutionStatus.FAILED, error_message=...) where appropriate.
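The commit-only-after-success ordering from the two items above can be sketched with stdlib sqlite3 (table and column names here are illustrative, not the project's schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, status TEXT)")
db.execute("CREATE TABLE executions (id INTEGER PRIMARY KEY, task_id INTEGER, status TEXT)")
db.execute("INSERT INTO executions (id, task_id, status) VALUES (1, NULL, 'PENDING')")
db.commit()

def trigger_task(db: sqlite3.Connection, execution_id: int, fail: bool) -> None:
    try:
        cur = db.execute("INSERT INTO tasks (status) VALUES ('CREATED')")
        task_id = cur.lastrowid
        if fail:
            raise RuntimeError("status update failed")
        # Link the task and update status in the same transaction...
        db.execute(
            "UPDATE executions SET task_id = ?, status = 'RUNNING' WHERE id = ?",
            (task_id, execution_id),
        )
        db.commit()  # ...and commit only after every step succeeded
    except Exception:
        db.rollback()  # no orphaned task row survives a partial failure

trigger_task(db, 1, fail=True)
assert db.execute("SELECT COUNT(*) FROM tasks").fetchone()[0] == 0  # rolled back
trigger_task(db, 1, fail=False)
assert db.execute("SELECT status FROM executions WHERE id = 1").fetchone()[0] == "RUNNING"
```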
In @backend/app/services/flow.py:
- Around line 546-551: The loop currently re-queries FlowResource despite
flow_cache already containing extracted flow data; update the caching logic so
flow_cache stores the flow's team_id (e.g., flow_cache[flow_id]["team_id"]) when
flows are first loaded and then use that cached team_id to look up the Kind
(team) and set exec_dict["team_name"] instead of re-querying FlowResource;
additionally add an optional small cache for Kind names (team_name_cache) and
consult it before querying Kind to further reduce DB calls (refer to flow_cache,
FlowResource, Kind, exec_dict, and team_id).
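The two-level caching idea (flow data plus a team-name cache) can be sketched with plain dicts; the `query_kind_name` helper is a hypothetical stand-in for the ORM lookup of `Kind`:

```python
# Counted fake DB lookup, standing in for the Kind query
db_queries = {"count": 0}
KINDS = {"team-1": "Platform Team"}

def query_kind_name(team_id: str) -> str:
    db_queries["count"] += 1
    return KINDS[team_id]

flow_cache = {"flow-a": {"team_id": "team-1"}}  # team_id stored when flows load
team_name_cache: dict = {}

def resolve_team_name(flow_id: str) -> str:
    team_id = flow_cache[flow_id]["team_id"]  # no FlowResource re-query needed
    if team_id not in team_name_cache:
        team_name_cache[team_id] = query_kind_name(team_id)  # query Kind at most once
    return team_name_cache[team_id]

# Resolving the same flow's team repeatedly hits the "DB" only once:
names = [resolve_team_name("flow-a") for _ in range(5)]
assert names == ["Platform Team"] * 5
assert db_queries["count"] == 1
```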
- Around line 808-812: The code calls
datetime.fromisoformat(trigger_config.get("execute_at")) inside the
OneTimeTriggerConfig construction which will raise ValueError for malformed ISO
strings; wrap that conversion in a try/except that catches ValueError, validate
that trigger_config.get("execute_at") is a non-empty string before parsing, and
on failure log or raise a clear validation error (including the invalid value
and context) so trigger building in the code path using OneTimeTriggerConfig and
datetime.fromisoformat won’t raise an unhandled exception.
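A defensive parse along those lines might look like this (the function name and error wording are illustrative, not the project's code):

```python
from datetime import datetime
from typing import Any, Dict

def parse_execute_at(trigger_config: Dict[str, Any]) -> datetime:
    raw = trigger_config.get("execute_at")
    # Validate presence and type before parsing
    if not isinstance(raw, str) or not raw:
        raise ValueError(f"one_time trigger requires a non-empty 'execute_at' string, got {raw!r}")
    try:
        return datetime.fromisoformat(raw)
    except ValueError as e:
        # Surface the invalid value and context instead of an unhandled exception
        raise ValueError(f"Invalid ISO datetime for 'execute_at': {raw!r}") from e

assert parse_execute_at({"execute_at": "2025-01-01T09:00:00"}) == datetime(2025, 1, 1, 9, 0, 0)
try:
    parse_execute_at({"execute_at": "not-a-date"})
except ValueError as e:
    assert "not-a-date" in str(e)
```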
In @frontend/src/features/flows/components/FlowTimeline.tsx:
- Around line 149-152: In FlowTimeline.tsx the separator dot <span>·</span> is
always rendered causing a leading dot when exec.team_name is falsy; update the
JSX around exec.team_name and the dot so the dot is only rendered when
exec.team_name is truthy (e.g., render the team span and the dot together
conditionally or render the dot inline inside the team_name conditional) so that
the trigger text (exec.trigger_reason || exec.trigger_type) is not prefixed by
an orphaned separator.
In @frontend/src/types/flow.ts:
- Around line 52-56: FlowTriggerConfig currently declares a flat union of
CronTriggerConfig | IntervalTriggerConfig | OneTimeTriggerConfig |
EventTriggerConfig but must mirror the backend discriminated shape; change
FlowTriggerConfig to a discriminated union of objects each with a literal type
field (e.g., type: 'cron' | 'interval' | 'one_time' | 'event') and an optional
nested config property (e.g., { type: 'cron'; config?: CronTriggerConfig }) so
runtime type-narrowing works; update any code using FlowTriggerConfig to narrow
on the type field instead of assuming the concrete config shape.
🧹 Nitpick comments (23)
backend/app/models/flow.py (2)
43-44: Consider renaming `json` column to avoid shadowing.

The column name `json` shadows the imported `JSON` type. While this works, renaming to `resource` or `config` would improve clarity and avoid potential confusion.
67-70: Consider using timezone-aware datetime defaults to address the Python 3.12+ deprecation.

`datetime.utcnow` is deprecated in Python 3.12+. Use SQLAlchemy's `func.now()` for database-side defaults or `datetime.now(timezone.utc)` for Python-side timezone-aware timestamps.

Proposed fix using SQLAlchemy's server-side default:

```diff
+from sqlalchemy import func
+
 # Timestamps
-created_at = Column(DateTime, default=datetime.utcnow, nullable=False)
+created_at = Column(DateTime, default=func.now(), nullable=False)
 updated_at = Column(
-    DateTime, default=datetime.utcnow, onupdate=datetime.utcnow, nullable=False
+    DateTime, default=func.now(), onupdate=func.now(), nullable=False
 )
```

frontend/src/types/flow.ts (1)
68-68: Consider using `FlowTriggerConfig` instead of `Record<string, unknown>`.

Using `Record<string, unknown>` loses type safety. If the `FlowTriggerConfig` type is updated to match the backend structure, this field could use it for better type checking in components.

backend/app/schemas/flow.py (1)
57-63: Consider constraining `unit` field to valid values.

The `unit` field accepts any string, but should be limited to `'minutes'`, `'hours'`, `'days'` per the description. Using a `Literal` type provides validation.

Proposed fix:

```diff
+from typing import Literal
+
 class IntervalTriggerConfig(BaseModel):
     """Interval trigger configuration."""

     value: int = Field(..., description="Interval value")
-    unit: str = Field(
+    unit: Literal["minutes", "hours", "days"] = Field(
         ..., description="Interval unit: 'minutes', 'hours', 'days'"
     )
```

frontend/src/app/flow/page.tsx (1)
1-9: LGTM!

Clean Next.js App Router page implementation following the project conventions. The component correctly delegates to `FlowPage` from the features module.

Consider adding page metadata for SEO if this is a user-facing page:

```ts
export const metadata = {
  title: 'AI Flow',
  description: 'Automated task execution workflows',
}
```

frontend/src/features/flows/contexts/flowContext.tsx (2)
184-186: Remove `refreshExecutions` from the dependency array to prevent unnecessary re-renders.

The `useEffect` depends on both `executionFilter` and `refreshExecutions`, but since `refreshExecutions` is recreated whenever `executionFilter` changes (lines 129-147), this creates redundant effect triggers. The effect should only depend on `executionFilter`.

♻️ Recommended fix:

```diff
 // Refresh executions when filter changes
 useEffect(() => {
   refreshExecutions()
-}, [executionFilter, refreshExecutions])
+}, [executionFilter]) // eslint-disable-line react-hooks/exhaustive-deps
```

Alternatively, if you want to avoid the eslint-disable comment, you can restructure by extracting the API call logic outside the callback, or use `useCallback` with a ref pattern. However, the above solution is the most straightforward for this use case.
178-181: Consider including callback dependencies for consistency.

While the current implementation works because `refreshFlows` and `refreshExecutions` are stable callbacks, React's exhaustive-deps rule expects these to be included in the dependency array for consistency and future-proofing.

♻️ Optional improvement:

```diff
 // Initial load
 useEffect(() => {
   refreshFlows()
   refreshExecutions()
-}, [])
+}, [refreshFlows, refreshExecutions])
```

Note: This change is optional and primarily for linting compliance. The current empty array works correctly in this context.
frontend/src/features/flows/components/FlowForm.tsx (1)
84-99: Silent failure when loading teams.

If team loading fails, the error is only logged to the console. Users won't know why the team dropdown is empty.

Proposed improvement:

```diff
 } catch (error) {
   console.error('Failed to load teams:', error)
+  toast.error(t('load_teams_failed'))
 } finally {
```

frontend/src/features/flows/components/FlowList.tsx (2)
67-84: Consider optimistic UI update for toggle.

Currently, the toggle waits for the API call and list refresh before updating the UI. An optimistic update would improve perceived responsiveness.
This is a nice-to-have improvement. The current implementation is correct and provides accurate feedback.
86-100: Manual trigger doesn't refresh timeline.

After triggering a flow, the execution appears in the timeline, but `refreshExecutions` isn't called. Users won't see the new execution until they manually refresh or switch tabs.

Proposed fix, if `refreshExecutions` is available from context:

```diff
+const { flows, flowsLoading, flowsTotal, refreshFlows, loadMoreFlows, refreshExecutions } =
+  useFlowContext()
 const handleTrigger = useCallback(
   async (flow: Flow) => {
     setActionLoading(flow.id)
     try {
       await flowApis.triggerFlow(flow.id)
       toast.success(t('trigger_success'))
+      refreshExecutions?.()
     } catch (error) {
```

backend/tests/api/endpoints/test_flows.py (2)
102-102: Use `is` for boolean assertions.

Per Ruff E712, prefer `is True` or direct truthiness checks over `== True`.

Proposed fix:

```diff
- assert data["enabled"] == True
+ assert data["enabled"] is True
```

Or simply:

```diff
- assert data["enabled"] == True
+ assert data["enabled"]
```
23-42: Duplicate fixture definition.

The `test_team` fixture is defined identically in both `TestFlowEndpoints` and `TestFlowExecutionEndpoints`. Consider extracting to a shared conftest or module-level fixture.

Proposed approach: move to `conftest.py` or define once at module level:

```python
@pytest.fixture
def test_team(db_session, test_user):
    """Create a test team."""
    team = Kind(
        user_id=test_user.id,
        kind="Team",
        name="test-team",
        namespace="default",
        json={
            "apiVersion": "agent.wecode.io/v1",
            "kind": "Team",
            "metadata": {"name": "test-team", "namespace": "default"},
            "spec": {"displayName": "Test Team", "members": []},
        },
        is_active=True,
    )
    db_session.add(team)
    db_session.commit()
    db_session.refresh(team)
    return team
```

backend/app/services/flow.py (3)
46-51: Class attribute should use `ClassVar` annotation.

Per Ruff RUF012, mutable class attributes should be annotated to prevent accidental mutation across instances.

Proposed fix:

```diff
+from typing import Any, ClassVar, Dict, List, Optional, Callable
+
 class FlowService:
     """Service class for AI Flow operations."""

     # Supported prompt template variables
-    TEMPLATE_VARIABLES = {
+    TEMPLATE_VARIABLES: ClassVar[Dict[str, Callable[[], str]]] = {
         "date": lambda: datetime.utcnow().strftime("%Y-%m-%d"),
```
714-720: Move import to top of file.

The `json` import inside `_resolve_prompt_template` should be at the module level for consistency and a slight performance improvement.

Proposed fix, at the top of the file (around line 8):

```diff
 import logging
 import re
 import secrets
 import uuid
+import json
 from datetime import datetime, timedelta
```

Then remove the inline import at line 716.
876-885: Avoid catching bare `Exception`.

Per Ruff BLE001, catching all exceptions masks unexpected errors. The croniter library raises specific exceptions that should be caught.

Proposed fix:

```diff
 try:
     from croniter import croniter

     cron_expr = trigger_config.get("expression", "0 9 * * *")
     iter = croniter(cron_expr, now)
     return iter.get_next(datetime)
-except Exception as e:
+except (KeyError, ValueError) as e:
     logger.warning(f"Failed to parse cron expression: {e}")
     return None
```

Alternatively, import croniter at module level and catch its specific exception types.
frontend/src/features/flows/components/FlowTimeline.tsx (1)
64-97: Date grouping logic mutates Date objects.

Using `setHours(0, 0, 0, 0)` on `today`, `yesterday`, and `execDate` modifies the original objects. While this works in the current implementation, it can cause subtle bugs if references are reused.

Safer approach:

```diff
 const groupedExecutions = useMemo(() => {
   const groups: Record<string, FlowExecution[]> = {}

-  const today = new Date()
-  today.setHours(0, 0, 0, 0)
+  const today = new Date()
+  const todayStart = new Date(today.getFullYear(), today.getMonth(), today.getDate())

-  const yesterday = new Date(today)
-  yesterday.setDate(yesterday.getDate() - 1)
+  const yesterdayStart = new Date(todayStart)
+  yesterdayStart.setDate(yesterdayStart.getDate() - 1)

-  const weekAgo = new Date(today)
-  weekAgo.setDate(weekAgo.getDate() - 7)
+  const weekAgoStart = new Date(todayStart)
+  weekAgoStart.setDate(weekAgoStart.getDate() - 7)

   executions.forEach(exec => {
-    const execDate = new Date(exec.created_at)
-    execDate.setHours(0, 0, 0, 0)
+    const execDateTime = new Date(exec.created_at)
+    const execDate = new Date(execDateTime.getFullYear(), execDateTime.getMonth(), execDateTime.getDate())
```

backend/app/services/flow_scheduler.py (7)
32-60: Consider using `logging.exception` for better error diagnostics.

The exception handlers use `logging.error`, which doesn't include stack traces. Using `logging.exception` would provide more context when debugging lock acquisition or release failures.

♻️ Proposed refactor:

```diff
 except Exception as e:
-    logger.error(f"[flow_scheduler] Error acquiring distributed lock: {str(e)}")
+    logger.exception("[flow_scheduler] Error acquiring distributed lock")
     return False
```

```diff
 except Exception as e:
-    logger.error(f"[flow_scheduler] Error releasing lock: {str(e)}")
+    logger.exception("[flow_scheduler] Error releasing lock")
     return False
```

As per static analysis hints.
97-125: Eliminate redundant parsing of flow CRD.

`Flow.model_validate(flow.json)` is called twice (lines 97 and 125). Store the result from line 97 and reuse it at line 125 to avoid unnecessary parsing overhead.

♻️ Proposed refactor:

```diff
 try:
     flow_crd = Flow.model_validate(flow.json)
     trigger_type = flow.trigger_type

     # Determine trigger reason based on trigger type
     if trigger_type == FlowTriggerType.CRON.value:
         trigger_reason = f"Scheduled (cron: {flow_crd.spec.trigger.cron.expression})"
     elif trigger_type == FlowTriggerType.INTERVAL.value:
         interval = flow_crd.spec.trigger.interval
         trigger_reason = f"Scheduled (interval: {interval.value} {interval.unit})"
     elif trigger_type == FlowTriggerType.ONE_TIME.value:
         trigger_reason = "One-time scheduled execution"
     else:
         trigger_reason = "Scheduled execution"

     # Create execution record
     execution = flow_service._create_execution(
         db,
         flow=flow,
         user_id=flow.user_id,
         trigger_type=trigger_type,
         trigger_reason=trigger_reason,
     )

     logger.info(
         f"[flow_scheduler] Created execution {execution.id} for flow {flow.id} ({flow.name})"
     )

     # Update flow's next execution time
-    flow_crd = Flow.model_validate(flow.json)
+    # Reuse flow_crd parsed above
     trigger_config = flow_service._extract_trigger_config(flow_crd.spec.trigger)
```
171-171: Remove unused variable `flow_crd`.

The variable is assigned but never used in this function. Remove it to clean up the code.

♻️ Proposed fix:

```diff
 try:
     from app.models.task import TaskResource
     from app.models.kind import Kind

-    flow_crd = Flow.model_validate(flow.json)
-
     # Get team
```

As per static analysis hints.
209-212: Remove unused workspace repository variables.

The variables `git_url`, `git_repo`, `git_domain`, and `branch_name` are extracted from the workspace but never used. Either remove them or integrate them into the task payload if they're needed for future functionality.

♻️ Proposed cleanup:

```diff
 if workspace:
     ws_json = workspace.json
-    git_url = ws_json.get("spec", {}).get("repository", {}).get("url", "")
-    git_repo = ws_json.get("spec", {}).get("repository", {}).get("name", "")
-    git_domain = ws_json.get("spec", {}).get("repository", {}).get("domain", "")
-    branch_name = ws_json.get("spec", {}).get("repository", {}).get("branch", "")
+    # Repository info can be added to task payload if needed
```

As per static analysis hints.
277-286: Use `logging.exception` for better error diagnostics.

Replace `logging.error` with `logging.exception` to include stack traces, which will help with debugging task creation failures.

♻️ Proposed refactor:

```diff
 except Exception as e:
-    logger.error(
-        f"[flow_scheduler] Error triggering task for flow {flow.id}: {str(e)}"
-    )
+    logger.exception(f"[flow_scheduler] Error triggering task for flow {flow.id}")
     flow_service.update_execution_status(
         db,
         execution_id=execution.id,
         status=FlowExecutionStatus.FAILED,
         error_message=str(e),
     )
```

As per static analysis hints.
334-347: Use `logging.exception` for better error diagnostics.

Replace `logging.error` with `logging.exception` in exception handlers to include stack traces for easier debugging.

♻️ Proposed refactor:

```diff
 except Exception as e:
-    logger.error(f"[flow_scheduler] Error in scheduler cycle: {str(e)}")
+    logger.exception("[flow_scheduler] Error in scheduler cycle")
 finally:
     # Release lock
     try:
         loop.run_until_complete(release_flow_scheduler_lock())
     except Exception as e:
-        logger.error(f"[flow_scheduler] Error releasing lock: {str(e)}")
+        logger.exception("[flow_scheduler] Error releasing lock")

 except Exception as e:
-    logger.error(f"[flow_scheduler] Worker error: {str(e)}")
+    logger.exception("[flow_scheduler] Worker error")
```

As per static analysis hints.
373-389: Improve graceful shutdown handling and logging.

The shutdown logic could be more robust by checking whether the thread is still alive after the timeout and logging a warning if it didn't stop gracefully.

♻️ Proposed enhancement:

```diff
 def stop_flow_scheduler(app) -> None:
     """
     Stop the flow scheduler background worker.

     Args:
         app: FastAPI application instance
     """
     stop_event = getattr(app.state, "flow_scheduler_stop_event", None)
     thread = getattr(app.state, "flow_scheduler_thread", None)

     if stop_event:
         stop_event.set()
+        logger.info("[flow_scheduler] Shutdown signal sent to flow scheduler worker")

     if thread:
-        thread.join(timeout=5.0)
+        thread.join(timeout=10.0)  # Increase timeout for in-progress flows
+        if thread.is_alive():
+            logger.warning(
+                "[flow_scheduler] Flow scheduler worker did not stop gracefully within timeout"
+            )
+        else:
+            logger.info("[flow_scheduler] Flow scheduler worker stopped gracefully")
-
-    logger.info("[flow_scheduler] Flow scheduler worker stopped")
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (25)
- backend/alembic/versions/q7r8s9t0u1v2_add_flow_tables.py
- backend/app/api/api.py
- backend/app/api/endpoints/adapter/flows.py
- backend/app/models/flow.py
- backend/app/schemas/flow.py
- backend/app/services/flow.py
- backend/app/services/flow_scheduler.py
- backend/app/services/jobs.py
- backend/tests/api/endpoints/test_flows.py
- frontend/src/apis/flow.ts
- frontend/src/app/flow/page.tsx
- frontend/src/config/paths.ts
- frontend/src/features/flows/components/FlowForm.tsx
- frontend/src/features/flows/components/FlowList.tsx
- frontend/src/features/flows/components/FlowPage.tsx
- frontend/src/features/flows/components/FlowTimeline.tsx
- frontend/src/features/flows/components/index.ts
- frontend/src/features/flows/contexts/flowContext.tsx
- frontend/src/features/layout/components/DesktopNavLinks.tsx
- frontend/src/i18n/locales/en/common.json
- frontend/src/i18n/locales/en/flow.json
- frontend/src/i18n/locales/zh-CN/common.json
- frontend/src/i18n/locales/zh-CN/flow.json
- frontend/src/i18n/setup.ts
- frontend/src/types/flow.ts
🧰 Additional context used
🧠 Learnings (7)
📚 Learning: 2025-12-18T02:09:09.776Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-18T02:09:09.776Z
Learning: Applies to frontend/src/i18n/zh-CN.json : Frontend i18n Chinese values (zh-CN) MUST use UI terms: `'智能体列表'`, `'机器人配置'`
Applied to files:
- frontend/src/i18n/locales/en/common.json
- frontend/src/i18n/locales/zh-CN/common.json
- frontend/src/i18n/locales/en/flow.json
- frontend/src/i18n/locales/zh-CN/flow.json
📚 Learning: 2025-12-18T02:09:09.776Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-18T02:09:09.776Z
Learning: Applies to frontend/src/**/*.tsx : Frontend component files named with CRD terms: team-list.tsx, bot-form.tsx, model-selector.tsx
Applied to files:
- frontend/src/features/flows/components/index.ts
- frontend/src/features/flows/components/FlowForm.tsx
- frontend/src/features/flows/components/FlowList.tsx
📚 Learning: 2025-12-18T02:09:09.776Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-18T02:09:09.776Z
Learning: Applies to frontend/src/components/**/*.tsx : Extract reusable UI logic: place shared business components in `src/components/common/`, and pure UI components in `src/components/ui/` (shadcn/ui)
Applied to files:
frontend/src/features/flows/components/index.ts
📚 Learning: 2025-12-18T02:09:09.776Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-18T02:09:09.776Z
Learning: Backend database migrations: run `alembic revision --autogenerate -m 'description'` to create migrations, `alembic upgrade head` to apply, always test migrations before committing
Applied to files:
backend/alembic/versions/q7r8s9t0u1v2_add_flow_tables.py
📚 Learning: 2025-12-31T03:47:12.173Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-31T03:47:12.173Z
Learning: Applies to frontend/src/**/*.{ts,tsx} : Frontend i18n translation keys MUST use current namespace format `t('key.subkey')` for keys within namespace and `t('namespace:key.subkey')` for cross-namespace keys
Applied to files:
frontend/src/i18n/setup.ts
📚 Learning: 2025-12-31T03:47:12.173Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-31T03:47:12.173Z
Learning: Applies to frontend/src/**/*.{ts,tsx} : MUST use single namespace with `useTranslation()` - never use array format like `useTranslation(['common', 'groups'])`
Applied to files:
frontend/src/i18n/setup.ts
📚 Learning: 2025-12-31T03:47:12.173Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-31T03:47:12.173Z
Learning: Applies to frontend/src/**/*.{ts,tsx} : MUST use `useTranslation` hook imported from `@/hooks/useTranslation`, not from `react-i18next`
Applied to files:
frontend/src/i18n/setup.ts
🧬 Code graph analysis (10)
frontend/src/app/flow/page.tsx (2)
frontend/src/features/flows/components/FlowPage.tsx (1)
- FlowPage (102-108)
frontend/src/features/flows/components/index.ts (1)
- FlowPage (5-5)
frontend/src/features/flows/components/FlowPage.tsx (9)
frontend/src/hooks/useTranslation.ts (1)
- useTranslation (8-30)
frontend/src/features/flows/contexts/flowContext.tsx (2)
- useFlowContext (218-224)
- FlowProvider (70-216)
backend/app/schemas/flow.py (2)
- Flow (172-179)
- FlowList (182-187)
frontend/src/app/flow/page.tsx (1)
- Flow (7-9)
frontend/src/types/flow.ts (1)
- Flow (59-83)
frontend/src/components/ui/tabs.tsx (4)
- Tabs (59-59)
- TabsList (59-59)
- TabsTrigger (59-59)
- TabsContent (59-59)
frontend/src/features/flows/components/FlowTimeline.tsx (1)
- FlowTimeline (53-278)
frontend/src/features/flows/components/FlowList.tsx (1)
- FlowList (59-307)
frontend/src/features/flows/components/FlowForm.tsx (1)
- FlowForm (58-513)
backend/tests/api/endpoints/test_flows.py (2)
backend/app/models/flow.py (2)
- FlowExecution (84-138)
- FlowResource (25-81)
backend/app/schemas/flow.py (3)
- FlowExecutionStatus (38-46)
- FlowTaskType (15-19)
- FlowTriggerType (22-28)
frontend/src/features/flows/contexts/flowContext.tsx (2)
frontend/src/types/flow.ts (3)
- Flow (59-83)
- FlowExecution (122-143)
- FlowExecutionStatus (19-25)
frontend/src/apis/flow.ts (1)
- flowApis (21-126)
frontend/src/features/layout/components/DesktopNavLinks.tsx (1)
frontend/src/config/paths.ts (1)
- paths (7-55)
backend/app/services/jobs.py (1)
backend/app/services/flow_scheduler.py (2)
- start_flow_scheduler (355-370)
- stop_flow_scheduler (373-389)
frontend/src/features/flows/components/FlowTimeline.tsx (3)
backend/app/models/flow.py (1)
- FlowExecution (84-138)
frontend/src/types/flow.ts (2)
- FlowExecution (122-143)
- FlowExecutionStatus (19-25)
frontend/src/features/flows/contexts/flowContext.tsx (1)
- useFlowContext (218-224)
backend/app/services/flow_scheduler.py (6)
- backend/app/models/flow.py (1): FlowResource (25-81)
- backend/app/schemas/flow.py (3): Flow (172-179), FlowExecutionStatus (38-46), FlowTriggerType (22-28)
- frontend/src/types/flow.ts (3): Flow (59-83), FlowExecutionStatus (19-25), FlowTriggerType (13-13)
- backend/app/core/cache.py (1): setnx (121-135)
- backend/app/services/flow.py (4): _create_execution (652-690), _extract_trigger_config (834-859), _calculate_next_execution_time (861-906), update_execution_status (594-648)
- backend/app/models/task.py (1): TaskResource (26-77)
frontend/src/types/flow.ts (3)
- backend/app/schemas/flow.py (14): FlowTaskType (15-19), FlowTriggerType (22-28), FlowEventType (31-35), FlowExecutionStatus (38-46), CronTriggerConfig (50-54), IntervalTriggerConfig (57-63), OneTimeTriggerConfig (66-69), GitPushEventConfig (72-76), EventTriggerConfig (79-85), FlowTriggerConfig (88-103), Flow (172-179), FlowListResponse (250-254), FlowExecutionListResponse (303-307), FlowTimelineFilter (311-322)
- frontend/src/app/flow/page.tsx (1): Flow (7-9)
- backend/app/models/flow.py (1): FlowExecution (84-138)
backend/app/api/endpoints/adapter/flows.py (1)
- backend/app/services/flow.py (10): list_flows (174-204), create_flow (53-149), get_flow (151-172), update_flow (206-342), delete_flow (344-370), toggle_flow (372-414), trigger_flow_manually (416-444), trigger_flow_by_webhook (463-484), list_executions (488-555), get_execution (557-592)
🪛 Ruff (0.14.10)
backend/tests/api/endpoints/test_flows.py
81-81: Unused method argument: test_user
(ARG002)
102-102: Avoid equality comparisons to True; use data["enabled"]: for truth checks
Replace with data["enabled"]
(E712)
104-104: Unused method argument: test_flow
(ARG002)
130-130: Avoid equality comparisons to False; use not data["enabled"]: for false checks
Replace with not data["enabled"]
(E712)
148-148: Avoid equality comparisons to False; use not response.json()["enabled"]: for false checks
Replace with not response.json()["enabled"]
(E712)
155-155: Avoid equality comparisons to True; use response.json()["enabled"]: for truth checks
Replace with response.json()["enabled"]
(E712)
253-253: Unused method argument: test_execution
(ARG002)
272-272: Unused method argument: test_execution
(ARG002)
backend/app/services/flow.py
46-51: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
69-69: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
86-86: Avoid equality comparisons to True; use Kind.is_active: for truth checks
Replace with Kind.is_active
(E712)
105-105: Avoid equality comparisons to True; use TaskResource.is_active: for truth checks
Replace with TaskResource.is_active
(E712)
164-164: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
187-187: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
220-220: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
238-238: Avoid equality comparisons to True; use Kind.is_active: for truth checks
Replace with Kind.is_active
(E712)
259-259: Avoid equality comparisons to True; use TaskResource.is_active: for truth checks
Replace with TaskResource.is_active
(E712)
357-357: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
386-386: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
429-429: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
457-457: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
458-458: Avoid equality comparisons to True; use FlowResource.enabled: for truth checks
Replace with FlowResource.enabled
(E712)
832-832: Avoid specifying long messages outside the exception class
(TRY003)
883-883: Do not catch blind exception: Exception
(BLE001)
backend/app/services/flow_scheduler.py
48-48: Consider moving this statement to an else block
(TRY300)
49-49: Do not catch blind exception: Exception
(BLE001)
50-50: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
50-50: Use explicit conversion flag
Replace with conversion flag
(RUF010)
58-58: Do not catch blind exception: Exception
(BLE001)
59-59: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
59-59: Use explicit conversion flag
Replace with conversion flag
(RUF010)
76-76: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
77-77: Avoid equality comparisons to True; use FlowResource.enabled: for truth checks
Replace with FlowResource.enabled
(E712)
78-78: Comparison to None should be cond is not None
Replace with cond is not None
(E711)
153-153: Do not catch blind exception: Exception
(BLE001)
154-156: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
155-155: Use explicit conversion flag
Replace with conversion flag
(RUF010)
171-171: Local variable flow_crd is assigned to but never used
Remove assignment to unused variable flow_crd
(F841)
176-176: Avoid equality comparisons to True; use Kind.is_active: for truth checks
Replace with Kind.is_active
(E712)
203-203: Avoid equality comparisons to True; use TaskResource.is_active: for truth checks
Replace with TaskResource.is_active
(E712)
209-209: Local variable git_url is assigned to but never used
Remove assignment to unused variable git_url
(F841)
210-210: Local variable git_repo is assigned to but never used
Remove assignment to unused variable git_repo
(F841)
211-211: Local variable git_domain is assigned to but never used
Remove assignment to unused variable git_domain
(F841)
212-212: Local variable branch_name is assigned to but never used
Remove assignment to unused variable branch_name
(F841)
277-277: Do not catch blind exception: Exception
(BLE001)
278-280: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
279-279: Use explicit conversion flag
Replace with conversion flag
(RUF010)
334-334: Do not catch blind exception: Exception
(BLE001)
335-335: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
335-335: Use explicit conversion flag
Replace with conversion flag
(RUF010)
341-341: Do not catch blind exception: Exception
(BLE001)
342-342: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
342-342: Use explicit conversion flag
Replace with conversion flag
(RUF010)
346-346: Do not catch blind exception: Exception
(BLE001)
347-347: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
347-347: Use explicit conversion flag
Replace with conversion flag
(RUF010)
backend/app/api/endpoints/adapter/flows.py
40-42: Do not perform function call Query in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
43-43: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
44-44: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
69-69: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
70-70: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
88-88: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
89-89: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
105-105: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
106-106: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
124-124: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
125-125: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
143-143: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
144-144: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
163-163: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
164-164: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
185-185: Do not use mutable data structures for argument defaults
Replace with None; initialize within function
(B006)
186-186: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
211-213: Do not perform function call Query in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
214-214: Do not perform function call Query in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
215-215: Do not perform function call Query in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
216-216: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
217-217: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
244-244: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
245-245: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: E2E Tests (Shard 2/3)
- GitHub Check: E2E Tests (Shard 1/3)
- GitHub Check: E2E Tests (Shard 3/3)
- GitHub Check: Test wegent CLI Integration
- GitHub Check: Test Frontend
🔇 Additional comments (34)
frontend/src/features/layout/components/DesktopNavLinks.tsx (1)
14-14: LGTM! The Flow navigation integration follows the established patterns:
- activePage type correctly extended with 'flow'
- Prefetch added for smoother navigation
- Navigation item structure matches existing items (chat, code, wiki)
Also applies to: 33-33, 59-67
backend/app/models/flow.py (1)
84-138: LGTM! The FlowExecution model correctly establishes:
- Foreign key with CASCADE delete for cleanup when parent flow is removed
- Appropriate indexes for timeline queries, flow history, and status filtering
- Comprehensive fields for execution tracking including retry support
Note: The same datetime.utcnow deprecation applies to lines 126-129.
frontend/src/types/flow.ts (1)
1-160: LGTM overall! The type definitions provide comprehensive coverage for the Flow module:
- All trigger types and execution statuses are defined
- Request/response interfaces align with API contracts
- Filter interface supports timeline queries
The types establish a solid contract for the Flow UI components.
backend/app/schemas/flow.py (1)
1-322: Well-structured schema definitions! The Pydantic schemas provide:
- Clear CRD-style structure with Flow, FlowSpec, FlowStatus, and FlowMetadata
- Separate API models (FlowBase, FlowCreate, FlowUpdate, FlowInDB) following best practices
- Proper use of from_attributes = True for ORM compatibility
- Good field constraints (retryCount 0-3, required fields marked)
59-59: LGTM! The translation "智能流" for "AI Flow" is appropriate and consistent with the PR's feature naming convention. Based on learnings, this follows the required UI terminology for zh-CN.
frontend/src/i18n/locales/en/common.json (1)
59-59: LGTM! The English translation "AI Flow" appropriately matches the feature branding and is consistent with the zh-CN translation "智能流".
frontend/src/config/paths.ts (1)
35-37: LGTM! Path entry follows the established pattern.The flow path entry is correctly structured and consistent with other path definitions in the file.
frontend/src/i18n/setup.ts (1)
29-29: LGTM! Flow namespace correctly registered. The 'flow' namespace is properly added to both the loadTranslations array and the i18next configuration, enabling dynamic loading of flow translations.
Also applies to: 79-79
backend/app/services/jobs.py (1)
20-20: LGTM! Flow scheduler properly integrated into background jobs lifecycle. The flow scheduler is correctly integrated following the same pattern as other background workers:
- Import statement on Line 20
- Started in start_background_jobs() on Lines 185-186
- Stopped in stop_background_jobs() on Lines 213-215
The lifecycle management delegates thread handling to the scheduler module, maintaining consistency with the existing architecture.
Also applies to: 185-186, 213-215
frontend/src/features/flows/components/index.ts (1)
5-8: LGTM! Clean barrel exports for Flow components. The component exports follow standard barrel export patterns, providing a clean public API for the Flow feature.
frontend/src/i18n/locales/zh-CN/flow.json (1)
1-81: LGTM! Comprehensive Chinese translations for Flow UI. The translation file provides complete coverage of Flow-related UI strings with proper terminology:
- Uses "智能体" for team/agent (Line 42), consistent with existing conventions
- Includes all necessary UI elements: titles, actions, validation messages, status labels
- Terminology is consistent and appropriate for the Chinese UI
frontend/src/i18n/locales/en/flow.json (1)
1-81: LGTM! Well-structured localization file. The English localization file is comprehensive and well-organized. The terminology is consistent, examples are helpful (e.g., cron expression hints on line 32), and placeholder syntax follows the expected {{variable}} convention. The status strings align with the backend FlowExecutionStatus type.
backend/app/api/api.py (1)
31-31: LGTM! Standard router registration. The flows router import and registration follow the established pattern used by other endpoints in the codebase (teams, tasks, etc.). The /flows prefix and tags are appropriate.
Also applies to: 63-63
frontend/src/features/flows/components/FlowPage.tsx (2)
20-100: LGTM! Clean component structure. The FlowPageContent component is well-organized with proper state management, callback memoization, and clear separation of concerns. The tab navigation, form handling, and context integration are all implemented correctly. The type assertion on line 59 is acceptable given the Tabs component API.
102-108: LGTM! Proper provider pattern. The FlowPage component correctly wraps FlowPageContent with FlowProvider to ensure context is available to all child components.
backend/alembic/versions/q7r8s9t0u1v2_add_flow_tables.py (3)
28-82: LGTM! Well-designed flows table schema with comprehensive indexing. The flows table schema is well-structured with appropriate column types, defaults, and a comprehensive set of indexes optimized for the expected query patterns:
- Single-column indexes for filtering and lookups
- Composite indexes (ix_flows_enabled_next_exec, ix_flows_user_kind_name_ns, ix_flows_user_active) for scheduler and multi-criteria queries
- Unique constraint on (user_id, kind, name, namespace) for preventing duplicates
Note: Line 81 creates a unique index on the nullable webhook_token column. In MySQL, multiple NULL values are permitted in unique indexes, which is the intended behavior here (flows without webhooks can all have webhook_token=NULL).
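The multiple-NULLs-under-a-unique-index behavior is easy to verify locally. A quick stdlib sqlite3 sketch (SQLite happens to share MySQL's semantics here; the table and column names below are illustrative, not the migration's actual DDL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flows (id INTEGER PRIMARY KEY, webhook_token TEXT)")
conn.execute("CREATE UNIQUE INDEX ux_token ON flows (webhook_token)")

# Multiple NULLs coexist under a unique index
conn.execute("INSERT INTO flows (webhook_token) VALUES (NULL)")
conn.execute("INSERT INTO flows (webhook_token) VALUES (NULL)")

# Duplicate non-NULL values are rejected
conn.execute("INSERT INTO flows (webhook_token) VALUES ('abc')")
try:
    conn.execute("INSERT INTO flows (webhook_token) VALUES ('abc')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False

print(duplicate_allowed)  # False
```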
84-124: LGTM! Flow executions table with proper foreign key and indexes. The flow_executions table is correctly structured with:
- Appropriate CASCADE delete on the foreign key (line 113) ensuring executions are removed when flows are deleted
- Comprehensive indexes for common query patterns (user-based, flow-based, status filtering, time-range queries)
- Proper defaults for status and retry_attempt
The composite indexes (ix_flow_exec_user_created, ix_flow_exec_flow_created, ix_flow_exec_user_status) will significantly improve query performance for timeline views and status filtering.
127-150: LGTM! Proper downgrade implementation. The downgrade function correctly drops all indexes and tables in reverse order, ensuring clean rollback capability.
frontend/src/features/flows/components/FlowForm.tsx (3)
275-291: Potential timezone inconsistency with datetime-local input. The datetime-local input produces a local time string without timezone info. Converting it directly with new Date(e.target.value).toISOString() interprets the input as local time, which may not match the user's expectation if the server expects UTC.
Verify that the backend correctly handles the ISO string and that the displayed time matches user expectations. Consider documenting the expected timezone behavior or adding explicit timezone handling.
166-172: Name generation edge case. If displayName contains only special characters (e.g., "日报任务"), the regex replacement produces an empty string and the fallback flow-${Date.now()} is used. This is handled, but the resulting name may not be meaningful to users.
The fallback is acceptable, though you might consider using a transliteration library for non-ASCII display names in the future.
354-512: Form implementation looks good overall. The form properly handles create/edit modes, validates required fields, and provides appropriate user feedback. The dynamic trigger configuration rendering is well-structured.
frontend/src/features/flows/components/FlowList.tsx (2)
142-278: List implementation is well-structured. Good handling of loading states, empty states, pagination, and action menus. The conditional rendering for different trigger types and webhook URLs is clean.
280-305: Delete confirmation dialog is properly implemented. Uses destructive styling, includes flow name in message, and has proper cancel/confirm actions.
backend/tests/api/endpoints/test_flows.py (2)
81-183: Good test coverage for flow CRUD operations. Tests cover create, list, get, update, delete, toggle, trigger, and validation error cases. The soft-delete verification is a good practice.
Consider adding tests for:
- Accessing another user's flow (authorization)
- Invalid trigger configurations
- Edge cases like empty prompt templates
253-287: Execution endpoint tests look good. Coverage includes listing, filtering by flow_id and status, and retrieving specific executions.
backend/app/services/flow.py (1)
53-149: Flow creation logic is well-implemented. Proper validation of uniqueness, team/workspace existence, webhook token generation for event triggers, and CRD construction. Transaction handling via SQLAlchemy session is appropriate.
frontend/src/features/flows/components/FlowTimeline.tsx (2)
122-214: Execution rendering implementation is well-structured. Good use of Collapsible for expandable details, status icons/colors provide clear visual feedback, and the layout handles various execution states appropriately.
216-277: Timeline component handles states well. Proper handling of loading, empty, and populated states. Sticky headers for date groups improve navigation. Load more pagination works correctly.
backend/app/api/endpoints/adapter/flows.py (3)
182-200: Webhook endpoint security considerations. The webhook endpoint relies solely on the token for authentication. Consider:
- Rate limiting to prevent abuse
- Payload size limits to prevent DoS
- Logging webhook calls for audit purposes
Verify that the webhook token has sufficient entropy (32 bytes from secrets.token_urlsafe(32) in the service layer provides ~256 bits, which is adequate).
The current implementation is acceptable for an MVP, but these enhancements should be considered for production hardening.
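For illustration, a minimal sketch of token generation plus constant-time verification (generate_webhook_token and verify_webhook_token are hypothetical helper names; only the use of secrets.token_urlsafe(32) is confirmed by the service layer):

```python
import secrets

def generate_webhook_token() -> str:
    # 32 random bytes -> ~43-char URL-safe string, ~256 bits of entropy
    return secrets.token_urlsafe(32)

def verify_webhook_token(supplied: str, stored: str) -> bool:
    # Constant-time comparison avoids timing side channels
    return secrets.compare_digest(supplied, stored)
```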
35-63: List flows endpoint implementation looks good. Proper pagination, optional filters, and delegation to the service layer. The B008 Ruff warnings for Depends and Query are false positives; this is the standard FastAPI pattern.
206-257: Execution endpoints are well-implemented. Proper filtering options for timeline view (flow_id, status, date range), pagination, and access control via user_id from current_user.
frontend/src/apis/flow.ts (1)
21-126: API client implementation is correct. All endpoint paths match the backend router configuration:
- /flows endpoints for list/create operations
- /flows/{id} endpoints for get/update/delete
- /flows/{id}/toggle and /flows/{id}/trigger for flow operations
- /flows/executions and /flows/executions/{id} for execution history
Methods are well-typed with proper return types, query parameters are correctly constructed using URLSearchParams, and the implementation follows existing patterns. The status array filtering with forEach properly handles multiple query parameter values.
backend/app/services/flow_scheduler.py (2)
1-24: LGTM! Module setup and imports are well-organized. The license header, imports, and logger initialization follow standard practices.
355-371: LGTM! Scheduler startup is implemented correctly. The thread creation with a stop event and daemon mode is appropriate for a background worker.
def get_due_flows(db, now: datetime) -> List[FlowResource]:
    """
    Get all enabled flows that are due for execution.
    Returns flows where:
    - is_active = True
    - enabled = True
    - next_execution_time <= now
    - trigger_type is cron, interval, or one_time (not event)
    """
    return (
        db.query(FlowResource)
        .filter(
            FlowResource.is_active == True,
            FlowResource.enabled == True,
            FlowResource.next_execution_time != None,
            FlowResource.next_execution_time <= now,
            FlowResource.trigger_type.in_([
                FlowTriggerType.CRON.value,
                FlowTriggerType.INTERVAL.value,
                FlowTriggerType.ONE_TIME.value,
            ]),
        )
        .all()
    )
Add row-level locking to prevent race conditions.
The query fetches flows without row-level locking. If the distributed lock fails or expires prematurely, multiple scheduler instances could fetch and execute the same flows concurrently, leading to duplicate executions.
Add FOR UPDATE to lock the selected rows until the transaction commits.
🔒 Proposed fix with row-level locking
def get_due_flows(db, now: datetime) -> List[FlowResource]:
"""
Get all enabled flows that are due for execution.
Returns flows where:
- is_active = True
- enabled = True
- next_execution_time <= now
- trigger_type is cron, interval, or one_time (not event)
"""
return (
db.query(FlowResource)
.filter(
- FlowResource.is_active == True,
- FlowResource.enabled == True,
- FlowResource.next_execution_time != None,
+ FlowResource.is_active,
+ FlowResource.enabled,
+ FlowResource.next_execution_time.is_not(None),
FlowResource.next_execution_time <= now,
FlowResource.trigger_type.in_([
FlowTriggerType.CRON.value,
FlowTriggerType.INTERVAL.value,
FlowTriggerType.ONE_TIME.value,
]),
)
+ .with_for_update(skip_locked=True)
.all()
    )
Note: Using skip_locked=True allows other instances to skip flows that are already being processed.
Also addresses style issues flagged by static analysis (truthiness checks and is not None).
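The operator-style fixes (E712/E711) can be exercised against a toy SQLAlchemy model on in-memory SQLite; Item and its columns are illustrative, not this PR's schema (and note SQLite ignores FOR UPDATE, so only the filter style is demonstrated here):

```python
from datetime import datetime
from sqlalchemy import Boolean, Column, DateTime, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    enabled = Column(Boolean, default=False)
    next_run = Column(DateTime, nullable=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as db:
    db.add_all([
        Item(enabled=True, next_run=datetime(2024, 1, 1)),
        Item(enabled=True, next_run=None),
        Item(enabled=False, next_run=datetime(2024, 1, 1)),
    ])
    db.commit()
    # Explicit operators instead of `== True` / `!= None` (E712 / E711)
    due = (
        db.query(Item)
        .filter(
            Item.enabled.is_(True),
            Item.next_run.is_not(None),
            Item.next_run <= datetime(2024, 6, 1),
        )
        .all()
    )
```

Only the first row satisfies all three predicates.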
🧰 Tools
🪛 Ruff (0.14.10)
76-76: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
77-77: Avoid equality comparisons to True; use FlowResource.enabled: for truth checks
Replace with FlowResource.enabled
(E712)
78-78: Comparison to None should be cond is not None
Replace with cond is not None
(E711)
🤖 Prompt for AI Agents
In @backend/app/services/flow_scheduler.py around lines 63 - 87, Update
get_due_flows to acquire row-level locks using SQLAlchemy's
with_for_update(skip_locked=True) on the query so selected FlowResource rows are
locked until commit (prevents concurrent schedulers from picking the same rows);
also change the truthiness checks to explicit comparisons (e.g.,
FlowResource.is_active.is_(True) or FlowResource.is_active == True per your
project style and FlowResource.next_execution_time.isnot(None) /
FlowResource.next_execution_time != None to use is not None style) and keep the
trigger_type filter using FlowTriggerType.CRON/INTERVAL/ONE_TIME values; apply
these changes to the
db.query(FlowResource).filter(...).with_for_update(skip_locked=True).all() call
in get_due_flows.
def execute_flow(db, flow: FlowResource) -> None:
    """
    Execute a single flow by creating an execution record.
    The actual task execution is handled asynchronously by the task system.
    """
    try:
        flow_crd = Flow.model_validate(flow.json)
        trigger_type = flow.trigger_type

        # Determine trigger reason based on trigger type
        if trigger_type == FlowTriggerType.CRON.value:
            trigger_reason = f"Scheduled (cron: {flow_crd.spec.trigger.cron.expression})"
        elif trigger_type == FlowTriggerType.INTERVAL.value:
            interval = flow_crd.spec.trigger.interval
            trigger_reason = f"Scheduled (interval: {interval.value} {interval.unit})"
        elif trigger_type == FlowTriggerType.ONE_TIME.value:
            trigger_reason = "One-time scheduled execution"
        else:
            trigger_reason = "Scheduled execution"

        # Create execution record
        execution = flow_service._create_execution(
            db,
            flow=flow,
            user_id=flow.user_id,
            trigger_type=trigger_type,
            trigger_reason=trigger_reason,
        )

        logger.info(
            f"[flow_scheduler] Created execution {execution.id} for flow {flow.id} ({flow.name})"
        )

        # Update flow's next execution time
        flow_crd = Flow.model_validate(flow.json)
        trigger_config = flow_service._extract_trigger_config(flow_crd.spec.trigger)

        if trigger_type == FlowTriggerType.ONE_TIME.value:
            # One-time flows should be disabled after execution
            flow.enabled = False
            flow.next_execution_time = None
            flow_crd.spec.enabled = False
            flow.json = flow_crd.model_dump(mode="json")
            logger.info(
                f"[flow_scheduler] One-time flow {flow.id} disabled after execution"
            )
        else:
            # Calculate next execution time for recurring flows
            flow.next_execution_time = flow_service._calculate_next_execution_time(
                trigger_type, trigger_config
            )
            logger.info(
                f"[flow_scheduler] Next execution for flow {flow.id}: {flow.next_execution_time}"
            )

        db.commit()

        # TODO: Trigger actual task execution via the task system
        # This would involve calling the task creation API or directly
        # creating a task associated with this execution
        _trigger_task_execution(db, flow, execution)

    except Exception as e:
        logger.error(
            f"[flow_scheduler] Error executing flow {flow.id} ({flow.name}): {str(e)}"
        )
        db.rollback()
Fix transaction management to prevent inconsistent state.
The function commits the flow's next_execution_time update (line 146) before calling _trigger_task_execution (line 151). If task creation fails, the flow's next execution time is already advanced, potentially causing missed executions.
Move the commit to after _trigger_task_execution completes successfully, or handle failures by rolling back the next_execution_time update.
🔧 Proposed fix
db.commit()
- # TODO: Trigger actual task execution via the task system
- # This would involve calling the task creation API or directly
- # creating a task associated with this execution
_trigger_task_execution(db, flow, execution)
+
+ # Commit the next_execution_time update after task creation succeeds
+ db.commit()
except Exception as e:
logger.error(
f"[flow_scheduler] Error executing flow {flow.id} ({flow.name}): {str(e)}"
)
    db.rollback()
Also removes the obsolete TODO comment (lines 148-150).
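The commit-after-side-effect ordering can be sketched independently of this codebase; run_step and the callables below are illustrative, with db being any object exposing commit/rollback:

```python
def run_step(db, update_state, do_side_effect):
    """Persist a state change only if the dependent side effect succeeds.

    update_state mutates pending state on `db`; do_side_effect may raise.
    On failure everything is rolled back, so the state stays due and can
    be retried on the next cycle.
    """
    try:
        update_state()
        do_side_effect()   # e.g. create the task for this execution
        db.commit()        # persist next_execution_time only on success
        return True
    except Exception:
        db.rollback()      # pending state discarded; flow remains due
        return False
```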
🧰 Tools
🪛 Ruff (0.14.10)
153-153: Do not catch blind exception: Exception
(BLE001)
154-156: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
155-155: Use explicit conversion flag
Replace with conversion flag
(RUF010)
🤖 Prompt for AI Agents
In @backend/app/services/flow_scheduler.py around lines 90 - 157, The
transaction commits the flow update before attempting task creation, risking
inconsistent state if _trigger_task_execution fails; in execute_flow move the
db.commit() to after _trigger_task_execution(db, flow, execution) completes
successfully (and keep db.rollback() in the except), so the next_execution_time
or enabled flag is only persisted when the task is created, and remove the
now-obsolete TODO about triggering the task; references: execute_flow,
_trigger_task_execution, flow_service._calculate_next_execution_time, db.commit,
db.rollback.
def flow_scheduler_worker(stop_event: threading.Event) -> None:
    """
    Background worker for the flow scheduler.
    Periodically checks for flows that need execution and triggers them.
    """
    logger.info("[flow_scheduler] Flow scheduler worker started")

    while not stop_event.is_set():
        try:
            # Create async runtime for distributed locking
            loop = asyncio.new_event_loop()
            asyncio.set_event_loop(loop)

            # Try to acquire distributed lock
            lock_acquired = loop.run_until_complete(acquire_flow_scheduler_lock())

            if not lock_acquired:
                logger.debug(
                    "[flow_scheduler] Another instance is running the scheduler, skipping"
                )
            else:
                try:
                    logger.debug("[flow_scheduler] Starting scheduler cycle")

                    db = SessionLocal()
                    try:
                        now = datetime.utcnow()
                        due_flows = get_due_flows(db, now)

                        if due_flows:
                            logger.info(
                                f"[flow_scheduler] Found {len(due_flows)} flow(s) due for execution"
                            )

                            for flow in due_flows:
                                execute_flow(db, flow)
                        else:
                            logger.debug("[flow_scheduler] No flows due for execution")

                    finally:
                        db.close()

                    logger.debug("[flow_scheduler] Scheduler cycle completed")

                except Exception as e:
                    logger.error(f"[flow_scheduler] Error in scheduler cycle: {str(e)}")

                finally:
                    # Release lock
                    try:
                        loop.run_until_complete(release_flow_scheduler_lock())
                    except Exception as e:
                        logger.error(f"[flow_scheduler] Error releasing lock: {str(e)}")

                    loop.close()

        except Exception as e:
            logger.error(f"[flow_scheduler] Worker error: {str(e)}")

        # Wait for next cycle
        stop_event.wait(timeout=FLOW_SCHEDULER_INTERVAL_SECONDS)

    logger.info("[flow_scheduler] Flow scheduler worker stopped")
Avoid recreating event loop in every iteration.
Creating a new event loop on every scheduler cycle (lines 300-301) is inefficient and can lead to resource exhaustion over time. Create the event loop once before entering the main loop and reuse it.
🔧 Proposed fix
def flow_scheduler_worker(stop_event: threading.Event) -> None:
"""
Background worker for the flow scheduler.
Periodically checks for flows that need execution and triggers them.
"""
logger.info("[flow_scheduler] Flow scheduler worker started")
+
+ # Create event loop once for the lifetime of the worker
+ loop = asyncio.new_event_loop()
+ asyncio.set_event_loop(loop)
while not stop_event.is_set():
try:
- # Create async runtime for distributed locking
- loop = asyncio.new_event_loop()
- asyncio.set_event_loop(loop)
-
# Try to acquire distributed lock
lock_acquired = loop.run_until_complete(acquire_flow_scheduler_lock())
if not lock_acquired:
logger.debug(
"[flow_scheduler] Another instance is running the scheduler, skipping"
)
else:
try:
logger.debug("[flow_scheduler] Starting scheduler cycle")
db = SessionLocal()
try:
now = datetime.utcnow()
due_flows = get_due_flows(db, now)
if due_flows:
logger.info(
f"[flow_scheduler] Found {len(due_flows)} flow(s) due for execution"
)
for flow in due_flows:
execute_flow(db, flow)
else:
logger.debug("[flow_scheduler] No flows due for execution")
finally:
db.close()
logger.debug("[flow_scheduler] Scheduler cycle completed")
except Exception as e:
logger.error(f"[flow_scheduler] Error in scheduler cycle: {str(e)}")
finally:
# Release lock
try:
loop.run_until_complete(release_flow_scheduler_lock())
except Exception as e:
logger.error(f"[flow_scheduler] Error releasing lock: {str(e)}")
-
- loop.close()
except Exception as e:
logger.error(f"[flow_scheduler] Worker error: {str(e)}")
# Wait for next cycle
stop_event.wait(timeout=FLOW_SCHEDULER_INTERVAL_SECONDS)
+
+ # Clean up event loop on shutdown
+ loop.close()
    logger.info("[flow_scheduler] Flow scheduler worker stopped")
🧰 Tools
🪛 Ruff (0.14.10)
334-334: Do not catch blind exception: Exception
(BLE001)
335-335: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
335-335: Use explicit conversion flag
Replace with conversion flag
(RUF010)
341-341: Do not catch blind exception: Exception
(BLE001)
342-342: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
342-342: Use explicit conversion flag
Replace with conversion flag
(RUF010)
346-346: Do not catch blind exception: Exception
(BLE001)
347-347: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
347-347: Use explicit conversion flag
Replace with conversion flag
(RUF010)
🤖 Prompt for AI Agents
In @backend/app/services/flow_scheduler.py around lines 289 - 352, The worker
currently creates and closes a new asyncio event loop each cycle (variables
loop, asyncio.new_event_loop(), asyncio.set_event_loop()) which is inefficient
and can leak resources; modify flow_scheduler_worker to create the event loop
once before entering the while not stop_event.is_set() loop, set it as the
current loop, reuse that loop for acquire_flow_scheduler_lock() and
release_flow_scheduler_lock() calls inside each iteration, and only close the
loop after exiting the main loop (ensuring any held distributed lock is released
with loop.run_until_complete(release_flow_scheduler_lock()) before closing);
keep existing exception handling but move loop creation/closure outside the
per-cycle try so the loop is reused safely.
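The create-once/reuse pattern the review asks for can be demonstrated in isolation. `acquire_lock`/`release_lock` below are trivial stand-ins for the real distributed-lock coroutines, and the bounded `for` loop stands in for `while not stop_event.is_set()` so the sketch terminates:

```python
# Minimal demonstration of reusing one event loop across worker iterations
# instead of creating/closing a loop per cycle.
import asyncio
import threading

async def acquire_lock():
    return True  # stand-in for acquire_flow_scheduler_lock()

async def release_lock():
    return None  # stand-in for release_flow_scheduler_lock()

def worker(stop_event, cycles):
    loop = asyncio.new_event_loop()  # created once, before the main loop
    asyncio.set_event_loop(loop)
    try:
        for _ in range(cycles):  # stands in for `while not stop_event.is_set()`
            if loop.run_until_complete(acquire_lock()):
                try:
                    pass  # scheduler cycle would run here
                finally:
                    loop.run_until_complete(release_lock())
    finally:
        loop.close()  # closed once, on shutdown
    return loop.is_closed()

print(worker(threading.Event(), cycles=3))  # True
```

Reusing one loop avoids the per-cycle allocation and the risk of leaking loop resources when an iteration raises before `loop.close()`.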
if exec.flow_id in flow_cache:
    flow = db.query(FlowResource).filter(FlowResource.id == exec.flow_id).first()
    if flow and flow.team_id:
        team = db.query(Kind).filter(Kind.id == flow.team_id).first()
        if team:
            exec_dict["team_name"] = team.name
Redundant database query.
Inside the loop, flow is queried again even though the flow_id is already in flow_cache. The cache stores extracted data, but then another query fetches the full flow object again just to get team_id.
Proposed fix
Store team_id in the cache to avoid the redundant query:
if exec.flow_id not in flow_cache:
flow = db.query(FlowResource).filter(FlowResource.id == exec.flow_id).first()
if flow:
flow_crd = Flow.model_validate(flow.json)
flow_cache[exec.flow_id] = {
"name": flow.name,
"display_name": flow_crd.spec.displayName,
"task_type": flow_crd.spec.taskType.value,
+ "team_id": flow.team_id,
}
flow_info = flow_cache.get(exec.flow_id, {})
exec_dict["flow_name"] = flow_info.get("name")
exec_dict["flow_display_name"] = flow_info.get("display_name")
exec_dict["task_type"] = flow_info.get("task_type")
# Get team name if available
- if exec.flow_id in flow_cache:
- flow = db.query(FlowResource).filter(FlowResource.id == exec.flow_id).first()
- if flow and flow.team_id:
- team = db.query(Kind).filter(Kind.id == flow.team_id).first()
- if team:
- exec_dict["team_name"] = team.name
+ team_id = flow_info.get("team_id")
+ if team_id:
+ team = db.query(Kind).filter(Kind.id == team_id).first()
+ if team:
+ exec_dict["team_name"] = team.nameConsider also caching team names to further reduce queries.
🤖 Prompt for AI Agents
In @backend/app/services/flow.py around lines 546 - 551, The loop currently
re-queries FlowResource despite flow_cache already containing extracted flow
data; update the caching logic so flow_cache stores the flow's team_id (e.g.,
flow_cache[flow_id]["team_id"]) when flows are first loaded and then use that
cached team_id to look up the Kind (team) and set exec_dict["team_name"] instead
of re-querying FlowResource; additionally add an optional small cache for Kind
names (team_name_cache) and consult it before querying Kind to further reduce DB
calls (refer to flow_cache, FlowResource, Kind, exec_dict, and team_id).
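The two-level caching the prompt describes can be sketched in plain Python. `query_flow`/`query_team` below are hypothetical stand-ins for the real `db.query(...)` calls, instrumented so the query count is observable:

```python
# Cache flow metadata (including team_id) per flow and team names per team,
# so each flow and each team hits the database at most once.
QUERIES = []

def query_flow(flow_id):
    QUERIES.append(("flow", flow_id))
    return {"name": f"flow-{flow_id}", "team_id": 7}

def query_team(team_id):
    QUERIES.append(("team", team_id))
    return {"name": f"team-{team_id}"}

def enrich(execution_flow_ids):
    flow_cache, team_cache = {}, {}
    out = []
    for flow_id in execution_flow_ids:
        if flow_id not in flow_cache:
            flow_cache[flow_id] = query_flow(flow_id)
        info = flow_cache[flow_id]
        team_id = info["team_id"]
        if team_id not in team_cache:
            team_cache[team_id] = query_team(team_id)["name"]
        out.append({"flow_name": info["name"], "team_name": team_cache[team_id]})
    return out

rows = enrich([1, 1, 2, 1])
print(len(QUERIES))  # 3 -> flows 1 and 2, plus team 7, each fetched once
```

Without the cache, the same four-execution list would issue up to eight queries; with it, repeated flow and team IDs are free.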
    type=trigger_type_enum,
    one_time=OneTimeTriggerConfig(
        execute_at=datetime.fromisoformat(trigger_config.get("execute_at")),
    ),
)
Missing error handling for invalid ISO date string.
datetime.fromisoformat() will raise ValueError if the input is malformed. This could cause unhandled exceptions during trigger config building.
Proposed fix
elif trigger_type_enum == FlowTriggerType.ONE_TIME:
+ execute_at_str = trigger_config.get("execute_at")
+ if not execute_at_str:
+ raise HTTPException(
+ status_code=400,
+ detail="execute_at is required for one-time triggers",
+ )
+ try:
+ execute_at = datetime.fromisoformat(execute_at_str)
+ except ValueError:
+ raise HTTPException(
+ status_code=400,
+ detail=f"Invalid execute_at format: {execute_at_str}",
+ )
return FlowTriggerConfig(
type=trigger_type_enum,
one_time=OneTimeTriggerConfig(
- execute_at=datetime.fromisoformat(trigger_config.get("execute_at")),
+ execute_at=execute_at,
),
)
🤖 Prompt for AI Agents
In @backend/app/services/flow.py around lines 808 - 812, The code calls
datetime.fromisoformat(trigger_config.get("execute_at")) inside the
OneTimeTriggerConfig construction which will raise ValueError for malformed ISO
strings; wrap that conversion in a try/except that catches ValueError, validate
that trigger_config.get("execute_at") is a non-empty string before parsing, and
on failure log or raise a clear validation error (including the invalid value
and context) so trigger building in the code path using OneTimeTriggerConfig and
datetime.fromisoformat won’t raise an unhandled exception.
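The validation the prompt asks for fits in a small helper. `ValidationError` below is a stand-in for FastAPI's `HTTPException`; only `datetime.fromisoformat` comes from the standard library:

```python
# Reject missing or malformed execute_at values with a clear error instead of
# letting ValueError propagate out of trigger-config building.
from datetime import datetime

class ValidationError(Exception):
    pass  # stand-in for HTTPException(status_code=400, ...)

def parse_execute_at(trigger_config):
    raw = trigger_config.get("execute_at")
    if not isinstance(raw, str) or not raw:
        raise ValidationError("execute_at is required for one-time triggers")
    try:
        return datetime.fromisoformat(raw)
    except ValueError:
        raise ValidationError(f"Invalid execute_at format: {raw}")

print(parse_execute_at({"execute_at": "2025-06-01T12:00:00"}).year)  # 2025
try:
    parse_execute_at({"execute_at": "not-a-date"})
except ValidationError:
    print("rejected")  # rejected
```

Note `datetime.fromisoformat` also raises `TypeError` on non-string input, which the emptiness/type check above covers before parsing.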
export type FlowTriggerConfig =
  | CronTriggerConfig
  | IntervalTriggerConfig
  | OneTimeTriggerConfig
  | EventTriggerConfig
FlowTriggerConfig type doesn't match backend schema structure.
The backend FlowTriggerConfig (see backend/app/schemas/flow.py lines 88-103) includes a type discriminator field with optional nested configs. The frontend union type lacks this structure, making runtime type narrowing impossible.
Proposed fix to match backend schema
-export type FlowTriggerConfig =
- | CronTriggerConfig
- | IntervalTriggerConfig
- | OneTimeTriggerConfig
- | EventTriggerConfig
+export interface FlowTriggerConfig {
+ type: FlowTriggerType
+ cron?: CronTriggerConfig
+ interval?: IntervalTriggerConfig
+ one_time?: OneTimeTriggerConfig
+ event?: EventTriggerConfig
+}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
export interface FlowTriggerConfig {
  type: FlowTriggerType
  cron?: CronTriggerConfig
  interval?: IntervalTriggerConfig
  one_time?: OneTimeTriggerConfig
  event?: EventTriggerConfig
}
🤖 Prompt for AI Agents
In @frontend/src/types/flow.ts around lines 52 - 56, FlowTriggerConfig currently
declares a flat union of CronTriggerConfig | IntervalTriggerConfig |
OneTimeTriggerConfig | EventTriggerConfig but must mirror the backend
discriminated shape; change FlowTriggerConfig to a discriminated union of
objects each with a literal type field (e.g., type: 'cron' | 'interval' |
'one_time' | 'event') and an optional nested config property (e.g., { type:
'cron'; config?: CronTriggerConfig }) so runtime type-narrowing works; update
any code using FlowTriggerConfig to narrow on the type field instead of assuming
the concrete config shape.
Actionable comments posted: 1
🧹 Nitpick comments (3)
backend/app/models/flow.py (3)
66-70: Use timezone-aware timestamps and server-side defaults to avoid naive-UTC drift.
DateTime+datetime.utcnowproduces naive datetimes; this tends to leak into API/UI and sorting. PreferDateTime(timezone=True)+server_default=func.now()(andonupdate=func.now()), if consistent with the rest of the backend.Proposed diff
-from datetime import datetime
+from datetime import datetime
-from sqlalchemy import (
+from sqlalchemy import (
     JSON,
     Boolean,
     Column,
     DateTime,
     ForeignKey,
     Index,
     Integer,
     String,
     Text,
+    func,
 )
@@
-    created_at = Column(DateTime, default=datetime.utcnow, nullable=False)
+    created_at = Column(DateTime(timezone=True), server_default=func.now(), nullable=False)
     updated_at = Column(
-        DateTime, default=datetime.utcnow, onupdate=datetime.utcnow, nullable=False
+        DateTime(timezone=True),
+        server_default=func.now(),
+        onupdate=func.now(),
+        nullable=False,
     )
@@
-    created_at = Column(DateTime, default=datetime.utcnow, nullable=False, index=True)
+    created_at = Column(DateTime(timezone=True), server_default=func.now(), nullable=False, index=True)
     updated_at = Column(
-        DateTime, default=datetime.utcnow, onupdate=datetime.utcnow, nullable=False
+        DateTime(timezone=True),
+        server_default=func.now(),
+        onupdate=func.now(),
+        nullable=False,
     )
58-65: Add basic DB constraints for counters/retries to prevent invalid states.
Given the PR guarantees "0–3 retries", consider a `CheckConstraint` for `FlowExecution.retry_attempt BETWEEN 0 AND 3`, and non-negative checks for the `*_count` fields (and optionally `success_count + failure_count <= execution_count`).
Given the PR guarantees “0–3 retries”, consider aCheckConstraintforFlowExecution.retry_attempt BETWEEN 0 AND 3, and non-negative checks for the *_count fields (and optionallysuccess_count + failure_count <= execution_count).Also applies to: 120-122
35-37: Confirm whether missing foreign keys are intentional (user/team/workspace/task).
Only `FlowExecution.flow_id` is enforced at the DB level; if you rely on referential integrity and cascade semantics elsewhere, add FKs for `user_id`, `team_id`, `workspace_id`, and `task_id` (and optionally `relationship()`s for ORM traversal). If you intentionally avoid cross-module FK coupling, worth documenting that choice in the model docstring.
OnlyFlowExecution.flow_idis enforced at the DB level; if you rely on referential integrity and cascade semantics elsewhere, add FKs foruser_id,team_id,workspace_id, andtask_id(and optionallyrelationship()s for ORM traversal). If you intentionally avoid cross-module FK coupling, worth documenting that choice in the model docstring.Also applies to: 52-54, 94-107
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
backend/app/models/flow.py
🧰 Additional context used
🧬 Code graph analysis (1)
backend/app/models/flow.py (1)
frontend/src/types/flow.ts (1)
FlowExecution(122-143)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Test wegent CLI Integration
- GitHub Check: E2E Tests (Shard 3/3)
- GitHub Check: E2E Tests (Shard 1/3)
- GitHub Check: E2E Tests (Shard 2/3)
- GitHub Check: Test Frontend
# Webhook support
webhook_token = Column(String(255), nullable=True, unique=True)
Webhook token should be treated as a secret (hash at rest + enforce “required when webhook trigger”).
Right now webhook_token looks like it will be stored in plaintext and is only “optionally unique”. Consider storing a hash (and comparing in constant time in the auth layer), and adding a DB constraint so webhook-triggered flows can’t exist without a token.
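One generic hash-at-rest scheme, sketched with standard-library primitives only (this is not the project's implementation): store a SHA-256 digest in `webhook_token`, return the raw token to the caller exactly once, and compare in constant time at auth.

```python
import hashlib
import hmac
import secrets

def new_webhook_token():
    token = secrets.token_urlsafe(32)  # shown to the caller once, never stored
    digest = hashlib.sha256(token.encode()).hexdigest()  # what goes in the DB column
    return token, digest

def token_matches(presented, stored_digest):
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    # hmac.compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(candidate, stored_digest)

token, digest = new_webhook_token()
print(token_matches(token, digest))    # True
print(token_matches("wrong", digest))  # False
```

With this layout a database leak exposes only digests, and the uniqueness constraint still works because equal tokens produce equal digests.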
Static routes like /executions must be defined before dynamic /{flow_id}
routes to prevent FastAPI from matching 'executions' as an integer flow_id.
The Flow CRD schema requires a metadata field. Updated the test_flow fixture in TestFlowExecutionEndpoints to include the metadata with name and namespace.
- Change FlowList.tsx to import from @/components/ui/dropdown
- Change FlowForm.tsx to use Drawer instead of Sheet component
- Add collapsible.tsx component with @radix-ui/react-collapsible
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In @backend/app/api/endpoints/adapter/flows.py:
- Around line 149-167: The endpoint function trigger_flow_webhook currently uses
a mutable default payload: Dict[str, Any] = {}, which is an anti-pattern; change
the parameter to avoid a shared mutable default — either accept payload:
Optional[Dict[str, Any]] = None and normalize inside the function to an empty
dict when None before passing to flow_service.trigger_flow_by_webhook, or use
FastAPI's Body with default_factory=dict to create a fresh dict per request;
update the function signature and the payload handling accordingly (refer to
trigger_flow_webhook and the payload parameter).
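The pitfall behind that fix is Python's evaluate-defaults-once rule: a mutable default like `payload: Dict[str, Any] = {}` is one shared object across calls. A minimal sketch (hypothetical function names, not the endpoint itself):

```python
# A mutable default dict leaks state between calls; defaulting to None and
# normalizing inside the function gives each call a fresh dict.
def bad(payload={}):
    payload.setdefault("count", 0)
    payload["count"] += 1
    return payload["count"]

def good(payload=None):
    payload = {} if payload is None else payload
    payload.setdefault("count", 0)
    payload["count"] += 1
    return payload["count"]

print(bad(), bad())    # 1 2  <- state leaks between calls
print(good(), good())  # 1 1
```

In FastAPI specifically, `Body(default_factory=dict)` achieves the same per-request freshness while keeping the OpenAPI schema intact.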
In @frontend/src/components/ui/collapsible.tsx:
- Around line 7-16: The file imports Radix Collapsible primitives but references
non-existent exports; change the assigned symbols so CollapsibleTrigger is
assigned from CollapsiblePrimitive.Trigger and CollapsibleContent is assigned
from CollapsiblePrimitive.Content (keep Collapsible = CollapsiblePrimitive.Root
unchanged), then re-export Collapsible, CollapsibleTrigger, and
CollapsibleContent.
In @frontend/src/features/flows/components/FlowForm.tsx:
- Around line 51-56: defaultTriggerConfig currently sets one_time.execute_at
with new Date().toISOString() at module load, creating a stale timestamp;
replace the static map with a function getDefaultTriggerConfig(triggerType:
FlowTriggerType) that returns the appropriate config object and generates
execute_at dynamically (e.g., { execute_at: new Date().toISOString() }) for
'one_time'; update call sites that used defaultTriggerConfig (the locations
calling getDefaultTriggerConfig('cron') and the calls at the places previously
using getDefaultTriggerConfig(value)) to call this new function so each new flow
gets a fresh timestamp.
- Around line 271-292: The datetime-local input is mixing local and UTC
conversions causing displayed values to shift; stop converting
triggerConfig.execute_at to ISO for the input and instead store a local datetime
string for the form field (use the existing triggerConfig and setTriggerConfig
for this), then convert to a proper UTC/ISO timestamp only when the form is
submitted or when building the payload for the backend; update the input's value
logic and onChange to read/write a local datetime string and add conversion to
ISO in the submit handler that currently reads triggerConfig.execute_at.
🧹 Nitpick comments (4)
frontend/src/features/flows/components/FlowForm.tsx (1)
84-99: Consider showing user feedback when team loading fails. The error is only logged to the console. If teams fail to load, users won't know why the team dropdown is empty.
Proposed fix
  try {
    const response = await teamApis.getTeams({ page: 1, limit: 100 })
    setTeams(response.items)
  } catch (error) {
    console.error('Failed to load teams:', error)
+   toast.error(t('load_teams_failed'))
  } finally {
frontend/src/features/flows/components/FlowList.tsx (2)
86-100: Consider refreshing execution data after manual trigger. The trigger creates a new execution, but neither flows (for updated execution_count) nor executions are refreshed. The user won't see the updated stats or new execution until they manually refresh or switch tabs.
Proposed enhancement
  const handleTrigger = useCallback(
    async (flow: Flow) => {
      setActionLoading(flow.id)
      try {
        await flowApis.triggerFlow(flow.id)
+       await refreshFlows() // Update execution_count
        toast.success(t('trigger_success'))
      } catch (error) {
125-140: Minor: Interval label may have grammatical issues. The interval label `${value} ${unit}` doesn't handle singular/plural (e.g., "1 hours" instead of "1 hour"). This is a minor i18n concern. Consider using translation keys with pluralization support if the framework supports it, or handle it inline:
  case 'interval':
-   return `${flow.trigger_config?.value || ''} ${flow.trigger_config?.unit || ''}`
+   const val = flow.trigger_config?.value || 1
+   const unit = flow.trigger_config?.unit || 'hours'
+   return `${val} ${t(`unit_${unit}${val === 1 ? '_singular' : ''}`, { defaultValue: unit })}`
backend/tests/api/endpoints/test_flows.py (1)
106-106: Consider using `is` for boolean assertions (per linter). Static analysis suggests avoiding `== True/False`. While explicit comparison is clearer for API assertions, using `is` is more Pythonic.
- assert data["enabled"] == True
+ assert data["enabled"] is True
Apply similar changes to lines 145, 174, and 182.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`frontend/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (6)
backend/app/api/endpoints/adapter/flows.py
backend/tests/api/endpoints/test_flows.py
frontend/package.json
frontend/src/components/ui/collapsible.tsx
frontend/src/features/flows/components/FlowForm.tsx
frontend/src/features/flows/components/FlowList.tsx
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-12-18T02:09:09.776Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-18T02:09:09.776Z
Learning: Applies to frontend/src/components/**/*.tsx : Extract reusable UI logic: place shared business components in `src/components/common/`, and pure UI components in `src/components/ui/` (shadcn/ui)
Applied to files:
frontend/src/components/ui/collapsible.tsx
📚 Learning: 2025-12-18T02:09:09.776Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-18T02:09:09.776Z
Learning: Applies to frontend/src/**/*.tsx : Frontend component files named with CRD terms: team-list.tsx, bot-form.tsx, model-selector.tsx
Applied to files:
frontend/src/components/ui/collapsible.tsx
frontend/src/features/flows/components/FlowList.tsx
frontend/src/features/flows/components/FlowForm.tsx
🧬 Code graph analysis (4)
frontend/src/features/flows/components/FlowList.tsx (4)
frontend/src/types/flow.ts (2)
Flow (59-83), FlowTriggerType (13-13)
frontend/src/hooks/useTranslation.ts (1)
useTranslation (8-30)
frontend/src/features/flows/contexts/flowContext.tsx (1)
useFlowContext (218-224)
frontend/src/apis/flow.ts (1)
flowApis (21-126)
frontend/src/features/flows/components/FlowForm.tsx (3)
frontend/src/types/flow.ts (5)
Flow (59-83), FlowTriggerType (13-13), FlowTaskType (10-10), FlowUpdateRequest (102-113), FlowCreateRequest (86-99)
frontend/src/hooks/useTranslation.ts (1)
useTranslation (8-30)
frontend/src/apis/flow.ts (1)
flowApis (21-126)
backend/tests/api/endpoints/test_flows.py (2)
backend/app/models/flow.py (2)
FlowExecution (84-138), FlowResource (25-81)
backend/app/schemas/flow.py (3)
FlowExecutionStatus (38-46), FlowTaskType (15-19), FlowTriggerType (22-28)
backend/app/api/endpoints/adapter/flows.py (3)
backend/app/schemas/flow.py (6)
FlowCreate (209-212), FlowExecutionInDB (273-294), FlowExecutionListResponse (303-307), FlowInDB (230-247), FlowListResponse (250-254), FlowTriggerType (22-28)
backend/app/services/flow.py (10)
list_flows (174-204), create_flow (53-149), list_executions (488-555), get_execution (557-592), trigger_flow_by_webhook (463-484), get_flow (151-172), update_flow (206-342), delete_flow (344-370), toggle_flow (372-414), trigger_flow_manually (416-444)
backend/app/core/security.py (1)
get_current_user (43-65)
🪛 Ruff (0.14.10)
backend/tests/api/endpoints/test_flows.py
84-84: Unused method argument: test_user
(ARG002)
106-106: Avoid equality comparisons to True; use data["enabled"]: for truth checks
Replace with data["enabled"]
(E712)
109-109: Unused method argument: test_flow
(ARG002)
145-145: Avoid equality comparisons to False; use not data["enabled"]: for false checks
Replace with not data["enabled"]
(E712)
174-174: Avoid equality comparisons to False; use not response.json()["enabled"]: for false checks
Replace with not response.json()["enabled"]
(E712)
182-182: Avoid equality comparisons to True; use response.json()["enabled"]: for truth checks
Replace with response.json()["enabled"]
(E712)
287-287: Unused method argument: test_execution
(ARG002)
313-313: Unused method argument: test_execution
(ARG002)
backend/app/api/endpoints/adapter/flows.py
43-45: Do not perform function call Query in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
46-46: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
47-47: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
72-72: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
73-73: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
97-99: Do not perform function call Query in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
100-100: Do not perform function call Query in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
101-101: Do not perform function call Query in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
102-102: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
103-103: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
130-130: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
131-131: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
152-152: Do not use mutable data structures for argument defaults
Replace with None; initialize within function
(B006)
153-153: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
177-177: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
178-178: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
194-194: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
195-195: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
213-213: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
214-214: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
232-232: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
233-233: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
252-252: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
253-253: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: Test wegent CLI Integration
- GitHub Check: Test Backend (3.10)
- GitHub Check: Test Backend (3.11)
- GitHub Check: Test Frontend
- GitHub Check: E2E Tests (Shard 2/3)
- GitHub Check: E2E Tests (Shard 1/3)
- GitHub Check: E2E Tests (Shard 3/3)
🔇 Additional comments (18)
frontend/package.json (1)
49-49: All dependency checks pass. @radix-ui/[email protected] exists, declares React 19 support in its peer dependencies ("react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"), and the package-lock.json has been updated in this PR.
frontend/src/features/flows/components/FlowForm.tsx (3)
101-126: LGTM! Form reset logic correctly handles both edit (populating from flow) and create (resetting to defaults) modes, and properly triggers when the drawer opens.
366-492: LGTM! The form UI is well-structured with proper labeling, disabled states during loading, and clear visual hierarchy. The required field indicators (*) improve usability.
166-172: Backend already validates flow name uniqueness with clear error messaging. The backend correctly validates that flow names are unique per user/namespace (lines 61-78 of backend/app/services/flow.py), raising a 400 error with the message "Flow with name '{flow_in.name}' already exists". The frontend API client properly extracts this error message and surfaces it to the user via the toast notification in the error catch block. No action needed; duplicate names are properly prevented.
frontend/src/features/flows/components/FlowList.tsx (5)
52-57: LGTM! Clean icon mapping for trigger types with consistent sizing.
67-84: LGTM! Per-item loading state prevents multiple concurrent actions on the same flow, and the refresh after toggle ensures consistency with server state.
102-117: LGTM! Delete handler properly guards against null state and cleans up both loading and confirmation states in the finally block.
168-260: LGTM! The list rendering handles loading, empty, and populated states well. Per-item actions are properly disabled during loading, and responsive design hides stats on small screens.
280-304: LGTM! Delete confirmation dialog follows good UX patterns with clear messaging and destructive styling for the delete action.
backend/tests/api/endpoints/test_flows.py (5)
25-44: LGTM! The test team fixture properly creates a Kind entity with appropriate CRD structure for testing flows.
46-81: LGTM! The test flow fixture creates a complete FlowResource with proper CRD structure and all required fields.
198-215: LGTM! Good negative test case validating that the backend rejects flows with non-existent team IDs.
267-284: LGTM! The test execution fixture properly creates a completed execution record with timing data for testing the timeline/execution endpoints.
312-330: LGTM! Filter tests verify both flow_id and status filtering work correctly. The assertion on line 323 properly validates all returned items match the filter criteria.
backend/app/api/endpoints/adapter/flows.py (4)
38-66: LGTM! List flows endpoint properly handles pagination and optional filters. The B008 warnings about `Query`/`Depends` in defaults are false positives for FastAPI's dependency injection pattern.
88-90: Good documentation of route ordering. The comment clearly explains why static routes must precede dynamic routes. This prevents `/executions` from being matched as a `flow_id`.
210-226: LGTM! Delete endpoint properly uses 204 NO_CONTENT status code and delegates soft delete to the service layer.
249-265: LGTM! Manual trigger endpoint properly creates an execution record and returns the execution details. Good docstring explaining the use case.
import * as React from 'react'
import * as CollapsiblePrimitive from '@radix-ui/react-collapsible'

const Collapsible = CollapsiblePrimitive.Root

const CollapsibleTrigger = CollapsiblePrimitive.CollapsibleTrigger

const CollapsibleContent = CollapsiblePrimitive.CollapsibleContent

export { Collapsible, CollapsibleTrigger, CollapsibleContent }
🧩 Analysis chain
🌐 Web query:
@radix-ui/react-collapsible exported components Root Trigger Content
💡 Result:
Yes — @radix-ui/react-collapsible (Radix Primitives) exposes Collapsible.Root, Collapsible.Trigger and Collapsible.Content. Example usage:
import { Collapsible } from "radix-ui"
...
<Collapsible.Root>
<Collapsible.Trigger />
<Collapsible.Content />
</Collapsible.Root>
(see Radix docs). [1]
Sources
[1] Radix Primitives — Collapsible documentation.
Fix incorrect Radix export names (Trigger/Content)—the code references non-existent exports.
@radix-ui/react-collapsible exports Root, Trigger, and Content—not CollapsibleTrigger/CollapsibleContent. Update lines 9 and 11 to reference the correct export names.
Proposed fix
import * as React from 'react'
import * as CollapsiblePrimitive from '@radix-ui/react-collapsible'
const Collapsible = CollapsiblePrimitive.Root
-const CollapsibleTrigger = CollapsiblePrimitive.CollapsibleTrigger
+const CollapsibleTrigger = CollapsiblePrimitive.Trigger
-const CollapsibleContent = CollapsiblePrimitive.CollapsibleContent
+const CollapsibleContent = CollapsiblePrimitive.Content
export { Collapsible, CollapsibleTrigger, CollapsibleContent }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
import * as React from 'react'
import * as CollapsiblePrimitive from '@radix-ui/react-collapsible'

const Collapsible = CollapsiblePrimitive.Root
const CollapsibleTrigger = CollapsiblePrimitive.Trigger
const CollapsibleContent = CollapsiblePrimitive.Content

export { Collapsible, CollapsibleTrigger, CollapsibleContent }
🤖 Prompt for AI Agents
In @frontend/src/components/ui/collapsible.tsx around lines 7 - 16, The file
imports Radix Collapsible primitives but references non-existent exports; change
the assigned symbols so CollapsibleTrigger is assigned from
CollapsiblePrimitive.Trigger and CollapsibleContent is assigned from
CollapsiblePrimitive.Content (keep Collapsible = CollapsiblePrimitive.Root
unchanged), then re-export Collapsible, CollapsibleTrigger, and
CollapsibleContent.
const defaultTriggerConfig: Record<FlowTriggerType, Record<string, unknown>> = {
  cron: { expression: '0 9 * * *', timezone: 'UTC' },
  interval: { value: 1, unit: 'hours' },
  one_time: { execute_at: new Date().toISOString() },
  event: { event_type: 'webhook' },
}
Static default creates stale initial timestamp for one_time trigger.
defaultTriggerConfig.one_time.execute_at is set to new Date().toISOString() at module load time. This means every new flow created with one_time trigger will default to the timestamp when the module was first loaded, which could be hours or days in the past.
Proposed fix: generate timestamp dynamically
-const defaultTriggerConfig: Record<FlowTriggerType, Record<string, unknown>> = {
- cron: { expression: '0 9 * * *', timezone: 'UTC' },
- interval: { value: 1, unit: 'hours' },
- one_time: { execute_at: new Date().toISOString() },
- event: { event_type: 'webhook' },
+const getDefaultTriggerConfig = (type: FlowTriggerType): Record<string, unknown> => {
+ const configs: Record<FlowTriggerType, Record<string, unknown>> = {
+ cron: { expression: '0 9 * * *', timezone: 'UTC' },
+ interval: { value: 1, unit: 'hours' },
+ one_time: { execute_at: new Date().toISOString() },
+ event: { event_type: 'webhook' },
+ }
+ return configs[type]
}

Then update usages:
- Line 69: `getDefaultTriggerConfig('cron')`
- Line 120: `getDefaultTriggerConfig('cron')`
- Line 131: `getDefaultTriggerConfig(value)`
🤖 Prompt for AI Agents
In @frontend/src/features/flows/components/FlowForm.tsx around lines 51 - 56,
defaultTriggerConfig currently sets one_time.execute_at with new
Date().toISOString() at module load, creating a stale timestamp; replace the
static map with a function getDefaultTriggerConfig(triggerType: FlowTriggerType)
that returns the appropriate config object and generates execute_at dynamically
(e.g., { execute_at: new Date().toISOString() }) for 'one_time'; update call
sites that used defaultTriggerConfig (the locations calling
getDefaultTriggerConfig('cron') and the calls at the places previously using
getDefaultTriggerConfig(value)) to call this new function so each new flow gets
a fresh timestamp.
case 'one_time':
  return (
    <div>
      <Label>{t('execute_at')}</Label>
      <Input
        type="datetime-local"
        value={
          triggerConfig.execute_at
            ? new Date(triggerConfig.execute_at as string)
                .toISOString()
                .slice(0, 16)
            : ''
        }
        onChange={e =>
          setTriggerConfig({
            ...triggerConfig,
            execute_at: new Date(e.target.value).toISOString(),
          })
        }
      />
    </div>
  )
Timezone handling may cause unexpected behavior.
The datetime-local input returns local time, but new Date(e.target.value).toISOString() converts it to UTC. When displaying, new Date(triggerConfig.execute_at).toISOString().slice(0, 16) converts from UTC back to UTC string format, not local time. This could show a different time than what the user selected.
Consider adding timezone awareness or documentation
One approach is to keep the value as a local datetime string and convert to UTC only on submit:
value={
- triggerConfig.execute_at
- ? new Date(triggerConfig.execute_at as string)
- .toISOString()
- .slice(0, 16)
- : ''
+ (triggerConfig.execute_at as string)?.slice(0, 16) || ''
}
onChange={e =>
setTriggerConfig({
...triggerConfig,
- execute_at: new Date(e.target.value).toISOString(),
+ execute_at: e.target.value,
})
}

Then convert to ISO on submit if needed by the backend.
📝 Committable suggestion
case 'one_time':
  return (
    <div>
      <Label>{t('execute_at')}</Label>
      <Input
        type="datetime-local"
        value={
          (triggerConfig.execute_at as string)?.slice(0, 16) || ''
        }
        onChange={e =>
          setTriggerConfig({
            ...triggerConfig,
            execute_at: e.target.value,
          })
        }
      />
    </div>
  )
🤖 Prompt for AI Agents
In @frontend/src/features/flows/components/FlowForm.tsx around lines 271 - 292,
The datetime-local input is mixing local and UTC conversions causing displayed
values to shift; stop converting triggerConfig.execute_at to ISO for the input
and instead store a local datetime string for the form field (use the existing
triggerConfig and setTriggerConfig for this), then convert to a proper UTC/ISO
timestamp only when the form is submitted or when building the payload for the
backend; update the input's value logic and onChange to read/write a local
datetime string and add conversion to ISO in the submit handler that currently
reads triggerConfig.execute_at.
- Remove unused React import from collapsible.tsx
- Rename unused name state to _name in FlowForm.tsx
- Replace 'any' with proper types in FlowForm.tsx
- Remove unused useCallback import from FlowTimeline.tsx
- Remove unused FlowTriggerType import from flowContext.tsx
Add explicit return type annotation and ensure all return paths return proper string values to fix TypeScript strict mode errors.
Actionable comments posted: 0
🧹 Nitpick comments (6)
frontend/src/features/flows/components/FlowForm.tsx (4)
51-56: `one_time` default config uses stale timestamp.

`defaultTriggerConfig.one_time.execute_at` is evaluated once at module load time. If the user opens the form much later, they'll see a datetime in the past as the default.

Suggested fix: compute default dynamically
-const defaultTriggerConfig: Record<FlowTriggerType, Record<string, unknown>> = {
-  cron: { expression: '0 9 * * *', timezone: 'UTC' },
-  interval: { value: 1, unit: 'hours' },
-  one_time: { execute_at: new Date().toISOString() },
-  event: { event_type: 'webhook' },
-}
+const getDefaultTriggerConfig = (type: FlowTriggerType): Record<string, unknown> => {
+  const configs: Record<FlowTriggerType, Record<string, unknown>> = {
+    cron: { expression: '0 9 * * *', timezone: 'UTC' },
+    interval: { value: 1, unit: 'hours' },
+    one_time: { execute_at: new Date().toISOString() },
+    event: { event_type: 'webhook' },
+  }
+  return configs[type]
+}

Then update `handleTriggerTypeChange` and initial state accordingly.
84-99: Silent failure when loading teams.

If team loading fails, the user sees no indication, just an empty dropdown with no explanation. Consider showing a toast or inline error message.
Suggested fix
  } catch (error) {
    console.error('Failed to load teams:', error)
+   toast.error(t('load_teams_failed'))
  } finally {
276-291: Potential issues with datetime-local parsing.

The `datetime-local` input value doesn't include timezone info. When the user enters a local time and it's converted via `new Date(e.target.value).toISOString()`, the result depends on the browser's local timezone, which may differ from the server's expected timezone. Additionally, if `triggerConfig.execute_at` contains an invalid date string, `toISOString().slice(0, 16)` could throw.

Consider documenting the expected timezone behavior or using a date library for consistent handling. Also add defensive parsing:
 value={
-  triggerConfig.execute_at
-    ? new Date(triggerConfig.execute_at as string)
-        .toISOString()
-        .slice(0, 16)
-    : ''
+  (() => {
+    if (!triggerConfig.execute_at) return ''
+    const d = new Date(triggerConfig.execute_at as string)
+    return isNaN(d.getTime()) ? '' : d.toISOString().slice(0, 16)
+  })()
 }
312-347: Repetitive type assertions for `git_push` config.

The type assertion `as { repository?: string; branch?: string } | undefined` is repeated four times. Extract a type alias to improve readability and maintainability.

Suggested refactor
// Add near the top of the file or in types/flow.ts
type GitPushConfig = { repository?: string; branch?: string }

// Then use:
const gitPush = triggerConfig.git_push as GitPushConfig | undefined

frontend/src/features/flows/components/FlowList.tsx (2)
86-100: Consider refreshing flows after manual trigger.

After `flowApis.triggerFlow` succeeds, the `execution_count` displayed in the UI won't update until the user manually refreshes or navigates away. Consider calling `refreshFlows()` or updating the execution count optimistically.

Suggested fix
  try {
    await flowApis.triggerFlow(flow.id)
+   await refreshFlows()
    toast.success(t('trigger_success'))

Note: Add `refreshFlows` to the dependency array.
297-302: Delete button doesn't show loading state.

When `handleDelete` is executing, the dialog's action button remains enabled and shows no indication of progress. Users might click multiple times.

Suggested fix
 <AlertDialogAction
   onClick={handleDelete}
   className="bg-destructive text-destructive-foreground hover:bg-destructive/90"
+  disabled={actionLoading === deleteConfirmFlow?.id}
 >
-  {t('common:actions.delete')}
+  {actionLoading === deleteConfirmFlow?.id
+    ? t('common:actions.loading')
+    : t('common:actions.delete')}
 </AlertDialogAction>
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- frontend/src/components/ui/collapsible.tsx
- frontend/src/features/flows/components/FlowForm.tsx
- frontend/src/features/flows/components/FlowList.tsx
- frontend/src/features/flows/components/FlowTimeline.tsx
- frontend/src/features/flows/contexts/flowContext.tsx
🚧 Files skipped from review as they are similar to previous changes (2)
- frontend/src/features/flows/components/FlowTimeline.tsx
- frontend/src/features/flows/contexts/flowContext.tsx
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-12-18T02:09:09.776Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-18T02:09:09.776Z
Learning: Applies to frontend/src/components/**/*.tsx : Extract reusable UI logic: place shared business components in `src/components/common/`, and pure UI components in `src/components/ui/` (shadcn/ui)
Applied to files:
frontend/src/components/ui/collapsible.tsx
📚 Learning: 2025-12-18T02:09:09.776Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-18T02:09:09.776Z
Learning: Applies to frontend/src/**/*.tsx : Frontend component files named with CRD terms: team-list.tsx, bot-form.tsx, model-selector.tsx
Applied to files:
- frontend/src/components/ui/collapsible.tsx
- frontend/src/features/flows/components/FlowForm.tsx
- frontend/src/features/flows/components/FlowList.tsx
📚 Learning: 2025-12-18T02:09:09.776Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-18T02:09:09.776Z
Learning: Component extraction checklist: verify 2+ usage locations, self-contained logic, clear TypeScript types for props, flexible for different use cases, and single responsibility principle
Applied to files:
frontend/src/components/ui/collapsible.tsx
🧬 Code graph analysis (2)
frontend/src/features/flows/components/FlowForm.tsx (3)
frontend/src/types/flow.ts (5)
- Flow (59-83), FlowTriggerType (13-13), FlowTaskType (10-10), FlowUpdateRequest (102-113), FlowCreateRequest (86-99)

frontend/src/hooks/useTranslation.ts (1)
- useTranslation (8-30)

frontend/src/apis/flow.ts (1)
- flowApis (21-126)
frontend/src/features/flows/components/FlowList.tsx (4)
frontend/src/types/flow.ts (2)
- Flow (59-83), FlowTriggerType (13-13)

frontend/src/hooks/useTranslation.ts (1)
- useTranslation (8-30)

frontend/src/features/flows/contexts/flowContext.tsx (1)
- useFlowContext (217-223)

frontend/src/apis/flow.ts (1)
- flowApis (21-126)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: E2E Tests (Shard 2/3)
- GitHub Check: E2E Tests (Shard 1/3)
- GitHub Check: E2E Tests (Shard 3/3)
- GitHub Check: Test Backend (3.10)
- GitHub Check: Test wegent CLI Integration
🔇 Additional comments (3)
frontend/src/features/flows/components/FlowForm.tsx (1)
355-513: Form implementation looks solid overall.

The form properly handles create vs. edit modes, validates required fields before submission, shows appropriate loading states, and provides user feedback via toasts. The drawer-based UX with labeled fields and dynamic trigger configuration is well-structured.
frontend/src/components/ui/collapsible.tsx (1)
1-15: Clean re-export of Radix Collapsible primitives.

This follows the established shadcn/ui pattern for wrapping Radix primitives, as per the project's coding guidelines for pure UI components in `src/components/ui/`.

frontend/src/features/flows/components/FlowList.tsx (1)
52-57: Well-structured list component with good UX patterns.

The component properly handles loading states, empty states, pagination, and per-item action loading. The trigger-type icon mapping and delete confirmation dialog are nice touches for UX. Error handling with toasts provides good user feedback.
Also applies to: 143-260
- Fix mutable default argument in webhook endpoint (flows.py)
- Fix trigger_reason schema/model mismatch (Optional instead of required)
- Increase lock expiry time to 2x interval for flow scheduler
- Fix FlowTimeline separator dot display when team_name is empty
- Add refreshExecutions call after manual flow trigger
- Move json import to top of flow.py service file
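The first fix listed above is Python's classic mutable-default-argument pitfall: a `dict` (or `list`) default is created once at function definition and shared by every call. A minimal sketch of the bug and the conventional `None`-default fix (function names here are illustrative, not the actual webhook endpoint):

```python
from typing import Optional

# Bug: the default dict is created once and shared across all calls.
def handle_webhook_buggy(payload: dict = {}):
    payload.setdefault("seen", 0)
    payload["seen"] += 1  # mutates the shared default object
    return payload["seen"]

# Fix: default to None and build a fresh dict inside the function.
def handle_webhook_fixed(payload: Optional[dict] = None):
    if payload is None:
        payload = {}
    payload.setdefault("seen", 0)
    payload["seen"] += 1
    return payload["seen"]
```

Calling the buggy version twice with no argument returns 1 then 2, because state leaks between calls; the fixed version returns 1 every time.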
Force-pushed from 68addf8 to 637707c (Compare)
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In @backend/app/services/flow.py:
- Around line 805-811: The ONE_TIME branch assumes trigger_config contains
"execute_at" and calls datetime.fromisoformat on its value, which will TypeError
if missing or None; update the FlowTriggerType.ONE_TIME handling in the function
that builds FlowTriggerConfig to validate that trigger_config.get("execute_at")
is present and non-empty before calling datetime.fromisoformat, and if missing
raise a clear ValueError (or return a descriptive error) mentioning the missing
"execute_at" for OneTimeTriggerConfig so callers can handle it; reference
FlowTriggerType.ONE_TIME, FlowTriggerConfig, OneTimeTriggerConfig, and
datetime.fromisoformat when making the change.
- Around line 524-555: The loop in enrich logic re-queries FlowResource to get
team info even though flow data was cached in flow_cache when populating
"name"/"display_name"/"task_type"; update the code to use the cached flow data
(stored in flow_cache[exec.flow_id]) to read team_id and avoid the extra
db.query, or introduce a team_cache keyed by team_id to cache Kind lookups;
specifically, when filling exec_dict (after Flow.model_validate and setting
flow_cache[exec.flow_id]), read team_id from the cached entry (or cache and
reuse Kind results) and only query Kind for missing team_ids, then set
exec_dict["team_name"] before appending FlowExecutionInDB.
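The `execute_at` validation the first prompt asks for is a small guard in front of `datetime.fromisoformat`; a sketch only, where `parse_execute_at` and its error message are illustrative stand-ins for the service's actual code:

```python
from datetime import datetime


def parse_execute_at(trigger_config: dict) -> datetime:
    """Parse the one_time trigger's execute_at, failing loudly when it is absent."""
    raw = trigger_config.get("execute_at")
    if not raw:
        # Without this guard, fromisoformat(None) raises an opaque TypeError.
        raise ValueError("one_time trigger config requires a non-empty 'execute_at'")
    return datetime.fromisoformat(raw)
```

Callers then get a descriptive `ValueError` naming the missing field instead of a `TypeError` from deep inside the parsing call.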
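The caching the second prompt describes is plain per-loop memoisation keyed by id; a schematic version with stand-in query callables (`fetch_flow` and `fetch_team` are hypothetical, standing in for the `db.query` lookups):

```python
def enrich_executions(executions, fetch_flow, fetch_team):
    """Attach flow and team names to execution dicts, hitting each id at most once."""
    flow_cache = {}  # flow_id -> flow dict (mirrors flow_cache in the service)
    team_cache = {}  # team_id -> team name (the suggested additional cache)
    enriched = []
    for ex in executions:
        fid = ex["flow_id"]
        if fid not in flow_cache:
            flow_cache[fid] = fetch_flow(fid)
        flow = flow_cache[fid]
        tid = flow["team_id"]
        if tid not in team_cache:
            team_cache[tid] = fetch_team(tid)["name"]
        enriched.append({**ex, "flow_name": flow["name"], "team_name": team_cache[tid]})
    return enriched
```

With N executions over F distinct flows and T distinct teams, this issues F + T lookups instead of one extra team query per execution.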
🧹 Nitpick comments (7)
frontend/src/features/flows/components/FlowList.tsx (2)
121-125: Consider using user's locale for date formatting.
`toLocaleString()` without arguments uses the browser's default locale, which is generally correct. However, if the app has explicit i18n locale settings, you might want to pass the current language for consistency.

💡 Suggested improvement
 const formatNextExecution = (dateStr?: string) => {
   if (!dateStr) return '-'
   const date = new Date(dateStr)
-  return date.toLocaleString()
+  return date.toLocaleString(i18n.language)
 }

This would require accessing `i18n` from the `useTranslation` hook.
228-232: Ensure toggle switch doesn't trigger during pending actions on other flows.

The `disabled` prop only checks if `actionLoading === flow.id`, which is correct. However, if multiple rapid toggles occur, consider if the toggle should also be disabled while `flowsLoading` is true to prevent race conditions during list refresh.

backend/app/services/flow_scheduler.py (3)
168-214: Remove unused variables: `flow_crd`, `git_url`, `git_repo`, `git_domain`, `branch_name`.

These variables are assigned but never used. The workspace metadata extraction appears to be incomplete or placeholder code.
♻️ Suggested fix
 def _trigger_task_execution(db, flow: FlowResource, execution) -> None:
     try:
         from app.models.task import TaskResource
         from app.models.kind import Kind

-        flow_crd = Flow.model_validate(flow.json)
-
         # Get team
         team = (
             db.query(Kind)
             .filter(Kind.id == flow.team_id, Kind.kind == "Team", Kind.is_active == True)
             .first()
         )
         if not team:
             # ... error handling
             return

         # Get workspace if specified
         workspace = None
-        git_url = ""
-        git_repo = ""
-        git_domain = ""
-        branch_name = ""
         if flow.workspace_id:
             workspace = (
                 db.query(TaskResource)
                 .filter(
                     TaskResource.id == flow.workspace_id,
                     TaskResource.kind == "Workspace",
                     TaskResource.is_active == True,
                 )
                 .first()
             )
-            if workspace:
-                ws_json = workspace.json
-                git_url = ws_json.get("spec", {}).get("repository", {}).get("url", "")
-                git_repo = ws_json.get("spec", {}).get("repository", {}).get("name", "")
-                git_domain = ws_json.get("spec", {}).get("repository", {}).get("domain", "")
-                branch_name = ws_json.get("spec", {}).get("repository", {}).get("branch", "")

If this metadata is intended for future use, consider adding a TODO comment explaining the planned usage.
290-353: Consider using `logger.exception()` instead of `logger.error()` for better stack traces.

When catching exceptions, `logger.exception()` automatically includes the stack trace, which aids debugging production issues.

💡 Suggested improvement for error logging
 except Exception as e:
-    logger.error(f"[flow_scheduler] Worker error: {str(e)}")
+    logger.exception("[flow_scheduler] Worker error")
logger.errorcalls in exception handlers (lines 51, 60, 156, 280, 336, 343).
298-345: Event loop created per cycle - consider reusing for efficiency.

A new event loop is created and closed on each scheduler cycle. While functional, this adds overhead. Consider creating the loop once at worker start.
💡 Suggested optimization
 def flow_scheduler_worker(stop_event: threading.Event) -> None:
     logger.info("[flow_scheduler] Flow scheduler worker started")
+    loop = asyncio.new_event_loop()
+    asyncio.set_event_loop(loop)
     while not stop_event.is_set():
         try:
-            loop = asyncio.new_event_loop()
-            asyncio.set_event_loop(loop)
-
             lock_acquired = loop.run_until_complete(acquire_flow_scheduler_lock())
             # ... rest of logic ...
-            loop.close()
         except Exception as e:
             logger.exception("[flow_scheduler] Worker error")
         stop_event.wait(timeout=FLOW_SCHEDULER_INTERVAL_SECONDS)
+    loop.close()
     logger.info("[flow_scheduler] Flow scheduler worker stopped")

backend/app/services/flow.py (1)
47-52: Add `ClassVar` annotation for class-level mutable attributes.

While this dictionary is not mutated at runtime, adding the `ClassVar` annotation makes the intent clear and satisfies the linter.

💡 Suggested fix
+from typing import ClassVar, Callable
+
 class FlowService:
     """Service class for AI Flow operations."""

     # Supported prompt template variables
-    TEMPLATE_VARIABLES = {
+    TEMPLATE_VARIABLES: ClassVar[Dict[str, Callable[[], str]]] = {
         "date": lambda: datetime.utcnow().strftime("%Y-%m-%d"),

backend/app/schemas/flow.py (1)
57-64: Consider constraining `unit` field to valid values.

The `unit` field accepts any string, but only `'minutes'`, `'hours'`, and `'days'` are valid per the service logic.
+from typing import Literal + +IntervalUnit = Literal["minutes", "hours", "days"] + class IntervalTriggerConfig(BaseModel): """Interval trigger configuration.""" value: int = Field(..., description="Interval value") - unit: str = Field( - ..., description="Interval unit: 'minutes', 'hours', 'days'" - ) + unit: IntervalUnit = Field( + ..., description="Interval unit: 'minutes', 'hours', 'days'" + )This provides validation at the schema level rather than relying on service logic.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- backend/app/api/endpoints/adapter/flows.py
- backend/app/schemas/flow.py
- backend/app/services/flow.py
- backend/app/services/flow_scheduler.py
- frontend/src/features/flows/components/FlowList.tsx
- frontend/src/features/flows/components/FlowTimeline.tsx
🚧 Files skipped from review as they are similar to previous changes (1)
- frontend/src/features/flows/components/FlowTimeline.tsx
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-12-18T02:09:09.776Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-18T02:09:09.776Z
Learning: Applies to frontend/src/**/*.tsx : Frontend component files named with CRD terms: team-list.tsx, bot-form.tsx, model-selector.tsx
Applied to files:
frontend/src/features/flows/components/FlowList.tsx
🧬 Code graph analysis (3)
frontend/src/features/flows/components/FlowList.tsx (4)
frontend/src/types/flow.ts (2)
- Flow (59-83), FlowTriggerType (13-13)

frontend/src/hooks/useTranslation.ts (1)
- useTranslation (8-30)

frontend/src/features/flows/contexts/flowContext.tsx (1)
- useFlowContext (217-223)

frontend/src/apis/flow.ts (1)
- flowApis (21-126)
backend/app/services/flow_scheduler.py (5)
backend/app/models/flow.py (1)
- FlowResource (25-81)

backend/app/schemas/flow.py (3)
- Flow (172-179), FlowExecutionStatus (38-46), FlowTriggerType (22-28)

backend/app/core/cache.py (1)
- setnx (121-135)

backend/app/services/flow.py (4)
- _create_execution (653-691), _extract_trigger_config (833-858), _calculate_next_execution_time (860-905), update_execution_status (595-649)

backend/app/models/task.py (1)
- TaskResource (26-77)
backend/app/schemas/flow.py (1)
frontend/src/types/flow.ts (14)
- FlowTaskType (10-10), FlowTriggerType (13-13), FlowEventType (16-16), FlowExecutionStatus (19-25), CronTriggerConfig (28-31), IntervalTriggerConfig (33-36), OneTimeTriggerConfig (38-40), GitPushEventConfig (42-45), EventTriggerConfig (47-50), FlowTriggerConfig (52-56), Flow (59-83), FlowListResponse (116-119), FlowExecutionListResponse (146-149), FlowTimelineFilter (152-160)
🪛 Ruff (0.14.10)
backend/app/services/flow_scheduler.py
49-49: Consider moving this statement to an else block
(TRY300)
50-50: Do not catch blind exception: Exception
(BLE001)
51-51: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
51-51: Use explicit conversion flag
Replace with conversion flag
(RUF010)
59-59: Do not catch blind exception: Exception
(BLE001)
60-60: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
60-60: Use explicit conversion flag
Replace with conversion flag
(RUF010)
77-77: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
78-78: Avoid equality comparisons to True; use FlowResource.enabled: for truth checks
Replace with FlowResource.enabled
(E712)
79-79: Comparison to None should be cond is not None
Replace with cond is not None
(E711)
154-154: Do not catch blind exception: Exception
(BLE001)
155-157: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
156-156: Use explicit conversion flag
Replace with conversion flag
(RUF010)
172-172: Local variable flow_crd is assigned to but never used
Remove assignment to unused variable flow_crd
(F841)
177-177: Avoid equality comparisons to True; use Kind.is_active: for truth checks
Replace with Kind.is_active
(E712)
204-204: Avoid equality comparisons to True; use TaskResource.is_active: for truth checks
Replace with TaskResource.is_active
(E712)
210-210: Local variable git_url is assigned to but never used
Remove assignment to unused variable git_url
(F841)
211-211: Local variable git_repo is assigned to but never used
Remove assignment to unused variable git_repo
(F841)
212-212: Local variable git_domain is assigned to but never used
Remove assignment to unused variable git_domain
(F841)
213-213: Local variable branch_name is assigned to but never used
Remove assignment to unused variable branch_name
(F841)
278-278: Do not catch blind exception: Exception
(BLE001)
279-281: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
280-280: Use explicit conversion flag
Replace with conversion flag
(RUF010)
335-335: Do not catch blind exception: Exception
(BLE001)
336-336: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
336-336: Use explicit conversion flag
Replace with conversion flag
(RUF010)
342-342: Do not catch blind exception: Exception
(BLE001)
343-343: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
343-343: Use explicit conversion flag
Replace with conversion flag
(RUF010)
347-347: Do not catch blind exception: Exception
(BLE001)
348-348: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
348-348: Use explicit conversion flag
Replace with conversion flag
(RUF010)
backend/app/services/flow.py
47-52: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
70-70: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
87-87: Avoid equality comparisons to True; use Kind.is_active: for truth checks
Replace with Kind.is_active
(E712)
106-106: Avoid equality comparisons to True; use TaskResource.is_active: for truth checks
Replace with TaskResource.is_active
(E712)
165-165: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
188-188: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
221-221: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
239-239: Avoid equality comparisons to True; use Kind.is_active: for truth checks
Replace with Kind.is_active
(E712)
260-260: Avoid equality comparisons to True; use TaskResource.is_active: for truth checks
Replace with TaskResource.is_active
(E712)
358-358: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
387-387: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
430-430: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
458-458: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
459-459: Avoid equality comparisons to True; use FlowResource.enabled: for truth checks
Replace with FlowResource.enabled
(E712)
831-831: Avoid specifying long messages outside the exception class
(TRY003)
882-882: Do not catch blind exception: Exception
(BLE001)
backend/app/api/endpoints/adapter/flows.py
43-45: Do not perform function call Query in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
46-46: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
47-47: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
72-72: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
73-73: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
97-99: Do not perform function call Query in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
100-100: Do not perform function call Query in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
101-101: Do not perform function call Query in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
102-102: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
103-103: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
130-130: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
131-131: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
153-153: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
177-177: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
178-178: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
194-194: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
195-195: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
213-213: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
214-214: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
232-232: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
233-233: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
252-252: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
253-253: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: E2E Tests (Shard 1/3)
- GitHub Check: E2E Tests (Shard 3/3)
- GitHub Check: Test Backend (3.10)
- GitHub Check: E2E Tests (Shard 2/3)
- GitHub Check: Test wegent CLI Integration
🔇 Additional comments (18)
frontend/src/features/flows/components/FlowList.tsx (5)
59-66: LGTM - Component setup with proper context and state management. The component correctly uses `useFlowContext` for accessing shared state, and local state for UI-specific concerns (delete confirmation, action loading).
67-84: LGTM - Toggle handler with proper error handling and feedback. The optimistic loading state and toast feedback provide good UX. The handler correctly awaits both the API call and the refresh.
86-102: Good fix: Refreshing executions after manual trigger. The `refreshExecutions()` call ensures the timeline view reflects the newly triggered execution. This aligns with the commit message noting this fix.
127-143: LGTM - Trigger label helper with comprehensive coverage. The function handles all trigger types with sensible fallbacks. The default case returning `flow.trigger_type || 'Unknown'` provides a safe fallback.
284-307: LGTM - Delete confirmation dialog implementation. The AlertDialog properly handles the open state, displays the flow name in the confirmation message, and provides clear cancel/delete actions with appropriate styling for the destructive action.
backend/app/api/endpoints/adapter/flows.py (4)
38-66: LGTM - List flows endpoint with proper pagination and filtering. The endpoint correctly calculates skip offset from page/limit, delegates to the service layer, and returns a properly typed response. The static analysis B008 warnings are false positives: `Query()` and `Depends()` in FastAPI route signatures are the idiomatic pattern.
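To see why ruff's B008 exists at all, and why FastAPI is a deliberate exception, here is a minimal standalone illustration (not project code): function-call defaults are evaluated once at definition time.

```python
def make_list():
    return []

def risky(items=make_list()):
    # Default evaluated ONCE, at definition time: every call shares the same list.
    items.append(1)
    return items

def safe(items=None):
    # Default resolved per call, which is the pattern B008 pushes you toward.
    if items is None:
        items = make_list()
    items.append(1)
    return items

first, second = risky(), risky()  # both names point at the SAME shared list
fresh = safe()                    # a new list every call
```

FastAPI's `Depends()`/`Query()` defaults are sentinel markers the framework inspects at request time rather than work performed at import time, which is why the warnings are false positives in route signatures.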
88-90: Good practice: Static routes defined before dynamic routes. The comment correctly notes that `/executions` must come before `/{flow_id}` to avoid path conflicts. This is a common FastAPI gotcha that's been properly addressed.
210-225: LGTM - Delete endpoint with soft delete and proper status code. The endpoint returns 204 No Content as appropriate for a successful delete. The service layer handles the soft delete logic (setting `is_active=False`).
149-167: Webhook endpoint security relies solely on the token - ensure tokens are sufficiently random. The endpoint correctly uses the webhook token as authentication (no user auth required). The service layer generates tokens with `secrets.token_urlsafe(32)`, which provides adequate entropy (256 bits). Token generation occurs when creating EVENT trigger flows and when switching trigger types to EVENT.
backend/app/services/flow_scheduler.py (3)
26-30: LGTM - Lock expiry configuration at 2× interval. Setting the lock expiry to twice the scheduler interval provides adequate margin for processing time while ensuring recovery if the lock holder crashes. This matches the commit message noting this was a stability fix.
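The acquire-or-skip semantics described here can be sketched with an in-memory stand-in for the Redis-style cache (the `FakeCache` class is hypothetical; the real code calls `cache_manager.setnx`):

```python
import time

SCHEDULER_INTERVAL = 60
LOCK_EXPIRY = 2 * SCHEDULER_INTERVAL  # survives one slow cycle; a crashed holder auto-expires

class FakeCache:
    """In-memory stand-in for a Redis-like cache supporting SETNX with expiry."""

    def __init__(self):
        self._store = {}

    def setnx(self, key, value, expire):
        # Set the key only if it is absent or its previous expiry has passed.
        now = time.monotonic()
        entry = self._store.get(key)
        if entry is not None and entry[1] > now:
            return False  # lock still held by another instance
        self._store[key] = (value, now + expire)
        return True

cache = FakeCache()
acquired_first = cache.setnx("flow_scheduler_lock", True, expire=LOCK_EXPIRY)
acquired_second = cache.setnx("flow_scheduler_lock", True, expire=LOCK_EXPIRY)
```

Only one instance wins the lock per window; a holder that crashes without releasing is recovered once the expiry elapses.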
64-88: LGTM - Due flows query with proper filtering. The query correctly filters for active, enabled flows with scheduled triggers (excluding event-based). The static analysis E711/E712 warnings about `== True` and `!= None` are necessary for SQLAlchemy filter expressions, where identity comparison generates different SQL.
374-390: LGTM - Graceful shutdown with timeout. The `stop_flow_scheduler` function properly signals the stop event and joins with a 5-second timeout, preventing indefinite blocking during shutdown.
backend/app/services/flow.py (2)
693-721: Prompt template resolution is safe from injection. The template uses `{{variable}}` syntax with explicit replacement. The `json.dumps` for dict/list values with `ensure_ascii=False` correctly handles Unicode. No code execution risk from user-provided templates.
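The replacement strategy the comment describes can be reproduced in a few lines (a simplified sketch, not the project's exact implementation):

```python
import json

def render_template(template: str, variables: dict) -> str:
    # Plain string substitution: no eval/exec, so user templates cannot run code.
    out = template
    for key, value in variables.items():
        if isinstance(value, (dict, list)):
            # Serialize structured values; ensure_ascii=False keeps Unicode readable.
            value = json.dumps(value, ensure_ascii=False)
        out = out.replace("{{" + key + "}}", str(value))
    return out

rendered = render_template(
    "Repo {{repo}}: {{stats}}",
    {"repo": "wegent", "stats": {"prs": 5}},
)
```

Because substitution is explicit string replacement, a malicious value like `{{__import__}}` is just text, never evaluated.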
117-120: LGTM - Secure webhook token generation. Using `secrets.token_urlsafe(32)` provides 256 bits of entropy, which is cryptographically secure for webhook tokens.
backend/app/schemas/flow.py (4)
15-47: LGTM - Well-defined enums matching frontend types. The enum values align with the frontend TypeScript types in `frontend/src/types/flow.ts`, ensuring API contract consistency.
258-265: Good fix: `trigger_reason` is correctly marked as Optional. This aligns with the commit message noting this was fixed to match the schema/model definition.
273-295: LGTM - FlowExecutionInDB with joined fields for display. The optional joined fields (`flow_name`, `flow_display_name`, `team_name`, `task_type`) allow the schema to represent enriched execution data without requiring separate queries on the client side.
230-248: LGTM - FlowInDB correctly configured for ORM mode. The `Config.from_attributes = True` enables Pydantic v2's ORM mode for proper attribute access from SQLAlchemy models.
# Enrich with flow details
result = []
flow_cache = {}
for exec in executions:
    exec_dict = self._convert_execution_to_dict(exec)

    # Get flow details (cached)
    if exec.flow_id not in flow_cache:
        flow = db.query(FlowResource).filter(FlowResource.id == exec.flow_id).first()
        if flow:
            flow_crd = Flow.model_validate(flow.json)
            flow_cache[exec.flow_id] = {
                "name": flow.name,
                "display_name": flow_crd.spec.displayName,
                "task_type": flow_crd.spec.taskType.value,
            }

    flow_info = flow_cache.get(exec.flow_id, {})
    exec_dict["flow_name"] = flow_info.get("name")
    exec_dict["flow_display_name"] = flow_info.get("display_name")
    exec_dict["task_type"] = flow_info.get("task_type")

    # Get team name if available
    if exec.flow_id in flow_cache:
        flow = db.query(FlowResource).filter(FlowResource.id == exec.flow_id).first()
        if flow and flow.team_id:
            team = db.query(Kind).filter(Kind.id == flow.team_id).first()
            if team:
                exec_dict["team_name"] = team.name

    result.append(FlowExecutionInDB(**exec_dict))
N+1 query issue: Re-fetching flow for each execution to get team name.
The flow is already cached in flow_cache, but a second query is made to get team info. This should use the cached flow.
🐛 Proposed fix
for exec in executions:
exec_dict = self._convert_execution_to_dict(exec)
# Get flow details (cached)
if exec.flow_id not in flow_cache:
flow = db.query(FlowResource).filter(FlowResource.id == exec.flow_id).first()
if flow:
flow_crd = Flow.model_validate(flow.json)
flow_cache[exec.flow_id] = {
"name": flow.name,
"display_name": flow_crd.spec.displayName,
"task_type": flow_crd.spec.taskType.value,
+ "team_id": flow.team_id,
}
flow_info = flow_cache.get(exec.flow_id, {})
exec_dict["flow_name"] = flow_info.get("name")
exec_dict["flow_display_name"] = flow_info.get("display_name")
exec_dict["task_type"] = flow_info.get("task_type")
# Get team name if available
- if exec.flow_id in flow_cache:
- flow = db.query(FlowResource).filter(FlowResource.id == exec.flow_id).first()
- if flow and flow.team_id:
- team = db.query(Kind).filter(Kind.id == flow.team_id).first()
- if team:
- exec_dict["team_name"] = team.name
+ team_id = flow_info.get("team_id")
+ if team_id:
+ # Cache team lookups too
+ if team_id not in getattr(self, '_team_cache', {}):
+ if not hasattr(self, '_team_cache'):
+ self._team_cache = {}
+ team = db.query(Kind).filter(Kind.id == team_id).first()
+ self._team_cache[team_id] = team.name if team else None
+ exec_dict["team_name"] = self._team_cache.get(team_id)
 result.append(FlowExecutionInDB(**exec_dict))

Alternatively, use a local `team_cache` dict similar to `flow_cache`.
🤖 Prompt for AI Agents
In @backend/app/services/flow.py around lines 524 - 555, The loop in enrich
logic re-queries FlowResource to get team info even though flow data was cached
in flow_cache when populating "name"/"display_name"/"task_type"; update the code
to use the cached flow data (stored in flow_cache[exec.flow_id]) to read team_id
and avoid the extra db.query, or introduce a team_cache keyed by team_id to
cache Kind lookups; specifically, when filling exec_dict (after
Flow.model_validate and setting flow_cache[exec.flow_id]), read team_id from the
cached entry (or cache and reuse Kind results) and only query Kind for missing
team_ids, then set exec_dict["team_name"] before appending FlowExecutionInDB.
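The batch-loading shape the prompt asks for, shown with plain dicts and an injected loader instead of SQLAlchemy (illustrative only; the real code would batch with a `WHERE id IN (...)` query):

```python
def enrich_executions(executions, load_flows):
    """Enrich execution dicts using ONE batched flow lookup instead of one per row."""
    flow_ids = {e["flow_id"] for e in executions}
    flow_cache = load_flows(flow_ids)  # single round-trip, e.g. SELECT ... WHERE id IN (...)
    enriched = []
    for e in executions:
        info = flow_cache.get(e["flow_id"], {})
        enriched.append(
            {**e, "flow_name": info.get("name"), "team_name": info.get("team_name")}
        )
    return enriched

calls = []

def fake_loader(ids):
    # Records how many lookups happen; a real loader would hit the database once.
    calls.append(ids)
    return {1: {"name": "daily-report", "team_name": "core"}}

rows = enrich_executions([{"flow_id": 1}, {"flow_id": 1}, {"flow_id": 2}], fake_loader)
```

Three executions produce exactly one loader call, and a missing flow simply yields `None` fields rather than an extra query.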
| # Update flow statistics | ||
| flow = db.query(FlowResource).filter(FlowResource.id == execution.flow_id).first() | ||
| if flow: | ||
| flow.last_execution_time = datetime.utcnow() | ||
| flow.last_execution_status = status.value | ||
| flow.execution_count += 1 | ||
|
|
||
| if status == FlowExecutionStatus.COMPLETED: | ||
| flow.success_count += 1 | ||
| elif status == FlowExecutionStatus.FAILED: | ||
| flow.failure_count += 1 | ||
|
|
||
| # Update CRD status | ||
| flow_crd = Flow.model_validate(flow.json) | ||
| if flow_crd.status is None: | ||
| flow_crd.status = FlowStatus() | ||
| flow_crd.status.lastExecutionTime = datetime.utcnow() | ||
| flow_crd.status.lastExecutionStatus = status | ||
| flow_crd.status.executionCount = flow.execution_count | ||
| flow_crd.status.successCount = flow.success_count | ||
| flow_crd.status.failureCount = flow.failure_count | ||
|
|
||
| flow.json = flow_crd.model_dump(mode="json") | ||
| flag_modified(flow, "json") | ||
|
|
||
| db.commit() |
Execution count incremented on every status update - potential over-counting.
`flow.execution_count += 1` is called in `update_execution_status`, but that method is also called when transitioning to RUNNING status. This could increment the count multiple times for a single execution (PENDING → RUNNING → COMPLETED).
🐛 Proposed fix
Only increment counts on terminal states:
# Update flow statistics
flow = db.query(FlowResource).filter(FlowResource.id == execution.flow_id).first()
if flow:
flow.last_execution_time = datetime.utcnow()
flow.last_execution_status = status.value
- flow.execution_count += 1
if status == FlowExecutionStatus.COMPLETED:
+ flow.execution_count += 1
flow.success_count += 1
elif status == FlowExecutionStatus.FAILED:
+ flow.execution_count += 1
flow.failure_count += 1Or track if the execution was already counted to prevent double-counting.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
# Update flow statistics
flow = db.query(FlowResource).filter(FlowResource.id == execution.flow_id).first()
if flow:
    flow.last_execution_time = datetime.utcnow()
    flow.last_execution_status = status.value
    if status == FlowExecutionStatus.COMPLETED:
        flow.execution_count += 1
        flow.success_count += 1
    elif status == FlowExecutionStatus.FAILED:
        flow.execution_count += 1
        flow.failure_count += 1

    # Update CRD status
    flow_crd = Flow.model_validate(flow.json)
    if flow_crd.status is None:
        flow_crd.status = FlowStatus()
    flow_crd.status.lastExecutionTime = datetime.utcnow()
    flow_crd.status.lastExecutionStatus = status
    flow_crd.status.executionCount = flow.execution_count
    flow_crd.status.successCount = flow.success_count
    flow_crd.status.failureCount = flow.failure_count

    flow.json = flow_crd.model_dump(mode="json")
    flag_modified(flow, "json")

db.commit()
elif trigger_type_enum == FlowTriggerType.ONE_TIME:
    return FlowTriggerConfig(
        type=trigger_type_enum,
        one_time=OneTimeTriggerConfig(
            execute_at=datetime.fromisoformat(trigger_config.get("execute_at")),
        ),
    )
Potential crash if execute_at key is missing.
`trigger_config.get("execute_at")` could return `None`, which would cause `datetime.fromisoformat(None)` to fail with a `TypeError`.
🐛 Proposed fix
elif trigger_type_enum == FlowTriggerType.ONE_TIME:
+ execute_at = trigger_config.get("execute_at")
+ if not execute_at:
+ raise ValueError("execute_at is required for one_time trigger")
return FlowTriggerConfig(
type=trigger_type_enum,
one_time=OneTimeTriggerConfig(
- execute_at=datetime.fromisoformat(trigger_config.get("execute_at")),
+ execute_at=datetime.fromisoformat(execute_at),
),
    )
🤖 Prompt for AI Agents
In @backend/app/services/flow.py around lines 805 - 811, The ONE_TIME branch
assumes trigger_config contains "execute_at" and calls datetime.fromisoformat on
its value, which will TypeError if missing or None; update the
FlowTriggerType.ONE_TIME handling in the function that builds FlowTriggerConfig
to validate that trigger_config.get("execute_at") is present and non-empty
before calling datetime.fromisoformat, and if missing raise a clear ValueError
(or return a descriptive error) mentioning the missing "execute_at" for
OneTimeTriggerConfig so callers can handle it; reference
FlowTriggerType.ONE_TIME, FlowTriggerConfig, OneTimeTriggerConfig, and
datetime.fromisoformat when making the change.
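The guard described above, as a standalone helper (the function name is illustrative):

```python
from datetime import datetime

def parse_execute_at(trigger_config: dict) -> datetime:
    """Validate presence before parsing, so a missing key raises a clear error."""
    raw = trigger_config.get("execute_at")
    if not raw:
        # Without this check, datetime.fromisoformat(None) raises an opaque TypeError.
        raise ValueError("execute_at is required for one_time trigger")
    return datetime.fromisoformat(raw)

when = parse_execute_at({"execute_at": "2025-06-01T09:30:00"})
```

Callers now get a `ValueError` naming the missing field instead of a `TypeError` from deep inside the datetime parser.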
Actionable comments posted: 16
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
start.sh (1)
382-389: Pin `DEFAULT_EXECUTOR_IMAGE` (tag or digest) to avoid non-reproducible / potentially unsafe pulls.
Switching the default to an unversioned image makes local dev behavior change over time and can break flows/scheduler runs unexpectedly.
Proposed fix
-DEFAULT_EXECUTOR_IMAGE="ghcr.io/wecode-ai/wegent-executor"
+DEFAULT_EXECUTOR_IMAGE="ghcr.io/wecode-ai/wegent-executor:1.0.29"
+# (Optional) Prefer a digest pin if you want full immutability:
+# DEFAULT_EXECUTOR_IMAGE="ghcr.io/wecode-ai/wegent-executor@sha256:<digest>"
backend/app/services/chat/trigger/core.py (2)
1104-1154: BUG: `_stream_with_bridge` doesn't actually guard `namespace is None` (doc != code).
This will crash when `WebSocketBridge(namespace, ...)` is called with `None`.
Proposed fix
 async def _stream_with_bridge(
 @@
     namespace: Optional[Any],
 ) -> None:
 @@
+    if namespace is None:
+        logger.error("[BRIDGE] namespace is None; bridge mode requires WebSocket namespace")
+        return
+
     from chat_shell.agent import AgentConfig, ChatAgent
 @@
     # Create WebSocket bridge for Redis -> WebSocket forwarding
     bridge = WebSocketBridge(namespace, task_room, task_id)
245-255: Replace string forward references with proper type imports using `TYPE_CHECKING` or `from __future__ import annotations`. The parameter type `Optional["ChatEventEmitter"]` at lines 254 and 591 uses string forward references as a workaround for runtime imports. While this avoids circular imports by deferring type evaluation, it's not idiomatic. Use either:
- `from __future__ import annotations` (recommended): add at the top of the file; removes the need for string quotes throughout
- `TYPE_CHECKING` pattern: conditionally import `ChatEventEmitter` at module level for type checkers only
Also applies to: 579-592
backend/app/services/adapters/executor_kinds.py (2)
153-180: MySQL JSON comparisons likely need `JSON_UNQUOTE` for correctness.
`JSON_EXTRACT(...) = 'offline'` patterns often compare against JSON strings (e.g. `"offline"`). If existing queries "worked by accident", the new compound predicate may be brittle.
132-186: Exclude only Chat Shell type Flow tasks, not all `source='flow'` tasks; executor-based Flow execution is broken. Flow-triggered tasks are created with `source='flow'` for both Chat Shell and Executor types (flow_scheduler.py:281). The flow scheduler determines which path to take:
- Chat Shell type: triggers the AI response directly (lines 323-331)
- Executor type: subtasks are "picked up by executor_manager automatically" (line 337)
However, `_get_first_subtasks_for_tasks` excludes all `source='flow'` tasks (lines 159, 178), preventing executor_manager from ever processing executor-type Flow tasks. This breaks the entire executor-based Flow execution path. The `flowExecutionId` label infrastructure added for completion tracking will never be reached. The exclusion logic must be refined: either use a different source label for Chat Shell type Flow tasks, or filter by task/shell type instead of source, so executor-type Flow tasks can still be picked up by executor_manager.
🤖 Fix all issues with AI agents
In @backend/app/schemas/flow.py:
- Around line 15-36: Replace the Chinese inline comments in the Flow enums with
English equivalents: update comments for FlowTaskType (replace "执行类任务" and
"信息采集类任务"), FlowTriggerType (replace "定时计划", "固定间隔", "一次性定时", "事件触发"), and
FlowEventType (replace "Webhook 触发", "Git Push 触发") to concise English
descriptions; also locate and replace any other non-English comments around the
later enum or usage block referenced (the section around the Flow enums at lines
~124-143) to maintain consistent English-only comments across the file.
- Around line 90-106: FlowTriggerConfig currently allows mismatched combinations
(e.g., type == 'cron' with cron == None), so add a Pydantic v2 model_validator
on FlowTriggerConfig to enforce type↔config consistency: in the validator for
the whole model (using @model_validator(mode="after") or equivalent) inspect the
FlowTriggerConfig.type and ensure the corresponding field (cron, interval,
one_time, or event) is present and non-null and that no other trigger-config
fields are set when not expected; raise a ValueError with a clear message when
invalid. Reference the FlowTriggerConfig class and its fields type, cron,
interval, one_time, and event when locating where to add the validator.
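The invariant the prompt describes, written as a plain function for illustration (in the real schema this logic would live inside a pydantic `@model_validator(mode="after")` on `FlowTriggerConfig`):

```python
TRIGGER_FIELDS = {"cron", "interval", "one_time", "event"}

def check_trigger_config(cfg: dict) -> dict:
    """Enforce that `type` names exactly the one populated config field."""
    trigger_type = cfg.get("type")
    if trigger_type not in TRIGGER_FIELDS:
        raise ValueError(f"unknown trigger type: {trigger_type!r}")
    if cfg.get(trigger_type) is None:
        raise ValueError(f"trigger type '{trigger_type}' requires a '{trigger_type}' config")
    # Reject stray configs for other trigger types to keep the payload unambiguous.
    extras = sorted(f for f in TRIGGER_FIELDS - {trigger_type} if cfg.get(f) is not None)
    if extras:
        raise ValueError(f"unexpected config for: {', '.join(extras)}")
    return cfg

valid = check_trigger_config({"type": "cron", "cron": {"expression": "0 9 * * *"}})
```

A payload such as `{"type": "cron"}` with no `cron` block, or one that also carries an `interval` block, fails fast with a clear message instead of being silently persisted.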
In @backend/app/services/flow_scheduler.py:
- Around line 302-310: The code assumes task_crd.metadata.labels is a dict but
may be None; update the block in flow_scheduler.py (where task is validated into
task_crd) to ensure metadata.labels is initialized before assignment: if
task_crd.metadata.labels is None, set task_crd.metadata.labels = {}, then assign
task_crd.metadata.labels["flowId"], ["executionId"], and ["flowExecutionId"] as
before, then proceed to model_dump and db.commit; reference
task_crd.metadata.labels and the surrounding task handling code to place the
initialization.
- Around line 295-296: Remove the duplicated comment line "# Add flow-specific
labels to the task" so only one instance remains; locate the duplicate in the
flow_scheduler module where flow-specific labels are added to the task and
delete the redundant comment to avoid repetition.
In @backend/app/services/flow.py:
- Around line 54-80: The create_flow method currently checks
FlowResource.is_active == True but the DB unique index still blocks inserts for
soft-deleted rows; update create_flow to either treat an existing inactive row
as a conflict or revive it: query for existing =
db.query(FlowResource).filter(FlowResource.user_id==user_id,
FlowResource.kind=="Flow", FlowResource.name==flow_in.name,
FlowResource.namespace==flow_in.namespace).first() and if existing and
existing.is_active raise the existing HTTPException, but if existing and not
existing.is_active either reactivate the row by setting existing.is_active=True
and updating fields (and commit) or return a clear 400 that the name is
reserved; apply the same fix to the corresponding logic in the other function
referenced (the block around lines 130-145).
- Around line 489-565: list_executions currently does N+1 queries because it
re-queries FlowResource for each execution again when looking up team name;
instead, after retrieving executions, collect all exec.flow_id values and
batch-load the FlowResource rows and associated Kind (team) rows, build
flow_cache entries (including "name", "display_name", "task_type", and
"team_name") by iterating the fetched FlowResource results and using
Flow.model_validate on flow.json, and then use that single flow_cache lookup per
execution; update the loop that sets
exec_dict["flow_name"/"flow_display_name"/"task_type"/"team_name"] to use the
prebuilt flow_cache and remove the per-exec DB queries and the second
FlowResource fetch currently guarded by "if exec.flow_id in flow_cache".
- Around line 876-953: Summary: _calculate_next_execution_time currently ignores
user_tz, uses naive local now_local, and returns a naive local datetime instead
of a UTC-aware datetime as promised; croniter is also missing from dependencies.
Fix: inside _calculate_next_execution_time (function name), build a
timezone-aware base time using ZoneInfo(timezone_str) (replace now_local =
datetime.now() with now_local = datetime.now(user_tz)), pass that tz-aware base
to croniter(cron_expr, now_local), get next_local as a tz-aware datetime,
convert it to UTC via next_utc = next_local.astimezone(ZoneInfo("UTC")) and
return next_utc (UTC-aware) for storage; keep the existing exception handling
but ensure user_tz is actually used; finally add croniter to project
dependencies (pyproject.toml) so imports succeed.
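The timezone normalization step the prompt calls for can be isolated with the standard library alone (no croniter needed for this part; the function name is illustrative):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def local_to_utc(next_local_naive: datetime, timezone_str: str) -> datetime:
    """Attach the user's zone to a naive local time, then normalize to UTC for storage."""
    aware_local = next_local_naive.replace(tzinfo=ZoneInfo(timezone_str))
    return aware_local.astimezone(ZoneInfo("UTC"))

# Asia/Shanghai is UTC+8 with no DST, so 08:00 local corresponds to 00:00 UTC.
next_utc = local_to_utc(datetime(2025, 6, 1, 8, 0), "Asia/Shanghai")
```

In the fixed `_calculate_next_execution_time`, croniter would be seeded with `datetime.now(ZoneInfo(timezone_str))` and its tz-aware result passed through a conversion like this before being stored.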
In @frontend/src/app/(tasks)/flow/page.tsx:
- Around line 89-92: TopNavigation is being passed an unused prop activePage
that exists on the TopNavigationProps type but isn't destructured or used in the
TopNavigation component; remove activePage from the TopNavigationProps type and
from any callers (e.g., the TopNavigation usage in flow/page.tsx), or
alternatively implement the feature by accepting and destructuring activePage in
the TopNavigation component and using it to set the active/highlighted nav item
(update the TopNavigation function signature and its render logic to read
activePage and apply the active class). Ensure all usages and the prop type stay
consistent (either remove references to activePage across callers or add
handling in TopNavigation).
In @frontend/src/app/(tasks)/flow/subscriptions/page.tsx:
- Around line 26-30: Replace the non-English comment text "(我的订阅)" in the file
header comment with its English equivalent "(My Subscriptions)" to comply with
the repo comment rules for /*.{py,ts,tsx} files; locate the header block
containing "Flow Subscriptions Management Page" and update the parenthetical
text accordingly so the comment reads in English.
- Around line 61-67: The icon-only Back button (Button with onClick={handleBack}
containing ArrowLeft) lacks an accessible label, so screen readers can't convey
its purpose; update the Button to include an accessible name by adding either an
aria-label="Back" (or aria-label={t('back')} to use translations) or wrap the
icon with a visually-hidden <span> providing the text, ensuring the accessible
label references the same intent as the h1; keep the onClick handler
(handleBack) unchanged and apply the change where the Button and ArrowLeft are
rendered in page.tsx.
In @frontend/src/features/flows/components/FlowForm.tsx:
- Around line 310-361: Add a validation branch inside handleSubmit to enforce
that when triggerType === 'event' and triggerConfig.event_type === 'git_push'
the repository field on triggerConfig.git_push is non-empty; specifically, read
triggerConfig.git_push (cast as { repository?: string; branch?: string } |
undefined), if repository is missing or blank call
toast.error(t('validation_git_repository_required')) and return early to prevent
submission. Update handleSubmit to perform this check before any API call or
save logic so empty repositories cannot be persisted.
🧹 Nitpick comments (13)
backend/app/services/chat/trigger/emitter.py (1)
369-392: Prefer `status: FlowExecutionStatus` over `status: str` in FlowEventEmitter.
This avoids runtime ValueErrors and keeps status values consistent across the codebase.
backend/app/schemas/flow.py (2)
57-62: Constrain `IntervalTriggerConfig.unit` to allowed values.
Avoid persisting invalid units that break next-run calculation.
233-251: Consider updating to the Pydantic v2 config style (`ConfigDict`) for `from_attributes`.
`class Config` works in some setups but is not the preferred v2 style.
Also applies to: 278-300
backend/app/services/adapters/executor_kinds.py (1)
1-2421: This file exceeds the 1000-line guideline; consider extracting the new Flow-related logic.
At minimum, move `_update_flow_execution_status` into a dedicated Flow integration module/service to keep this adapter focused.
backend/app/services/flow_scheduler.py (4)
46-75: Use `logging.exception` for better error diagnostics in lock operations. The lock acquisition and release functions catch broad `Exception` and use `logger.error` with string formatting. Per static analysis hints and Python best practices, use `logging.exception` to include stack traces, and use explicit string conversion.
♻️ Suggested improvements
async def acquire_flow_scheduler_lock() -> bool: """ Try to acquire distributed lock to ensure only one instance executes the scheduler. """ try: acquired = await cache_manager.setnx( FLOW_SCHEDULER_LOCK_KEY, True, expire=FLOW_SCHEDULER_LOCK_EXPIRY ) if acquired: logger.debug( f"[flow_scheduler] Successfully acquired distributed lock: {FLOW_SCHEDULER_LOCK_KEY}" ) - else: + return acquired + else: logger.debug( f"[flow_scheduler] Lock is held by another instance: {FLOW_SCHEDULER_LOCK_KEY}" ) - return acquired + return acquired except Exception as e: - logger.error(f"[flow_scheduler] Error acquiring distributed lock: {str(e)}") + logger.exception("[flow_scheduler] Error acquiring distributed lock: %s", e) return False async def release_flow_scheduler_lock() -> bool: """Release distributed lock.""" try: return await cache_manager.delete(FLOW_SCHEDULER_LOCK_KEY) except Exception as e: - logger.error(f"[flow_scheduler] Error releasing lock: {str(e)}") + logger.exception("[flow_scheduler] Error releasing lock: %s", e) return False
106-176: Potential race condition: `flow_crd` is parsed twice unnecessarily. `flow_crd = Flow.model_validate(flow.json)` is called at line 113 and again at line 143. The second parse is redundant since no modifications occur between them.
♻️ Remove duplicate parsing
try: flow_crd = Flow.model_validate(flow.json) trigger_type = flow.trigger_type # Determine trigger reason based on trigger type # ... (trigger_reason logic) # Create execution record execution = flow_service._create_execution(...) logger.info(...) # Update flow's next execution time - flow_crd = Flow.model_validate(flow.json) trigger_config = flow_service._extract_trigger_config(flow_crd.spec.trigger)
449-469: Event loop created but may leak on exceptions before `loop.close()`.
If an exception occurs during `loop.run_until_complete(...)`, the loop is still closed in `finally`, which is correct. However, `asyncio.set_event_loop(loop)` sets a global event loop in a thread context; consider using `asyncio.run()` (Python 3.7+) for cleaner lifecycle management, or ensure no side effects from setting the event loop.
♻️ Consider using asyncio.run() for simpler lifecycle
- loop = asyncio.new_event_loop() - asyncio.set_event_loop(loop) - try: - loop.run_until_complete( - _stream_chat_response( - stream_data=stream_data, - message=message, - payload=payload, - task_room=task_room, - namespace=None, - trace_context=None, - otel_context=None, - user_subtask_id=user_subtask.id if user_subtask else None, - event_emitter=flow_emitter, - ) - ) - logger.info(...) - finally: - loop.close() + asyncio.run( + _stream_chat_response( + stream_data=stream_data, + message=message, + payload=payload, + task_room=task_room, + namespace=None, + trace_context=None, + otel_context=None, + user_subtask_id=user_subtask.id if user_subtask else None, + event_emitter=flow_emitter, + ) + ) + logger.info(...)Note:
asyncio.run()handles loop creation and cleanup automatically.
478-541: Event loop created on every scheduler cycle—consider reusing or usingasyncio.run().Similar to the chat shell response,
flow_scheduler_workercreates a new event loop on every iteration. This is acceptable but could be optimized by reusing the loop or usingasyncio.run()for the lock operations. The current pattern works but is slightly inefficient.Additionally, consider using
logger.exceptioninstead oflogger.errorfor the exception handlers at lines 524, 531, and 536 to capture stack traces.frontend/src/features/flows/components/CronSchedulePicker.tsx (1)
139-153: Potential render loop if parent doesn't memoize `onChange`. The `useEffect` at lines 140-143 calls `setSchedule(parsed)` whenever `value` changes. Then `updateSchedule` calls `onChange(expression)`, which may cause the parent to re-render with a new (but equivalent) value, potentially triggering another cycle.
parseCronExpression+generateCronExpressionshould produce stable outputs, but consider adding a guard to prevent callingonChangeif the expression hasn't actually changed:♻️ Guard against unnecessary onChange calls
const updateSchedule = useCallback( (newSchedule: ScheduleState) => { setSchedule(newSchedule) const expression = generateCronExpression(newSchedule) + if (expression !== value) { onChange(expression) + } }, - [onChange] + [onChange, value] )frontend/src/features/flows/components/FlowTimeline.tsx (2)
156-176: Consider dark mode compatibility for the avatar ring and status indicator.

The `ring-4 ring-white` and `border-2 border-white` classes assume a light background. In dark mode, white rings/borders may look inconsistent. Consider using theme-aware tokens like `ring-background` or `border-background`.

♻️ Use theme-aware color tokens

```diff
- <div className="h-10 w-10 rounded-full bg-gradient-to-br from-primary/20 to-primary/5 flex items-center justify-center ring-4 ring-white">
+ <div className="h-10 w-10 rounded-full bg-gradient-to-br from-primary/20 to-primary/5 flex items-center justify-center ring-4 ring-background">
    <Bot className="h-5 w-5 text-primary" />
  </div>
  {/* Status indicator dot */}
  <div
-   className={`absolute -bottom-0.5 -right-0.5 h-3.5 w-3.5 rounded-full border-2 border-white flex items-center justify-center ${
+   className={`absolute -bottom-0.5 -right-0.5 h-3.5 w-3.5 rounded-full border-2 border-background flex items-center justify-center ${
```
223-234: Error message card uses hardcoded light-theme colors.

The error card uses `border-red-200 bg-red-50` and `text-red-600`/`text-red-700`, which may not render well in dark mode. Consider using semantic color tokens or dark mode variants.

♻️ Use dark mode compatible classes

```diff
 {exec.error_message && (
-  <div className="mt-3 rounded-2xl border border-red-200 bg-red-50 overflow-hidden">
+  <div className="mt-3 rounded-2xl border border-red-200 dark:border-red-800 bg-red-50 dark:bg-red-950 overflow-hidden">
     <div className="px-4 py-3">
-      <div className="flex items-center gap-1.5 text-xs font-medium text-red-600 mb-1">
+      <div className="flex items-center gap-1.5 text-xs font-medium text-red-600 dark:text-red-400 mb-1">
         <AlertCircle className="h-3.5 w-3.5" />
         {t('feed.error_occurred')}
       </div>
-      <div className="text-sm text-red-700 line-clamp-3">{exec.error_message}</div>
+      <div className="text-sm text-red-700 dark:text-red-300 line-clamp-3">{exec.error_message}</div>
     </div>
   </div>
 )}
```

frontend/src/features/flows/components/FlowForm.tsx (2)
64-64: Unused state variable `_name`.

The `_name` state variable is set from `flow.name` during editing but is never used: the name is auto-generated for new flows (lines 169-174) and the existing name is preserved via the API for updates. Consider removing this unused state or documenting why it's retained.

♻️ Remove unused state if not needed

```diff
 // Form state
-const [_name, setName] = useState('')
 const [displayName, setDisplayName] = useState('')
```

And update the effect:

```diff
 if (flow) {
-  setName(flow.name)
   setDisplayName(flow.display_name)
```
169-174: Generated flow name may cause collisions for similar display names.

The name generation logic creates a slug from `displayName`, but two flows with display names like "Daily Report" and "Daily-Report" would generate the same `name`. The fallback `flow-${Date.now()}` handles empty cases, but consider adding a unique suffix (e.g., a timestamp or short random string) to prevent collisions:

♻️ Add uniqueness to generated name

```diff
 const generatedName =
-  displayName
+  `${displayName
     .toLowerCase()
     .replace(/[^a-z0-9]+/g, '-')
     .replace(/^-|-$/g, '')
-    .slice(0, 50) || `flow-${Date.now()}`
+    .slice(0, 40)}-${Date.now().toString(36)}` || `flow-${Date.now()}`
```

This adds a base-36 timestamp suffix for uniqueness while keeping the name readable.
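To make the collision concrete, here is a hypothetical Python mirror of the suggested logic; `to_base36` and `generate_flow_name` are illustrative names, not part of the PR:

```python
import re
import time
from typing import Optional

def to_base36(n: int) -> str:
    # Mirror of JavaScript's Number.prototype.toString(36).
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    out = ""
    while n:
        n, rem = divmod(n, 36)
        out = digits[rem] + out
    return out or "0"

def generate_flow_name(display_name: str, now: Optional[int] = None) -> str:
    # Slugify, truncate, then append a base-36 timestamp so that
    # near-identical display names cannot silently collide.
    slug = re.sub(r"[^a-z0-9]+", "-", display_name.lower()).strip("-")[:40]
    ts = to_base36(int(now if now is not None else time.time()))
    return f"{slug}-{ts}" if slug else f"flow-{ts}"

# "Daily Report" and "Daily-Report" yield the same slug, so only the
# timestamp suffix keeps names generated at different times apart.
print(generate_flow_name("Daily Report", now=1000))  # daily-report-rs
```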
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `backend/uv.lock` is excluded by `!**/*.lock`
- `executor/uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (21)
- backend/app/api/endpoints/adapter/models.py
- backend/app/models/flow.py
- backend/app/schemas/flow.py
- backend/app/services/adapters/executor_kinds.py
- backend/app/services/chat/trigger/core.py
- backend/app/services/chat/trigger/emitter.py
- backend/app/services/flow.py
- backend/app/services/flow_scheduler.py
- frontend/src/app/(tasks)/flow/page.tsx
- frontend/src/app/(tasks)/flow/subscriptions/page.tsx
- frontend/src/features/flows/components/CronSchedulePicker.tsx
- frontend/src/features/flows/components/FlowForm.tsx
- frontend/src/features/flows/components/FlowPage.tsx
- frontend/src/features/flows/components/FlowTimeline.tsx
- frontend/src/features/tasks/components/sidebar/TaskSidebar.tsx
- frontend/src/i18n/locales/en/common.json
- frontend/src/i18n/locales/en/flow.json
- frontend/src/i18n/locales/zh-CN/common.json
- frontend/src/i18n/locales/zh-CN/flow.json
- shared/telemetry/context/events.py
- start.sh
🚧 Files skipped from review as they are similar to previous changes (5)
- frontend/src/i18n/locales/zh-CN/flow.json
- frontend/src/i18n/locales/en/common.json
- frontend/src/features/flows/components/FlowPage.tsx
- backend/app/models/flow.py
- frontend/src/i18n/locales/en/flow.json
🧰 Additional context used
📓 Path-based instructions (4)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx}: Use TypeScript strict mode, prefer functional components over class components, use Prettier for formatting, ESLint for linting, single quotes, and no semicolons
Use `const` over `let`; never use `var` in TypeScript/React code
Files:
- frontend/src/app/(tasks)/flow/page.tsx
- frontend/src/app/(tasks)/flow/subscriptions/page.tsx
- frontend/src/features/flows/components/CronSchedulePicker.tsx
- frontend/src/features/flows/components/FlowTimeline.tsx
- frontend/src/features/tasks/components/sidebar/TaskSidebar.tsx
- frontend/src/features/flows/components/FlowForm.tsx
**/*.{py,ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
All code comments must be written in English
Files:
- frontend/src/app/(tasks)/flow/page.tsx
- backend/app/api/endpoints/adapter/models.py
- frontend/src/app/(tasks)/flow/subscriptions/page.tsx
- frontend/src/features/flows/components/CronSchedulePicker.tsx
- frontend/src/features/flows/components/FlowTimeline.tsx
- frontend/src/features/tasks/components/sidebar/TaskSidebar.tsx
- backend/app/services/flow.py
- backend/app/services/chat/trigger/core.py
- backend/app/services/chat/trigger/emitter.py
- frontend/src/features/flows/components/FlowForm.tsx
- shared/telemetry/context/events.py
- backend/app/services/adapters/executor_kinds.py
- backend/app/schemas/flow.py
- backend/app/services/flow_scheduler.py
**/*.{py,ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Keep all files under 1000 lines of code; split files that exceed this limit into multiple sub-modules
Files:
- frontend/src/app/(tasks)/flow/page.tsx
- backend/app/api/endpoints/adapter/models.py
- frontend/src/app/(tasks)/flow/subscriptions/page.tsx
- frontend/src/features/flows/components/CronSchedulePicker.tsx
- frontend/src/features/flows/components/FlowTimeline.tsx
- frontend/src/features/tasks/components/sidebar/TaskSidebar.tsx
- backend/app/services/flow.py
- backend/app/services/chat/trigger/core.py
- backend/app/services/chat/trigger/emitter.py
- frontend/src/features/flows/components/FlowForm.tsx
- shared/telemetry/context/events.py
- backend/app/services/adapters/executor_kinds.py
- backend/app/schemas/flow.py
- backend/app/services/flow_scheduler.py
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 standards, use Black formatter with line length 88, and use isort for import sorting in Python code
Use type hints in all Python code, include descriptive docstrings for public functions and classes, and extract magic numbers to named constants
Implement database models in `app/models/`, create API endpoints in `app/api/`, define schemas in `app/schemas/`, and implement business logic in `app/services/`
Limit Python functions to a maximum of 50 lines (preferred); functions should have single responsibility
Always use `uv run` to execute Python commands instead of direct python/pytest commands
Files:
- backend/app/api/endpoints/adapter/models.py
- backend/app/services/flow.py
- backend/app/services/chat/trigger/core.py
- backend/app/services/chat/trigger/emitter.py
- shared/telemetry/context/events.py
- backend/app/services/adapters/executor_kinds.py
- backend/app/schemas/flow.py
- backend/app/services/flow_scheduler.py
🧠 Learnings (4)
📚 Learning: 2026-01-12T03:59:29.111Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T03:59:29.111Z
Learning: Applies to src/pages/**/*.{ts,tsx} : Use dynamic imports and component routing logic in page-level components; separate Mobile and Desktop component implementations when layout differences exceed 30% or different interaction patterns exist
Applied to files:
- frontend/src/app/(tasks)/flow/page.tsx
- frontend/src/app/(tasks)/flow/subscriptions/page.tsx
📚 Learning: 2025-12-18T02:09:09.776Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-18T02:09:09.776Z
Learning: Applies to frontend/src/**/*.tsx : Frontend component files named with CRD terms: team-list.tsx, bot-form.tsx, model-selector.tsx
Applied to files:
- frontend/src/features/flows/components/CronSchedulePicker.tsx
- frontend/src/features/flows/components/FlowForm.tsx
📚 Learning: 2025-12-18T02:09:09.776Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-18T02:09:09.776Z
Learning: Applies to frontend/src/i18n/zh-CN.json : Frontend i18n Chinese values (zh-CN) MUST use UI terms: `'智能体列表'`, `'机器人配置'`
Applied to files:
frontend/src/i18n/locales/zh-CN/common.json
📚 Learning: 2026-01-12T03:59:29.111Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T03:59:29.111Z
Learning: Applies to src/i18n/locales/**/*.{json,ts} : Add new translation keys to the appropriate namespace file matching the feature in `src/i18n/locales/{lang}/`; use format `namespace:key.subkey` to reference keys from other namespaces
Applied to files:
frontend/src/i18n/locales/zh-CN/common.json
🧬 Code graph analysis (9)
frontend/src/app/(tasks)/flow/page.tsx (9)
- frontend/src/features/flows/components/FlowPage.tsx (1): `FlowPage` (22-28)
- frontend/src/hooks/useTranslation.ts (1): `useTranslation` (8-30)
- frontend/src/features/layout/hooks/useMediaQuery.ts (1): `useIsMobile` (48-50)
- frontend/src/features/tasks/components/sidebar/CollapsedSidebarButtons.tsx (1): `CollapsedSidebarButtons` (18-76)
- frontend/src/features/tasks/components/sidebar/ResizableSidebar.tsx (1): `ResizableSidebar` (37-176)
- frontend/src/features/tasks/components/sidebar/TaskSidebar.tsx (1): `TaskSidebar` (46-691)
- frontend/src/features/layout/TopNavigation.tsx (1): `TopNavigation` (28-101)
- frontend/src/components/ui/button.tsx (1): `Button` (56-56)
- frontend/src/features/theme/ThemeToggle.tsx (1): `ThemeToggle` (11-50)
frontend/src/app/(tasks)/flow/subscriptions/page.tsx (6)
- frontend/src/hooks/useTranslation.ts (1): `useTranslation` (8-30)
- frontend/src/features/flows/contexts/flowContext.tsx (2): `useFlowContext` (217-223), `FlowProvider` (69-215)
- frontend/src/features/layout/hooks/useMediaQuery.ts (1): `useIsMobile` (48-50)
- frontend/src/features/tasks/components/sidebar/CollapsedSidebarButtons.tsx (1): `CollapsedSidebarButtons` (18-76)
- frontend/src/features/tasks/components/sidebar/ResizableSidebar.tsx (1): `ResizableSidebar` (37-176)
- frontend/src/features/layout/TopNavigation.tsx (1): `TopNavigation` (28-101)
frontend/src/features/flows/components/CronSchedulePicker.tsx (4)
- frontend/src/hooks/useTranslation.ts (1): `useTranslation` (8-30)
- frontend/src/lib/utils.ts (1): `cn` (8-10)
- frontend/src/components/ui/label.tsx (1): `Label` (21-21)
- frontend/src/components/ui/select.tsx (5): `Select` (147-147), `SelectTrigger` (150-150), `SelectValue` (149-149), `SelectContent` (151-151), `SelectItem` (153-153)
frontend/src/features/tasks/components/sidebar/TaskSidebar.tsx (1)
- frontend/src/config/paths.ts (1): `paths` (7-55)
backend/app/services/flow.py (2)
- backend/app/models/flow.py (2): `FlowExecution` (89-143), `FlowResource` (25-86)
- backend/app/schemas/flow.py (18): `Flow` (177-184), `FlowCreate` (212-215), `FlowExecutionInDB` (278-299), `FlowExecutionStatus` (38-46), `FlowMetadata` (168-174), `FlowSpec` (124-142), `FlowStatus` (145-165), `FlowTaskType` (15-19), `FlowTriggerConfig` (90-105), `FlowTriggerType` (22-28), `FlowWorkspaceRef` (116-120), `FlowTeamRef` (109-113), `CronTriggerConfig` (50-54), `EventTriggerConfig` (79-87), `FlowEventType` (31-35), `GitPushEventConfig` (72-76), `IntervalTriggerConfig` (57-61), `OneTimeTriggerConfig` (64-69)
backend/app/services/chat/trigger/core.py (2)
- backend/app/services/chat/adapters/http.py (1): `chat` (182-272)
- backend/app/services/chat/trigger/emitter.py (23): `ChatEventEmitter` (24-97), `WebSocketEventEmitter` (100-215), `emit_chat_error` (68-76, 165-181, 262-272, 336-353), `emit_chat_start` (33-41, 107-123, 226-236), `emit_chat_chunk` (44-53, 125-143, 238-247), `emit_chat_done` (56-65, 145-163, 249-260, 316-334), `emit_chat_bot_complete` (88-97, 197-215, 283-294), `emit_chat_cancelled` (79-85, 183-195, 274-281, 355-367)
backend/app/services/chat/trigger/emitter.py (3)
- backend/app/services/chat/ws_emitter.py (1): `get_ws_emitter` (710-717)
- backend/app/schemas/flow.py (1): `FlowExecutionStatus` (38-46)
- backend/app/services/flow.py (1): `update_execution_status` (605-663)
frontend/src/features/flows/components/FlowForm.tsx (2)
- frontend/src/types/flow.ts (5): `Flow` (59-83), `FlowTriggerType` (13-13), `FlowTaskType` (10-10), `FlowUpdateRequest` (102-113), `FlowCreateRequest` (86-99)
- frontend/src/apis/flow.ts (1): `flowApis` (21-126)
backend/app/services/adapters/executor_kinds.py (2)
- backend/app/schemas/kind.py (1): `Task` (462-469)
- backend/app/models/flow.py (2): `FlowExecution` (89-143), `FlowResource` (25-86)
🪛 Gitleaks (8.30.0)
frontend/src/features/flows/components/CronSchedulePicker.tsx
[high] 52-52: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
🪛 Ruff (0.14.10)
backend/app/services/flow.py
47-52: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
70-70: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
87-87: Avoid equality comparisons to True; use Kind.is_active: for truth checks
Replace with Kind.is_active
(E712)
106-106: Avoid equality comparisons to True; use TaskResource.is_active: for truth checks
Replace with TaskResource.is_active
(E712)
165-165: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
188-188: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
221-221: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
239-239: Avoid equality comparisons to True; use Kind.is_active: for truth checks
Replace with Kind.is_active
(E712)
260-260: Avoid equality comparisons to True; use TaskResource.is_active: for truth checks
Replace with TaskResource.is_active
(E712)
358-358: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
387-387: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
430-430: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
458-458: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
459-459: Avoid equality comparisons to True; use FlowResource.enabled: for truth checks
Replace with FlowResource.enabled
(E712)
847-847: Avoid specifying long messages outside the exception class
(TRY003)
910-910: Do not catch blind exception: Exception
(BLE001)
914-914: Local variable user_tz is assigned to but never used
Remove assignment to unused variable user_tz
(F841)
929-929: Consider moving this statement to an else block
(TRY300)
930-930: Do not catch blind exception: Exception
(BLE001)
backend/app/services/chat/trigger/core.py
254-254: Undefined name ChatEventEmitter
(F821)
591-591: Undefined name ChatEventEmitter
(F821)
backend/app/services/chat/trigger/emitter.py
230-230: Unused method argument: message_id
(ARG002)
254-254: Unused method argument: result
(ARG002)
255-255: Unused method argument: message_id
(ARG002)
267-267: Unused method argument: message_id
(ARG002)
288-288: Unused method argument: content
(ARG002)
289-289: Unused method argument: result
(ARG002)
320-320: Unused method argument: offset
(ARG002)
322-322: Unused method argument: message_id
(ARG002)
341-341: Unused method argument: message_id
(ARG002)
401-401: Do not catch blind exception: Exception
(BLE001)
402-405: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
backend/app/services/adapters/executor_kinds.py
2346-2346: Use explicit conversion flag
Replace with conversion flag
(RUF010)
backend/app/services/flow_scheduler.py
62-62: Consider moving this statement to an else block
(TRY300)
63-63: Do not catch blind exception: Exception
(BLE001)
64-64: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
64-64: Use explicit conversion flag
Replace with conversion flag
(RUF010)
72-72: Do not catch blind exception: Exception
(BLE001)
73-73: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
73-73: Use explicit conversion flag
Replace with conversion flag
(RUF010)
90-90: Avoid equality comparisons to True; use FlowResource.is_active: for truth checks
Replace with FlowResource.is_active
(E712)
91-91: Avoid equality comparisons to True; use FlowResource.enabled: for truth checks
Replace with FlowResource.enabled
(E712)
92-92: Comparison to None should be cond is not None
Replace with cond is not None
(E711)
171-171: Do not catch blind exception: Exception
(BLE001)
172-174: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
173-173: Use explicit conversion flag
Replace with conversion flag
(RUF010)
200-200: Avoid equality comparisons to True; use Kind.is_active: for truth checks
Replace with Kind.is_active
(E712)
244-244: Avoid equality comparisons to True; use TaskResource.is_active: for truth checks
Replace with TaskResource.is_active
(E712)
342-342: Use explicit conversion flag
Replace with conversion flag
(RUF010)
473-473: Use explicit conversion flag
Replace with conversion flag
(RUF010)
523-523: Do not catch blind exception: Exception
(BLE001)
524-524: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
524-524: Use explicit conversion flag
Replace with conversion flag
(RUF010)
530-530: Do not catch blind exception: Exception
(BLE001)
531-531: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
531-531: Use explicit conversion flag
Replace with conversion flag
(RUF010)
535-535: Do not catch blind exception: Exception
(BLE001)
536-536: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
536-536: Use explicit conversion flag
Replace with conversion flag
(RUF010)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: E2E Tests (Shard 3/3)
- GitHub Check: Test Frontend
- GitHub Check: E2E Tests (Shard 1/3)
- GitHub Check: E2E Tests (Shard 2/3)
- GitHub Check: Test wegent CLI Integration
🔇 Additional comments (13)
shared/telemetry/context/events.py (1)
18-19: LGTM! The new telemetry error event constants follow the established naming conventions and are correctly placed in the "Error events" section alongside similar constants like `BOT_NOT_FOUND` and `PROVIDER_CREATION_FAILED`.

backend/app/api/endpoints/adapter/models.py (1)
607-609: LGTM! Formatting-only change splitting the dictionary entry across multiple lines. No functional impact.
backend/app/services/chat/trigger/core.py (1)
110-115: Good change: making `namespace` optional for HTTP mode reduces coupling to WebSocket. Also applies to: 175-185
frontend/src/app/(tasks)/flow/subscriptions/page.tsx (1)
139-148: No issue found. `TopNavigation` fully supports the `activePage` prop, which is defined as an optional prop accepting `'chat' | 'code' | 'wiki' | 'dashboard'`. The usage with `activePage="dashboard"` is correct and will build without errors. Likely an incorrect or invalid review comment.
backend/app/services/flow_scheduler.py (1)
87-103: SQLAlchemy boolean/None comparisons are intentional; the static analysis hints can be ignored here.

The `== True`, `== False`, and `!= None` comparisons are required for SQLAlchemy filter expressions. Using `is` or implicit truth checks would not generate correct SQL. The E711/E712 hints are false positives in this context.

frontend/src/i18n/locales/zh-CN/common.json (1)
59-59: Translation entry looks good.

The navigation entry `"flow": "智能流"` is consistent with the PR's feature naming convention and follows the i18n patterns established in the codebase. Based on learnings, the Chinese translation appropriately uses the established UI term.

frontend/src/features/tasks/components/sidebar/TaskSidebar.tsx (1)
113-118: Flow navigation button integration looks correct.

The new Flow navigation button follows the established pattern for other navigation items. Note that unlike the "code" button, the Flow button doesn't define a `tooltip` property; this is appropriate since the Flow page doesn't support the "new task on click when active" behavior that the code page has.

frontend/src/app/(tasks)/flow/page.tsx (1)
30-115: Page structure follows established patterns—LGTM.The FlowPage correctly implements the standard page layout with resizable sidebar, top navigation, and main content area. The localStorage persistence for collapsed state and mobile handling are consistent with other task pages.
frontend/src/features/flows/components/CronSchedulePicker.tsx (2)
52-52: Static analysis false positive: this is a translation key, not an API key.

The Gitleaks warning about a "Generic API Key" at line 52 is a false positive. The `key: 'monthly_1st_9am'` value is a localization/translation key used to look up UI strings, not a secret or API credential.
56-113: Robust cron parsing with sensible fallbacks.

The `parseCronExpression` function handles various cron patterns well and falls back to a preset when patterns don't match. The logic correctly identifies hourly, daily, weekly, and monthly patterns.

frontend/src/features/flows/components/FlowTimeline.tsx (1)
67-316: Well-structured timeline component with proper loading and empty states.

The component follows good patterns: memoized grouping logic, proper loading/empty state handling, and clean separation of concerns with `renderPost`. The status configuration approach is maintainable.

frontend/src/features/flows/components/FlowForm.tsx (2)
103-127: Form resets whenever `open` changes, even when editing the same flow.

The `useEffect` dependencies include `open`, which means the form will reset every time the dialog is opened, even if the same `flow` object is passed. This is likely intentional to ensure fresh state, but be aware that if `flow` is the same object reference, users might expect their unsaved changes to persist if they close and reopen the dialog quickly.
59-514: Well-structured form component with proper state management.

The FlowForm component handles create/edit modes cleanly, with appropriate loading states, validation feedback via toast, and proper error handling. The trigger configuration rendering is well-organized with the `renderTriggerConfig` function.
```python
class FlowTaskType(str, Enum):
    """Flow task type enumeration."""

    EXECUTION = "execution"  # 执行类任务
    COLLECTION = "collection"  # 信息采集类任务


class FlowTriggerType(str, Enum):
    """Flow trigger type enumeration."""

    CRON = "cron"  # 定时计划
    INTERVAL = "interval"  # 固定间隔
    ONE_TIME = "one_time"  # 一次性定时
    EVENT = "event"  # 事件触发


class FlowEventType(str, Enum):
    """Event trigger sub-type enumeration."""

    WEBHOOK = "webhook"  # Webhook 触发
    GIT_PUSH = "git_push"  # Git Push 触发
```
Replace Chinese inline comments with English to match repo guidelines.
This file has multiple non-English comments (e.g., task/trigger annotations).
Also applies to: 124-143
🤖 Prompt for AI Agents
In @backend/app/schemas/flow.py around lines 15 - 36, Replace the Chinese inline
comments in the Flow enums with English equivalents: update comments for
FlowTaskType (replace "执行类任务" and "信息采集类任务"), FlowTriggerType (replace "定时计划",
"固定间隔", "一次性定时", "事件触发"), and FlowEventType (replace "Webhook 触发", "Git Push
触发") to concise English descriptions; also locate and replace any other
non-English comments around the later enum or usage block referenced (the
section around the Flow enums at lines ~124-143) to maintain consistent
English-only comments across the file.
```python
class FlowTriggerConfig(BaseModel):
    """Flow trigger configuration."""

    type: FlowTriggerType = Field(..., description="Trigger type")
    cron: Optional[CronTriggerConfig] = Field(
        None, description="Cron configuration (when type is 'cron')"
    )
    interval: Optional[IntervalTriggerConfig] = Field(
        None, description="Interval configuration (when type is 'interval')"
    )
    one_time: Optional[OneTimeTriggerConfig] = Field(
        None, description="One-time configuration (when type is 'one_time')"
    )
    event: Optional[EventTriggerConfig] = Field(
        None, description="Event configuration (when type is 'event')"
    )
```
Add schema-level validation for FlowTriggerConfig (type ↔ config consistency).
Right now `type='cron'` with `cron=None` will validate, pushing failures into runtime scheduling.
Proposed fix (Pydantic v2 model_validator)
```diff
 from pydantic import BaseModel, Field
+from pydantic import model_validator
@@
 class FlowTriggerConfig(BaseModel):
@@
     event: Optional[EventTriggerConfig] = Field(
         None, description="Event configuration (when type is 'event')"
     )
+
+    @model_validator(mode="after")
+    def _validate_one_of(self) -> "FlowTriggerConfig":
+        mapping = {
+            FlowTriggerType.CRON: self.cron,
+            FlowTriggerType.INTERVAL: self.interval,
+            FlowTriggerType.ONE_TIME: self.one_time,
+            FlowTriggerType.EVENT: self.event,
+        }
+        selected = mapping.get(self.type)
+        if selected is None:
+            raise ValueError(f"Missing config for trigger type: {self.type}")
+        # Also ensure no extra configs are set
+        extras = [k for k, v in mapping.items() if k != self.type and v is not None]
+        if extras:
+            raise ValueError(f"Unexpected configs set for trigger type {self.type}: {extras}")
+        return self
```

🤖 Prompt for AI Agents
In @backend/app/schemas/flow.py around lines 90 - 106, FlowTriggerConfig
currently allows mismatched combinations (e.g., type == 'cron' with cron ==
None), so add a Pydantic v2 model_validator on FlowTriggerConfig to enforce
type↔config consistency: in the validator for the whole model (using
@model_validator(mode="after") or equivalent) inspect the FlowTriggerConfig.type
and ensure the corresponding field (cron, interval, one_time, or event) is
present and non-null and that no other trigger-config fields are set when not
expected; raise a ValueError with a clear message when invalid. Reference the
FlowTriggerConfig class and its fields type, cron, interval, one_time, and event
when locating where to add the validator.
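As a dependency-free sketch of the same one-of invariant (hypothetical names; the actual fix should use the Pydantic `model_validator` shown in the diff above):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TriggerConfigSketch:
    """Illustrative stand-in for FlowTriggerConfig with two trigger kinds."""

    type: str  # "cron" or "interval"
    cron: Optional[str] = None
    interval: Optional[int] = None

    def __post_init__(self) -> None:
        # Enforce type <-> config consistency at construction time.
        mapping = {"cron": self.cron, "interval": self.interval}
        if mapping.get(self.type) is None:
            raise ValueError(f"Missing config for trigger type: {self.type}")
        extras = [k for k, v in mapping.items() if k != self.type and v is not None]
        if extras:
            raise ValueError(f"Unexpected configs for type {self.type}: {extras}")

# Valid: the config matching `type` is present and nothing else is set.
ok = TriggerConfigSketch(type="cron", cron="0 9 * * *")
print(ok.cron)  # 0 9 * * *
```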
```python
        # Update Flow execution status if this is a Flow task
        self._update_flow_execution_status(db, task_id, task_crd)
```
Flow execution updates here should be idempotent and go through `FlowService.update_execution_status`.

The current implementation risks double-counting, doesn't set `FlowExecution.task_id`, doesn't update `execution_count`, and diverges from the CRD/status sync logic already in backend/app/services/flow.py.

Suggested direction (high-level):

- If `execution.status` is already final (COMPLETED/FAILED/CANCELLED), return early.
- Set `execution.task_id = task_id` if missing.
- Call `flow_service.update_execution_status(...)` instead of duplicating logic, so CRD + stats stay consistent.
Also applies to: 2252-2349
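The first two bullets can be sketched as a small guard (hypothetical names; the real implementation should still delegate to `flow_service.update_execution_status` for the actual update):

```python
from types import SimpleNamespace

FINAL_STATUSES = {"COMPLETED", "FAILED", "CANCELLED"}

def sync_flow_execution(execution, task_id, new_status, update_fn):
    """Hypothetical idempotent wrapper: skip finals, backfill task_id, delegate."""
    if execution.status in FINAL_STATUSES:
        return False  # Already terminal; never double-count.
    if execution.task_id is None:
        execution.task_id = task_id  # Backfill the task linkage.
    update_fn(execution, new_status)  # Delegate the real status/stat updates.
    return True

# Demo with a stand-in execution record.
execution = SimpleNamespace(status="RUNNING", task_id=None)
updated = sync_flow_execution(
    execution, task_id=42, new_status="COMPLETED",
    update_fn=lambda e, s: setattr(e, "status", s),
)
print(updated, execution.status, execution.task_id)  # True COMPLETED 42
```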
```python
        # Track active streams only if namespace is provided (WebSocket mode)
        # In HTTP mode (e.g., Flow Scheduler), namespace is None and tracking is not needed
        if namespace is not None:
            namespace._active_streams[assistant_subtask.id] = stream_task
            namespace._stream_versions[assistant_subtask.id] = "v2"
```
Defensive namespace tracking: avoid assuming private dicts exist on `namespace`.

If `namespace` changes implementation, this can crash.

Proposed fix

```diff
-        if namespace is not None:
-            namespace._active_streams[assistant_subtask.id] = stream_task
-            namespace._stream_versions[assistant_subtask.id] = "v2"
+        if namespace is not None:
+            active_streams = getattr(namespace, "_active_streams", None)
+            stream_versions = getattr(namespace, "_stream_versions", None)
+            if isinstance(active_streams, dict) and isinstance(stream_versions, dict):
+                active_streams[assistant_subtask.id] = stream_task
+                stream_versions[assistant_subtask.id] = "v2"
+            else:
+                logger.warning("[ai_trigger] namespace missing stream tracking dicts; skipping")
```
+ logger.warning("[ai_trigger] namespace missing stream tracking dicts; skipping")| class FlowEventEmitter(NoOpEventEmitter): | ||
| """Event emitter for Flow Scheduler tasks. | ||
| Extends NoOpEventEmitter to update FlowExecution status when | ||
| chat streaming completes or fails. This ensures Flow execution | ||
| status is properly tracked even without WebSocket connections. | ||
| Args: | ||
| execution_id: The FlowExecution ID to update on completion/error | ||
| """ | ||
|
|
||
| def __init__(self, execution_id: int): | ||
| """Initialize FlowEventEmitter. | ||
| Args: | ||
| execution_id: The FlowExecution ID to update | ||
| """ | ||
| self.execution_id = execution_id | ||
|
|
||
| async def emit_chat_done( | ||
| self, | ||
| task_id: int, | ||
| subtask_id: int, | ||
| offset: int, | ||
| result: Optional[Dict[str, Any]] = None, | ||
| message_id: Optional[int] = None, | ||
| ) -> None: | ||
| """Emit chat:done event and update FlowExecution status to COMPLETED.""" | ||
| logger.info( | ||
| f"[FlowEmitter] chat:done task={task_id} subtask={subtask_id} " | ||
| f"execution_id={self.execution_id}" | ||
| ) | ||
|
|
||
| # Update FlowExecution status to COMPLETED | ||
| await self._update_execution_status( | ||
| status="COMPLETED", | ||
| result_summary=self._extract_result_summary(result), | ||
| ) | ||
|
|
||
| async def emit_chat_error( | ||
| self, | ||
| task_id: int, | ||
| subtask_id: int, | ||
| error: str, | ||
| message_id: Optional[int] = None, | ||
| ) -> None: | ||
| """Emit chat:error event and update FlowExecution status to FAILED.""" | ||
| logger.warning( | ||
| f"[FlowEmitter] chat:error task={task_id} subtask={subtask_id} " | ||
| f"execution_id={self.execution_id} error={error}" | ||
| ) | ||
|
|
||
| # Update FlowExecution status to FAILED | ||
| await self._update_execution_status( | ||
| status="FAILED", | ||
| error_message=error, | ||
| ) | ||
|
|
||
| async def emit_chat_cancelled( | ||
| self, | ||
| task_id: int, | ||
| subtask_id: int, | ||
| ) -> None: | ||
| """Emit chat:cancelled event and update FlowExecution status to CANCELLED.""" | ||
| logger.info( | ||
| f"[FlowEmitter] chat:cancelled task={task_id} subtask={subtask_id} " | ||
| f"execution_id={self.execution_id}" | ||
| ) | ||
|
|
||
| # Update FlowExecution status to CANCELLED | ||
| await self._update_execution_status(status="CANCELLED") | ||
|
|
||
| async def _update_execution_status( | ||
| self, | ||
| status: str, | ||
| result_summary: Optional[str] = None, | ||
| error_message: Optional[str] = None, | ||
| ) -> None: | ||
| """Update FlowExecution status in database. | ||
| Args: | ||
| status: New status (COMPLETED, FAILED, CANCELLED) | ||
| result_summary: Optional result summary for COMPLETED status | ||
| error_message: Optional error message for FAILED status | ||
| """ | ||
| try: | ||
| from app.db.session import SessionLocal | ||
| from app.schemas.flow import FlowExecutionStatus | ||
| from app.services.flow import flow_service | ||
|
|
||
| db = SessionLocal() | ||
| try: | ||
| flow_service.update_execution_status( | ||
| db, | ||
| execution_id=self.execution_id, | ||
| status=FlowExecutionStatus(status), | ||
| result_summary=result_summary, | ||
| error_message=error_message, | ||
| ) | ||
| logger.info( | ||
| f"[FlowEmitter] Updated execution {self.execution_id} status to {status}" | ||
| ) | ||
| finally: | ||
| db.close() | ||
| except Exception as e: | ||
| logger.error( | ||
| f"[FlowEmitter] Failed to update execution {self.execution_id} " | ||
| f"status to {status}: {e}" | ||
| ) | ||
|
|
Avoid blocking the event loop: FlowEventEmitter does sync DB work inside async methods.
This can stall streaming/HTTP workers at completion/error boundaries.
Proposed fix (minimal, using asyncio.to_thread)
+import asyncio
...
class FlowEventEmitter(NoOpEventEmitter):
@@
async def _update_execution_status(
@@
try:
from app.db.session import SessionLocal
from app.schemas.flow import FlowExecutionStatus
from app.services.flow import flow_service
- db = SessionLocal()
- try:
- flow_service.update_execution_status(
- db,
- execution_id=self.execution_id,
- status=FlowExecutionStatus(status),
- result_summary=result_summary,
- error_message=error_message,
- )
- logger.info(
- f"[FlowEmitter] Updated execution {self.execution_id} status to {status}"
- )
- finally:
- db.close()
- except Exception as e:
- logger.error(
- f"[FlowEmitter] Failed to update execution {self.execution_id} "
- f"status to {status}: {e}"
- )
+ def _run() -> None:
+ db = SessionLocal()
+ try:
+ flow_service.update_execution_status(
+ db,
+ execution_id=self.execution_id,
+ status=FlowExecutionStatus(status),
+ result_summary=result_summary,
+ error_message=error_message,
+ )
+ finally:
+ db.close()
+
+ await asyncio.to_thread(_run)
+ logger.info(
+ f"[FlowEmitter] Updated execution {self.execution_id} status to {status}"
+ )
+ except Exception:
+ logger.exception(
+ f"[FlowEmitter] Failed to update execution {self.execution_id} status to {status}"
+ )

🧰 Tools
🪛 Ruff (0.14.10)
320-320: Unused method argument: offset
(ARG002)
322-322: Unused method argument: message_id
(ARG002)
341-341: Unused method argument: message_id
(ARG002)
401-401: Do not catch blind exception: Exception
(BLE001)
402-405: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
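The `asyncio.to_thread` pattern the comment proposes can be sketched standalone; `slow_sync_update` below is a hypothetical stand-in for the blocking `flow_service` call, not code from this PR:

```python
import asyncio
import time

def slow_sync_update(execution_id: int, status: str) -> str:
    # Hypothetical stand-in for the synchronous SQLAlchemy session work.
    time.sleep(0.1)  # simulates blocking DB I/O
    return f"execution {execution_id} -> {status}"

async def update_status(execution_id: int, status: str) -> str:
    # asyncio.to_thread runs the sync callable in a worker thread,
    # so the event loop keeps serving other coroutines meanwhile.
    return await asyncio.to_thread(slow_sync_update, execution_id, status)

async def main() -> None:
    # Both updates overlap instead of serially blocking the loop.
    results = await asyncio.gather(
        update_status(1, "COMPLETED"),
        update_status(2, "FAILED"),
    )
    print(results)

if __name__ == "__main__":
    asyncio.run(main())
```

The wrapper function keeps session open/close inside the thread, which avoids sharing a SQLAlchemy session across threads.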
    def _calculate_next_execution_time(
        self,
        trigger_type: FlowTriggerType,
        trigger_config: Dict[str, Any],
    ) -> Optional[datetime]:
        """Calculate the next execution time based on trigger configuration.

        For cron triggers, the timezone from trigger_config is used to interpret
        the cron expression. The returned datetime is always in UTC for storage.
        For example, if cron is "0 9 * * *" with timezone "Asia/Shanghai",
        it means 9:00 AM Shanghai time, which is 1:00 AM UTC.
        """
        trigger_type_enum = (
            trigger_type
            if isinstance(trigger_type, FlowTriggerType)
            else FlowTriggerType(trigger_type)
        )

        now = datetime.now()

        if trigger_type_enum == FlowTriggerType.CRON:
            # Use croniter to calculate next run with timezone support
            try:
                from zoneinfo import ZoneInfo

                from croniter import croniter

                cron_expr = trigger_config.get("expression", "0 9 * * *")
                timezone_str = trigger_config.get("timezone", "UTC")

                # Get the user's timezone
                try:
                    user_tz = ZoneInfo(timezone_str)
                except Exception:
                    logger.warning(
                        f"Invalid timezone '{timezone_str}', falling back to UTC"
                    )
                    user_tz = ZoneInfo("UTC")

                # Use local time for cron calculation
                now_local = datetime.now()

                # Calculate next execution in local timezone
                iter = croniter(cron_expr, now_local)
                next_local = iter.get_next(datetime)

                logger.debug(
                    f"Cron calculation: expr={cron_expr}, tz={timezone_str}, "
                    f"now_local={now_local}, next_local={next_local}"
                )

                # Return the next execution time in local timezone
                return next_local
            except Exception as e:
                logger.warning(f"Failed to parse cron expression: {e}")
                return None

        elif trigger_type_enum == FlowTriggerType.INTERVAL:
            value = trigger_config.get("value", 1)
            unit = trigger_config.get("unit", "hours")

            if unit == "minutes":
                return now + timedelta(minutes=value)
            elif unit == "hours":
                return now + timedelta(hours=value)
            elif unit == "days":
                return now + timedelta(days=value)

        elif trigger_type_enum == FlowTriggerType.ONE_TIME:
            execute_at = trigger_config.get("execute_at")
            if execute_at:
                if isinstance(execute_at, str):
                    return datetime.fromisoformat(execute_at.replace("Z", "+00:00"))
                return execute_at

        # Event triggers don't have scheduled next execution
        return None
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if croniter is in project dependencies
echo "=== Checking for croniter in dependencies ==="
find . -name "pyproject.toml" -o -name "requirements*.txt" | head -20 | xargs grep -l "croniter" 2>/dev/null || echo "No croniter found in initial search"
# More comprehensive check
echo -e "\n=== All occurrences of croniter ==="
rg "croniter" --type py --type toml --type txt 2>/dev/null | head -20
# Check the specific file mentioned
echo -e "\n=== File info ==="
wc -l backend/app/services/flow.py 2>/dev/null || echo "File not found at that path"
Repository: wecode-ai/Wegent
Length of output: 489
🏁 Script executed:
#!/bin/bash
# Read the specific function to verify the timezone handling claims
echo "=== Reading the _calculate_next_execution_time function ==="
sed -n '876,953p' backend/app/services/flow.py
Repository: wecode-ai/Wegent
Length of output: 3110
🏁 Script executed:
#!/bin/bash
# Check imports at the top of the file to understand what's available
echo "=== Checking imports at top of flow.py ==="
head -50 backend/app/services/flow.py
Repository: wecode-ai/Wegent
Length of output: 1358
🏁 Script executed:
#!/bin/bash
# Search for all usages of _calculate_next_execution_time to understand how return value is used
echo "=== Usages of _calculate_next_execution_time ==="
rg "_calculate_next_execution_time" -A 3 -B 3 backend/app/services/flow.py
Repository: wecode-ai/Wegent
Length of output: 1013
Timezone / UTC handling in cron scheduling is incorrect (docstring contradicts code).
user_tz is unused, now_local is naive, and the function returns a local naive datetime instead of UTC as documented. This will cause incorrect scheduled execution times for users in any timezone other than the server's local timezone.
Proposed fix (tz-aware cron calc, store UTC)
def _calculate_next_execution_time(
@@
if trigger_type_enum == FlowTriggerType.CRON:
# Use croniter to calculate next run with timezone support
try:
from zoneinfo import ZoneInfo
from croniter import croniter
@@
try:
user_tz = ZoneInfo(timezone_str)
except Exception:
logger.warning(
f"Invalid timezone '{timezone_str}', falling back to UTC"
)
user_tz = ZoneInfo("UTC")
- # Use local time for cron calculation
- now_local = datetime.now()
-
- # Calculate next execution in local timezone
- iter = croniter(cron_expr, now_local)
- next_local = iter.get_next(datetime)
+ # Use tz-aware time for cron calculation in user's timezone
+ now_local = datetime.now(tz=user_tz)
+ itr = croniter(cron_expr, now_local)
+ next_local = itr.get_next(datetime) # tz-aware in user_tz
+
+ # Convert to UTC for storage
+ next_utc = next_local.astimezone(ZoneInfo("UTC"))
@@
- return next_local
+ return next_utc.replace(tzinfo=None)
except Exception as e:
logger.warning(f"Failed to parse cron expression: {e}")
                return None

Also note: croniter is imported but not declared in project dependencies (pyproject.toml).
🧰 Tools
🪛 Ruff (0.14.10)
910-910: Do not catch blind exception: Exception
(BLE001)
914-914: Local variable user_tz is assigned to but never used
Remove assignment to unused variable user_tz
(F841)
929-929: Consider moving this statement to an else block
(TRY300)
930-930: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
In @backend/app/services/flow.py around lines 876 - 953, Summary:
_calculate_next_execution_time currently ignores user_tz, uses naive local
now_local, and returns a naive local datetime instead of a UTC-aware datetime as
promised; croniter is also missing from dependencies. Fix: inside
_calculate_next_execution_time (function name), build a timezone-aware base time
using ZoneInfo(timezone_str) (replace now_local = datetime.now() with now_local
= datetime.now(user_tz)), pass that tz-aware base to croniter(cron_expr,
now_local), get next_local as a tz-aware datetime, convert it to UTC via
next_utc = next_local.astimezone(ZoneInfo("UTC")) and return next_utc
(UTC-aware) for storage; keep the existing exception handling but ensure user_tz
is actually used; finally add croniter to project dependencies (pyproject.toml)
so imports succeed.
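A minimal stdlib sketch of the local-to-UTC conversion the fix relies on (croniter omitted; `to_naive_utc` is an illustrative helper name, assuming the naive-UTC storage convention described in the review):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def to_naive_utc(local_dt: datetime) -> datetime:
    # Convert a tz-aware datetime to naive UTC, matching the
    # "store UTC without tzinfo" convention the review suggests.
    return local_dt.astimezone(timezone.utc).replace(tzinfo=None)

# 9:00 AM Asia/Shanghai (UTC+8, no DST) is 1:00 AM UTC,
# the example given in the function's docstring.
next_local = datetime(2024, 6, 1, 9, 0, tzinfo=ZoneInfo("Asia/Shanghai"))
print(to_naive_utc(next_local))  # → 2024-06-01 01:00:00
```

With this convention, the scheduler can compare stored times directly against `datetime.utcnow()` regardless of the user's configured timezone.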
<TopNavigation
  activePage="dashboard"
  variant="with-sidebar"
  title={t('common:navigation.flow')}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check how activePage is used in TopNavigation and what values other pages use
rg -n "activePage=" --type=tsx -C2 frontend/src/app
Repository: wecode-ai/Wegent
Length of output: 87
🏁 Script executed:
#!/bin/bash
# Find activePage usage in tsx files
rg -n "activePage=" frontend/src/app -A2 -B2 | head -100
Repository: wecode-ai/Wegent
Length of output: 6452
🏁 Script executed:
#!/bin/bash
# Find TopNavigation component to see how activePage is defined/used
fd -e tsx -e ts "TopNavigation" frontend/src
Repository: wecode-ai/Wegent
Length of output: 106
🏁 Script executed:
#!/bin/bash
# Read the TopNavigation component to see how activePage is used
cat -n frontend/src/features/layout/TopNavigation.tsx
Repository: wecode-ai/Wegent
Length of output: 4306
🏁 Script executed:
#!/bin/bash
# Double-check activePage is not used anywhere in TopNavigation.tsx
rg -n "activePage" frontend/src/features/layout/TopNavigation.tsx
Repository: wecode-ai/Wegent
Length of output: 116
🏁 Script executed:
#!/bin/bash
# Verify activePage is not destructured in the function parameters
sed -n '28,38p' frontend/src/features/layout/TopNavigation.tsx
Repository: wecode-ai/Wegent
Length of output: 297
Remove unused activePage prop—it's defined in the type but never used in the component.
The activePage prop is typed in TopNavigationProps but not destructured in the function signature and has no effect on rendering. Either remove it from the type definition or implement the feature to highlight the active navigation item.
🤖 Prompt for AI Agents
In @frontend/src/app/(tasks)/flow/page.tsx around lines 89 - 92, TopNavigation
is being passed an unused prop activePage that exists on the TopNavigationProps
type but isn't destructured or used in the TopNavigation component; remove
activePage from the TopNavigationProps type and from any callers (e.g., the
TopNavigation usage in flow/page.tsx), or alternatively implement the feature by
accepting and destructuring activePage in the TopNavigation component and using
it to set the active/highlighted nav item (update the TopNavigation function
signature and its render logic to read activePage and apply the active class).
Ensure all usages and the prop type stay consistent (either remove references to
activePage across callers or add handling in TopNavigation).
/**
 * Flow Subscriptions Management Page
 *
 * Page for managing flow subscriptions (我的订阅).
 */
Replace non-English comment text to match repo guidelines.
(我的订阅) should be in English per **/*.{py,ts,tsx} comment rules.
Proposed fix
- * Page for managing flow subscriptions (我的订阅).
+ * Page for managing flow subscriptions (My subscriptions).

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
/**
 * Flow Subscriptions Management Page
 *
 * Page for managing flow subscriptions (My subscriptions).
 */
🤖 Prompt for AI Agents
In @frontend/src/app/(tasks)/flow/subscriptions/page.tsx around lines 26 - 30,
Replace the non-English comment text "(我的订阅)" in the file header comment with
its English equivalent "(My Subscriptions)" to comply with the repo comment
rules for /*.{py,ts,tsx} files; locate the header block containing "Flow
Subscriptions Management Page" and update the parenthetical text accordingly so
the comment reads in English.
{/* Back button header */}
<div className="flex items-center gap-3 px-4 py-3 border-b border-border bg-white">
  <Button variant="ghost" size="icon" className="h-9 w-9" onClick={handleBack}>
    <ArrowLeft className="h-5 w-5" />
  </Button>
  <h1 className="text-lg font-semibold">{t('my_flows')}</h1>
</div>
Add an accessible label for the icon-only Back button.
Right now screen readers will announce it ambiguously.
Proposed fix
- <Button variant="ghost" size="icon" className="h-9 w-9" onClick={handleBack}>
+ <Button
+ variant="ghost"
+ size="icon"
+ className="h-9 w-9"
+ onClick={handleBack}
+ aria-label={t('common:common.back')}
+ >
<ArrowLeft className="h-5 w-5" />
</Button>

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
{/* Back button header */}
<div className="flex items-center gap-3 px-4 py-3 border-b border-border bg-white">
  <Button
    variant="ghost"
    size="icon"
    className="h-9 w-9"
    onClick={handleBack}
    aria-label={t('common:common.back')}
  >
    <ArrowLeft className="h-5 w-5" />
  </Button>
  <h1 className="text-lg font-semibold">{t('my_flows')}</h1>
</div>
🤖 Prompt for AI Agents
In @frontend/src/app/(tasks)/flow/subscriptions/page.tsx around lines 61 - 67,
The icon-only Back button (Button with onClick={handleBack} containing
ArrowLeft) lacks an accessible label, so screen readers can't convey its
purpose; update the Button to include an accessible name by adding either an
aria-label="Back" (or aria-label={t('back')} to use translations) or wrap the
icon with a visually-hidden <span> providing the text, ensuring the accessible
label references the same intent as the h1; keep the onClick handler
(handleBack) unchanged and apply the change where the Button and ArrowLeft are
rendered in page.tsx.
{triggerConfig.event_type === 'git_push' && (
  <div className="mt-3 space-y-3">
    <div>
      <Label>{t('git_repository')}</Label>
      <Input
        value={
          (
            triggerConfig.git_push as
              | { repository?: string; branch?: string }
              | undefined
          )?.repository || ''
        }
        onChange={e =>
          setTriggerConfig({
            ...triggerConfig,
            git_push: {
              ...(triggerConfig.git_push as
                | { repository?: string; branch?: string }
                | undefined),
              repository: e.target.value,
            },
          })
        }
        placeholder="owner/repo"
      />
    </div>
    <div>
      <Label>{t('git_branch')}</Label>
      <Input
        value={
          (
            triggerConfig.git_push as
              | { repository?: string; branch?: string }
              | undefined
          )?.branch || ''
        }
        onChange={e =>
          setTriggerConfig({
            ...triggerConfig,
            git_push: {
              ...(triggerConfig.git_push as
                | { repository?: string; branch?: string }
                | undefined),
              branch: e.target.value,
            },
          })
        }
        placeholder="main"
      />
    </div>
  </div>
)}
🧩 Analysis chain
🏁 Script executed:
# First, let's find and read the FlowForm.tsx file to understand its structure
fd "FlowForm.tsx" --type f
Repository: wecode-ai/Wegent
Length of output: 111
🏁 Script executed:
# Read the FlowForm.tsx file to understand its structure and find the handleSubmit function
wc -l frontend/src/features/flows/components/FlowForm.tsx
Repository: wecode-ai/Wegent
Length of output: 115
🏁 Script executed:
# Read the file to find the handleSubmit function and validation logic
cat -n frontend/src/features/flows/components/FlowForm.tsx | head -100
Repository: wecode-ai/Wegent
Length of output: 3643
🏁 Script executed:
# Continue reading to find the handleSubmit function
cat -n frontend/src/features/flows/components/FlowForm.tsx | sed -n '100,250p'
Repository: wecode-ai/Wegent
Length of output: 5496
🏁 Script executed:
# Continue to see the git_push section and rest of renderTriggerConfig
cat -n frontend/src/features/flows/components/FlowForm.tsx | sed -n '250,400p'
Repository: wecode-ai/Wegent
Length of output: 6572
🏁 Script executed:
# Let's check the flow type definitions to understand if git_push fields are required
fd "flow" -t f -e "ts" -e "tsx" | grep -i "type\|interface" | head -20
Repository: wecode-ai/Wegent
Length of output: 86
🏁 Script executed:
# Read the flow types to understand the data structure
cat frontend/src/types/flow.ts
Repository: wecode-ai/Wegent
Length of output: 3513
🏁 Script executed:
# Double-check if there's any server-side validation or other validation logic
rg "git_push" --type ts --type tsx -A 3 -B 3 | head -40
Repository: wecode-ai/Wegent
Length of output: 87
🏁 Script executed:
# Search for git_push validation or usage across the codebase
rg "git_push" -A 3 -B 3 | head -60
Repository: wecode-ai/Wegent
Length of output: 4630
🏁 Script executed:
# Check if there's any backend validation that might catch this
rg "repository" --type js | grep -i "valid\|required" | head -20
Repository: wecode-ai/Wegent
Length of output: 42
🏁 Script executed:
# Check the flow APIs to see if backend validates this
fd "flow" -path "*/apis/*" -type f
Repository: wecode-ai/Wegent
Length of output: 228
🏁 Script executed:
# Find flow API files
find . -path "*apis*flow*" -type f
Repository: wecode-ai/Wegent
Length of output: 87
🏁 Script executed:
# Read the flow API file to check for validation
cat frontend/src/apis/flow.ts
Repository: wecode-ai/Wegent
Length of output: 2985
Add validation for required Git Push event fields in handleSubmit.
When event_type === 'git_push', the repository field is required by the type contract but can be saved empty. Add validation before submitting:
Validation code
if (triggerType === 'event' && triggerConfig.event_type === 'git_push') {
const gitPush = triggerConfig.git_push as { repository?: string; branch?: string } | undefined
if (!gitPush?.repository?.trim()) {
toast.error(t('validation_git_repository_required'))
return
}
}

🤖 Prompt for AI Agents
In @frontend/src/features/flows/components/FlowForm.tsx around lines 310 - 361,
Add a validation branch inside handleSubmit to enforce that when triggerType ===
'event' and triggerConfig.event_type === 'git_push' the repository field on
triggerConfig.git_push is non-empty; specifically, read triggerConfig.git_push
(cast as { repository?: string; branch?: string } | undefined), if repository is
missing or blank call toast.error(t('validation_git_repository_required')) and
return early to prevent submission. Update handleSubmit to perform this check
before any API call or save logic so empty repositories cannot be persisted.
Summary
This PR introduces the AI Flow (智能流) module, which enables automated task execution with configurable triggers. The module provides:
Backend Features
Frontend Features
Flow page (/flow): Main page with tab navigation between timeline and configuration
Database Changes
flows table: Store flow configurations with CRD-style JSON storage
flow_executions table: Track execution history with status and results
Navigation
Key Implementation Details
Trigger Types:
Prompt Variables: Support for {{date}}, {{time}}, {{datetime}}, {{flow_name}}, {{webhook_data}}
Retry Mechanism: Configurable retry count (0-3) for failed executions
Task Type Classification:
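The prompt variables listed above can be expanded with a simple template substitution; `render_prompt` and the variable values here are a hypothetical sketch, not the module's actual implementation:

```python
from datetime import datetime
import json
import re

def render_prompt(template: str, variables: dict) -> str:
    # Replace each {{name}} placeholder with its value; unknown
    # placeholders are left untouched rather than raising.
    def sub(match: re.Match) -> str:
        return str(variables.get(match.group(1), match.group(0)))
    return re.sub(r"\{\{(\w+)\}\}", sub, template)

now = datetime(2024, 6, 1, 9, 30)
variables = {
    "date": now.strftime("%Y-%m-%d"),
    "time": now.strftime("%H:%M"),
    "datetime": now.strftime("%Y-%m-%d %H:%M"),
    "flow_name": "daily-report",
    "webhook_data": json.dumps({"ref": "refs/heads/main"}),
}

print(render_prompt("Run {{flow_name}} at {{time}} on {{date}}", variables))
# → Run daily-report at 09:30 on 2024-06-01
```

Leaving unknown placeholders intact (rather than erroring) keeps a flow runnable even when a variable such as webhook_data is absent for scheduled triggers.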
Test Plan
Summary by CodeRabbit
Release Notes