feat(pipeline): add Next Stage button to skip stage confirmation #4
base: main
Conversation
Add "Next Stage" button in PipelineStageIndicator component that allows
users to skip manual confirmation and proceed directly to the next Bot
in pipeline mode using historical context.
Changes:
- Frontend: Add Next Stage button in PipelineStageIndicator component
- Frontend: Add skipPipelineStageConfirmation API method
- Frontend: Add i18n translations for next_stage related texts
- Backend: Add POST /tasks/{task_id}/skip-stage-confirmation endpoint
- Backend: Implement skip_pipeline_stage_confirmation service method
- Backend: Add context retrieval logic from last completed stage
- Backend: Add AI summarization for long context (configurable threshold)
- Backend: Add PIPELINE_CONTEXT_MAX_LENGTH and PIPELINE_SUMMARY_MAX_LENGTH config
- Backend: Update executor_kinds to handle from_skip_confirmation flag
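For orientation, here is a rough sketch of what the new endpoint wiring looks like. It is assembled from the change list above and the file/symbol references in the review below; the import paths for `User` and the exact service call are assumptions, not taken from the diff:

```python
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session

from app.api.dependencies import get_db, with_task_telemetry
from app.core import security
from app.models.user import User  # assumed import path for the User model
from app.schemas.task import ConfirmStageResponse
from app.services.adapters import task_kinds  # actual call site may go through a service object

router = APIRouter()


@router.post("/{task_id}/skip-stage-confirmation", response_model=ConfirmStageResponse)
def skip_pipeline_stage_confirmation(
    task_id: int = Depends(with_task_telemetry),
    current_user: User = Depends(security.get_current_user),
    db: Session = Depends(get_db),
):
    """Skip the pending stage confirmation and advance the pipeline using the
    previous stage's output (summarized if it exceeds the configured threshold)."""
    return task_kinds.skip_pipeline_stage_confirmation(
        db=db, task_id=task_id, user_id=current_user.id
    )
```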
…ation: Remove duplicate `_create_next_stage_subtask_with_context` method and add a `from_skip_confirmation` parameter to the existing `_create_next_stage_subtask` method to support both confirmation types.
📝 Walkthrough

This PR introduces a skip pipeline stage confirmation feature that allows users to automatically progress to the next stage without explicit confirmation, reusing context from the previous stage. It includes a backend endpoint, service-layer implementations for stage advancement with optional AI summarization, and frontend UI with a skip button and state management.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant User as User (Frontend)
participant UI as PipelineStageIndicator
participant API as Task API
participant TaskService as TaskKindsService
participant PipelineService as PipelineStageService
participant DB as Database
User->>UI: Clicks "Next" button
UI->>UI: Set isSkipping=true, show spinner
UI->>API: skipPipelineStageConfirmation(taskId)
API->>TaskService: skip_pipeline_stage_confirmation(task_id, user_id)
TaskService->>DB: Fetch TaskResource by id
TaskService->>DB: Verify user ownership & task status
TaskService->>PipelineService: skip_stage_confirmation(task, team)
PipelineService->>DB: Get last completed stage result
PipelineService->>PipelineService: Extract & summarize context (if needed)
alt Next stage exists
PipelineService->>DB: Create subtask for next stage with context
PipelineService->>DB: Mark current stage completed
else End of pipeline
PipelineService->>DB: Mark task completed
end
PipelineService-->>TaskService: Return confirmation response
TaskService-->>API: Return ConfirmStageResponse
API-->>UI: Response received
UI->>API: getPipelineStageInfo(taskId)
API-->>UI: Updated stage info
UI->>UI: isSkipping=false, show success toast
    UI->>User: Display next stage (or completion)
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
frontend/src/apis/tasks.ts (1)
161-214: Fix duplicate exported type declarations (will break TypeScript build)
`ConfirmStageRequest`, `ConfirmStageResponse`, and `PipelineStageInfo` are declared twice (lines 161-214), which should cause duplicate identifier errors. Proposed fix (remove the duplicate block):
```diff
@@
-export interface ConfirmStageRequest {
-  confirmed_prompt: string // The edited/confirmed prompt to pass to next stage
-  action: 'continue' | 'retry' // "continue" to proceed to next stage, "retry" to stay at current stage
-}
-
-export interface ConfirmStageResponse {
-  message: string
-  task_id: number
-  current_stage: number // 0-indexed current pipeline stage
-  total_stages: number // Total number of pipeline stages
-  next_stage_name: string | null // Name of the next stage (bot name)
-}
-
-export interface PipelineStageInfo {
-  current_stage: number // 0-indexed current pipeline stage
-  total_stages: number // Total number of pipeline stages
-  current_stage_name: string // Name of current stage (bot name)
-  is_pending_confirmation: boolean // Whether waiting for user confirmation
-  stages: Array<{
-    index: number
-    name: string
-    require_confirmation: boolean
-    status: 'pending' | 'running' | 'completed' | 'failed' | 'pending_confirmation'
-  }>
-}
+// (Removed duplicate declarations; keep the first set above.)
```
🧹 Nitpick comments (6)
backend/app/core/config.py (1)
272-292: Custom dotenv source wiring is correct, but document the interpolation behavior change

Replacing the default dotenv source means `${VAR}` interpolation (if previously relied on) will no longer happen from `.env` files; worth a short note in docs/README to avoid surprises.
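As a concrete illustration of that behavior change (a standalone python-dotenv example; the `.env` contents are made up):

```python
from dotenv import dotenv_values

# Suppose .env contains:
#   USER_NAME=alice
#   GREETING=hello ${USER_NAME}

expanded = dotenv_values(".env")                # default interpolate=True
print(expanded["GREETING"])                     # "hello alice"

raw = dotenv_values(".env", interpolate=False)  # what the custom source now does
print(raw["GREETING"])                          # "hello ${USER_NAME}" (template preserved)
```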
frontend/src/i18n/locales/zh-CN/chat.json (1)

461-465: i18n additions are consistent; consider “下一阶段” for clarity

“下一步” can be interpreted as a generic step; “下一阶段” may better match pipeline semantics.
backend/app/services/adapters/executor_kinds.py (1)
802-829: Avoid mutating `subtask` DB fields inside the response formatter

`_format_subtasks_response()` updates `subtask.result` and `subtask.updated_at` (lines 814-816, 822-824), but doesn't persist them; this makes the response potentially misleading and couples formatting with state changes. Proposed fix (keep flags local, don't mutate ORM state):
```diff
@@
             confirmed_prompt_from_stage = None
             # Check if this subtask has context from skip confirmation
             context_from_skip = None
@@
             new_session = False
             if subtask.result and isinstance(subtask.result, dict):
                 if subtask.result.get("from_stage_confirmation"):
                     confirmed_prompt_from_stage = subtask.result.get("confirmed_prompt")
@@
                     new_session = True
-                    # Clear the temporary result so it doesn't interfere with execution
-                    subtask.result = None
-                    subtask.updated_at = datetime.now()
                 elif subtask.result.get("from_skip_confirmation"):
                     # Handle skip confirmation - use context from previous stage
                     context_from_skip = subtask.result.get("context")
                     # Mark that this subtask should use a new session
                     new_session = True
-                    # Clear the temporary result so it doesn't interfere with execution
-                    subtask.result = None
-                    subtask.updated_at = datetime.now()
-            if context_from_skip is not None:
+            if context_from_skip is not None:
                 # Use the context from skip confirmation
                 # Format it as previous stage output for the next bot
-                aggregated_prompt = f"Previous stage output:\n{context_from_skip}"
+                aggregated_prompt = f"Previous stage output:\n{context_from_skip if isinstance(context_from_skip, str) else str(context_from_skip)}"
             elif confirmed_prompt_from_stage:
                 # Use the confirmed prompt from stage confirmation instead of building from previous results
                 aggregated_prompt = confirmed_prompt_from_stage
```

backend/app/services/adapters/task_kinds.py (1)
2123-2198: Add a concurrency guard to prevent duplicate subtask creation on concurrent skip requests

`skip_stage_confirmation` and the underlying `_create_next_stage_subtask` lack protection against concurrent calls. Two simultaneous skip requests could both pass the status check, retrieve the same last subtask, calculate identical `next_message_id` values, and both create new subtasks with the same message ID, since the Subtask model has no uniqueness constraint on `(task_id, message_id)` and the code uses no row-level locking.

Unlike `confirm_stage` (which updates existing subtasks and transitions task status), `skip_stage_confirmation` only creates new subtasks and doesn't update task status for normal progression, leaving no lock-based safeguard. Add either a database-level unique constraint on `(task_id, message_id)`, an application-level idempotency check (e.g., query for an existing next-stage subtask before creation), or row-level locking with `with_for_update()` on the task.
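A minimal sketch of the row-locking variant, using the `TaskResource` model referenced elsewhere in this review; the idempotency helper is hypothetical and only marks where the check would go:

```python
from sqlalchemy.orm import Session

from app.models.task import TaskResource  # model path taken from the code-graph section


def skip_stage_confirmation_guarded(db: Session, task_id: int) -> TaskResource:
    # Lock the task row so concurrent skip requests serialize; the second
    # request blocks here until the first commits or rolls back.
    task = (
        db.query(TaskResource)
        .filter(TaskResource.id == task_id)
        .with_for_update()
        .one()
    )

    # Hypothetical idempotency check: bail out if the next-stage subtask
    # was already created by an earlier (concurrent) request.
    # if next_stage_subtask_exists(db, task_id): return task

    # ... existing skip logic: build context, create next-stage subtask ...
    db.commit()
    return task
```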
frontend/src/features/tasks/components/chat/PipelineStageIndicator.tsx (1)

78-107: Consider consolidating toast notifications and handling refresh failures

Two minor suggestions:
- Dual toasts may be redundant: Lines 88 + 91-95 show both a success toast and an info toast in quick succession. Consider consolidating into a single toast with the stage transition info:
```diff
-      toast.success(t('pipeline.next_stage_success'))
-
-      // Trigger stage info refresh
       if (response.next_stage_name) {
-        toast.info(t('pipeline.proceeding_to_stage', { stage: response.next_stage_name }))
+        toast.success(t('pipeline.proceeding_to_stage', { stage: response.next_stage_name }))
       } else {
-        toast.info(t('pipeline.pipeline_completed'))
+        toast.success(t('pipeline.pipeline_completed'))
       }
```
- Silent refresh failure: If `getPipelineStageInfo` fails after a successful skip, the UI state could be stale. Consider handling this case:

```ts
// Refresh stage info
try {
  const info = await taskApis.getPipelineStageInfo(taskId)
  setStageInfo(info)
  onStageInfoChange?.(info)
} catch (refreshError) {
  console.error('Failed to refresh stage info after skip:', refreshError)
  // State may be stale - parent should eventually re-fetch
}
```

backend/app/services/adapters/pipeline_stage.py (1)
780-849: Update type hint to use explicit Optional

Per PEP 484 and static analysis, update the type hint to be explicit:

```diff
 def get_last_stage_result_as_context(
     self,
     db: Session,
     task_id: int,
-    max_length: int = None,
+    max_length: int | None = None,
 ) -> str:
```

The method logic is sound: it properly handles various result formats (dict with different keys, string, other) and triggers summarization when content exceeds the threshold.
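A small self-contained sketch of that normalize-then-summarize idea; the preferred dict keys and the `summarize` callback are illustrative assumptions, not the project's actual logic:

```python
from typing import Any, Callable


def result_to_context(result: Any, max_length: int, summarize: Callable[[str], str]) -> str:
    """Normalize a stage result (dict, str, or other) into context text,
    summarizing it only when it exceeds max_length."""
    if isinstance(result, dict):
        # Prefer common content-bearing keys, fall back to the whole dict.
        for key in ("content", "output", "result", "text"):
            value = result.get(key)
            if value:
                text = str(value)
                break
        else:
            text = str(result)
    elif isinstance(result, str):
        text = result
    else:
        text = str(result)

    return summarize(text) if len(text) > max_length else text
```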
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- backend/app/api/endpoints/adapter/tasks.py
- backend/app/core/config.py
- backend/app/services/adapters/executor_kinds.py
- backend/app/services/adapters/pipeline_stage.py
- backend/app/services/adapters/task_kinds.py
- frontend/src/apis/tasks.ts
- frontend/src/features/tasks/components/chat/PipelineStageIndicator.tsx
- frontend/src/i18n/locales/en/chat.json
- frontend/src/i18n/locales/zh-CN/chat.json
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-12-18T02:09:09.776Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-18T02:09:09.776Z
Learning: Applies to frontend/src/i18n/zh-CN.json : Frontend i18n Chinese values (zh-CN) MUST use UI terms: `'智能体列表'`, `'机器人配置'`
Applied to files:
frontend/src/i18n/locales/zh-CN/chat.json
🧬 Code graph analysis (3)
frontend/src/apis/tasks.ts (2)
backend/app/schemas/task.py (1)
- `ConfirmStageResponse` (185-192)

frontend/src/apis/client.ts (1)

- `apiClient` (146-146)
backend/app/services/adapters/task_kinds.py (4)
backend/app/models/task.py (1)
- `TaskResource` (26-77)

backend/app/services/task_member_service.py (1)

- `is_member` (57-72)

backend/app/schemas/kind.py (2)

- `Task` (462-469)
- `Team` (344-351)

backend/app/services/adapters/pipeline_stage.py (2)

- `get_team_for_task` (610-682)
- `skip_stage_confirmation` (684-778)
backend/app/api/endpoints/adapter/tasks.py (4)
backend/app/schemas/task.py (1)
- `ConfirmStageResponse` (185-192)

backend/app/services/adapters/task_kinds.py (1)

- `skip_pipeline_stage_confirmation` (2123-2197)

backend/app/api/dependencies.py (2)

- `with_task_telemetry` (49-81)
- `get_db` (37-46)

backend/app/core/security.py (1)

- `get_current_user` (43-65)
🪛 Ruff (0.14.10)
backend/app/services/adapters/pipeline_stage.py
784-784: PEP 484 prohibits implicit Optional
Convert to T | None
(RUF013)
851-851: PEP 484 prohibits implicit Optional
Convert to T | None
(RUF013)
913-913: Do not catch blind exception: Exception
(BLE001)
914-914: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
914-914: Use explicit conversion flag
Replace with conversion flag
(RUF010)
backend/app/api/endpoints/adapter/tasks.py
302-302: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
303-303: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
🔇 Additional comments (9)
backend/app/core/config.py (2)
262-266: New pipeline max-length settings implemented correctly

The constants `PIPELINE_CONTEXT_MAX_LENGTH` and `PIPELINE_SUMMARY_MAX_LENGTH` are properly referenced from settings in the pipeline code (pipeline_stage.py lines 798 and 866), with no hardcoded duplicates. Reasonable defaults.
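For reference, a minimal sketch of how such settings are typically declared with pydantic-settings; the default values shown come from the PR summary at the bottom of this page, not from the diff itself:

```python
from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    # Max characters of previous-stage output passed along before AI
    # summarization is triggered (4000 per the PR summary).
    PIPELINE_CONTEXT_MAX_LENGTH: int = 4000
    # Target length for the AI-generated summary of long context
    # (2000 per the PR summary).
    PIPELINE_SUMMARY_MAX_LENGTH: int = 2000
```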
18-40: No action required. The override of `DotEnvSettingsSource._static_read_env_file` is safe for pydantic-settings v2.1.0: the method is documented public API with a stable signature that matches exactly. The use of `dotenv_values(..., interpolate=False)` is also a documented, stable parameter in python-dotenv v1.0.0 that correctly preserves template variables like `${{user.name}}`. The implementation is properly integrated via `settings_customise_sources()` and functions as intended.

Likely an incorrect or invalid review comment.
frontend/src/apis/tasks.ts (1)
434-442: Skip-stage API helper matches the backend endpoint implementation

The backend endpoint at `POST /tasks/{task_id}/skip-stage-confirmation` is properly implemented with the correct response model (`ConfirmStageResponse`), confirming the frontend API call is correct.

frontend/src/i18n/locales/en/chat.json (1)
461-465: LGTM!

The new i18n keys for the skip stage confirmation feature are well-structured and consistent with the existing localization patterns. The translations are clear and appropriate for the UI context.
backend/app/api/endpoints/adapter/tasks.py (2)
299-323: LGTM!

The new endpoint follows the established patterns in this file correctly:

- Proper authentication via `get_current_user`
- Task telemetry tracking via `with_task_telemetry`
- Returns the appropriate `ConfirmStageResponse` schema (matching the confirm-stage endpoint)
- Clear docstring explaining the behavior

The static analysis warnings about `Depends()` in argument defaults (B008) are false positives for FastAPI, which specifically requires this pattern for dependency injection.
326-331: Verify: Telemetry dependency removed from `share_task` endpoint

The `task_id` parameter changed from `task_id: int = Depends(with_task_telemetry)` to a plain `task_id: int`. This removes OpenTelemetry task context tracking for this endpoint. Was this change intentional? If task telemetry should be preserved, restore the dependency:

```diff
 @router.post("/{task_id}/share", response_model=TaskShareResponse)
 def share_task(
-    task_id: int,
+    task_id: int = Depends(with_task_telemetry),
     current_user: User = Depends(security.get_current_user),
     db: Session = Depends(get_db),
 ):
```

frontend/src/features/tasks/components/chat/PipelineStageIndicator.tsx (1)
315-331: LGTM!

The Next Stage button implementation is clean:

- Conditionally rendered only when `is_pending_confirmation` is true
- Properly disabled during the skip operation
- Shows appropriate loading state with spinner
- Consistent styling with the design system
backend/app/services/adapters/pipeline_stage.py (2)
476-477: LGTM!

The refactored `_create_next_stage_subtask` method cleanly handles both confirmation flows via the `from_skip_confirmation` parameter. The conditional `result_data` construction properly differentiates between:

- Skip confirmation: stores `context` with `from_skip_confirmation: True`
- Stage confirmation: stores `confirmed_prompt` with `from_stage_confirmation: True`

This enables the executor to handle each case appropriately downstream.
Also applies to: 567-577, 596-596
684-778: LGTM!

The `skip_stage_confirmation` method is well-structured:

- Properly handles pipeline completion when at the last stage
- Gracefully handles missing context with a warning log
- Reuses `_create_next_stage_subtask` with the new flag (good DRY practice)
- Correctly updates task status and commits the transaction
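In rough pseudocode, the flow those bullets describe looks something like the sketch below; the helper names are placeholders and the real method signatures differ:

```python
def skip_stage_confirmation(self, db, task, team):
    stages = pipeline_stages(team)            # placeholder helper
    current = current_stage_index(task)       # placeholder helper

    if current >= len(stages) - 1:
        # Last stage: mark the whole task completed and stop.
        mark_task_completed(db, task)
        db.commit()
        return completion_response(task)

    context = self.get_last_stage_result_as_context(db, task.id)
    if not context:
        logger.warning("No context available from last completed stage")

    # Reuse the shared subtask factory with the skip flag (the DRY point above).
    self._create_next_stage_subtask(
        db, task, team, context=context, from_skip_confirmation=True
    )
    update_task_status(db, task)              # placeholder helper
    db.commit()
    return confirm_stage_response(task)       # placeholder helper
```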
```python
def _summarize_context_sync(self, content: str, target_length: int = None) -> str:
    """
    Use AI to summarize long context content (synchronous version).

    Calls the configured LLM with a summarization prompt to compress
    the content while preserving key information.

    Args:
        content: The content to summarize
        target_length: Target length for summary (default from settings)

    Returns:
        Summarized content
    """
    if target_length is None:
        target_length = settings.PIPELINE_SUMMARY_MAX_LENGTH

    try:
        # Import here to avoid circular imports
        from app.services.chat.llm_client import get_default_llm_client

        llm_client = get_default_llm_client()
        if not llm_client:
            logger.warning(
                "Pipeline context summarization: no LLM client available, truncating instead"
            )
            # Fallback to simple truncation
            return (
                content[:target_length] + "..."
                if len(content) > target_length
                else content
            )

        # Create summarization prompt
        system_prompt = (
            "You are a helpful assistant that summarizes text. "
            "Preserve key information and main points. "
            "Be concise and clear."
        )
        user_prompt = (
            f"Please summarize the following text to approximately {target_length} characters, "
            f"preserving the key information and main points:\n\n{content}"
        )

        # Call LLM for summarization (synchronous)
        import asyncio

        loop = asyncio.new_event_loop()
        try:
            summary = loop.run_until_complete(
                llm_client.complete(
                    messages=[
                        {"role": "system", "content": system_prompt},
                        {"role": "user", "content": user_prompt},
                    ],
                    max_tokens=target_length * 2,  # Allow some buffer
                )
            )
            return summary if summary else content[:target_length]
        finally:
            loop.close()

    except Exception as e:
        logger.error(f"Pipeline context summarization failed: {str(e)}")
        # Fallback to simple truncation
        return (
            content[:target_length] + "..."
            if len(content) > target_length
            else content
        )
```
🧩 Analysis chain (collapsed): a series of repository search scripts verifying how `_summarize_context_sync` is invoked, the Python version requirement, existing asyncio patterns in knowledge_service.py and jobs.py, the `llm_client` import, and the sync context of the `POST /{task_id}/skip-stage-confirmation` endpoint.
Use a safe asyncio pattern to handle running event loops in sync context.

The `asyncio.new_event_loop()` / `run_until_complete()` combination can fail at runtime if this sync method is ever invoked from a thread that already has a running event loop (for example, directly from async code on FastAPI's event loop): `run_until_complete()` then raises a `RuntimeError` because another loop is already running in that thread.

Instead, follow the pattern already used elsewhere in the codebase (e.g., knowledge_service.py lines 716-726):
- Check for a running loop with `asyncio.get_running_loop()`
- If none exists, safely use `asyncio.run()`
- If one exists, run the coroutine in a separate thread via `ThreadPoolExecutor` (see the sketch below)

Additionally:

- Update the type hint from `int = None` to `int | None` (Python 3.10+ syntax)
- Replace `logger.error(f"...{str(e)}")` with `logger.exception("...")` to include the full stack trace for debugging
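A minimal, self-contained sketch of that pattern (the function name is illustrative; it mirrors the `get_running_loop()` / `ThreadPoolExecutor` approach the review attributes to knowledge_service.py):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


def run_coro_blocking(coro):
    """Run a coroutine from sync code, whether or not a loop is already running."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No running loop in this thread: asyncio.run() is safe.
        return asyncio.run(coro)
    # A loop is already running in this thread: execute the coroutine on a
    # fresh loop in a worker thread and block until it completes.
    with ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, coro).result()
```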
🧰 Tools
🪛 Ruff (0.14.10)
851-851: PEP 484 prohibits implicit Optional
Convert to T | None
(RUF013)
913-913: Do not catch blind exception: Exception
(BLE001)
914-914: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
914-914: Use explicit conversion flag
Replace with conversion flag
(RUF010)
- Remove duplicate type declarations in tasks.ts
- Consolidate toast notifications in PipelineStageIndicator
- Add error handling for stage info refresh failures
- Fix Optional type hints for max_length and target_length parameters
* Initial plan
* fix: update pytest tests to fix failing executor_manager tests

Co-authored-by: graindt <[email protected]>

---------

Co-authored-by: copilot-swe-agent[bot] <[email protected]>
Co-authored-by: graindt <[email protected]>
Summary
Changes
Frontend
- `skipPipelineStageConfirmation` API method
- i18n keys: `next_stage`, `next_stage_loading`, `next_stage_success`, `next_stage_failed`

Backend
- `POST /tasks/{task_id}/skip-stage-confirmation` endpoint
- `skip_pipeline_stage_confirmation` service method with permission checks
- `skip_stage_confirmation` method to handle stage skip logic
- `get_last_stage_result_as_context` to extract context from last completed stage
- `_summarize_context_sync` for AI summarization of long content
- `_create_next_stage_subtask_with_context` to create next stage subtask
- `from_skip_confirmation` flag in prompt building
- `PIPELINE_CONTEXT_MAX_LENGTH` (4000) and `PIPELINE_SUMMARY_MAX_LENGTH` (2000) settings

Test plan
- Next Stage button appears when `is_pending_confirmation` is true

Summary by CodeRabbit
Release Notes