
Conversation

@qdaxb (Contributor) commented Jan 10, 2026

Summary

  • Add "Next Stage" button in PipelineStageIndicator component that allows users to skip manual confirmation
  • Implement backend API endpoint to skip stage confirmation using historical context
  • Add AI summarization support for long context content

Changes

Frontend

  • PipelineStageIndicator.tsx: Add "Next Stage" button that appears when pipeline is pending confirmation
  • tasks.ts: Add skipPipelineStageConfirmation API method
  • i18n: Add translations for next_stage, next_stage_loading, next_stage_success, next_stage_failed

Backend

  • tasks.py: Add POST /tasks/{task_id}/skip-stage-confirmation endpoint
  • task_kinds.py: Add skip_pipeline_stage_confirmation service method with permission checks
  • pipeline_stage.py:
    • Add skip_stage_confirmation method to handle stage skip logic
    • Add get_last_stage_result_as_context to extract context from last completed stage
    • Add _summarize_context_sync for AI summarization of long content
    • Add _create_next_stage_subtask_with_context to create next stage subtask
  • executor_kinds.py: Handle from_skip_confirmation flag in prompt building
  • config.py: Add PIPELINE_CONTEXT_MAX_LENGTH (4000) and PIPELINE_SUMMARY_MAX_LENGTH (2000) settings
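For reviewers, a rough sketch of the new endpoint's shape, based only on the paths, schemas, and dependencies named in this PR; the service import style and exact call signature are assumptions, not the actual implementation in backend/app/api/endpoints/adapter/tasks.py:

# Hedged sketch only; see the real wiring in backend/app/api/endpoints/adapter/tasks.py.
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session

from app.api.dependencies import get_db, with_task_telemetry
from app.core import security
from app.schemas.task import ConfirmStageResponse
from app.services.adapters import task_kinds  # assumed import style

router = APIRouter()


@router.post("/{task_id}/skip-stage-confirmation", response_model=ConfirmStageResponse)
def skip_stage_confirmation(
    task_id: int = Depends(with_task_telemetry),
    current_user=Depends(security.get_current_user),
    db: Session = Depends(get_db),
) -> ConfirmStageResponse:
    # Delegates validation (ownership, pipeline mode, pending status) to the service layer.
    return task_kinds.skip_pipeline_stage_confirmation(
        db=db, task_id=task_id, user_id=current_user.id
    )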

Test plan

  • Verify "Next Stage" button appears only when is_pending_confirmation is true
  • Verify clicking "Next Stage" shows loading state
  • Verify successful skip creates next stage subtask with correct context
  • Verify long context is summarized using AI
  • Verify task status changes from PENDING_CONFIRMATION to PENDING after skip
  • Verify i18n translations work for both English and Chinese
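A hedged pytest sketch for the endpoint-level checks above; the client and fixture names (auth_headers, pending_confirmation_task) are hypothetical and would need to match the project's test setup:

from fastapi.testclient import TestClient


def test_skip_stage_confirmation_advances_pipeline(
    client: TestClient, auth_headers, pending_confirmation_task
):
    task_id = pending_confirmation_task.id
    resp = client.post(
        f"/tasks/{task_id}/skip-stage-confirmation", headers=auth_headers
    )
    assert resp.status_code == 200

    # Response fields follow ConfirmStageResponse: message, task_id,
    # current_stage, total_stages, next_stage_name.
    body = resp.json()
    assert body["task_id"] == task_id
    assert 0 <= body["current_stage"] <= body["total_stages"]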

Summary by CodeRabbit

Release Notes

  • New Features
    • Skip pipeline stage confirmation: Users can now advance to the next stage without explicit confirmation, with automatic context preservation from the previous stage.
    • Smart context handling: When proceeding to the next stage, the system intelligently carries over relevant information and summarizes lengthy context when needed.
    • Enhanced UI: Added a "Next Stage" button to the pipeline indicator for seamless stage progression with real-time feedback on success or failure.


qdaxb added 2 commits January 11, 2026 01:35
Add "Next Stage" button in PipelineStageIndicator component that allows
users to skip manual confirmation and proceed directly to the next Bot
in pipeline mode using historical context.

Changes:
- Frontend: Add Next Stage button in PipelineStageIndicator component
- Frontend: Add skipPipelineStageConfirmation API method
- Frontend: Add i18n translations for next_stage related texts
- Backend: Add POST /tasks/{task_id}/skip-stage-confirmation endpoint
- Backend: Implement skip_pipeline_stage_confirmation service method
- Backend: Add context retrieval logic from last completed stage
- Backend: Add AI summarization for long context (configurable threshold)
- Backend: Add PIPELINE_CONTEXT_MAX_LENGTH and PIPELINE_SUMMARY_MAX_LENGTH config
- Backend: Update executor_kinds to handle from_skip_confirmation flag
…ation

Remove duplicate _create_next_stage_subtask_with_context method and
add from_skip_confirmation parameter to existing _create_next_stage_subtask
method to support both confirmation types.
coderabbitai bot commented Jan 10, 2026

Warning

Rate limit exceeded

@qdaxb has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 5 minutes and 7 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between dbf440c and a641b8f.

📒 Files selected for processing (3)
  • backend/app/services/adapters/pipeline_stage.py
  • frontend/src/apis/tasks.ts
  • frontend/src/features/tasks/components/chat/PipelineStageIndicator.tsx
📝 Walkthrough

This PR introduces a skip-pipeline-stage-confirmation feature that lets users progress to the next stage without explicit confirmation, reusing context from the previous stage. It includes a backend endpoint, service-layer logic for stage advancement with optional AI summarization, and a frontend UI with a skip button and state management.

Changes

  • Configuration (backend/app/core/config.py): Added two new settings for pipeline context and summary length limits (4000 and 2000 characters respectively).
  • API Endpoints (backend/app/api/endpoints/adapter/tasks.py): Added new POST endpoint skip_pipeline_stage_confirmation for skipping stage confirmation; removed the with_task_telemetry dependency from the share_task endpoint signature.
  • Backend Services — Core Logic (backend/app/services/adapters/pipeline_stage.py): Added public methods skip_stage_confirmation and get_last_stage_result_as_context for skipping confirmation and extracting the previous stage's output; added _summarize_context_sync for AI-based context summarization; extended _create_next_stage_subtask with a from_skip_confirmation flag to handle skip-vs-confirm context flows.
  • Backend Services — Task Orchestration (backend/app/services/adapters/task_kinds.py): Added public method skip_pipeline_stage_confirmation to orchestrate the skip by validating task state, checking pipeline model usage, and delegating to the pipeline stage service.
  • Backend Services — Executor (backend/app/services/adapters/executor_kinds.py): Enhanced subtask result handling to recognize and track skip-confirmation context; modified response formatting to prepend "Previous stage output" when skip context is present.
  • Frontend API (frontend/src/apis/tasks.ts): Added new API helper skipPipelineStageConfirmation to POST skip-stage-confirmation requests and return the stage confirmation response.
  • Frontend UI (frontend/src/features/tasks/components/chat/PipelineStageIndicator.tsx): Added a skip-stage UI button (ArrowRight) with loading spinner; introduced isSkipping state and a handleNextStage async handler; integrated toast notifications for success/failure; refactored stage rendering with dynamic icon and color support.
  • Frontend Localization (frontend/src/i18n/locales/en/chat.json, frontend/src/i18n/locales/zh-CN/chat.json): Added four new localization keys (pipeline.next_stage, pipeline.next_stage_loading, pipeline.next_stage_success, pipeline.next_stage_failed) in both English and Simplified Chinese.

Sequence Diagram(s)

sequenceDiagram
    participant User as User (Frontend)
    participant UI as PipelineStageIndicator
    participant API as Task API
    participant TaskService as TaskKindsService
    participant PipelineService as PipelineStageService
    participant DB as Database
    
    User->>UI: Clicks "Next" button
    UI->>UI: Set isSkipping=true, show spinner
    UI->>API: skipPipelineStageConfirmation(taskId)
    API->>TaskService: skip_pipeline_stage_confirmation(task_id, user_id)
    TaskService->>DB: Fetch TaskResource by id
    TaskService->>DB: Verify user ownership & task status
    TaskService->>PipelineService: skip_stage_confirmation(task, team)
    PipelineService->>DB: Get last completed stage result
    PipelineService->>PipelineService: Extract & summarize context (if needed)
    alt Next stage exists
        PipelineService->>DB: Create subtask for next stage with context
        PipelineService->>DB: Mark current stage completed
    else End of pipeline
        PipelineService->>DB: Mark task completed
    end
    PipelineService-->>TaskService: Return confirmation response
    TaskService-->>API: Return ConfirmStageResponse
    API-->>UI: Response received
    UI->>API: getPipelineStageInfo(taskId)
    API-->>UI: Updated stage info
    UI->>UI: isSkipping=false, show success toast
    UI->>User: Display next stage (or completion)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Possibly related PRs

  • PR #878: Directly modifies the same pipeline stage confirmation code paths (pipeline_stage service, task_kinds, executor_kinds, and task endpoints with frontend pipeline UI).
  • PR #937: Modifies overlapping pipeline stage confirmation flow in backend services (pipeline_stage.py, task_kinds.py) and frontend PipelineStageIndicator component.

Suggested reviewers

  • Micro66
  • feifei325

Poem

🐰 A hop, skip, and stage we go,
No waiting now, just let it flow!
With context passed from stage to stage,
We leap ahead with wisdom's gauge. 🚀

🚥 Pre-merge checks (3 passed)

  • Description Check: ✅ Passed. Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title Check: ✅ Passed. The title clearly and directly describes the main feature addition: a 'Next Stage' button that allows users to skip stage confirmation in the pipeline.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.


coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
frontend/src/apis/tasks.ts (1)

161-214: Fix duplicate exported type declarations (will break TypeScript build)

ConfirmStageRequest, ConfirmStageResponse, and PipelineStageInfo are declared twice (lines 161-214), which will cause duplicate identifier errors.

Proposed fix (remove the duplicate block)
@@
-export interface ConfirmStageRequest {
-  confirmed_prompt: string // The edited/confirmed prompt to pass to next stage
-  action: 'continue' | 'retry' // "continue" to proceed to next stage, "retry" to stay at current stage
-}
-
-export interface ConfirmStageResponse {
-  message: string
-  task_id: number
-  current_stage: number // 0-indexed current pipeline stage
-  total_stages: number // Total number of pipeline stages
-  next_stage_name: string | null // Name of the next stage (bot name)
-}
-
-export interface PipelineStageInfo {
-  current_stage: number // 0-indexed current pipeline stage
-  total_stages: number // Total number of pipeline stages
-  current_stage_name: string // Name of current stage (bot name)
-  is_pending_confirmation: boolean // Whether waiting for user confirmation
-  stages: Array<{
-    index: number
-    name: string
-    require_confirmation: boolean
-    status: 'pending' | 'running' | 'completed' | 'failed' | 'pending_confirmation'
-  }>
-}
+// (Removed duplicate declarations; keep the first set above.)
🧹 Nitpick comments (6)
backend/app/core/config.py (1)

272-292: Custom dotenv source wiring is correct, but document interpolation behavior change

Replacing the default dotenv source means ${VAR} interpolation (if previously relied on) will no longer happen from .env files; worth a short note in docs/README to avoid surprises.

frontend/src/i18n/locales/zh-CN/chat.json (1)

461-465: i18n additions are consistent; consider “下一阶段” for clarity

“下一步” (“next step”) can be read as a generic step; “下一阶段” (“next stage”) may better match the pipeline semantics.

backend/app/services/adapters/executor_kinds.py (1)

802-829: Avoid mutating subtask DB fields inside the response formatter

_format_subtasks_response() updates subtask.result and subtask.updated_at (Line 814-816, 822-824), but doesn’t persist them; this makes the response potentially misleading and couples formatting with state changes.

Proposed fix (keep flags local, don’t mutate ORM state)
@@
             confirmed_prompt_from_stage = None
             # Check if this subtask has context from skip confirmation
             context_from_skip = None
@@
             new_session = False
             if subtask.result and isinstance(subtask.result, dict):
                 if subtask.result.get("from_stage_confirmation"):
                     confirmed_prompt_from_stage = subtask.result.get("confirmed_prompt")
@@
                     new_session = True
-                    # Clear the temporary result so it doesn't interfere with execution
-                    subtask.result = None
-                    subtask.updated_at = datetime.now()
                 elif subtask.result.get("from_skip_confirmation"):
                     # Handle skip confirmation - use context from previous stage
                     context_from_skip = subtask.result.get("context")
                     # Mark that this subtask should use a new session
                     new_session = True
-                    # Clear the temporary result so it doesn't interfere with execution
-                    subtask.result = None
-                    subtask.updated_at = datetime.now()
 
-            if context_from_skip is not None:
+            if context_from_skip is not None:
                 # Use the context from skip confirmation
                 # Format it as previous stage output for the next bot
-                aggregated_prompt = f"Previous stage output:\n{context_from_skip}"
+                aggregated_prompt = f"Previous stage output:\n{context_from_skip if isinstance(context_from_skip, str) else str(context_from_skip)}"
             elif confirmed_prompt_from_stage:
                 # Use the confirmed prompt from stage confirmation instead of building from previous results
                 aggregated_prompt = confirmed_prompt_from_stage
backend/app/services/adapters/task_kinds.py (1)

2123-2198: Add concurrency guard to prevent duplicate subtask creation on concurrent skip requests

The skip_stage_confirmation and underlying _create_next_stage_subtask lack protection against concurrent calls. Two simultaneous skip requests could both pass the status check, retrieve the same last subtask, calculate identical next_message_id values, and both create new subtasks with the same message ID—since the Subtask model has no uniqueness constraint on (task_id, message_id) and the code uses no row-level locking.

Unlike confirm_stage (which updates existing subtasks and transitions task status), skip_stage_confirmation only creates new subtasks and doesn't update task status for normal progression, leaving no lock-based safeguard. Add either a database-level unique constraint on (task_id, message_id), an application-level idempotency check (e.g., query for existing next-stage subtask before creation), or row-level locking with with_for_update() on the task.
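A sketch of the row-locking plus idempotency option, under the assumption that the ORM models expose the fields named in this comment (TaskResource, Subtask, task_id, message_id); the Subtask import path is a guess and the helper is illustrative, not the service's actual code:

from sqlalchemy.orm import Session

from app.models.task import TaskResource  # location per this PR's code graph
from app.models.subtask import Subtask  # assumed path; adjust to the real module


def lock_task_and_next_message_id(db: Session, task_id: int) -> int | None:
    # Row-level lock: concurrent skip requests on the same task serialize here.
    task = (
        db.query(TaskResource)
        .filter(TaskResource.id == task_id)
        .with_for_update()
        .one()
    )

    last = (
        db.query(Subtask)
        .filter(Subtask.task_id == task.id)
        .order_by(Subtask.message_id.desc())
        .first()
    )
    next_message_id = (last.message_id + 1) if last else 0

    # Idempotency guard: if another request already created the next-stage
    # subtask with this message_id, bail out instead of duplicating it.
    already_exists = (
        db.query(Subtask)
        .filter(Subtask.task_id == task.id, Subtask.message_id == next_message_id)
        .first()
    )
    return None if already_exists else next_message_id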

frontend/src/features/tasks/components/chat/PipelineStageIndicator.tsx (1)

78-107: Consider consolidating toast notifications and handling refresh failures.

Two minor suggestions:

  1. Dual toasts may be redundant: Lines 88 + 91-95 show both a success toast and an info toast in quick succession. Consider consolidating into a single toast with the stage transition info:
-      toast.success(t('pipeline.next_stage_success'))
-
-      // Trigger stage info refresh
       if (response.next_stage_name) {
-        toast.info(t('pipeline.proceeding_to_stage', { stage: response.next_stage_name }))
+        toast.success(t('pipeline.proceeding_to_stage', { stage: response.next_stage_name }))
       } else {
-        toast.info(t('pipeline.pipeline_completed'))
+        toast.success(t('pipeline.pipeline_completed'))
       }
  2. Silent refresh failure: If getPipelineStageInfo fails after a successful skip, the UI state could be stale. Consider handling this case:
      // Refresh stage info
      try {
        const info = await taskApis.getPipelineStageInfo(taskId)
        setStageInfo(info)
        onStageInfoChange?.(info)
      } catch (refreshError) {
        console.error('Failed to refresh stage info after skip:', refreshError)
        // State may be stale - parent should eventually re-fetch
      }
backend/app/services/adapters/pipeline_stage.py (1)

780-849: Update type hint to use explicit Optional.

Per PEP 484 and static analysis, update the type hint to be explicit:

     def get_last_stage_result_as_context(
         self,
         db: Session,
         task_id: int,
-        max_length: int = None,
+        max_length: int | None = None,
     ) -> str:

The method logic is sound - it properly handles various result formats (dict with different keys, string, other) and triggers summarization when content exceeds the threshold.
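For context, a small usage sketch of the caller side under this signature; the pipeline_stage_service instance name and keyword style are assumptions:

# max_length falls back to settings.PIPELINE_CONTEXT_MAX_LENGTH when left as None.
context = pipeline_stage_service.get_last_stage_result_as_context(
    db=db,
    task_id=task.id,
    max_length=None,
)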

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 734a4e1 and dbf440c.

📒 Files selected for processing (9)
  • backend/app/api/endpoints/adapter/tasks.py
  • backend/app/core/config.py
  • backend/app/services/adapters/executor_kinds.py
  • backend/app/services/adapters/pipeline_stage.py
  • backend/app/services/adapters/task_kinds.py
  • frontend/src/apis/tasks.ts
  • frontend/src/features/tasks/components/chat/PipelineStageIndicator.tsx
  • frontend/src/i18n/locales/en/chat.json
  • frontend/src/i18n/locales/zh-CN/chat.json
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-12-18T02:09:09.776Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-18T02:09:09.776Z
Learning: Applies to frontend/src/i18n/zh-CN.json : Frontend i18n Chinese values (zh-CN) MUST use UI terms: `'智能体列表'` ("Agent List"), `'机器人配置'` ("Bot Configuration")

Applied to files:

  • frontend/src/i18n/locales/zh-CN/chat.json
🧬 Code graph analysis (3)
frontend/src/apis/tasks.ts (2)
backend/app/schemas/task.py (1)
  • ConfirmStageResponse (185-192)
frontend/src/apis/client.ts (1)
  • apiClient (146-146)
backend/app/services/adapters/task_kinds.py (4)
backend/app/models/task.py (1)
  • TaskResource (26-77)
backend/app/services/task_member_service.py (1)
  • is_member (57-72)
backend/app/schemas/kind.py (2)
  • Task (462-469)
  • Team (344-351)
backend/app/services/adapters/pipeline_stage.py (2)
  • get_team_for_task (610-682)
  • skip_stage_confirmation (684-778)
backend/app/api/endpoints/adapter/tasks.py (4)
backend/app/schemas/task.py (1)
  • ConfirmStageResponse (185-192)
backend/app/services/adapters/task_kinds.py (1)
  • skip_pipeline_stage_confirmation (2123-2197)
backend/app/api/dependencies.py (2)
  • with_task_telemetry (49-81)
  • get_db (37-46)
backend/app/core/security.py (1)
  • get_current_user (43-65)
🪛 Ruff (0.14.10)
backend/app/services/adapters/pipeline_stage.py

784-784: PEP 484 prohibits implicit Optional

Convert to T | None

(RUF013)


851-851: PEP 484 prohibits implicit Optional

Convert to T | None

(RUF013)


913-913: Do not catch blind exception: Exception

(BLE001)


914-914: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


914-914: Use explicit conversion flag

Replace with conversion flag

(RUF010)

backend/app/api/endpoints/adapter/tasks.py

302-302: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


303-303: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)

🔇 Additional comments (9)
backend/app/core/config.py (2)

262-266: New pipeline max-length settings implemented correctly

The constants PIPELINE_CONTEXT_MAX_LENGTH and PIPELINE_SUMMARY_MAX_LENGTH are properly referenced from settings in the pipeline code (pipeline_stage.py lines 798 and 866), with no hardcoded duplicates. Reasonable defaults.
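A minimal sketch of the two settings as pydantic-settings fields; the Settings class name and surrounding structure are assumptions, only the names and defaults come from this PR:

from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    # Max characters of previous-stage output passed verbatim to the next stage.
    PIPELINE_CONTEXT_MAX_LENGTH: int = 4000
    # Target length for the AI summary when the context exceeds the limit above.
    PIPELINE_SUMMARY_MAX_LENGTH: int = 2000


settings = Settings()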


18-40: No action required. The override of DotEnvSettingsSource._static_read_env_file is safe for pydantic-settings v2.1.0: the method is documented public API with a stable signature that matches exactly. The use of dotenv_values(..., interpolate=False) is also a documented, stable parameter in python-dotenv v1.0.0 that correctly preserves template variables like ${{user.name}}. The implementation is properly integrated via settings_customise_sources() and functions as intended.

Likely an incorrect or invalid review comment.

frontend/src/apis/tasks.ts (1)

434-442: Skip-stage API helper matches the backend endpoint implementation

The backend endpoint at POST /tasks/{task_id}/skip-stage-confirmation is properly implemented with the correct response model (ConfirmStageResponse), confirming the frontend API call is correct.

frontend/src/i18n/locales/en/chat.json (1)

461-465: LGTM!

The new i18n keys for the skip stage confirmation feature are well-structured and consistent with the existing localization patterns. The translations are clear and appropriate for the UI context.

backend/app/api/endpoints/adapter/tasks.py (2)

299-323: LGTM!

The new endpoint follows the established patterns in this file correctly:

  • Proper authentication via get_current_user
  • Task telemetry tracking via with_task_telemetry
  • Returns the appropriate ConfirmStageResponse schema (matching the confirm-stage endpoint)
  • Clear docstring explaining the behavior

The static analysis warnings about Depends() in argument defaults (B008) are false positives for FastAPI, which specifically requires this pattern for dependency injection.


326-331: Verify: Telemetry dependency removed from share_task endpoint.

The task_id parameter changed from task_id: int = Depends(with_task_telemetry) to a plain task_id: int. This removes OpenTelemetry task context tracking for this endpoint.

Was this change intentional? If task telemetry should be preserved, restore the dependency:

 @router.post("/{task_id}/share", response_model=TaskShareResponse)
 def share_task(
-    task_id: int,
+    task_id: int = Depends(with_task_telemetry),
     current_user: User = Depends(security.get_current_user),
     db: Session = Depends(get_db),
 ):
frontend/src/features/tasks/components/chat/PipelineStageIndicator.tsx (1)

315-331: LGTM!

The Next Stage button implementation is clean:

  • Conditionally rendered only when is_pending_confirmation is true
  • Properly disabled during the skip operation
  • Shows appropriate loading state with spinner
  • Consistent styling with the design system
backend/app/services/adapters/pipeline_stage.py (2)

476-477: LGTM!

The refactored _create_next_stage_subtask method cleanly handles both confirmation flows via the from_skip_confirmation parameter. The conditional result_data construction properly differentiates between:

  • Skip confirmation: stores context with from_skip_confirmation: True
  • Stage confirmation: stores confirmed_prompt with from_stage_confirmation: True

This enables the executor to handle each case appropriately downstream.

Also applies to: 567-577, 596-596
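A hedged illustration of the two result_data shapes this describes; the field names appear in the executor diff earlier in this review, but the exact dict construction inside _create_next_stage_subtask may differ:

# Skip flow: carry the previous stage's (possibly summarized) output forward.
result_data = {"from_skip_confirmation": True, "context": context}

# Confirm flow: carry the user-edited prompt instead.
result_data = {"from_stage_confirmation": True, "confirmed_prompt": confirmed_prompt}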


684-778: LGTM!

The skip_stage_confirmation method is well-structured:

  • Properly handles pipeline completion when at the last stage
  • Gracefully handles missing context with a warning log
  • Reuses _create_next_stage_subtask with the new flag (good DRY practice)
  • Correctly updates task status and commits the transaction

Comment on lines 851 to 920
def _summarize_context_sync(self, content: str, target_length: int = None) -> str:
    """
    Use AI to summarize long context content (synchronous version).

    Calls the configured LLM with a summarization prompt to compress
    the content while preserving key information.

    Args:
        content: The content to summarize
        target_length: Target length for summary (default from settings)

    Returns:
        Summarized content
    """
    if target_length is None:
        target_length = settings.PIPELINE_SUMMARY_MAX_LENGTH

    try:
        # Import here to avoid circular imports
        from app.services.chat.llm_client import get_default_llm_client

        llm_client = get_default_llm_client()
        if not llm_client:
            logger.warning(
                "Pipeline context summarization: no LLM client available, truncating instead"
            )
            # Fallback to simple truncation
            return (
                content[:target_length] + "..."
                if len(content) > target_length
                else content
            )

        # Create summarization prompt
        system_prompt = (
            "You are a helpful assistant that summarizes text. "
            "Preserve key information and main points. "
            "Be concise and clear."
        )
        user_prompt = (
            f"Please summarize the following text to approximately {target_length} characters, "
            f"preserving the key information and main points:\n\n{content}"
        )

        # Call LLM for summarization (synchronous)
        import asyncio

        loop = asyncio.new_event_loop()
        try:
            summary = loop.run_until_complete(
                llm_client.complete(
                    messages=[
                        {"role": "system", "content": system_prompt},
                        {"role": "user", "content": user_prompt},
                    ],
                    max_tokens=target_length * 2,  # Allow some buffer
                )
            )
            return summary if summary else content[:target_length]
        finally:
            loop.close()

    except Exception as e:
        logger.error(f"Pipeline context summarization failed: {str(e)}")
        # Fallback to simple truncation
        return (
            content[:target_length] + "..."
            if len(content) > target_length
            else content
        )

⚠️ Potential issue | 🟠 Major


Use a safe asyncio pattern to handle running event loops in sync context.

The manual asyncio.new_event_loop() / run_until_complete() pattern can fail at runtime if this sync method is ever invoked from a thread that already has a running event loop (for example, when the call chain behind POST /{task_id}/skip-stage-confirmation executes inside FastAPI's async event loop): in that case run_until_complete() raises a RuntimeError because another loop is already running in the thread.

Instead, follow the pattern already used elsewhere in the codebase (e.g., knowledge_service.py lines 716-726):

  • Check for a running loop with asyncio.get_running_loop()
  • If none exists, safely use asyncio.run()
  • If one exists, run the coroutine in a separate thread via ThreadPoolExecutor

Additionally:

  • Update the type hint from int = None to int | None (Python 3.10+ syntax)
  • Replace logger.error(f"...{str(e)}") with logger.exception("...") to include the full stack trace for debugging
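A minimal sketch of that pattern; the helper name run_coro_sync is illustrative, and llm_client.complete is the awaitable used in the method above:

import asyncio
from concurrent.futures import ThreadPoolExecutor


def run_coro_sync(coro):
    """Run a coroutine from synchronous code, whether or not a loop is already running."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No running loop in this thread: asyncio.run() is safe.
        return asyncio.run(coro)
    # A loop is already running here: execute the coroutine on a worker thread
    # that gets its own event loop via asyncio.run().
    with ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, coro).result()


# Inside _summarize_context_sync (hypothetical usage):
# summary = run_coro_sync(
#     llm_client.complete(messages=messages, max_tokens=target_length * 2)
# )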
qdaxb added 2 commits January 11, 2026 01:56
- Remove duplicate type declarations in tasks.ts
- Consolidate toast notifications in PipelineStageIndicator
- Add error handling for stage info refresh failures
- Fix Optional type hints for max_length and target_length parameters
graindt added a commit to graindt/Wegent that referenced this pull request Jan 16, 2026
* Initial plan

* fix: update pytest tests to fix failing executor_manager tests

Co-authored-by: graindt <[email protected]>

---------

Co-authored-by: copilot-swe-agent[bot] <[email protected]>
Co-authored-by: graindt <[email protected]>