
Conversation


@feifei325 feifei325 commented Jan 4, 2026

Summary

  • Fix infinite streaming spinner bug when new members join a group chat during an active AI stream
  • Add logic in syncBackendMessages to update streaming messages to their final status when backend shows COMPLETED/FAILED/CANCELLED
  • The fix handles the race condition where a user joins mid-stream and the chat:done event arrives before syncBackendMessages runs

Problem

When a new member joins a group chat during an active AI stream:

  1. syncBackendMessages creates a 'streaming' placeholder for RUNNING messages
  2. If the stream completes before the user's WebSocket receives chat:done, the message stays in 'streaming' status indefinitely
  3. This causes an infinite loading spinner in the UI
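The broken pre-fix flow can be sketched as follows; all names and shapes here are illustrative, not the actual code in chatStreamContext.tsx:

```typescript
// Illustrative sketch of the pre-fix sync path. Names are hypothetical.
type SubtaskStatus = 'PENDING' | 'RUNNING' | 'COMPLETED' | 'FAILED' | 'CANCELLED'

interface Message {
  id: string
  status: 'streaming' | 'completed' | 'error'
  content: string
}

function syncBackendMessagesBeforeFix(
  messages: Map<string, Message>,
  subtask: { id: string; status: SubtaskStatus }
): void {
  if (subtask.status === 'RUNNING' && !messages.has(subtask.id)) {
    // A placeholder shown with a spinner, waiting for a chat:done event
    messages.set(subtask.id, { id: subtask.id, status: 'streaming', content: '' })
  }
  // Bug: nothing here finalizes an existing 'streaming' message when the
  // backend already shows COMPLETED/FAILED/CANCELLED, so a member who
  // joined mid-stream and missed chat:done keeps the spinner forever.
}
```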

Solution

Added a check in syncBackendMessages that detects when:

  • Frontend has a message with status: 'streaming'
  • Backend subtask status is NOT RUNNING or PENDING (i.e., it's COMPLETED, FAILED, or CANCELLED)

When this condition is met, the message is updated with:

  • Final status (completed or error)
  • Final content from backend
  • Updated subtaskStatus and result
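The check described above can be sketched as a pure helper; the types and names below are assumptions for illustration, not the actual implementation in chatStreamContext.tsx:

```typescript
// Minimal sketch of the finalization check (hypothetical names and types)
type SubtaskStatus = 'PENDING' | 'RUNNING' | 'COMPLETED' | 'FAILED' | 'CANCELLED'

interface ChatMessage {
  id: string
  status: 'streaming' | 'completed' | 'error'
  content: string
  subtaskStatus: SubtaskStatus
  result?: string
}

interface BackendSubtask {
  status: SubtaskStatus
  result?: string
  error?: string
}

function finalizeStreamingMessage(msg: ChatMessage, subtask: BackendSubtask): ChatMessage {
  const terminal = subtask.status !== 'RUNNING' && subtask.status !== 'PENDING'
  if (msg.status !== 'streaming' || !terminal) return msg
  return {
    ...msg,
    // COMPLETED maps to 'completed'; FAILED/CANCELLED map to 'error'
    status: subtask.status === 'COMPLETED' ? 'completed' : 'error',
    // Prefer the backend's final content, keep streamed content as fallback
    content: subtask.result ?? msg.content,
    subtaskStatus: subtask.status,
    result: subtask.result,
  }
}
```

Returning a new object rather than mutating keeps the sketch compatible with React state updates, where a fresh reference is needed to trigger a re-render.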

Test Plan

  • Verify normal group chat streaming still works correctly
  • Test joining a group chat mid-stream and verify spinner disappears when AI completes
  • Test joining a group chat after AI has already completed
  • Test the scenario where chat:done arrives before syncBackendMessages
  • Run existing frontend tests
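The last ordering scenario can be exercised with a small self-contained sketch; everything here is illustrative, assuming the sync path only touches messages still in streaming status:

```typescript
// Sketch of the ordering in the last test bullet: chat:done fires before
// syncBackendMessages runs. All names are hypothetical.
interface Msg {
  status: 'streaming' | 'completed'
  content: string
}

let message: Msg = { status: 'streaming', content: '' }

// chat:done handler finalizes the message first
function onChatDone(finalContent: string): void {
  message = { status: 'completed', content: finalContent }
}

// syncBackendMessages runs later; it only rewrites messages that are still
// streaming, so an already-finalized message is left untouched
function syncBackendMessages(backendStatus: string, result: string): void {
  if (message.status === 'streaming' && backendStatus !== 'RUNNING' && backendStatus !== 'PENDING') {
    message = { status: 'completed', content: result }
  }
}
```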

Summary by CodeRabbit

  • Bug Fixes
    • Fixed an issue where AI messages displayed an infinite loading spinner when new members joined group chats mid-conversation while the backend was still processing. The UI now properly reflects the final message status once backend processing completes.



coderabbitai bot commented Jan 4, 2026

Warning

Rate limit exceeded

@feifei325 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 6 minutes and 48 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 42315b8 and ac5a65c.

📒 Files selected for processing (1)
  • frontend/src/features/tasks/contexts/chatStreamContext.tsx
📝 Walkthrough

Walkthrough

A new branch is added to the backend-subtask synchronization flow that finalizes AI streaming messages once the backend subtask reaches a terminal state. The code updates the existing AI message with final content, computes the completion status, and preserves any errors from the subtask.

Changes

Cohort: AI Message Streaming Completion Handler
File(s): frontend/src/features/tasks/contexts/chatStreamContext.tsx
Summary: Adds a new branch within the backend-subtask sync path to handle AI messages in streaming status when the corresponding subtask is no longer running or pending. Updates the message with final content, computes the final status (completed or error), updates the subtask status, propagates the backend result, and preserves subtask errors.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


Suggested reviewers

  • qdaxb
  • Micro66

Poem

🐰 Hop into the stream so bright,
No endless spinners in the night!
When tasks complete and come back home,
Messages bloom—no need to roam.
A rabbit's fix that makes flows flow! ✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title accurately summarizes the main fix: resolving an infinite streaming spinner issue for group chat members joining mid-stream. It is concise, specific, and directly reflects the primary change in the changeset.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4d4f5d0 and 42315b8.

📒 Files selected for processing (1)
  • frontend/src/features/tasks/contexts/chatStreamContext.tsx
🧰 Additional context used
📓 Path-based instructions (5)
**/*.{py,ts,tsx,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{py,ts,tsx,js,jsx}: All code comments MUST be written in English
File size MUST NOT exceed 1000 lines - split into multiple sub-modules if exceeded
Function length SHOULD NOT exceed 50 lines (preferred)

Files:

  • frontend/src/features/tasks/contexts/chatStreamContext.tsx
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

TypeScript/React MUST use strict mode, functional components, Prettier for formatting, ESLint for linting, single quotes, and no semicolons

Files:

  • frontend/src/features/tasks/contexts/chatStreamContext.tsx
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

TypeScript MUST use const over let, never use var

Files:

  • frontend/src/features/tasks/contexts/chatStreamContext.tsx
frontend/src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

frontend/src/**/*.{ts,tsx}: MUST use useTranslation hook imported from @/hooks/useTranslation, not from react-i18next
MUST use single namespace with useTranslation() - never use array format like useTranslation(['common', 'groups'])
Frontend message data MUST always use messages from useUnifiedMessages() hook as the single source of truth for displaying messages - never use selectedTaskDetail.subtasks
Frontend i18n translation keys MUST use current namespace format t('key.subkey') for keys within namespace and t('namespace:key.subkey') for cross-namespace keys

Files:

  • frontend/src/features/tasks/contexts/chatStreamContext.tsx
frontend/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

Frontend MUST only use NEXT_PUBLIC_* environment variables for client-safe values

Files:

  • frontend/src/features/tasks/contexts/chatStreamContext.tsx
🧠 Learnings (1)
📚 Learning: 2025-12-31T03:47:12.173Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-31T03:47:12.173Z
Learning: Applies to frontend/src/**/*.{ts,tsx} : Frontend message data MUST always use `messages` from `useUnifiedMessages()` hook as the single source of truth for displaying messages - never use `selectedTaskDetail.subtasks`

Applied to files:

  • frontend/src/features/tasks/contexts/chatStreamContext.tsx
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: E2E Tests (Shard 1/3)
  • GitHub Check: E2E Tests (Shard 2/3)
  • GitHub Check: Test Backend (3.10)
  • GitHub Check: Test wegent CLI Integration
  • GitHub Check: E2E Tests (Shard 3/3)
  • GitHub Check: Test Frontend
🔇 Additional comments (1)
frontend/src/features/tasks/contexts/chatStreamContext.tsx (1)

1726-1755: Good fix for the streaming race condition.

The logic correctly identifies and finalizes stale streaming messages when the backend has completed. The placement after the RUNNING/PENDING handling ensures this only runs for terminal states, and preserving existingAiMessage.content as a fallback handles cases where the streamed content is richer than the final backend value.
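The content fallback mentioned above can be illustrated in isolation; the helper name and signature are hypothetical, not part of the actual change:

```typescript
// Hypothetical illustration of the fallback: keep the streamed content when
// the backend result is missing or empty (|| also covers the empty string).
function pickFinalContent(backendResult: string | undefined, streamedContent: string): string {
  return backendResult || streamedContent
}
```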

…members

When a new member joins a group chat during an active AI stream:
1. syncBackendMessages creates a 'streaming' placeholder for RUNNING messages
2. If the stream completes before the user's WebSocket receives chat:done,
   the message stays in 'streaming' status indefinitely

This fix adds a check in syncBackendMessages that updates streaming messages
to their final status when the backend shows COMPLETED/FAILED/CANCELLED,
preventing the infinite spinner issue.
@feifei325 force-pushed the wegent/fix-group-chat-streaming-spinner branch from 42315b8 to ac5a65c on January 4, 2026 at 08:59
