
Conversation

@jaideep329 (Contributor) commented Dec 4, 2025

No description provided.

This fixes a race condition where the processor that initiates an interruption (e.g., `LLMUserContextAggregator`) would push the new context to the LLM before downstream processors (e.g., `LLMAssistantContextAggregator`) had a chance to save their partial response to the shared context.

The root cause: the `_wait_for_interruption` flag is set only on the processor that calls `push_interruption_task_frame_and_wait()`. Every other processor in the pipeline has the flag set to `False`, so those processors queue the `InterruptionFrame` instead of processing it immediately.
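A minimal sketch of the problematic gate, for illustration only. Only `_wait_for_interruption`, `InterruptionFrame`, and `push_interruption_task_frame_and_wait()` come from this PR; `SketchProcessor`, `queue_frame`, and `_input_queue` are assumed names standing in for pipecat's actual internals:

```python
import asyncio


class InterruptionFrame:
    """Hypothetical stand-in for pipecat's interruption frame."""


class SketchProcessor:
    """Illustrative processor; not pipecat's actual FrameProcessor."""

    def __init__(self):
        # True only on the processor that called
        # push_interruption_task_frame_and_wait().
        self._wait_for_interruption = False
        self._input_queue: asyncio.Queue = asyncio.Queue()

    async def queue_frame(self, frame):
        # Pre-fix gate: only the initiator handles InterruptionFrame inline;
        # every other processor queues it, so the initiator's wait can return
        # before downstream processors have saved their partial responses.
        if isinstance(frame, InterruptionFrame) and self._wait_for_interruption:
            await self.process_frame(frame)
        else:
            await self._input_queue.put(frame)

    async def process_frame(self, frame):
        # Save partial response to shared context, reset state, etc.
        ...
```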

The fix: remove the `_wait_for_interruption` condition so the `InterruptionFrame` is processed immediately by **all** processors. This makes propagation synchronous through the entire pipeline, guaranteeing that when `push_interruption_task_frame_and_wait()` returns, every processor has already processed the interruption and updated its state.
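Continuing the sketch above, the fixed gate simply drops the flag check (again, only the condition change reflects the PR; the surrounding method is an assumed simplification):

```python
    async def queue_frame(self, frame):
        # Post-fix gate: every processor handles InterruptionFrame inline,
        # so the frame propagates synchronously through the whole pipeline
        # before push_interruption_task_frame_and_wait() returns.
        if isinstance(frame, InterruptionFrame):
            await self.process_frame(frame)
        else:
            await self._input_queue.put(frame)
```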
@codecov codecov bot commented Dec 4, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.

| Files with missing lines | Coverage Δ |
| --- | --- |
| `src/pipecat/processors/frame_processor.py` | 83.50% <100.00%> (ø) |
@jaideep329 jaideep329 closed this Dec 10, 2025
@jaideep329 jaideep329 deleted the fix/interruption-frame-synchronous-processing branch December 10, 2025 07:50