pipecat instrumentation refactor #2509
Draft · +264 −367
cc @duncankmckinnon
Note
Refactors the Pipecat OpenInference observer to use turn-tracking with expanded frame handling and precise span lifecycle/metrics, and updates LLM model name extraction to prefer `_full_model_name`.

**Observer (`_observer.py`):**
- Change `OpenInferenceObserver` to inherit from `TurnTrackingObserver` and `UserBotLatencyLogObserver`; remove custom turn state and delegate to super.
- Expand frame handling (`LLM*`, `TTS*`, `VADUser*`, `MetricsFrame`, etc.) and route through `_handle_service_frame`.
- Start spans on `VADUserStartedSpeakingFrame` (STT), `LLMFullResponseStartFrame` (LLM), and `TTSStartedFrame` (TTS); finish spans on `EndFrame`, `CancelFrame`, `ErrorFrame`, `LLMFullResponseEndFrame`, `BotStoppedSpeakingFrame`, and STT completion after `VADUserStoppedSpeakingFrame` + final `TranscriptionFrame`.
- Aggregate user (`TranscriptionFrame`) and bot (`TTSTextFrame`, `LLMTextFrame`) text; set `input.value`/`output.value` on span/turn with spacing based on `includes_inter_frame_spaces`.
- Use `MetricsFrame` to set attributes (e.g., `service.processing_time_seconds`) and compute span end times; attach `conversation.user_to_bot_latency` to `pipecat.conversation.turn`; update turn start/end (`_start_turn`, `_end_turn`) and attributes (`conversation.turn_number`, duration, end reason).

**Attributes (`_attributes.py`):**
- Update `LLMServiceAttributeExtractor` to prefer `service._full_model_name` for `llm.model_name`, falling back to `model_name`/`model`.

Written by Cursor Bugbot for commit 0232542. This will update automatically on new commits.
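The model-name fallback order described above can be sketched as a simple attribute chain. This is an illustrative stand-in, not the PR's actual extractor code; `FakeLLMService` and `extract_llm_model_name` are hypothetical names.

```python
class FakeLLMService:
    """Stand-in for a Pipecat LLM service exposing several name attributes."""

    def __init__(self, full=None, name=None, model=None):
        if full is not None:
            self._full_model_name = full
        if name is not None:
            self.model_name = name
        if model is not None:
            self.model = model


def extract_llm_model_name(service):
    """Prefer the fully qualified name, then fall back to the shorter fields."""
    for attr in ("_full_model_name", "model_name", "model"):
        value = getattr(service, attr, None)
        if value:
            return value
    return None


# The fully qualified name wins when present; otherwise shorter fields apply.
assert extract_llm_model_name(FakeLLMService(full="openai/gpt-4o-mini", name="gpt-4o-mini")) == "openai/gpt-4o-mini"
assert extract_llm_model_name(FakeLLMService(name="gpt-4o-mini")) == "gpt-4o-mini"
```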
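The `includes_inter_frame_spaces` spacing behavior can be illustrated with a minimal join helper. This is an assumed sketch of the aggregation idea, not the observer's actual implementation; the helper name is hypothetical.

```python
def aggregate_text(frames: list[str], includes_inter_frame_spaces: bool) -> str:
    """Join streamed text fragments into a single input.value / output.value.

    If the frames already carry their own inter-frame spacing (e.g. each
    fragment starts with a leading space), concatenate directly; otherwise
    insert a single space between fragments.
    """
    separator = "" if includes_inter_frame_spaces else " "
    return separator.join(frames)


# Frames that embed their own spacing vs. frames that do not:
assert aggregate_text(["Hello", " world"], includes_inter_frame_spaces=True) == "Hello world"
assert aggregate_text(["Hello", "world"], includes_inter_frame_spaces=False) == "Hello world"
```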
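The span start/finish rules above amount to a small frame-type dispatcher. The sketch below is a simplified model under stated assumptions: `handle_service_frame`, the lookup tables, and the `open_spans` dict are all hypothetical, and the two-step STT completion (`VADUserStoppedSpeakingFrame` + final `TranscriptionFrame`) is omitted for brevity.

```python
# Which frame types open a span, and which kind of span they open.
SPAN_STARTS = {
    "VADUserStartedSpeakingFrame": "stt",
    "LLMFullResponseStartFrame": "llm",
    "TTSStartedFrame": "tts",
}

# Which frame types close a specific span kind.
SPAN_ENDS = {
    "LLMFullResponseEndFrame": "llm",
    "BotStoppedSpeakingFrame": "tts",
}

# Terminal frames that finish every open span at once.
TERMINAL_FRAMES = {"EndFrame", "CancelFrame", "ErrorFrame"}


def handle_service_frame(frame_name: str, open_spans: dict) -> dict:
    """Start or finish the span associated with a frame type.

    open_spans maps span kind ("stt"/"llm"/"tts") to an opaque span object
    (a placeholder True here). STT completion is handled elsewhere.
    """
    if frame_name in SPAN_STARTS:
        open_spans[SPAN_STARTS[frame_name]] = True
    elif frame_name in SPAN_ENDS:
        open_spans.pop(SPAN_ENDS[frame_name], None)
    elif frame_name in TERMINAL_FRAMES:
        open_spans.clear()
    return open_spans
```

The table-driven form keeps the per-frame logic declarative, so adding a new frame type only means extending a dict rather than the dispatch code.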