Conversation

Contributor

@kompfner kompfner commented Dec 9, 2025

For another PR: cross-modal input (switching between voice and text input).


codecov bot commented Dec 9, 2025

Codecov Report

❌ Patch coverage is 0% with 16 lines in your changes missing coverage. Please review.

Files with missing lines                     Patch %   Lines
src/pipecat/services/aws/nova_sonic/llm.py   0.00%     16 Missing ⚠️

Files with missing lines                     Coverage Δ
src/pipecat/services/aws/nova_sonic/llm.py   0.00% <0.00%> (ø)

@kompfner kompfner changed the title [WIP] Nova 2 Sonic Nova 2 Sonic support Dec 9, 2025
# Simulate a long network delay.
# You can continue chatting while waiting for this to complete.
# With Nova 2 Sonic (the default model), the assistant will respond
# appropriately once the function call is complete.
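The long-running tool call this comment describes can be sketched with a hypothetical handler; the function name, delay, and return values are illustrative, not from the PR:

```python
import asyncio

# Hypothetical weather tool handler (names and values are illustrative).
# The sleep stands in for the long network delay the example simulates;
# while it is pending, the user can keep talking to the assistant, and
# the model responds once the result finally arrives.
async def fetch_weather(city: str, delay_s: float) -> dict:
    await asyncio.sleep(delay_s)  # simulate the slow network call
    return {"city": city, "conditions": "sunny", "temperature_f": 65}

# A short delay here keeps the sketch quick to run.
result = asyncio.run(fetch_weather("San Diego", delay_s=0.01))
```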
Contributor Author

I think I'm witnessing a bit of buggy behavior in the model.

When there are multiple in-flight tool calls, the model seems prone to getting confused and "mixing up" the results when they come back.

For example, I see things like this:

User: "Hey, can you tell me the weather in San Diego?"

--> Tool call kicks off with ID "foo"

User: "Actually, hang on. I'm in Washington, D.C."

--> Tool call kicks off with ID "bar"

--> Tool result arrives with ID "foo" (corresponding to San Diego) and is reported to the model.

Assistant: "The weather in Washington, D.C. is nice, with a temperature of 65 degrees."

--> Tool result arrives with ID "bar" (corresponding to Washington, D.C.) and is reported to the model.

Assistant: "Actually in Washington, D.C. the temperature is 70 degrees."
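In principle, pairing by ID should keep the cities straight no matter the arrival order. A minimal sketch of what correct ID-based matching looks like (names are illustrative):

```python
# Sketch of matching out-of-order tool results to their calls by ID.
# Even if "foo" (San Diego) and "bar" (Washington, D.C.) return offset
# from one another, pairing by tool_call_id is unambiguous.
def match_results(calls: dict[str, str], results: list[tuple[str, dict]]) -> dict[str, dict]:
    """calls maps tool_call_id -> city; results may arrive in any order."""
    paired = {}
    for tool_call_id, payload in results:
        city = calls[tool_call_id]  # the ID, not arrival order, decides the pairing
        paired[city] = payload
    return paired

calls = {"foo": "San Diego", "bar": "Washington, D.C."}
# Results arrive offset, as in the transcript above.
results = [
    ("foo", {"temperature_f": 65}),
    ("bar", {"temperature_f": 70}),
]
paired = match_results(calls, results)
```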

Contributor

Interesting. In theory, the tool_call_id should be enough to handle this.
Have you tried invoking different functions at the same time to check whether it works correctly in that case?

Contributor Author

Hm, if I fire all of the tool calls at the exact same time (e.g. "tell me the weather in San Diego, Washington, D.C., and New York"), tool_call_id does seem to be enough to tell the model which is which...the problem seems to arise only when they're offset...

Contributor Author

Ah, another clue! Maybe it has something to do with whether you interact with the model while the function calls are in-flight. Going back to the previous example ("tell me the weather in San Diego, Washington, D.C., and New York"): if I ask the model to tell me a fun fact while I'm waiting for the weather results, then when those results arrive they might get scrambled (the model might think San Diego's weather is New York's, etc.).

Contributor Author

@kompfner kompfner Dec 9, 2025

Another thing I'm running into: if there are multiple in-flight tool calls at the same time, then when they come back, the model triggers another, unnecessary duplicate tool call.

And one last thing: if I hammer the model too hard, interrupting in-flight tool calls with new tool calls, it occasionally crashes with a "System instability detected" error.
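One possible client-side mitigation for the duplicate-call behavior (a sketch of an idea, not something this PR implements) is to drop a new tool call whose name and arguments exactly match one already in flight:

```python
import json

# Sketch of a guard that skips duplicate tool calls while an identical
# call (same function name and arguments) is still in flight. All names
# here are hypothetical.
class InFlightGuard:
    def __init__(self) -> None:
        self._in_flight: set[str] = set()

    def _key(self, name: str, args: dict) -> str:
        # Canonical key: sorted JSON makes argument order irrelevant.
        return name + ":" + json.dumps(args, sort_keys=True)

    def should_run(self, name: str, args: dict) -> bool:
        key = self._key(name, args)
        if key in self._in_flight:
            return False  # duplicate of an in-flight call; skip it
        self._in_flight.add(key)
        return True

    def done(self, name: str, args: dict) -> None:
        self._in_flight.discard(self._key(name, args))

guard = InFlightGuard()
first = guard.should_run("get_weather", {"city": "San Diego"})
dup = guard.should_run("get_weather", {"city": "San Diego"})
guard.done("get_weather", {"city": "San Diego"})
again = guard.should_run("get_weather", {"city": "San Diego"})
```

A caveat: this only catches exact duplicates; a re-phrased call with different arguments would still go through.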

Contributor Author

But this is all somewhat aggressive testing. In the "normal" case of one tool call at a time (probably most real-world usage falls under this umbrella), things seem to be working reasonably well.

Contributor

Yeah, I think so. And in this case, since they are handling the context, I’m not sure there’s much we can do to try to prevent it.

return not self._is_first_generation_sonic_model()

def _is_assistant_response_trigger_needed(self) -> bool:
# Assistant response trigger audio is only needed with the older model
Contributor Author

Very glad about this 😄

region: str,
model: str = "amazon.nova-sonic-v1:0",
voice_id: str = "matthew", # matthew, tiffany, amy
model: str = "amazon.nova-2-sonic-v1:0",
Contributor Author

@kompfner kompfner Dec 9, 2025

One thing I've noticed in testing this new model is that some guardrails are very strong. Some of the things I usually do during testing are occasionally treated as prohibited.

For example, a few times when I asked the model to tell me a short story it told me it didn't want to accidentally infringe on intellectual property so it couldn't fulfill my request.

Another time, when I asked for suggestions of things to do for fun while in Washington, D.C. it said it didn't want to answer to avoid suggesting criminal or violent activities.

In both cases, once the guardrail was "triggered", it became very hard to continue the conversation without completely changing topics.

In the short-story scenario, for example, a follow-up request for a 3-sentence story was also blocked. The model then suggested "safer" topics, like general story arcs or story-writing tips. Each time I said "OK, let's discuss that", it still said it couldn't discuss those topics, for the same reason (not infringing on intellectual property).

In the fun-things-to-do-around-town scenario, I followed up with things like "what about, like, going to museums?" and it still said it couldn't answer, for the same reason (it didn't want to suggest something harmful).

Contributor

Yeah, it looks like there are too many guardrails in this case now.

@kompfner kompfner marked this pull request as ready for review December 9, 2025 18:17
Comment on lines +5 to +6
- Made the assistant-response-trigger hack a no-op. It's only needed for the
older Nova Sonic model.
Contributor

Cool. So they have fixed this. 🙌🎉

Contributor

@filipi87 filipi87 left a comment

LGTM. 🚀

…fetching function to help the model associate a tool response with a tool call...if you interrupt the model while more than one function call is outbound, it seemingly can get confused about which tool result goes with which call.
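The mitigation this commit message alludes to — giving the model a textual cue in the result itself, beyond the tool-call ID — can be sketched by echoing the request parameters back in the tool result payload (field names are illustrative):

```python
# Sketch: echo the requested city in the tool result so the model has a
# human-readable cue, in addition to the tool_call_id, for which request
# this result answers. Names and fields are hypothetical.
def build_weather_result(city: str, temperature_f: int) -> dict:
    return {
        "city": city,  # echo the request to help the model re-associate it
        "temperature_f": temperature_f,
    }

result = build_weather_result("San Diego", 65)
```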
@kompfner kompfner merged commit f41c3dc into main Dec 11, 2025
6 checks passed
@kompfner kompfner deleted the pk/nova-2-sonic branch December 11, 2025 14:36