
Conversation

@markbackman (Contributor)


Previously, the LLM services used their default parameter values when running inference. With this change, they use the InputParams values specified at initialization time, which more closely aligns with expectations.

We could add another step to allow custom params per call, but since this is a convenience method, this seems like a good first change.
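To illustrate the pattern this change adopts, here is a minimal, self-contained sketch (not Pipecat's actual implementation; the class and field names below are hypothetical stand-ins): parameters passed as `InputParams` at construction time take precedence over the service's built-in defaults when inference runs.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class InputParams:
    """Optional overrides supplied when the service is constructed."""
    temperature: Optional[float] = None
    max_tokens: Optional[int] = None


class LLMService:
    # Built-in defaults, used only when no override was given.
    DEFAULT_TEMPERATURE = 1.0
    DEFAULT_MAX_TOKENS = 4096

    def __init__(self, params: Optional[InputParams] = None):
        # Store init-time params so later inference calls can honor them.
        self._params = params or InputParams()

    def run_inference(self) -> dict:
        # After this change: prefer the init-time InputParams values;
        # fall back to the defaults only for fields left unset.
        temperature = (
            self._params.temperature
            if self._params.temperature is not None
            else self.DEFAULT_TEMPERATURE
        )
        max_tokens = (
            self._params.max_tokens
            if self._params.max_tokens is not None
            else self.DEFAULT_MAX_TOKENS
        )
        return {"temperature": temperature, "max_tokens": max_tokens}
```

Under this pattern, `LLMService(InputParams(temperature=0.2)).run_inference()` uses `temperature=0.2` while still falling back to the default `max_tokens`, whereas before the change the defaults would have been used unconditionally.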

@markbackman markbackman force-pushed the mb/update-run-inference branch from a856986 to 6b03620 Compare December 9, 2025 18:35
@markbackman markbackman requested a review from kompfner December 9, 2025 18:35

codecov bot commented Dec 9, 2025

Codecov Report

❌ Patch coverage is 88.88889% with 3 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/pipecat/services/google/llm.py | 75.00% | 3 Missing ⚠️ |

| Files with missing lines | Coverage Δ |
|---|---|
| src/pipecat/services/anthropic/llm.py | 39.27% <100.00%> (+7.60%) ⬆️ |
| src/pipecat/services/aws/llm.py | 34.59% <ø> (+5.75%) ⬆️ |
| src/pipecat/services/openai/base_llm.py | 37.95% <100.00%> (+3.80%) ⬆️ |
| src/pipecat/services/google/llm.py | 43.09% <75.00%> (+10.69%) ⬆️ |

... and 1 file with indirect coverage changes

