fix: Ensure that tool calls with no arguments get handled correctly #3560
Conversation
…lamastack#3560

When a model decides to use an MCP tool call that requires no arguments, it sets the `arguments` field to `None`. This causes validation errors, because the field gets removed when parsed by an OpenAI-compatible inference provider like vLLM. This PR ensures that, as the tool call arguments are accumulated while streaming, we check that no tool call function arguments are set to `None`; if they are, we replace them with `"{}"`.

Closes llamastack#3456

Added a new unit test to verify that any tool calls with function arguments set to `None` get handled correctly.

Signed-off-by: Jaideep Rao <[email protected]>

force-pushed from 301cf11 to 4f453fe (Compare)
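For illustration, a minimal sketch of the guard described in the commit message, using simplified stand-in types; `finalize_tool_calls` and the dataclasses are hypothetical names, not the actual llama-stack internals:

```python
# Minimal sketch of the fix described above; these names are illustrative,
# not the real llama-stack streaming code.
from dataclasses import dataclass


@dataclass
class Function:
    name: str
    arguments: str | None


@dataclass
class ToolCall:
    function: Function


def finalize_tool_calls(tool_calls: list[ToolCall]) -> list[ToolCall]:
    for call in tool_calls:
        if call.function.arguments is None:
            # An argument-less MCP tool call can arrive with arguments=None;
            # normalize it to an empty JSON object so OpenAI-compatible
            # providers such as vLLM don't drop the field and fail validation.
            call.function.arguments = "{}"
    return tool_calls
```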
Looks good, but do we perhaps want an integration test for this rather than a unit test?
force-pushed from 4f453fe to e09a28e (Compare)
Added an integration test, lmk if it looks good.
Note that an inference server passing `None` tool call arguments is technically out of spec. With that said, I'm all for making this work as opposed to expecting every inference server to strictly adhere to the spec. So, I'll kick off the workflows here and let CI chew on this.
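For context, the kind of streamed chunk at issue would look roughly like this (an illustrative payload shape, not captured from a real server):

```python
# Illustrative chunk from an OpenAI-compatible streaming response where the
# model invoked a no-argument tool; note arguments is None rather than "{}".
chunk = {
    "choices": [{
        "delta": {
            "tool_calls": [{
                "index": 0,
                "id": "call_0",
                "function": {"name": "list_tools", "arguments": None},
            }]
        }
    }]
}
```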
@jaideepr97 I think you'll need to re-record the integration tests locally.
+1, makes sense @bbrowning.
It looks like the completions spec expects the client to handle validating the JSON for tool calling arguments: https://github.com/openai/openai-python/blob/a1493f92a7cd4399d57046aadc943aeadda5b8e7/src/openai/types/chat/chat_completion_message_function_tool_call_param.py#L10-L17

That said, it actually looks like vLLM does sometimes JSON-dump the model output, but not consistently. Is it a bug or a feature? 🐞 🚀 ❓
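For reference, the linked type boils down to roughly this (paraphrased; see the URL above for the exact definition):

```python
# Paraphrased from the linked openai-python types: arguments is a required
# string, and the docstring tells clients to validate the JSON themselves.
from typing_extensions import Required, TypedDict


class Function(TypedDict, total=False):
    arguments: Required[str]
    """Model-generated JSON arguments; may be invalid JSON, so validate
    before calling your function."""

    name: Required[str]
    """The name of the function to call."""
```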
force-pushed from e09a28e to 4aa586d (Compare)
@cdoern @bbrowning On further thought, this change only affects how llama stack responds when it encounters a `None` arguments value.
Seems fair enough to me. I also don't know if what you are hitting in the tests is necessarily your fault 😅 so I don't think it makes sense to have you fix it.
The 500 error in the integration tests, from the server logs, shows that the new test will need a recorded response before it can pass.
@bbrowning Looks like some recordings got added and updated after I re-recorded locally.
lots of re-recordings, but I think that is ok?
The actual changes look good to me, and the added tests match the changes. I'm assuming all the recording stuff is fine, since CI is happy with it.
Thanks!
What does this PR do?
When a model decides to use an MCP tool call that requires no arguments, it sets the `arguments` field to `None`. This causes the user to see a `400 Bad Request` error due to validation errors down the stack, because the field gets removed when parsed by an OpenAI-compatible inference provider like vLLM.

This PR ensures that, as the tool call arguments are accumulated while streaming, we check that no tool call function arguments are set to `None`; if they are, we replace them with `"{}"`.
Closes #3456
Test Plan
Added a new unit test to verify that any tool calls with function arguments set to `None` get handled correctly.
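As a rough sketch of what such a test can look like (reusing the hypothetical helpers from the sketch earlier in this thread, not the actual test added in this PR):

```python
# Illustrative test in the spirit of the test plan; `ToolCall`, `Function`,
# and `finalize_tool_calls` are the hypothetical names sketched above.
def test_none_arguments_are_replaced_with_empty_object():
    call = ToolCall(function=Function(name="get_time", arguments=None))
    (result,) = finalize_tool_calls([call])
    assert result.function.arguments == "{}"
```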