Convert AgentChat v0.4 messages to v0.2 format #4833
Comments
@Leon0402 Thanks for the issue. We should definitely add this to the migration guide #4765. There are some complications: v0.2's message format is not 100% OpenAI-compatible.
So, the question is whether we are converting to the OpenAI format or the v0.2 message format. For conversion to the OpenAI format, we are already kind of doing it internally; we can provide an extension module for each use case. But for the migration guide we should at least provide some information about this conversion.
Yeah, I think the ticket is more about converting it to the v0.2 format. I just called it the OpenAI format because it is so similar. A migration guide would be a start, but an extension module would be even better.
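To make the difference concrete, here is a small illustration of how a tool-result turn looks in each format. The v0.2 shape matches what the converter later in this thread produces; the OpenAI shape follows the Chat Completions API. The IDs and values are made up.

```python
# OpenAI Chat Completions expects one "tool" message per tool_call_id:
openai_tool_messages = [
    {"role": "tool", "tool_call_id": "call_1", "content": "22C"},
]

# AutoGen v0.2 instead batches the results into a single message with a
# "tool_responses" list plus a joined "content" string:
v02_tool_message = {
    "role": "tool",
    "tool_responses": [{"tool_call_id": "call_1", "role": "tool", "content": "22C"}],
    "content": "22C",
}
```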
Here is my version. @Leon0402 can you test it for your use case? I am including this in the migration guide.

from typing import Any, Dict, List, Literal
from autogen_agentchat.messages import (
    AgentEvent,
    ChatMessage,
    HandoffMessage,
    MultiModalMessage,
    StopMessage,
    TextMessage,
    ToolCallExecutionEvent,
    ToolCallRequestEvent,
    ToolCallSummaryMessage,
)
from autogen_core import FunctionCall, Image
from autogen_core.models import FunctionExecutionResult

def convert_to_v02_message(
    message: AgentEvent | ChatMessage,
    role: Literal["assistant", "user", "tool"],
    image_detail: Literal["auto", "high", "low"] = "auto",
) -> Dict[str, Any]:
    """Convert a v0.4 AgentChat message to a v0.2 message.

    Args:
        message (AgentEvent | ChatMessage): The message to convert.
        role (Literal["assistant", "user", "tool"]): The role of the message.
        image_detail (Literal["auto", "high", "low"], optional): The detail level of image content in a multi-modal message. Defaults to "auto".

    Returns:
        Dict[str, Any]: The converted AutoGen v0.2 message.
    """
    v02_message: Dict[str, Any] = {}
    if isinstance(message, TextMessage | StopMessage | HandoffMessage | ToolCallSummaryMessage):
        v02_message = {"content": message.content, "role": role, "name": message.source}
    elif isinstance(message, MultiModalMessage):
        v02_message = {"content": [], "role": role, "name": message.source}
        for modal in message.content:
            if isinstance(modal, str):
                v02_message["content"].append({"type": "text", "text": modal})
            elif isinstance(modal, Image):
                v02_message["content"].append(modal.to_openai_format(detail=image_detail))
            else:
                raise ValueError(f"Invalid multimodal message content: {modal}")
    elif isinstance(message, ToolCallRequestEvent):
        v02_message = {"tool_calls": [], "role": "assistant", "content": None, "name": message.source}
        for tool_call in message.content:
            v02_message["tool_calls"].append(
                {
                    "id": tool_call.id,
                    "type": "function",
                    "function": {"name": tool_call.name, "args": tool_call.arguments},
                }
            )
    elif isinstance(message, ToolCallExecutionEvent):
        tool_responses: List[Dict[str, str]] = []
        for tool_result in message.content:
            tool_responses.append(
                {
                    "tool_call_id": tool_result.call_id,
                    "role": "tool",
                    "content": tool_result.content,
                }
            )
        content = "\n\n".join([response["content"] for response in tool_responses])
        v02_message = {"tool_responses": tool_responses, "role": "tool", "content": content}
    else:
        raise ValueError(f"Invalid message type: {type(message)}")
    return v02_message

def convert_to_v04_message(message: Dict[str, Any]) -> AgentEvent | ChatMessage:
    """Convert a v0.2 message to a v0.4 AgentChat message."""
    if "tool_calls" in message:
        tool_calls: List[FunctionCall] = []
        for tool_call in message["tool_calls"]:
            tool_calls.append(
                FunctionCall(
                    id=tool_call["id"],
                    name=tool_call["function"]["name"],
                    arguments=tool_call["function"]["args"],
                )
            )
        return ToolCallRequestEvent(source=message["name"], content=tool_calls)
    elif "tool_responses" in message:
        tool_results: List[FunctionExecutionResult] = []
        for tool_response in message["tool_responses"]:
            tool_results.append(
                FunctionExecutionResult(
                    call_id=tool_response["tool_call_id"],
                    content=tool_response["content"],
                )
            )
        return ToolCallExecutionEvent(source="tools", content=tool_results)
    elif isinstance(message["content"], list):
        content: List[str | Image] = []
        for modal in message["content"]:  # type: ignore
            if modal["type"] == "text":  # type: ignore
                content.append(modal["text"])  # type: ignore
            else:
                content.append(Image.from_uri(modal["image_url"]["url"]))  # type: ignore
        return MultiModalMessage(content=content, source=message["name"])
    elif isinstance(message["content"], str):
        return TextMessage(content=message["content"], source=message["name"])
    else:
        raise ValueError(f"Unable to convert message: {message}")
What feature would you like to be added?
For migration purposes it would be great to have a function that takes the TaskResult and converts it into the OpenAI format. Previous users of v0.2 might have additional evaluation or visualization code built on top of the OpenAI format that is hard to migrate. Ideally, a good implementation would be provided directly as part of the API, along with some warning that it is only intended for migration purposes. For users needing this functionality, here is my quick implementation:
The implementation is by no means complete; in particular, the role part is not quite accurate. I made more or less the minimal implementation that gets my specific code to work again. As said, ideally AutoGen would provide a more robust and more complete API. From the Discord discussion around this, I understood that it is not quite trivial, as the agent team already acts on a different layer where certain information is no longer available.
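One way to sanity-check the converters posted above against your own data is a quick round trip. Note that it is lossy for StopMessage, HandoffMessage, and ToolCallSummaryMessage (they come back as plain TextMessage), so this sketch only checks content and source of a simple text message.

```python
# Round-trip sketch: v0.4 -> v0.2 -> v0.4 for a plain text message.
original = TextMessage(content="Hello", source="assistant")
as_v02 = convert_to_v02_message(original, role="assistant")
restored = convert_to_v04_message(as_v02)
assert restored.content == original.content
assert restored.source == original.source
```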
Why is this needed?
To make the transition from v0.2 to v0.4 as easy as possible.