
Convert AgentChat v0.4 messages to v0.2 format #4833

Open
Leon0402 opened this issue Dec 27, 2024 · 3 comments · May be fixed by #4765
Labels: documentation (Improvements or additions to documentation), proj-agentchat
Milestone: 0.4.0

Comments

Leon0402 commented Dec 27, 2024

What feature would you like to be added?

For migration purposes it would be great to have a function that takes a TaskResult and converts its messages into the OpenAI format. Previous users of v0.2 may have evaluation or visualization code built on top of the OpenAI format that is hard to migrate. Ideally a solid implementation would be provided directly as part of the API, perhaps along with a warning that it is intended for migration purposes.

For users needing this functionality, here is my quick implementation:

from typing import Any, Sequence

# Imports assumed for v0.4; depending on your exact version the message union
# may be exported as `AgentMessage` or as `AgentEvent | ChatMessage`.
from autogen_agentchat.messages import AgentMessage, MultiModalMessage, TextMessage
from autogen_core import Image


def convert_to_openai_format(messages: Sequence[AgentMessage]) -> list[dict[str, Any]]:
    """
    Convert a sequence of AgentChat messages into OpenAI format with structured content.

    Args:
        messages: Sequence of AgentMessage objects; only TextMessage and MultiModalMessage are handled.

    Returns:
        List of dictionaries in OpenAI message format with structured content.
    """
    openai_messages = []

    for message in messages:
        content_list = []

        if isinstance(message, TextMessage):
            content_list.append({"type": "text", "text": message.content})
        elif isinstance(message, MultiModalMessage):
            for item in message.content:
                if isinstance(item, str):
                    content_list.append({"type": "text", "text": item})
                elif isinstance(item, Image):
                    content_list.append(
                        {"type": "image_url", "image_url": item.data_uri}
                    )
                else:
                    raise ValueError(f"Unsupported content type in MultiModalMessage: {type(item).__name__}")
        else:
            raise ValueError(f"Unsupported message type: {type(message).__name__}")

        openai_messages.append({"role": message.source, "content": content_list})
    return openai_messages
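
For instance, applied to messages constructed by hand (in practice they would come from TaskResult.messages after team.run(...)):

example_messages = [
    TextMessage(content="What is AutoGen?", source="user"),
    TextMessage(content="AutoGen is a framework for building multi-agent applications.", source="assistant"),
]
print(convert_to_openai_format(example_messages))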

The implementation is by no means complete; in particular, the role handling is not quite accurate. It is more or less the minimal implementation that gets my specific code working again. As said, ideally AutoGen would provide a more robust and more complete API. From the Discord discussion around this, I understood that it is not quite trivial, as the agent team already acts on a different layer where certain information is no longer available.

Why is this needed?

To make the transition from v0.2 to v0.4 as easy as possible.

ekzhu added this to the 0.4.0 milestone on Dec 27, 2024
ekzhu added the proj-agentchat and documentation (Improvements or additions to documentation) labels and removed the needs-triage label on Dec 27, 2024
ekzhu (Collaborator) commented Dec 27, 2024

@Leon0402 Thanks for the issue. We should definitely add this to the migration guide #4765.

There are some complications:

v0.2's message format is not 100% OpenAI-compatible:

  • tool response messages in v0.2 roll up individual tool call responses into a single message dictionary (see the example below).
  • messages in group chat make extensive use of the name field to differentiate between agents; however, the name field is not officially supported by OpenAI.
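
For illustration (made-up values; the shapes match the conversion code further down in this thread), a rolled-up v0.2 tool response message and a group chat message with the name field look roughly like this:

# Illustrative v0.2-style messages (values are made up).
v02_tool_message = {
    "role": "tool",
    "tool_responses": [
        {"tool_call_id": "call_1", "role": "tool", "content": "72 degrees"},
        {"tool_call_id": "call_2", "role": "tool", "content": "sunny"},
    ],
    # Individual results are also rolled up into a single content string.
    "content": "72 degrees\n\nsunny",
}

v02_group_chat_message = {
    "role": "user",
    "content": "Let's plan the next step.",
    # The name field identifies the sending agent; OpenAI does not officially support it.
    "name": "Planner",
}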

So, the question is whether we are converting to OpenAI format or v0.2 message format.

For conversion to OpenAI format, we are already kind of doing it inside AssistantAgent: the AgentChat messages are converted to the autogen_core.models.LLMMessage types, which can be converted to OpenAI format by the autogen_ext.models.openai.BaseOpenAIChatCompletionClient.

We can provide an extension module for each use case. But for the migration guide we should at least provide some information about this conversion.
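
To illustrate that first step, here is a rough sketch of mapping an AgentChat message to an LLMMessage (the function name and rules are just for illustration, not the actual internals of AssistantAgent):

from autogen_agentchat.messages import MultiModalMessage, TextMessage
from autogen_core.models import AssistantMessage, LLMMessage, UserMessage


def to_llm_message(message: TextMessage | MultiModalMessage) -> LLMMessage:
    # Simplified rule: text produced by the assistant becomes an AssistantMessage,
    # everything else (including multi-modal content) becomes a UserMessage.
    if isinstance(message, TextMessage) and message.source == "assistant":
        return AssistantMessage(content=message.content, source=message.source)
    return UserMessage(content=message.content, source=message.source)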

Leon0402 (Author) commented

Yeah, I think the ticket is more about converting to the v0.2 format; I just called it the OpenAI format because it is so similar. The migration guide would be a start, but an extension module would be even better.

ekzhu (Collaborator) commented Dec 28, 2024

Here is my version. @Leon0402 can you test it for your use case? I am including this in the migration guide.

from typing import Any, Dict, List, Literal

from autogen_agentchat.messages import (
    AgentEvent,
    ChatMessage,
    HandoffMessage,
    MultiModalMessage,
    StopMessage,
    TextMessage,
    ToolCallExecutionEvent,
    ToolCallRequestEvent,
    ToolCallSummaryMessage,
)
from autogen_core import FunctionCall, Image
from autogen_core.models import FunctionExecutionResult


def convert_to_v02_message(
    message: AgentEvent | ChatMessage,
    role: Literal["assistant", "user", "tool"],
    image_detail: Literal["auto", "high", "low"] = "auto",
) -> Dict[str, Any]:
    """Convert a v0.4 AgentChat message to a v0.2 message.

    Args:
        message (AgentEvent | ChatMessage): The message to convert.
        role (Literal["assistant", "user", "tool"]): The role of the message.
        image_detail (Literal["auto", "high", "low"], optional): The detail level of image content in multi-modal message. Defaults to "auto".

    Returns:
        Dict[str, Any]: The converted AutoGen v0.2 message.
    """
    v02_message: Dict[str, Any] = {}
    if isinstance(message, TextMessage | StopMessage | HandoffMessage | ToolCallSummaryMessage):
        v02_message = {"content": message.content, "role": role, "name": message.source}
    elif isinstance(message, MultiModalMessage):
        v02_message = {"content": [], "role": role, "name": message.source}
        for modal in message.content:
            if isinstance(modal, str):
                v02_message["content"].append({"type": "text", "text": modal})
            elif isinstance(modal, Image):
                v02_message["content"].append(modal.to_openai_format(detail=image_detail))
            else:
                raise ValueError(f"Invalid multimodal message content: {modal}")
    elif isinstance(message, ToolCallRequestEvent):
        v02_message = {"tool_calls": [], "role": "assistant", "content": None, "name": message.source}
        for tool_call in message.content:
            v02_message["tool_calls"].append(
                {
                    "id": tool_call.id,
                    "type": "function",
                    "function": {"name": tool_call.name, "args": tool_call.arguments},
                }
            )
    elif isinstance(message, ToolCallExecutionEvent):
        tool_responses: List[Dict[str, str]] = []
        for tool_result in message.content:
            tool_responses.append(
                {
                    "tool_call_id": tool_result.call_id,
                    "role": "tool",
                    "content": tool_result.content,
                }
            )
        content = "\n\n".join([response["content"] for response in tool_responses])
        v02_message = {"tool_responses": tool_responses, "role": "tool", "content": content}
    else:
        raise ValueError(f"Invalid message type: {type(message)}")
    return v02_message


def convert_to_v04_message(message: Dict[str, Any]) -> AgentEvent | ChatMessage:
    """Convert a v0.2 message to a v0.4 AgentChat message."""
    if "tool_calls" in message:
        tool_calls: List[FunctionCall] = []
        for tool_call in message["tool_calls"]:
            tool_calls.append(
                FunctionCall(
                    id=tool_call["id"],
                    name=tool_call["function"]["name"],
                    arguments=tool_call["function"]["args"],
                )
            )
        return ToolCallRequestEvent(source=message["name"], content=tool_calls)
    elif "tool_responses" in message:
        tool_results: List[FunctionExecutionResult] = []
        for tool_response in message["tool_responses"]:
            tool_results.append(
                FunctionExecutionResult(
                    call_id=tool_response["tool_call_id"],
                    content=tool_response["content"],
                )
            )
        return ToolCallExecutionEvent(source="tools", content=tool_results)
    elif isinstance(message["content"], list):
        content: List[str | Image] = []
        for modal in message["content"]:  # type: ignore
            if modal["type"] == "text":  # type: ignore
                content.append(modal["text"])  # type: ignore
            else:
                content.append(Image.from_uri(modal["image_url"]["url"]))  # type: ignore
        return MultiModalMessage(content=content, source=message["name"])
    elif isinstance(message["content"], str):
        return TextMessage(content=message["content"], source=message["name"])
    else:
        raise ValueError(f"Unable to convert message: {message}")

ekzhu changed the title from "Compatability v4 TaskResult to v2 openAI format" to "Compatability v4 AgentChat messages to v2 format" on Dec 28, 2024
ekzhu changed the title from "Compatability v4 AgentChat messages to v2 format" to "Convert AgentChat v0.4 messages to v0.2 format" on Dec 28, 2024
ekzhu linked a pull request that will close this issue on Dec 28, 2024