Different Traces View at OpenAI Traces for Responses vs ChatCompletions #501

Open

adhishthite opened this issue Apr 14, 2025 · 4 comments

Labels: enhancement New feature or request

@adhishthite

Please read this first

  • Have you read the docs? Agents SDK docs -> Yes
  • Have you searched for related issues? Others may have faced similar issues. -> Yes

Describe the bug

When I configure the OpenAI Agents SDK with set_default_openai_api("chat_completions"), the traces rendered in the OpenAI Traces UI switch from a well-formatted Markdown view to raw JSON. This does not happen when using the default responses API.

Debug information

  • Agents SDK version: v0.0.9
  • Python version (e.g. Python 3.13)

Repro steps

...

import os

from dotenv import load_dotenv
from openai import AsyncAzureOpenAI


def get_azure_openai_client():
    """
    Creates and returns an Azure OpenAI client instance.

    Returns:
        AsyncAzureOpenAI: Configured Azure OpenAI client
    """
    load_dotenv()

    return AsyncAzureOpenAI(
        api_key=os.getenv("AZURE_OPENAI_API_KEY"),
        api_version=os.getenv("AZURE_OPENAI_API_VERSION"),
        azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    )


openai_client = get_azure_openai_client()
set_default_openai_client(openai_client, use_for_tracing=False)
set_default_openai_api("chat_completions")  # 👈 Switching to this causes the trace format change

...

rag_agent = Agent(
    name="rag_agent",
    instructions=PROMPT,
    tools=[get_stocks_information, get_current_datetime],
    model=os.getenv("AZURE_OPENAI_GPT_4O_MODEL"),
    model_settings=ModelSettings(
        temperature=0,
        parallel_tool_calls=True,
        max_tokens=4096,
    ),
    input_guardrails=[smartsource_input_guardrail],
)

Responses API (Expected Trace View)

[screenshot: well-formatted Markdown trace]

Chat Completions API (Raw JSON Trace)

[screenshot: raw JSON trace]

Expected behavior

The trace output should be formatted consistently regardless of whether I use chat_completions or responses. Ideally, both should render structured, human-readable output (e.g., Markdown or rich display), especially since the underlying content is similar.

Notes
• The model used in both cases is GPT-4o from Azure.
• This may be a regression or an unhandled case in the tracing renderer.
• If this is the expected behavior, are there plans to provide consistent trace formatting across APIs?
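
Until the dashboard renders both APIs consistently, a possible client-side stopgap is to pretty-print the raw chat-completions payload yourself before logging or inspecting it. The helper below is purely illustrative and not part of the Agents SDK; it assumes the trace body is a standard list-of-messages dict as produced by the Chat Completions API:

```python
import json


def render_chat_payload(payload: dict) -> str:
    """Render a chat-completions style payload as readable Markdown."""
    sections = []
    for message in payload.get("messages", []):
        role = message.get("role", "unknown")
        content = message.get("content") or ""
        # Tool calls carry structured arguments; show them as fenced JSON.
        if message.get("tool_calls"):
            content = "```json\n" + json.dumps(message["tool_calls"], indent=2) + "\n```"
        sections.append(f"**{role}**\n\n{content}")
    return "\n\n---\n\n".join(sections)


print(render_chat_payload({"messages": [{"role": "user", "content": "Hi"}]}))
```

This only improves local inspection; it does not change what the Traces UI displays.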


@rm-openai Please help! :)

@adhishthite adhishthite added the bug Something isn't working label Apr 14, 2025
@rm-openai
Collaborator

This is actually the expected behavior we shipped with, but I understand that it's not super user friendly. I'll report it internally and see what we can do.

@adhishthite
Author

Thanks!

@rm-openai
Collaborator

Update: we will fix this, but it may take a couple of weeks before we get to it. Hopefully that's ok with you!

@adhishthite
Author

adhishthite commented Apr 14, 2025 via email

@rm-openai rm-openai added enhancement New feature or request and removed bug Something isn't working labels Apr 14, 2025