Labels: feature:core, needs-more-info, question
Description
Please read this first
- Have you read the docs? Agents SDK docs
- Have you searched for related issues? Others may have faced similar issues.
Describe the bug
When using GPT-5.1 with reasoning effort set to low, medium, or high via the OpenAI Agents SDK (Python), internal reasoning summaries leak into message content as `msg_{uuid}` items instead of being isolated in separate ReasoningItem objects (`rs_{uuid}`).
This causes reasoning traces to be indistinguishable from actual assistant responses.
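For illustration only, here is a minimal sketch of how the two item kinds are expected to differ by ID prefix; the `classify_run_item` helper is hypothetical and not part of the SDK:

```python
def classify_run_item(item) -> str:
    # Hypothetical helper: classifies a streamed run item by the ID prefix
    # of its raw item, per the expected behavior described in this report.
    raw_id = getattr(item.raw_item, "id", "") or ""
    if raw_id.startswith("rs_"):
        return "reasoning"            # expected for ReasoningItem
    if raw_id.startswith("msg_"):
        return "assistant message"    # expected only for the final reply
    return "other"
```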
Debug information
- Agents SDK version: 0.5.1
- Python version: 3.11
Repro steps
```python
import asyncio

from agents import Agent, ModelSettings, function_tool, Runner
from openai.types.shared import Reasoning


@function_tool()
async def get_weather(city: str, unit: str = "C"):
    # Mock weather data so the repro does not depend on a real weather API.
    mock = {
        "Austin": {"temp": 32, "condition": "Sunny"},
        "Seattle": {"temp": 18, "condition": "Rain"},
        "New York": {"temp": 25, "condition": "Cloudy"},
    }
    if city in mock:
        return {
            "city": city,
            "temp": mock[city]["temp"],
            "unit": unit,
            "condition": mock[city]["condition"],
        }
    return {"city": city, "error": "No mock data found"}


async def main():
    agent = Agent(
        name="Weather Agent",
        model="gpt-5.1",
        instructions="Always call the weather tool.",
        tools=[get_weather],
        model_settings=ModelSettings(
            tool_choice="required",
            reasoning=Reasoning(effort="low", summary="auto"),
            parallel_tool_calls=True,
        ),
    )
    result = Runner.run_streamed(
        agent,
        input="What is the weather like in Austin?",
    )
    async for event in result.stream_events():
        if event.type != "run_item_stream_event":
            continue
        item = event.item
        if item.type == "message_output_item":
            msg = item.raw_item
            print(msg.id)
            if msg.content and hasattr(msg.content[0], "text"):
                response = msg.content[0].text
                return response


if __name__ == "__main__":
    asyncio.run(main())
```

Occasionally, the model emits reasoning trace messages wrapped as `message_output_item` events and assigns them IDs with the `msg_` prefix.
The response will look something like this:

```
I should call the tool `get_weather` to fetch the weather status.

Message ID: msg_05b63e7e027f96e700691d02a67b4c819183984d35170f1ddc
```
This makes reasoning items appear as normal assistant messages.
Note: this happens only occasionally, not on every turn; running the repro 10 times may surface it once or twice.
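Because the leak is intermittent, a small driver like the one below makes it easier to spot across repeated runs (a sketch only; it assumes an `agent` built exactly as in the repro and simply logs every run item's type and raw ID):

```python
import asyncio

from agents import Runner


async def log_run_items(agent, prompt: str):
    # Prints the type and raw ID of every streamed run item, so reasoning
    # text arriving as a message_output_item with a msg_ ID stands out.
    result = Runner.run_streamed(agent, input=prompt)
    async for event in result.stream_events():
        if event.type != "run_item_stream_event":
            continue
        raw_id = getattr(event.item.raw_item, "id", None)
        print(event.item.type, raw_id)


# Assumes `agent` is constructed as in the repro above; repeat the run
# because the leak only shows up occasionally.
# for _ in range(10):
#     asyncio.run(log_run_items(agent, "What is the weather like in Austin?"))
```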
Expected Behavior
- Reasoning traces should be emitted as ReasoningItem objects with an `rs_` ID prefix (type: 'reasoning')
- Only the final assistant output should be emitted as a `message_output_item` with a `msg_` ID
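For reference, a consumer loop that keeps the two item kinds separate might look roughly like this (a sketch only, assuming the `reasoning_item` stream item type and the `rs_`/`msg_` ID prefixes described above):

```python
async for event in result.stream_events():
    if event.type != "run_item_stream_event":
        continue
    item = event.item
    if item.type == "reasoning_item":
        # Expected: reasoning summaries arrive here with rs_-prefixed IDs.
        print("reasoning:", item.raw_item.id)
    elif item.type == "message_output_item":
        # Expected: only the final assistant reply arrives here with a msg_ ID.
        print("assistant:", item.raw_item.id)
```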