Accessing reasoning tokens of another llm model in agents sdk #462


Open
atahanozdemirberkeley opened this issue Apr 9, 2025 · 2 comments
Labels: question (Question about using the SDK), stale

Comments

@atahanozdemirberkeley

I'm using the agents SDK with a non-OpenAI model that supports reasoning (using a ":thinking" suffix in the model name).

When using these models, I can see that the Generation output includes a count of "reasoning_tokens" in the usage stats:

"usage": {
  "input_tokens": 13545,
  "input_tokens_details": { "cached_tokens": 0 },
  "output_tokens_details": { "reasoning_tokens": 114 },
  "output_tokens": 270,
  "total_tokens": 13815
}
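For illustration, here is a minimal sketch of pulling the reasoning-token count out of a usage payload shaped like the one above. The field names come from the JSON shown; this is plain dict access, not a documented SDK accessor:

```python
# Usage payload as reported in the Generation output above.
usage = {
    "input_tokens": 13545,
    "input_tokens_details": {"cached_tokens": 0},
    "output_tokens_details": {"reasoning_tokens": 114},
    "output_tokens": 270,
    "total_tokens": 13815,
}

# Reasoning tokens are counted inside output_tokens, so the visible
# (non-reasoning) output is the difference.
reasoning = usage.get("output_tokens_details", {}).get("reasoning_tokens", 0)
visible = usage["output_tokens"] - reasoning
print(reasoning, visible)  # 114 156
```

This only confirms the counts are present; it does not surface the reasoning text itself, which is the gap described below.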

However, I can't find a way to access the actual reasoning content. For OpenAI models, I understand the SDK supports a reasoning configuration in ModelSettings that exposes reasoning content, but it appears that is only supported for o-series models.

Is there any way to access the reasoning content from non-OpenAI models through the Agents SDK, or is this currently only supported for OpenAI models?
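In the meantime, a possible workaround is to scan the raw provider response payload for reasoning text directly. This is a hypothetical sketch, not a documented Agents SDK interface: the key names ("reasoning", "reasoning_content", "thinking") are assumptions based on fields some providers use for thinking-mode output, and the payload shape is made up for illustration:

```python
# Key names under which some providers return reasoning text.
# These are assumptions, not part of any documented SDK contract.
REASONING_KEYS = ("reasoning", "reasoning_content", "thinking")

def find_reasoning(payload):
    """Recursively search a raw response payload (nested dicts/lists)
    and return the first string stored under a known reasoning key,
    or None if nothing matches."""
    if isinstance(payload, dict):
        for key in REASONING_KEYS:
            if isinstance(payload.get(key), str):
                return payload[key]
        for value in payload.values():
            found = find_reasoning(value)
            if found is not None:
                return found
    elif isinstance(payload, list):
        for item in payload:
            found = find_reasoning(item)
            if found is not None:
                return found
    return None

# Made-up example payload for illustration:
resp = {"choices": [{"message": {"content": "final answer",
                                 "reasoning_content": "chain of thought..."}}]}
print(find_reasoning(resp))  # chain of thought...
```

Whether this helps depends on whether the SDK exposes the raw provider response at all for non-OpenAI models, which is exactly the question here.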

@atahanozdemirberkeley atahanozdemirberkeley added the question Question about using the SDK label Apr 9, 2025
@paulsengh

Hey @rm-openai any thoughts here? Thank you!


This issue is stale because it has been open for 7 days with no activity.

@github-actions github-actions bot added the stale label Apr 18, 2025
