
align reasoning model params with OpenAI spec & add 'minimal' effort support #652

Open
RaajeevChandran wants to merge 1 commit into osaurus-ai:main from RaajeevChandran:bugfix/openai-reasoning-params

Conversation

@RaajeevChandran
Contributor

Summary

Fixes #649: resolves the HTTP 400 error when using OpenAI's reasoning models (o1 and gpt-5-nano) by aligning the request payload with the OpenAI API specification, as described on the official OpenAI community forum.

Changes

  • Implemented the new reasoning_effort parameter for the openai.com host (backward compatibility with other providers is preserved)
  • Intentionally omit temperature and top_p from the request when an OpenAI reasoning model is detected, since OpenAI rejects these parameters when reasoning features are active.
  • Added a new "minimal" effort level to the OpenAIReasoningProfile for reasoning models.

  • Behavior change

  • UI change (screenshots below)

  • Refactor / chore

  • Tests

  • Docs

Test Plan

  • Connect to the OpenAI provider and verify that models like o1-preview or gpt-5-nano no longer return "Unknown parameter" errors.
  • Verify that non-OpenAI providers still receive the legacy reasoning object.
  • Open the model settings for a reasoning model and verify that "minimal" is now a selectable option in the segmented control.

Checklist

  • I have read CONTRIBUTING.md
  • I added/updated tests where reasonable
  • I updated docs/README as needed
  • I verified build on macOS with Xcode 16.4+



Development

Successfully merging this pull request may close these issues.

OpenAI models error
