Enhance openai-agents with Model Improvements, Retry Logic, and Caching #450
+414 −18
Pull Request Summary

Overall Summary

This PR enhances the `openai-agents` package by improving its model interaction capabilities. Key improvements include:

- A new `__init__.py` organizes the `models` module, making it easier to use and extend.
- `_openai_shared.py` standardizes client creation and adds error logging.
- `ModelRetrySettings` adds resilience to API failures with configurable retries and backoff.
- `openai_provider.py` gains better default model handling and error management.
- Caching and helper functions in `utils.py` improve performance and reliability.
- The package version is bumped to `0.0.8`, marking a significant feature release.

These changes collectively make the package more robust, maintainable, and efficient for interacting with OpenAI models.
File-by-File Changes
1. `src/agents/models/__init__.py` (New File)
- A new `__init__.py` organizes the `models` module, providing a clear entry point for model-related functionality.
- Imports from `_openai_shared`, `interface`, `openai_chatcompletions`, `openai_provider`, `openai_responses`, and `utils`.
- Defines `__all__`, including:
  - `Model`, `ModelProvider`, `ModelRetrySettings`, `ModelTracing`
  - `TOpenAIClient`, `create_client`, etc.
  - `OpenAIChatCompletionsModel`, `OpenAIProvider`, `OpenAIResponsesModel`
  - `cache_model_response`, `get_token_count_estimate`, etc.
2. `src/agents/models/_openai_shared.py`
- Adds `logging` and type aliases `TOpenAIClient = AsyncOpenAI` and `TOpenAIClientOptions = dict[str, Any]` for better type hinting.
- Adds a module-level `_logger` for error logging.
- Keeps the existing helpers `set_default_openai_key`, `get_default_openai_key`, `set_default_openai_client`, `get_default_openai_client`, `set_use_responses_by_default`, and `get_use_responses_by_default` for clarity.
- A new `create_client` standardizes OpenAI client creation with optional parameters (`api_key`, `base_url`, etc.).
3. `src/agents/models/interface.py`
- Imports `asyncio`, plus `dataclass` and `field` from `dataclasses`, and `Any` and `Callable` from `typing`.
- New `ModelRetrySettings` class with fields `max_retries` (default 3), `initial_backoff_seconds` (1.0), `max_backoff_seconds` (30.0), `backoff_multiplier` (2.0), and `retryable_status_codes` (e.g., 429, 500).
- `execute_with_retry`: implements exponential-backoff retry logic for async operations.
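Given the fields and defaults listed above, `execute_with_retry` plausibly looks something like this sketch. The mapping from exceptions to HTTP status codes (via a `status_code` attribute), the logging, and the extra codes in the default list are assumptions, not taken from the diff:

```python
import asyncio
import logging
from dataclasses import dataclass, field
from typing import Any, Awaitable, Callable

logger = logging.getLogger(__name__)

@dataclass
class ModelRetrySettings:
    """Retry configuration with exponential backoff (fields and defaults from the PR summary)."""
    max_retries: int = 3
    initial_backoff_seconds: float = 1.0
    max_backoff_seconds: float = 30.0
    backoff_multiplier: float = 2.0
    # Codes listed in the summary; the real default may include more (e.g. 502, 503).
    retryable_status_codes: list[int] = field(default_factory=lambda: [429, 500])

    async def execute_with_retry(self, operation: Callable[[], Awaitable[Any]]) -> Any:
        """Run an async operation, retrying retryable failures with exponential backoff."""
        backoff = self.initial_backoff_seconds
        for attempt in range(self.max_retries + 1):
            try:
                return await operation()
            except Exception as exc:
                # Assumption: API errors expose the HTTP status as `status_code`.
                status = getattr(exc, "status_code", None)
                if status not in self.retryable_status_codes or attempt == self.max_retries:
                    raise
                logger.warning("Retryable error (status %s) on attempt %d: %s",
                               status, attempt + 1, exc)
                await asyncio.sleep(backoff)
                backoff = min(backoff * self.backoff_multiplier, self.max_backoff_seconds)
        raise RuntimeError("unreachable")  # loop always returns or raises
```

Capping the delay at `max_backoff_seconds` keeps the worst-case wait bounded even with a large `max_retries`.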
4. `src/agents/models/openai_provider.py`
- Imports `logging` and `_logger`, plus `TOpenAIClient` and `create_client` from `_openai_shared`.
- Sets `DEFAULT_MODEL` to `"gpt-4o"`.
- Adds a `default_model` parameter (defaults to `DEFAULT_MODEL`).
- `_client` is typed as `TOpenAIClient | None`.
- `_get_client` now uses `create_client` if no default client exists, with error logging for failures.
- `get_model` uses `default_model` if no `model_name` is provided, returning an appropriate OpenAI model implementation.
5. `src/agents/models/utils.py` (New File)
- Caching helpers: `set_cache_ttl`, `clear_cache`, `compute_cache_key`, and a `cache_model_response` decorator.
- `get_token_count_estimate`: estimates token count (approx. 4 characters per token).
- `validate_response`: checks whether a response is a valid `ChatCompletion` or `Response` object.
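A plausible shape for the caching and token-estimate helpers, written synchronously for brevity; the real `cache_model_response` may be async, and the default TTL and key scheme here are assumptions:

```python
import hashlib
import json
import time
from functools import wraps
from typing import Any, Callable

_cache: dict[str, tuple[float, Any]] = {}
_cache_ttl_seconds = 300.0  # default TTL is an assumption

def set_cache_ttl(seconds: float) -> None:
    global _cache_ttl_seconds
    _cache_ttl_seconds = seconds

def clear_cache() -> None:
    _cache.clear()

def compute_cache_key(*args: Any, **kwargs: Any) -> str:
    # Stable hash of the call arguments; assumes they serialize via json (default=str as fallback).
    payload = json.dumps([args, kwargs], sort_keys=True, default=str)
    return hashlib.sha256(payload.encode()).hexdigest()

def cache_model_response(func: Callable[..., Any]) -> Callable[..., Any]:
    """Memoize a model call, expiring entries after the configured TTL."""
    @wraps(func)
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        key = compute_cache_key(func.__name__, *args, **kwargs)
        hit = _cache.get(key)
        if hit is not None and time.monotonic() - hit[0] < _cache_ttl_seconds:
            return hit[1]  # fresh cache hit
        result = func(*args, **kwargs)
        _cache[key] = (time.monotonic(), result)
        return result
    return wrapper

def get_token_count_estimate(text: str) -> int:
    # Rough heuristic from the PR summary: about 4 characters per token.
    # Returning at least 1 for non-empty-ish input is an assumption.
    return max(1, len(text) // 4)
```

Keying on the function name plus arguments lets one shared cache serve multiple decorated model calls without collisions.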
6. `uv.lock`
- Bumps the `openai-agents` version from `0.0.7` to `0.0.8`.