Significant Performance Degradation with Frequent Recreation of httpx.Client Instances
#3251
deedy5 asked this question in Potential Issue
We've conducted extensive benchmark tests comparing the performance of several HTTP clients (`requests`, `httpx`, `tls_client`, `curl_cffi`, `pycurl`, and `primp`) under various conditions. Our goal was to evaluate each client's efficiency, focusing in particular on scenarios where sessions are reused versus recreated for every request.

During our tests, we noticed a significant performance degradation when using `httpx.Client`. Specifically, performance drops sharply when a new `httpx.Client` instance is created for each request within a short period. This contrasts with the other tested clients, which either maintain consistent performance or show better performance characteristics when session instances are reused.

To illustrate the issue, we ran a series of tests making 2000 requests to a local server with response sizes of 5KB, 50KB, and 200KB, measuring both wall clock time and CPU time for each client. The results consistently showed that `httpx.Client`, when instantiated anew for each request, took significantly longer to complete the same number of requests than the other clients.
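To make the setup concrete, here is a minimal sketch of the two access patterns compared; it is illustrative only, not our full benchmark harness. The URL and request count are hypothetical placeholders (scaled down from the 2000 requests used in the actual tests).

```python
# Minimal sketch of the two patterns benchmarked (illustrative, not the full harness).
# Assumes a local test server at URL serving a fixed-size payload.
import time

import httpx

URL = "http://127.0.0.1:8000/data"  # hypothetical local endpoint
N_REQUESTS = 200                    # scaled down from 2000 for illustration


def client_per_request() -> None:
    # Create and close a fresh httpx.Client for every request (the slow case observed).
    for _ in range(N_REQUESTS):
        with httpx.Client() as client:
            client.get(URL)


def reused_client() -> None:
    # Create the client once and reuse it for all requests.
    with httpx.Client() as client:
        for _ in range(N_REQUESTS):
            client.get(URL)


def measure(fn) -> None:
    # Record both wall clock time and CPU time, as in the tests described above.
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    fn()
    print(f"{fn.__name__}: wall={time.perf_counter() - wall_start:.2f}s, "
          f"cpu={time.process_time() - cpu_start:.2f}s")


if __name__ == "__main__":
    measure(client_per_request)
    measure(reused_client)
```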
Additional Resources:

We believe this observation could be valuable for the `httpx` community, especially for applications designed to handle a large number of HTTP requests efficiently. Understanding the trade-offs between session reuse and recreation is crucial for optimizing network-bound applications.

We kindly ask the `httpx` team to investigate this matter further and to consider any potential optimizations or best practices that could mitigate these performance issues when client instances are recreated frequently.

Thank you for your attention to this matter.
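As a closing note on the reuse-versus-recreation trade-off, one common workaround is to keep a single long-lived client for the lifetime of the application so its connection pool is reused. The sketch below is a generic pattern, not guidance taken from the `httpx` documentation; `BASE_URL` and `fetch` are illustrative names.

```python
# Sketch of sharing one long-lived httpx.Client across calls
# (illustrative pattern; BASE_URL and fetch are hypothetical names).
import httpx

BASE_URL = "http://127.0.0.1:8000"  # hypothetical service
_client = httpx.Client(base_url=BASE_URL)


def fetch(path: str) -> bytes:
    # Reuses the pooled connections held by the shared client
    # instead of paying client setup cost on every call.
    response = _client.get(path)
    response.raise_for_status()
    return response.content
```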