Issue: Increased CPU and Memory Overhead with Uvicorn Multi-Processing #2466

Open
2 tasks done
jahnavisana2812 opened this issue Sep 23, 2024 · 5 comments

@jahnavisana2812

Initial Checks

  • I confirm this was discussed, and the maintainers suggest I open an issue.
  • I'm aware that if I created this issue without a discussion, it may be closed without a response.

Discussion Link

https://github.com/encode/uvicorn/discussions/2463

Description

We were previously using Gunicorn with Uvicorn workers for our application. Recently, we decided to switch to running Uvicorn directly because of its recent improvements.

However, while Gunicorn used to fork 5 sub-processes to handle requests, Uvicorn now creates 5 worker processes with Python's multiprocessing library using the spawn start method. This results in noticeable CPU and memory overhead that we didn't experience with Gunicorn workers.

When running Uvicorn with a single worker, the overhead is not present, but increasing the worker count leads to the issue. I have attached screenshots of our monitoring data, which clearly show the difference in resource usage between configurations.

Is this behavior expected with Uvicorn, or is there a possible fix for reducing the overhead in future updates? Any guidance or suggestions would be appreciated.

With the default workers=1: [screenshot of monitoring data, 2024-09-20]
With workers=5: [screenshot of monitoring data, 2024-09-20]

Example Code

uvicorn app.main:app --host 0.0.0.0 --port 8000 --workers 5

Python, Uvicorn & OS Version

Running uvicorn 0.30.6 with CPython 3.11.9 on Ubuntu
@abersheeran
Member

The difference is that Gunicorn uses prefork to create its worker processes, while Uvicorn's multi-process mode uses the more general multiprocessing spawn method for compatibility with Windows systems.

If you are running on a server with limited resources, using Gunicorn's prefork mode would be a better choice.
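
For example, keeping the prefork model while still serving the same ASGI app could look roughly like this (same app path, port, and worker count as in the example code above; uvicorn.workers.UvicornWorker is the worker class uvicorn provides for gunicorn):

gunicorn app.main:app -k uvicorn.workers.UvicornWorker --workers 5 --bind 0.0.0.0:8000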

@Kludex
Member

Kludex commented Sep 28, 2024

But shouldn't it be constant at some point anyway?

@abersheeran
Member

abersheeran commented Sep 28, 2024

I'm not sure whether the growth in the graphs is caused by requests, but running five processes without sending them any requests shouldn't cause a continuous increase in CPU usage.

@jahnavisana2812
Author

jahnavisana2812 commented Sep 30, 2024

It seems like fluctuations in the request rate are spiking CPU usage. Here's a comparison between two of my test environments: one with a single worker and another with five workers. The five-worker environment shows many ups and downs in CPU usage, whereas the single-worker one is more stable. However, I can't limit my workers to just one. Could you suggest a solution for handling this better?

Workers=1: [screenshot of CPU usage, 2024-10-01]

Workers=5: [screenshot of CPU usage, 2024-10-01]

@lfloeer

lfloeer commented Nov 20, 2024

I just walked into this thread by accident, but I have a suspicion about why you are seeing these graphs: I presume you are using Prometheus to collect the CPU metrics, right? CPU in Prometheus is usually exposed as a counter that only ever increases, and you use functions like increase in your query to get the actual CPU utilization over a given time frame. When you start five processes, you typically get (as is usual for Prometheus) one CPU counter per process, so what the lower graph is showing is five separate counters at different counting stages rather than higher utilization. Summing the individual counters and then applying increase should give you a graph comparable to the first one. Hope this helps (and is actually what's happening).
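
As a rough sketch, assuming the CPU metric is the standard process_cpu_seconds_total counter that the Python Prometheus client exposes per process and the scrape job is labelled app (both assumptions), the aggregated query could look something like:

sum(increase(process_cpu_seconds_total{job="app"}[5m]))

This sums the per-process counters first and then takes the increase over the window, so the result should be comparable between the one-worker and five-worker setups.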
