[Issue]: Are rate limiting parameters observed? #1500
Comments
As an update I migrated to 0.9.0 and still see lots of 429 errors but the retry wait time is no longer logged, so I'm not sure what the client is doing on a 429 error. I do see some examples of retry failures reaching max_retries and the exception is logged.
We believe this was a bug introduced during our adoption of fnllm as the underlying LLM library. We pushed out a 1.0.1 patch today; please let us know if your problem still exists with that version.
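For anyone hitting this before upgrading, the usual client-side mitigation is exponential backoff with jitter, capped at a maximum wait. A minimal sketch of that pattern (this is illustrative only, not fnllm's or GraphRAG's actual implementation; the RateLimitError class is a stand-in for the client's real 429 exception):

```python
import random
import time


class RateLimitError(Exception):
    """Stand-in for the client library's 429 exception type."""


def backoff_delay(attempt: int, base: float = 1.0, max_wait: float = 60.0) -> float:
    """Exponential backoff with full jitter, capped at max_wait seconds."""
    return random.uniform(0, min(max_wait, base * (2 ** attempt)))


def call_with_retries(fn, max_retries: int = 10, base: float = 1.0):
    """Call fn(), sleeping and retrying on RateLimitError up to max_retries times."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # exhausted retries, surface the 429 to the caller
            time.sleep(backoff_delay(attempt, base=base))
```

The cap here plays the role a max_retry_wait setting would: no single sleep exceeds it, regardless of the attempt count.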
This issue has been marked stale due to inactivity after repo maintainer or community member responses that request more information or suggest a solution. It will be closed after five additional days.
Do you need to file an issue?
Describe the issue
I am running 0.6.0 and encountering throttling issues with Azure OpenAI.
The rate limiter seems to default to 0 seconds and tries to parse the recommendation from the error message, but always returns 0:
I don't see anywhere that the max_retry_wait parameter is used, nor any logic implementing smart backoff. Looking at 0.9.0, it seems the rate-limiting LLM class was removed. How is the current system handling throttling?

Steps to reproduce
Run indexing on a throttled AOAI endpoint.
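To illustrate the parsing problem described above: Azure OpenAI 429 responses typically embed the suggested wait in the message text (e.g. "Please retry after 26 seconds"). A minimal sketch of extracting it, falling back to a nonzero default rather than 0 so a parse failure never becomes an immediate hot retry (the exact message wording and function name are assumptions, not taken from the GraphRAG source):

```python
import re

# Azure OpenAI 429 messages often read "... Please retry after 26 seconds."
# The exact wording is an assumption; adjust the pattern to the real payload.
_RETRY_AFTER_RE = re.compile(r"retry after (\d+) seconds?", re.IGNORECASE)


def parse_retry_after(message: str, default: float = 10.0) -> float:
    """Extract the recommended wait (in seconds) from a 429 error message.

    Returns `default` instead of 0 when no recommendation is found, so the
    caller always backs off by at least some amount.
    """
    match = _RETRY_AFTER_RE.search(message)
    return float(match.group(1)) if match else default
```

If the parser in the client always returns 0, the symptom would be exactly what the logs show: immediate retries that burn through max_retries against a still-throttled endpoint.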
GraphRAG Config Used
Logs and screenshots
log snippet (thousands of entries):
After too many retries:
Additional Information