
Conversation


@preritdas preritdas commented Nov 24, 2025

With the first redesign of the client request system, an access token could expire while the client was backing off from the provider's API, because the token wasn't being refreshed between failed attempts: after several failures, the next request could go out with a stale token.

This redesigns it with a separation of concerns: _request owns the retry logic, while __request handles a single attempt, including the access token refresh. _request calls __request after each backoff, so the token is updated if needed on every attempt.
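Roughly, the new shape is the following (a minimal sketch only; the helper name _refresh_access_token_if_needed, the attempt count, and the backoff window are placeholders rather than the exact code in real_intent/client.py):

```python
import random
import time

import requests


class Client:
    def _refresh_access_token_if_needed(self) -> None:
        """Stub: the real client re-authenticates here when the token is stale."""

    def __request(self, method: str, url: str, **kwargs) -> dict:
        """Single attempt: refresh the token if needed, send, parse."""
        self._refresh_access_token_if_needed()
        response = requests.request(method, url, **kwargs)
        response.raise_for_status()
        return response.json()

    def _request(self, method: str, url: str, **kwargs) -> dict:
        """Retry wrapper: re-runs __request after each backoff sleep."""
        last_error = None
        max_attempts = 3
        for attempt in range(1, max_attempts + 1):
            try:
                return self.__request(method, url, **kwargs)
            except requests.RequestException as error:
                last_error = error
                if attempt < max_attempts:
                    time.sleep(random.uniform(30, 100))  # jittered backoff
        raise last_error
```

Because __request runs at the start of every attempt, a token that expires during a long sleep is replaced before the next request goes out.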

Summary by CodeRabbit

  • Refactor
    • Simplified request flow and centralized retry/backoff handling for more reliable network requests.
    • Improved logging and diagnostic correlation across retry attempts to make failures easier to trace.
    • Final failures now surface the original error after exhausting retries, providing clearer error visibility for callers.



coderabbitai bot commented Nov 24, 2025

Pre-merge checks

❌ Failed checks (1 inconclusive)
  • Title check (❓ Inconclusive): The title is vague and does not clearly convey the main change. It mentions 'improve design' and 'access token update through refresh,' but doesn't specifically highlight the core fix: separating retry/backoff logic from access token refresh to prevent token expiration during backoff. Consider a more specific title like 'Separate retry logic from token refresh to prevent expiration during backoff' or 'Fix token expiration by refreshing on each backoff attempt'.
✅ Passed checks (2 passed)
  • Description Check (✅ Passed): Check skipped - CodeRabbit’s high-level summary is enabled.
  • Docstring Coverage (✅ Passed): Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a233cbb and 7b2abe2.

📒 Files selected for processing (1)
  • real_intent/client.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
real_intent/client.py (1)
real_intent/internal_logging.py (1)
  • log (16-17)
🪛 Ruff (0.14.5)
real_intent/client.py

155-155: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: CodeQL analysis (python)
  • GitHub Check: Agent
  • GitHub Check: test
  • GitHub Check: test
🔇 Additional comments (3)
real_intent/client.py (3)

130-138: LGTM! Clean separation of single-request logic.

The refactor correctly isolates the single-pass request logic (token refresh + send + parse), and since _request now calls this on each retry attempt, the access token will be refreshed as needed even during backoff periods. This directly addresses the PR objective.
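For illustration, the refresh-if-needed step can be as simple as an expiry check at the top of __request (the function name and safety margin below are invented for the example, not taken from client.py):

```python
import time


def token_needs_refresh(expires_at: float, margin: float = 60.0) -> bool:
    """True when the access token is expired or will expire within `margin` seconds."""
    return time.time() >= expires_at - margin
```

With a check like this running on every attempt, a request made after a long backoff sleep picks up a fresh token instead of sending a stale one.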


140-177: Retry logic is correct and addresses the PR objective.

The control flow properly handles both success (break → log → return) and failure (else → log error → raise) paths. By calling __request on each attempt, the access token is refreshed as needed even during backoff, which solves the token expiration issue mentioned in the PR objectives.
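The break/else shape described above is Python's for-else idiom; a self-contained sketch (placeholder names, not the actual method) looks like this:

```python
import random
import time

import requests


def request_with_retries(send_once, max_retries: int = 3):
    """Retry `send_once`, breaking on success and raising the last error otherwise."""
    last_error = None
    for attempt in range(1, max_retries + 1):
        try:
            result = send_once()
            break  # success: skip the else branch below
        except requests.RequestException as error:
            last_error = error
            if attempt < max_retries:
                time.sleep(random.uniform(30, 100))  # jittered backoff
    else:
        # Reached only if the loop never hit `break`, i.e. every attempt failed:
        # surface the original error to the caller.
        raise last_error
    return result
```

Here `send_once` stands in for __request, so each retry re-runs the token refresh as well.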


154-157: Static analysis hint is a false positive.

The use of random.uniform for backoff jitter is appropriate here—it's not for cryptographic purposes, so the S311 warning can be safely ignored.
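If the warning is noisy in CI, an inline suppression keeps the intent explicit (sketch only; the 30-100 s window is illustrative):

```python
import random

# Jitter spreads retries out in time so concurrent clients don't retry in
# lockstep; it has no security impact, so the standard PRNG is appropriate.
delay = random.uniform(30, 100)  # noqa: S311
```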


Copilot AI left a comment


Pull request overview

This PR refactors the client request system to ensure access tokens are refreshed during retry backoffs, preventing token expiration when multiple requests fail consecutively. The redesign separates concerns by moving retry logic from __request to _request, allowing token validation and refresh to occur on every retry attempt.

Key changes:

  • Moved retry loop with exponential backoff from __request to _request
  • Simplified __request to handle single request attempts with automatic token refresh
  • _request now calls __request on each retry, ensuring tokens are refreshed as needed during long backoff periods
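
To make the failure mode concrete, illustrative arithmetic only (the token lifetime is an assumption, not a documented provider value):

```python
token_lifetime = 60           # seconds the access token stays valid (assumed)
backoff_sleeps = [45, 80]     # two jittered sleeps drawn from a 30-100 s window
waited = sum(backoff_sleeps)  # 125 s spent backing off between attempts

# Old flow: the token fetched before attempt 1 is stale by the final attempt.
assert waited > token_lifetime
# New flow: __request re-checks (and refreshes) the token at the start of
# every attempt, so the post-backoff request carries a valid token.
```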



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
real_intent/client.py (2)

130-134: Consider reusing a Session for connection pooling.

Creating a new Session for each request works but doesn't benefit from connection pooling. For better performance, consider creating a session in __init__ and reusing it. However, this would require thread-local storage or careful lock management for thread safety, so the current approach is acceptable as a simpler design.
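If session reuse is revisited later, a per-thread session sidesteps the locking concern (sketch only, not part of this PR):

```python
import threading

import requests


class PooledSessionMixin:
    """Lazily creates one requests.Session per thread for connection pooling."""

    def __init__(self) -> None:
        self._local = threading.local()

    @property
    def _session(self) -> requests.Session:
        if not hasattr(self._local, "session"):
            self._local.session = requests.Session()
        return self._local.session
```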


137-137: Trace logging of raw response may be redundant.

A past review flagged duplicate "Received response" logging at lines 137 and 176. While I don't see logging at line 176 in the current code, the trace log at line 137 logs the raw Response object. Consider whether this trace log provides value, or if it could be removed or made more specific (e.g., logging just the status code and URL).
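A more targeted trace line might look like the following (the log() signature is assumed here; adjust to whatever real_intent.internal_logging actually exposes):

```python
import requests

from real_intent.internal_logging import log  # signature assumed: log(level, message)


def log_response_summary(response: requests.Response) -> None:
    """Log the useful fields instead of the raw Response repr."""
    log("trace", f"Received {response.status_code} from {response.request.method} {response.url}")
```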

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7b2abe2 and 34907c0.

📒 Files selected for processing (1)
  • real_intent/client.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
real_intent/client.py (1)
real_intent/internal_logging.py (1)
  • log (16-17)
🪛 Ruff (0.14.5)
real_intent/client.py

154-154: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: test
🔇 Additional comments (3)
real_intent/client.py (3)

154-154: Static analysis false positive (S311) - no action needed.

The Ruff warning about using standard random for cryptographic purposes is a false positive. The random.uniform call here generates jitter for backoff timing, which doesn't require cryptographic randomness.


140-173: Well-designed separation of concerns.

The refactoring successfully addresses the PR objective by ensuring that _request handles retry/backoff while __request handles token refresh. Each retry attempt calls __request, so if the access token expires during the backoff sleep period, it will be refreshed before the next attempt.


153-156: Verify the 30+ second backoff against BigDBM API specifications and add clarifying comments.

The backoff strategy (30-100s for attempts 1-3) is undocumented in the codebase. Since __request calls raise_for_status(), this backoff applies to all request failures including HTTP 429 errors. While this may be intentionally conservative for a rate-limited API, without access to BigDBM's documented rate limit policies and expected failure recovery times, alignment cannot be confirmed. Recommend:

  • Consult BigDBM API documentation for rate limit windows and recommended backoff behavior
  • Add an explanatory comment if this strategy is intentional (e.g., "BigDBM API requires 30+ second recovery")
  • Consider configuring these values as parameters if they may vary by deployment (see the sketch after this list)
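
A possible shape for that parameterization (names and defaults are illustrative; the 30-100 s defaults mirror the current behaviour):

```python
import random


class RetryConfig:
    """Illustrative container for retry/backoff settings."""

    def __init__(
        self,
        max_retries: int = 3,
        backoff_min: float = 30.0,   # seconds, mirrors the current lower bound
        backoff_max: float = 100.0,  # seconds, mirrors the current upper bound
    ) -> None:
        self.max_retries = max_retries
        self.backoff_min = backoff_min
        self.backoff_max = backoff_max

    def next_delay(self) -> float:
        """Pick a jittered delay within the configured window."""
        return random.uniform(self.backoff_min, self.backoff_max)  # noqa: S311
```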

@preritdas preritdas merged commit ed71898 into master Nov 24, 2025
2 checks passed
@preritdas preritdas deleted the better-request-abstraction branch November 24, 2025 01:21