
Conversation

@preritdas preritdas commented Nov 23, 2025

Instead of a hardcoded double backoff with a random (7, 13) second delay, create a loop-based exponential backoff controlled by a new instantiation parameter, max_request_attempts, defaulting to 3. Each failed request backs off by 30 * n_attempt seconds, which is more forgiving to the target API.

Unfortunately, the double backoff caused system-wide failures a few times due to a brittle provider API. This should be much more robust.
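The retry shape described above can be sketched as a generic helper (the function name, callable-based design, and injected `sleep` are illustrative only, not the actual client code):

```python
import time


def call_with_backoff(fn, max_request_attempts: int = 3,
                      base_delay: float = 30.0, sleep=time.sleep):
    """Retry fn() up to max_request_attempts times, sleeping
    base_delay * n_attempt seconds after each failure (30s, 60s, ...)."""
    last_exception = None
    for n_attempt in range(1, max_request_attempts + 1):
        try:
            return fn()
        except Exception as e:
            last_exception = e
            if n_attempt < max_request_attempts:
                sleep(base_delay * n_attempt)
    raise last_exception
```

Injecting `sleep` keeps the schedule testable without actually waiting.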

Summary by CodeRabbit

  • New Features

    • Added configurable per-request timeout (default 30s) and configurable max retry attempts (default 3) to improve connection resilience.
    • Implemented automatic retry with backoff for transient failures, including per-attempt reporting and final failure handling.
    • Input validation ensures retry attempts is at least 1.
  • Documentation

    • Updated initialization docs to describe new parameters and retry behavior.



coderabbitai bot commented Nov 23, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

The BigDBMClient constructor now accepts request_timeout_seconds and max_request_attempts. The __request method was refactored to perform per-request retries (up to max_request_attempts) using request_timeout_seconds with logging, exponential backoff between attempts, and raising the final exception on persistent failure.

Changes

Cohort / File(s) Summary
Retry Logic & Configurable Timeouts
real_intent/client.py
BigDBMClient.__init__ adds request_timeout_seconds: int = 30 and max_request_attempts: int = 3 (validates max_request_attempts >= 1). Replaced fixed timeout with self.request_timeout_seconds and added self.max_request_attempts. __request refactored to loop up to max_request_attempts, use request_timeout_seconds per attempt, log per-attempt warnings, apply exponential backoff on transient RequestExceptions, and log/error-raise the last exception if all attempts fail. Minor docstring edits.

Sequence Diagram(s)

sequenceDiagram
    actor Caller
    participant Client as BigDBMClient
    participant HTTP as HTTP Layer

    Caller->>Client: __request(...)
    
    rect rgb(220,235,255)
    Note over Client: Retry loop (up to max_request_attempts)
    loop Each Attempt
        Client->>HTTP: Send request (timeout: request_timeout_seconds)
        alt Success
            HTTP-->>Client: Response
            Client-->>Caller: Return response
        else RequestException
            HTTP-->>Client: Exception
            Client->>Client: Log warning (attempt N)
            Client->>Client: Sleep (exponential backoff)
        end
    end
    end

    rect rgb(255,220,220)
    Note over Client: All attempts failed
    Client->>Client: Log error (final)
    Client-->>Caller: Raise last exception
    end

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

  • Verify backoff calculation and sleep timing in __request.
  • Confirm correct capture and re-raising of the final exception.
  • Check validation in __init__ for max_request_attempts and default values.

Poem

🐰 I hop and try, timeout set just right,
Three chances given in the fading light.
If packets tumble, I’ll retry anew,
Logging each hop until success rings true.
Small hops, stout heart — resilient through.

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title 'Flexible dynamic exponential backoff of requests' directly describes the main change: replacing hardcoded backoff delays with a loop-based exponential backoff mechanism controlled by configurable parameters.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8c61552 and 96bd245.

📒 Files selected for processing (1)
  • real_intent/client.py (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
real_intent/client.py (1)
real_intent/internal_logging.py (1)
  • log (16-17)
🪛 Ruff (0.14.5)
real_intent/client.py

135-135: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: test
  • GitHub Check: test
🔇 Additional comments (1)
real_intent/client.py (1)

34-46: LGTM! Constructor parameters are well-designed.

The addition of configurable request_timeout_seconds and max_request_attempts parameters provides the intended flexibility for exponential backoff behavior.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 96bd245 and 772ab04.

📒 Files selected for processing (1)
  • real_intent/client.py (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
real_intent/client.py (1)
real_intent/internal_logging.py (1)
  • log (16-17)
🪛 Ruff (0.14.5)
real_intent/client.py

126-126: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)


142-142: Undefined name e

(F821)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: test
🔇 Additional comments (2)
real_intent/client.py (2)

34-40: LGTM!

The new parameters are well-named, properly typed, and have sensible defaults. The signature change maintains backward compatibility for existing code using positional arguments.


45-46: LGTM!

The attributes are properly initialized from the constructor parameters, and the rename from timeout_seconds to request_timeout_seconds improves clarity.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
real_intent/client.py (1)

113-148: Critical: Exception scoping bugs still present.

The retry loop still has the critical exception handling bugs flagged in previous reviews:

  1. Line 145: e is undefined - In Python 3, exception variables are automatically deleted at the end of the except block (PEP 3110). The else block cannot access e, causing a NameError.

  2. Line 148: Bare raise fails - A bare raise statement only works inside an exception handler to re-raise the active exception. In the for...else block, there's no active exception, causing a RuntimeError.

These issues will cause the code to crash when all retry attempts are exhausted.

Apply this diff to fix the exception handling:

+        last_exception: Exception | None = None
         for n_attempt in range(1, self.max_request_attempts+1):
             try:
                 with Session() as session:
                     response = session.send(
                         request.prepare(), 
                         timeout=self.request_timeout_seconds
                     )
 
                 response.raise_for_status()
                 break
             except RequestException as e:
+                last_exception = e
+                
                 if n_attempt == self.max_request_attempts:
-                    continue  # on last failed attempt, skip to else block
+                    break  # Don't sleep after final failure
 
                 # Log it, sleep, loop again to try again
                 sleep_time = round(
                     number=random.uniform(30 * n_attempt, (30 * n_attempt) + 10),
                     ndigits=2
                 )
                 log(
                     "warn", 
                     (
                         f"Request attempt {n_attempt} of {self.max_request_attempts} "
                         f"failed. Retrying in {sleep_time}s. Error: {e}"
                     )
                 )
                 time.sleep(sleep_time)
         else:
             log(
                 "error", 
                 (
                     f"Request failed after {self.max_request_attempts} "
-                    f"attempts. Error: {e}"
+                    f"attempts. Error: {last_exception}"
                 )
             )
-            raise
+            raise last_exception
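The two failure modes flagged above are easy to reproduce in isolation (a standalone snippet, unrelated to the client code):

```python
def except_target_survives() -> bool:
    # PEP 3110: the except target is unbound when the handler exits.
    try:
        raise ValueError("boom")
    except ValueError as e:
        pass
    try:
        e  # noqa: F821 - raises UnboundLocalError (a NameError subclass)
        return True
    except NameError:
        return False


def bare_raise_works_outside_handler() -> bool:
    # A bare `raise` needs an active exception; otherwise RuntimeError.
    try:
        raise
    except RuntimeError:
        return False


print(except_target_survives())            # False
print(bare_raise_works_outside_handler())  # False
```

Both print False, confirming that the `else` block can neither read `e` nor bare-`raise`.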
🧹 Nitpick comments (1)
real_intent/client.py (1)

34-46: Consider adding parameter validation.

The new parameters lack validation. Invalid values like max_request_attempts < 1 or request_timeout_seconds <= 0 could cause confusing runtime behavior.

Apply this diff to add validation:

     def __init__(
             self, 
             client_id: str, 
             client_secret: str, 
             request_timeout_seconds: int = 30,
             max_request_attempts: int = 3
         ) -> None:
         """Initialize the BigDBM client."""
+        if max_request_attempts < 1:
+            raise ValueError("max_request_attempts must be at least 1")
+        if request_timeout_seconds <= 0:
+            raise ValueError("request_timeout_seconds must be positive")
+        
         self.client_id: str = client_id
         self.client_secret: str = client_secret
 
         self.request_timeout_seconds: int = request_timeout_seconds
         self.max_request_attempts: int = max_request_attempts
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 772ab04 and 97e2461.

📒 Files selected for processing (1)
  • real_intent/client.py (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
real_intent/client.py (1)
real_intent/internal_logging.py (1)
  • log (16-17)
🪛 Ruff (0.14.5)
real_intent/client.py

129-129: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)


145-145: Undefined name e

(F821)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: test
🔇 Additional comments (1)
real_intent/client.py (1)

88-88: LGTM!

The updated docstring accurately reflects the behavior of the method.


Copilot AI left a comment


Pull request overview

This PR replaces hardcoded retry logic with a configurable retry mechanism for API requests. The changes make the client more resilient to transient API failures by allowing customization of timeout and retry attempts, though the implementation uses linear backoff rather than true exponential backoff as described.

Key Changes

  • Added configurable request_timeout_seconds (default: 30s) and max_request_attempts (default: 3) parameters to BigDBMClient.__init__
  • Replaced one-time retry with sleep of 7-13 seconds with a loop-based retry mechanism
  • Implemented linear backoff with delays of approximately 30s, 60s, 90s for successive attempts (not exponential as stated in description)
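The linear-versus-exponential distinction is easy to see by listing the two delay schedules (base of 30s, jitter omitted; helper names are illustrative):

```python
def linear_delays(max_attempts: int, base: int = 30) -> list[int]:
    # Delay after attempt n is base * n, as implemented in the PR.
    return [base * n for n in range(1, max_attempts)]


def exponential_delays(max_attempts: int, base: int = 30) -> list[int]:
    # Delay after attempt n is base * 2**(n - 1).
    return [base * 2 ** (n - 1) for n in range(1, max_attempts)]


print(linear_delays(4))       # [30, 60, 90]
print(exponential_delays(4))  # [30, 60, 120]
```

Note the two schedules only diverge from the third delay onward, so with the default of 3 attempts the observable behavior is identical either way.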



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
real_intent/client.py (1)

85-90: Docstring should mention retry/backoff behavior.

__request now performs multiple attempts with backoff, not just a single call that “raises for status automatically.” Consider updating the docstring to describe that it may retry up to max_request_attempts with incremental delays before propagating a RequestException, so callers understand the potential latency and failure mode.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 97e2461 and 09d20a0.

📒 Files selected for processing (1)
  • real_intent/client.py (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
real_intent/client.py (1)
real_intent/internal_logging.py (1)
  • log (16-17)
🪛 Ruff (0.14.5)
real_intent/client.py

131-131: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: test

preritdas and others added 2 commits November 23, 2025 17:10

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (2)
real_intent/client.py (2)

146-146: Clarify: the backoff is linear, not exponential.

The PR description claims "exponential backoff," but the formula 30 * n_attempt produces linear delays: approximately 30-40s, 60-70s, 90-100s for attempts 1, 2, 3. If true exponential backoff is intended, use 30 * (2 ** (n_attempt - 1)) for delays of approximately 30-40s, 60-70s, 120-130s. Otherwise, update the PR description to accurately describe this as "linear backoff."

Additionally, silence the S311 static analysis warning since random.uniform is appropriate for non-cryptographic jitter.

Option 1: Keep linear backoff and silence S311 (if current behavior is acceptable):

                 sleep_time = round(
-                    number=random.uniform(30 * n_attempt, (30 * n_attempt) + 10),
+                    number=random.uniform(30 * n_attempt, (30 * n_attempt) + 10),  # noqa: S311
                     ndigits=2
                 )

Option 2: Implement true exponential backoff:

                 sleep_time = round(
-                    number=random.uniform(30 * n_attempt, (30 * n_attempt) + 10),
+                    number=random.uniform(30 * (2 ** (n_attempt - 1)), (30 * (2 ** (n_attempt - 1))) + 10),  # noqa: S311
                     ndigits=2
                 )

38-38: Add validation for request_timeout_seconds to prevent immediate timeout failures.

The parameter accepts any integer value, including 0 or negative numbers, which would cause immediate timeout failures or undefined behavior in the requests library.

Apply this diff to add validation:

         """
         if max_request_attempts < 1:
             raise ValueError("max_request_attempts must be at least 1")
+        if request_timeout_seconds <= 0:
+            raise ValueError("request_timeout_seconds must be positive")
+        
         self.client_id: str = client_id
         self.client_secret: str = client_secret
🧹 Nitpick comments (2)
real_intent/client.py (2)

141-142: Consider removing the unnecessary continue statement.

The continue statement when on the last attempt is unnecessary and could be confusing. The loop would naturally end anyway, and the else block would execute. A clearer pattern is to check if n_attempt < self.max_request_attempts: before the sleep logic.

Apply this diff for clearer intent:

             except RequestException as e:
                 last_exception: RequestException = e
                 
-                if n_attempt == self.max_request_attempts:
-                    continue  # on last failed attempt, skip to else block
-                
-                # Log it, sleep, loop again to try again
-                sleep_time = round(
-                    number=random.uniform(30 * n_attempt, (30 * n_attempt) + 10),
-                    ndigits=2
-                )
-                log(
-                    "warn", 
-                    (
-                        f"Request attempt {n_attempt} of {self.max_request_attempts} "
-                        f"failed. Retrying in {sleep_time}s. Error: {e}"
+                if n_attempt < self.max_request_attempts:
+                    # Log it, sleep, loop again to try again
+                    sleep_time = round(
+                        number=random.uniform(30 * n_attempt, (30 * n_attempt) + 10),
+                        ndigits=2
                     )
-                )
-                time.sleep(sleep_time)
+                    log(
+                        "warn", 
+                        (
+                            f"Request attempt {n_attempt} of {self.max_request_attempts} "
+                            f"failed. Retrying in {sleep_time}s. Error: {e}"
+                        )
+                    )
+                    time.sleep(sleep_time)

145-147: Consider extracting magic numbers as constants.

The hardcoded values 30 (base delay in seconds) and 10 (jitter in seconds) make the backoff calculation less maintainable. Consider extracting them as class-level constants like BASE_RETRY_DELAY_SECONDS = 30 and RETRY_JITTER_SECONDS = 10, or making them configurable via __init__ parameters.
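That refactor could look roughly like this (the class and constant names follow the reviewer's suggestion and are hypothetical, not code in the PR):

```python
import random


class RetryTiming:
    # Hypothetical constants replacing the inline 30 and 10.
    BASE_RETRY_DELAY_SECONDS: int = 30
    RETRY_JITTER_SECONDS: int = 10

    @classmethod
    def sleep_time(cls, n_attempt: int) -> float:
        # Linear base with uniform jitter, rounded to 2 decimals.
        base = cls.BASE_RETRY_DELAY_SECONDS * n_attempt
        return round(
            random.uniform(base, base + cls.RETRY_JITTER_SECONDS),  # noqa: S311
            ndigits=2,
        )
```

Named constants make the backoff policy self-documenting and give a single place to tune it later.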

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 09d20a0 and a716bcc.

📒 Files selected for processing (1)
  • real_intent/client.py (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
real_intent/client.py (1)
real_intent/internal_logging.py (1)
  • log (16-17)
🪛 Ruff (0.14.5)
real_intent/client.py

56-56: Avoid specifying long messages outside the exception class

(TRY003)


146-146: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: test


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
real_intent/client.py (2)

34-63: Good additions: validation and documentation.

The constructor improvements are solid:

  • Parameter validation for max_request_attempts >= 1 prevents the UnboundLocalError identified in previous reviews.
  • The docstring clearly documents both new parameters and their defaults.

Based on previous review feedback from Copilot, consider adding validation for request_timeout_seconds to ensure it's positive:

         """
         # Validate input
+        if request_timeout_seconds <= 0:
+            raise ValueError("request_timeout_seconds must be positive")
         if max_request_attempts < 1:
             raise ValueError("max_request_attempts must be at least 1")

130-167: Retry logic is now correct; consider optional cleanup.

The retry implementation properly addresses the critical bugs from earlier reviews:

  • Success path breaks out of the loop immediately.
  • Exception scoping uses last_exception variable correctly.
  • The for-else pattern ensures final failures are logged and raised.

A few optional refinements remain from previous feedback:

  1. Line 144: Unnecessary continue – As Copilot noted, the continue on the last attempt doesn't add clarity. Consider restructuring:
             except RequestException as e:
                 last_exception: RequestException = e
-
-                if n_attempt == self.max_request_attempts:
-                    continue  # on last failed attempt, skip to else block
-
-                # Log it, sleep, loop again to try again
-                sleep_time = round(
-                    number=random.uniform(30 * n_attempt, (30 * n_attempt) + 10),
-                    ndigits=2
-                )
-                log(
-                    "warn", 
-                    (
-                        f"Request attempt {n_attempt} of {self.max_request_attempts} "
-                        f"failed. Retrying in {sleep_time}s. Error: {e}"
-                    )
-                )
-                time.sleep(sleep_time)
+
+                # Only retry if not on the last attempt
+                if n_attempt < self.max_request_attempts:
+                    sleep_time = round(
+                        number=random.uniform(30 * n_attempt, (30 * n_attempt) + 10),  # noqa: S311
+                        ndigits=2
+                    )
+                    log(
+                        "warn", 
+                        (
+                            f"Request attempt {n_attempt} of {self.max_request_attempts} "
+                            f"failed. Retrying in {sleep_time}s. Error: {e}"
+                        )
+                    )
+                    time.sleep(sleep_time)
  2. Line 148: Linear vs exponential backoff – Copilot correctly noted the backoff is linear (30s, 60s, 90s), not exponential. Either update the PR title/description or adjust to true exponential (30 * 2^(n-1): 30s, 60s, 120s).

  3. Line 148: S311 warning – Suppress the non-cryptographic random warning with # noqa: S311 as shown above.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a716bcc and f466121.

📒 Files selected for processing (1)
  • real_intent/client.py (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
real_intent/client.py (1)
real_intent/internal_logging.py (1)
  • log (16-17)
🪛 Ruff (0.14.5)
real_intent/client.py

57-57: Avoid specifying long messages outside the exception class

(TRY003)


148-148: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (3)
real_intent/client.py (3)

143-144: Remove unnecessary continue statement.

When n_attempt == self.max_request_attempts, the loop naturally ends after this iteration regardless of the continue statement. The code then falls through to the else block as intended. The continue adds no functional value and creates confusion about the control flow.

Apply this diff to simplify:

             except RequestException as e:
                 last_exception: RequestException = e
 
-                if n_attempt == self.max_request_attempts:
-                    continue  # on last failed attempt, skip to else block
-
-                # Log it, sleep, loop again to try again
-                sleep_time = round(
-                    number=random.uniform(30 * n_attempt, (30 * n_attempt) + 10),
-                    ndigits=2
-                )
-                log(
-                    "warn", 
-                    (
-                        f"Request attempt {n_attempt} of {self.max_request_attempts} "
-                        f"failed. Retrying in {sleep_time}s. Error: {e}"
-                    )
-                )
-                time.sleep(sleep_time)
+                # Only retry if not on the last attempt
+                if n_attempt < self.max_request_attempts:
+                    # Log it, sleep, loop again to try again
+                    sleep_time = round(
+                        number=random.uniform(30 * n_attempt, (30 * n_attempt) + 10),
+                        ndigits=2
+                    )
+                    log(
+                        "warn", 
+                        (
+                            f"Request attempt {n_attempt} of {self.max_request_attempts} "
+                            f"failed. Retrying in {sleep_time}s. Error: {e}"
+                        )
+                    )
+                    time.sleep(sleep_time)

34-64: Add validation for request_timeout_seconds parameter.

The constructor validates max_request_attempts but doesn't validate request_timeout_seconds. A value of 0 or negative would cause immediate timeout failures or undefined behavior in the requests library.

Apply this diff to add validation:

         """
         # Validate input
+        if not isinstance(request_timeout_seconds, int) or request_timeout_seconds <= 0:
+            raise ValueError("request_timeout_seconds must be an int and greater than 0")
+        
         if not isinstance(max_request_attempts, int) or max_request_attempts < 1:
             raise ValueError("max_request_attempts must be an int and at least 1")

147-149: Clarify: This implements linear backoff, not exponential.

The PR title and description claim "exponential backoff," but the formula 30 * n_attempt produces linear delays: approximately 30-40s, 60-70s, 90-100s for attempts 1, 2, 3.

True exponential backoff would use 30 * (2 ** (n_attempt - 1)), yielding approximately 30-40s, 60-70s, 120-130s.

Either update the PR description to accurately describe this as "linear backoff," or apply this diff for true exponential backoff:

                 # Log it, sleep, loop again to try again
                 sleep_time = round(
-                    number=random.uniform(30 * n_attempt, (30 * n_attempt) + 10),
+                    number=random.uniform(30 * (2 ** (n_attempt - 1)), (30 * (2 ** (n_attempt - 1))) + 10),  # noqa: S311
                     ndigits=2
                 )

Note: I've also added # noqa: S311 to suppress the Ruff warning about random.uniform in cryptographic contexts—backoff jitter is not a security-sensitive use case.

🧹 Nitpick comments (1)
real_intent/client.py (1)

148-148: Suppress false-positive S311 warning for non-cryptographic random usage.

The random.uniform call is used only for backoff jitter timing, not for cryptographic purposes. The Ruff S311 warning is a false positive here.

Apply this diff to suppress the warning:

                 sleep_time = round(
-                    number=random.uniform(30 * n_attempt, (30 * n_attempt) + 10),
+                    number=random.uniform(30 * n_attempt, (30 * n_attempt) + 10),  # noqa: S311
                     ndigits=2
                 )
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f466121 and 93f58b9.

📒 Files selected for processing (1)
  • real_intent/client.py (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
real_intent/client.py (1)
real_intent/internal_logging.py (1)
  • log (16-17)
🪛 Ruff (0.14.5)
real_intent/client.py

57-57: Avoid specifying long messages outside the exception class

(TRY003)


148-148: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: test

@preritdas preritdas merged commit e0b6ab1 into master Nov 23, 2025
1 of 2 checks passed
@preritdas preritdas deleted the triple-backoff branch November 23, 2025 23:40