
Fix exponential backoff calculation #21959

Merged 3 commits into pantsbuild:main on Feb 17, 2025

Conversation

@chris-smith-zocdoc (Contributor) commented on Feb 15, 2025:

Fixes #21958

This required updating the tokio-retry dependency to tokio-retry2, a fork of the original (which appears to be unmaintained).

The backoff algorithm is now

sleep = random(0, base * 2 ** attempt)

so the delays will approximate 200ms, 400ms, 800ms, 1600ms

Previously it was

sleep = random(0, base ** attempt)

which was 200ms, 40000ms, 8000000ms, 1600000000ms

@sureshjoshi (Member) commented:

Added some people who spend more time in the Pants rustland than I do right now.

One concern I have after some thought is that this is technically a breaking change. I can't say whether people who use Pants care about this default or expect this behaviour, but by sheer definition it changes a known implementation.

Not sure if this is something that should be behind a flag for 3 revisions?

@huonw (Contributor) left a comment:

Thank you for noticing this and fixing it.

Can you add release notes to docs/notes/2.26.x.md?

1600000000ms

That's a lot of milliseconds!

One concern I have after some thought is that this is technically a breaking change. I can't say whether people who use Pants care about this default or expect this behaviour, but by sheer definition it changes a known implementation.

Good to think about.

From my perspective, I think this specific example is just a bug fix that we don't need to stage out. The old behaviour seems actively unhelpful for almost all reasonable values of the file_downloads_retry_delay option:

  • it seems unlikely most people would customise it
  • for the few that did customise it, it seems unlikely to be less than 0.1s = 100ms... unless they're working around this bug.

And thus we'll always have huge numbers of milliseconds; the "smallest" likely sequence seems like: 100ms -> 10000ms (10s) -> 1000000ms (~17 minutes) -> ...

@sureshjoshi (Member) commented:

From my perspective, I think this specific example is just a bug fix that we don't need to stage out.

I would agree. This just feels like something that's flagrantly unexpected, so to me it rings of "bug".

@chris-smith-zocdoc (Contributor, Author) commented:

Can you add release notes to docs/notes/2.26.x.md?

Done, let me know if you want to change the note

@huonw (Contributor) left a comment:

Looks great other than a tweak to the notes!

Review comment on docs/notes/2.26.x.md (outdated; resolved)
chris-smith-zocdoc and others added 3 commits February 16, 2025 22:19
Co-authored-by: Huon Wilson <[email protected]>
@huonw huonw enabled auto-merge (squash) February 17, 2025 03:27
@huonw huonw merged commit 303a082 into pantsbuild:main Feb 17, 2025
24 checks passed

Closes: Infinite loop in url retry (#21958)