Polymorphize array::IntoIter's iterator impl #139430

Open · wants to merge 2 commits into base: master

Conversation

scottmcm (Member) commented Apr 6, 2025

Today we emit all the iterator methods for every different array width. That's wasteful since the actual array length never even comes into it -- the indices used are from the separate `alive: IndexRange` field, not even the `N` const param.

This PR switches things so that an `array::IntoIter<T, N>` stores a `PolymorphicIter<[MaybeUninit<T>; N]>`, which we *unsize* to `PolymorphicIter<[MaybeUninit<T>]>` and call methods on that non-`Sized` type for all the iterator methods.

That also necessarily makes the layout consistent between the different lengths of arrays, because of the unsizing. Compare that to today (<https://rust.godbolt.org/z/Prb4xMPrb>), where different widths can't even be deduped because the offset to the indices is different for different array widths.
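
Roughly the shape of it (illustrative only, not the exact library code: `IndexRange` is approximated by `Range<usize>`, and the names and field split below are made up):

```rust
use std::mem::MaybeUninit;
use std::ops::Range;

// The polymorphic core: everything except the array length.
// (`Range<usize>` stands in for the real `alive: IndexRange` field.)
struct PolymorphicIter<DATA: ?Sized> {
    alive: Range<usize>,
    data: DATA,
}

// The iterator logic is written once against the *unsized* `[MaybeUninit<T>]`,
// so it gets monomorphized per `T`, not per `(T, N)`.
impl<T> PolymorphicIter<[MaybeUninit<T>]> {
    fn next(&mut self) -> Option<T> {
        let i = self.alive.next()?;
        // SAFETY: `alive` only ever covers initialized, not-yet-yielded elements.
        Some(unsafe { self.data.get_unchecked(i).assume_init_read() })
    }
}

pub struct IntoIter<T, const N: usize> {
    // Sized storage; the length `N` only matters for the layout of this field.
    inner: PolymorphicIter<[MaybeUninit<T>; N]>,
}

impl<T, const N: usize> Iterator for IntoIter<T, N> {
    type Item = T;
    fn next(&mut self) -> Option<T> {
        // Unsize `&mut PolymorphicIter<[MaybeUninit<T>; N]>`
        //     to `&mut PolymorphicIter<[MaybeUninit<T>]>`
        // and forward to the length-erased implementation.
        let inner: &mut PolymorphicIter<[MaybeUninit<T>]> = &mut self.inner;
        inner.next()
    }
}
```

The key point is that the unsized impl block is instantiated once per `T`, while the only thing that still depends on `N` is the layout of the sized `inner` field.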

rustbot (Collaborator) commented Apr 6, 2025

r? @workingjubilee

rustbot has assigned @workingjubilee.
They will have a look at your PR within the next two weeks and either review your PR or reassign to another reviewer.

Use r? to explicitly pick a reviewer

@rustbot rustbot added S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. T-libs Relevant to the library team, which will review and decide on the PR/issue. labels Apr 6, 2025
// const-fold for certain widths. The `test_eight` case below shows that, yes,
// what we're emitting *can* be const-folded; it's just that LLVM doesn't
// manage to do so for certain widths today. We should be able to put this back
// to the same check after <https://github.com/llvm/llvm-project/issues/134513>
scottmcm (Member, Author):

Making the mention explicit: llvm/llvm-project#134513

Reviewer (Member):

Why match the undesired output though? Just to raise a signal if that changes?

scottmcm (Member, Author):

Mostly just to raise the signal, yeah, which is why I'm not trying to match it at all specifically.

But also, if we do change the library code in a way that doesn't optimize at all, that'd still be good to catch. It's pretty unlikely, TBH, if we're also testing the -v1 version in the test.

I'd be fine to remove these checks too, though, if it's preferred. (Or just drop it to the masked.load as a way to confirm it's still the same issue, or similar.)

Reviewer (Member):

I think `masked.load` would suffice, or maybe `ret i64 %` as well to show the non-const return.

Would you mind adding a more explicit x86-64 base case, as I did in #139503? This relates back to #131563, where I first ran into v3 trouble because RHEL 10 has increased its default CPU.

scottmcm (Member, Author):

Ok, I cut it back to just the masked.load and brought over the v1 case from #139503 -- please check that it still matches the intent you had.
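
(For reference, roughly the shape that ends up taking; the directive spellings, revision names, and function body below are placeholders rather than the exact test file:)

```rust
//@ compile-flags: -Copt-level=3
//@ revisions: v1 v3 other
//@[v1] compile-flags: -Ctarget-cpu=x86-64
//@[v1] only-x86_64
//@[v3] compile-flags: -Ctarget-cpu=x86-64-v3
//@[v3] only-x86_64
//@[other] ignore-x86_64

// Don't match the full (not-yet-const-folded) output for the wide case;
// just confirm the same masked-load pattern is still what LLVM emits,
// pending <https://github.com/llvm/llvm-project/issues/134513>.

// CHECK-LABEL: @test_eight
// v3: llvm.masked.load
#[no_mangle]
pub fn test_eight(a: [u64; 8]) -> u64 {
    a.into_iter().sum()
}
```

The `other` revision is there to keep default-CPU coverage on everything that isn't pinned to an explicit x86-64 level.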

Reviewer (Member):

My host `ignore-x86_64` is also intended to exclude the case of "actually, the default CPU is x86-64-v3 and it fails." That gives explicit testing of the x86_64 versions, and the default for everything else.

scottmcm (Member, Author) commented Apr 6, 2025

Jubilee's already on three other PRs of mine, so let's spread the wealth
r? libs

@rustbot rustbot assigned jhpratt and unassigned workingjubilee Apr 6, 2025
cuviper (Member) left a comment:

I like the approach -- shall we do a perf run though?

scottmcm (Member, Author) commented Apr 9, 2025

> shall we do a perf run though?

Sure, let's see what happens
@bors try @rust-timer queue


@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Apr 9, 2025
bors (Collaborator) commented Apr 9, 2025

⌛ Trying commit e30cb32 with merge 23f2683...

bors added a commit to rust-lang-ci/rust that referenced this pull request Apr 9, 2025
…, r=<try>

Polymorphize `array::IntoIter`'s iterator impl

bors (Collaborator) commented Apr 9, 2025

☀️ Try build successful - checks-actions
Build commit: 23f2683 (23f2683ca5f674c59b7f65e72d6f9b17dc22230d)


rust-timer (Collaborator) commented:

Finished benchmarking commit (23f2683): comparison URL.

Overall result: ❌ regressions - please read the text below

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

|  | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.6% | [0.4%, 1.0%] | 3 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 0.6% | [0.4%, 1.0%] | 3 |

Max RSS (memory usage)

Results (primary 3.9%, secondary -5.6%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|  | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 3.9% | [1.2%, 7.8%] | 5 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -5.6% | [-5.6%, -5.6%] | 1 |
| All ❌✅ (primary) | 3.9% | [1.2%, 7.8%] | 5 |

Cycles

Results (secondary -0.0%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|  | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 2.4% | [2.1%, 3.1%] | 4 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -5.0% | [-7.7%, -2.2%] | 2 |
| All ❌✅ (primary) | - | - | 0 |

Binary size

Results (primary 0.1%, secondary -0.0%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|  | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.2% | [0.0%, 0.7%] | 17 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -0.1% | [-0.2%, -0.0%] | 15 |
| Improvements ✅ (secondary) | -0.0% | [-0.0%, -0.0%] | 35 |
| All ❌✅ (primary) | 0.1% | [-0.2%, 0.7%] | 32 |

Bootstrap: 780.796s -> 777.474s (-0.43%)
Artifact size: 366.15 MiB -> 366.17 MiB (0.01%)

@rustbot rustbot added perf-regression Performance regression. and removed S-waiting-on-perf Status: Waiting on a perf run to be completed. labels Apr 9, 2025
scottmcm (Member, Author) commented Apr 9, 2025

Those perf results look fine to me. 3 seconds on bootstrap is nice, and even though that cargo opt-full instructions number is +1.04%, it's -0.23% on wall time.

Looking closer, it looks like classic codegen schedule churn, which makes sense from switching up what's sharable between sizes. There's actually less to optimize at the module level, with a bit more in LTO, which ends up fluking into a slight overall time win:

[image]
