Rollup of 3 pull requests #152239
Conversation
…e-consts, r=BoxyUwU
MGCA: require #[type_const] on free consts too
While investigating another issue, I discovered that the following ICEs (the `const_of_item` query doesn't support constants not marked `#[type_const]` and does a `span_delayed_bug`):
```rust
#![feature(min_generic_const_args)]
#![allow(incomplete_features)]

const N: usize = 4;

fn main() {
    let x = [(); N];
}
```
My initial thought of "only require `#[type_const]` on places that stable doesn't currently accept" ran into the issue of this compiling on stable today:
```rust
trait Trait {
    const N: usize;
}

impl<const PARAM: usize> Trait for [(); PARAM] {
    const N: usize = PARAM;
}

fn main() {
    let x = [(); <[(); 4] as Trait>::N];
}
```
Figuring out which specific cases are not currently accepted by stable is quite hairy.
Upon discussion with @BoxyUwU, she suggested that *all* consts, including free consts, should require `#[type_const]` to be able to be referred to. This is what this PR does.
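Under that rule, the first snippet would be accepted only once the free const opts in. A sketch of what that looks like, assuming the attribute spelling stays `#[type_const]` as in the PR title (this needs a nightly compiler with the incomplete feature enabled, so it is illustrative rather than directly runnable on stable):

```rust
#![feature(min_generic_const_args)]
#![allow(incomplete_features)]

// Opting the free const in so it can be used as a const argument under
// min_generic_const_args. Without the attribute, this PR turns the use of
// `N` in `[(); N]` into a proper error instead of an ICE.
#[type_const]
const N: usize = 4;

fn main() {
    let x = [(); N];
}
```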
---
~~The change to `tests/ui/const-generics/generic_const_exprs/non-local-const.rs` is unfortunate: with it, the test no longer fails when the fix from rust-lang#143106 is reverted. Any suggestions for testing it more appropriately would be most welcome!~~
edit: never mind, figured out how compiletests work :) - verified that the new test setup correctly ICEs when that PR's fix is reverted.
r? @BoxyUwU
…=BoxyUwU
mGCA: Support directly represented negated literals
Fixes rust-lang#152123
`PatExprKind` already awkwardly tacks on a `negated: bool` for the same purpose: https://github.com/rust-lang/rust/blob/8bccf1224deab49b54694c9090e577bfe90a94e6/compiler/rustc_hir/src/hir.rs#L1954-L1959
Perhaps one day we should indeed do that FIXME...
r? @BoxyUwU
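The `negated: bool` approach referenced above can be illustrated with a small stand-alone sketch. The type and field names here are hypothetical, loosely mirroring the flag on `PatExprKind::Lit`; this is not the actual HIR definition:

```rust
// Hypothetical miniature of representing a negated literal directly on the
// node via a `negated` flag, instead of wrapping it in a unary-minus node.
#[derive(Debug, PartialEq)]
enum ConstArg {
    Lit { value: i128, negated: bool },
}

impl ConstArg {
    // Evaluate the literal, applying the negation recorded on the node.
    fn eval(&self) -> i128 {
        match self {
            ConstArg::Lit { value, negated } => {
                if *negated { -value } else { *value }
            }
        }
    }
}

fn main() {
    let neg_five = ConstArg::Lit { value: 5, negated: true };
    println!("{}", neg_five.eval()); // prints -5
}
```

The upside of this representation is that `-5` stays a single literal node, which is convenient for const arguments; the FIXME in the linked code hints that a dedicated representation might eventually replace the tacked-on flag.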
Convert to inline diagnostics in `rustc_passes`
For rust-lang#151366
r? @jdonszelmann
@bors r+ rollup=never p=5
📌 Perf builds for each rolled up PR:
previous master: bce89b6a56
In the case of a perf regression, run the following command for each PR you suspect might be the cause:
What is this?
This is an experimental post-merge analysis report that shows differences in test outcomes between the merged PR and its parent PR.
Comparing bce89b6 (parent) -> efc9e1b (this PR)

Test differences
246 test diffs

Stage 0
Stage 1
Stage 2
(and 124 additional test diffs)
Additionally, 22 doctest diffs were found. These are ignored, as they are noisy.

Job group index
Test dashboard
Run

```
cargo run --manifest-path src/ci/citool/Cargo.toml -- \
    test-dashboard efc9e1b50cbf2cede7ebe25f0a1fc64fd8b3e942 --output-dir test-dashboard
```

and then open the generated dashboard.

Job duration changes
How to interpret the job duration changes?
Job durations can vary a lot, based on the actual runner instance
Finished benchmarking commit (efc9e1b): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Our benchmarks found a performance regression caused by this PR.

Next Steps:
@rustbot label: +perf-regression

Instruction count
Our most reliable metric. Used to determine the overall result above. However, even this metric can be noisy.
Max RSS (memory usage)
Results (primary -0.9%, secondary -2.1%)
A less reliable metric. May be of interest, but not used to determine the overall result above.

Cycles
Results (secondary -4.7%)
A less reliable metric. May be of interest, but not used to determine the overall result above.

Binary size
This benchmark run did not return any relevant results for this metric.

Bootstrap: 474.508s -> 474.536s (0.01%)
Some huge perf wins in secondary benchmarks

@rust-timer build ce85f85
Missing artifact for sha
It is #152189; all the other inline-diagnostics PRs had similar results.
Good to know, thanks!
The perf regression is also likely #152189; I just checked, and that benchmark emits 300 diagnostics. Inline diagnostics are slightly slower when actually used, while being much faster when not used. I'll wait with marking it as triaged until we can run the build for that PR, but I'm pretty sure.
Is the benchmark output somewhere on the website, or did you run it locally? I was looking for the output the other day and couldn't find anything.
I checked it locally; it is helpfully already in the rust repo in
@rust-timer build 202ae33 |
Finished benchmarking commit (202ae33): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Instruction count
Our most reliable metric. Used to determine the overall result above. However, even this metric can be noisy.

Max RSS (memory usage)
Results (primary -0.9%, secondary -2.0%)
A less reliable metric. May be of interest, but not used to determine the overall result above.

Cycles
Results (secondary -5.2%)
A less reliable metric. May be of interest, but not used to determine the overall result above.

Binary size
This benchmark run did not return any relevant results for this metric.

Bootstrap: 474.508s -> 474.986s (0.10%)
Successful merges:
- #152189 (Convert to inline diagnostics in `rustc_passes`)

r? @ghost