
Improve VecCache under parallel frontend #124780

Merged: 1 commit merged into rust-lang:master from Mark-Simulacrum:lockless-cache on Nov 19, 2024

Conversation

@Mark-Simulacrum (Member) commented May 6, 2024:

This replaces the single Vec allocation with a series of progressively larger buckets. With the cfg for parallel enabled but with -Zthreads=1, this looks like a slight regression in i-count and cycle counts (~1%).

With the parallel frontend at -Zthreads=4, this is an improvement (-5% wall-time, from 5.788 to 5.4688 seconds on libcore) over our current Lock-based approach, likely due to reduced bouncing of the cache line holding the lock. At -Zthreads=32 it's a huge improvement (-46%: 8.829 -> 4.7319 seconds).

try-job: i686-gnu-nopt
try-job: dist-x86_64-linux

@rustbot rustbot added A-query-system Area: The rustc query system (https://rustc-dev-guide.rust-lang.org/query.html) S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. labels May 6, 2024
@Mark-Simulacrum (Member, Author):

@bors try @rust-timer queue


@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label May 6, 2024
@bors (Contributor) commented May 6, 2024:

⌛ Trying commit 206251b with merge 15ee677...

bors added a commit to rust-lang-ci/rust that referenced this pull request May 6, 2024
[WIP] Improve VecCache under parallel frontend

This replaces the single Vec allocation with a series of progressively larger buckets. With the cfg for parallel enabled but with -Zthreads=1, this looks like a slight regression in i-count and cycle counts (<0.1%).

With the parallel frontend at -Zthreads=4, this is an improvement (-5% wall-time from 5.788 to 5.4688 on libcore) over our current Lock-based approach, likely due to reducing the bouncing of the cache line holding the lock. At -Zthreads=32 it's a huge improvement (-46%: 8.829 -> 4.7319 seconds).

FIXME: Extract the internals to rustc_data_structures, safety comments, etc.

r? `@Mark-Simulacrum` -- opening for perf first.

@bors (Contributor) commented May 6, 2024:

☀️ Try build successful - checks-actions
Build commit: 15ee677 (15ee67784efd5d46670c56f52fd618a4a00771e5)



```rust
impl SlotIndex {
    #[inline]
    fn from_index(idx: u32) -> Self {
        // ...
```
Review comment (Contributor):

Have you tried to write a branchless version of this?

@Mark-Simulacrum (Member, Author) replied:

The result of this on x86 at least is branchless (with conditional moves): https://rust.godbolt.org/z/x9s3aexYh

```rust
let index = match current {
    0 => return None,
    // Treat "initializing" as actually just not initialized at all.
    // The only reason this is a separate state is that `complete` calls could race and
```
Review comment (Contributor):

Because we ensure that each query only executes once, complete will only be called once per key, so this race can't happen.

@Mark-Simulacrum (Member, Author) replied:

I would prefer to avoid a non-local safety condition like that. The performance gain from avoiding a true lock here isn't meaningful anyway.

@rust-timer (Collaborator):

Finished benchmarking commit (15ee677): comparison URL.

Overall result: ❌✅ regressions and improvements - ACTION NEEDED

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is a highly reliable metric that was used to determine the overall result at the top of this comment.

|   | mean | range | count |
|---|------|-------|-------|
| Regressions ❌ (primary) | 0.8% | [0.2%, 2.9%] | 169 |
| Regressions ❌ (secondary) | 0.9% | [0.2%, 2.3%] | 70 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -1.8% | [-2.3%, -0.4%] | 5 |
| All ❌✅ (primary) | 0.8% | [0.2%, 2.9%] | 169 |

Max RSS (memory usage)

Results

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|   | mean | range | count |
|---|------|-------|-------|
| Regressions ❌ (primary) | 1.8% | [1.8%, 1.8%] | 1 |
| Regressions ❌ (secondary) | 3.0% | [2.6%, 3.8%] | 4 |
| Improvements ✅ (primary) | -3.0% | [-7.1%, -1.1%] | 54 |
| Improvements ✅ (secondary) | -8.2% | [-13.3%, -4.6%] | 22 |
| All ❌✅ (primary) | -2.9% | [-7.1%, 1.8%] | 55 |

Cycles

Results

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|   | mean | range | count |
|---|------|-------|-------|
| Regressions ❌ (primary) | 1.2% | [0.8%, 2.1%] | 13 |
| Regressions ❌ (secondary) | 2.4% | [1.4%, 3.2%] | 6 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 1.2% | [0.8%, 2.1%] | 13 |

Binary size

This benchmark run did not return any relevant results for this metric.

Bootstrap: 676.55s -> 678.643s (0.31%)
Artifact size: 315.72 MiB -> 316.01 MiB (0.09%)

@rustbot rustbot added perf-regression Performance regression. and removed S-waiting-on-perf Status: Waiting on a perf run to be completed. labels May 6, 2024
@Mark-Simulacrum Mark-Simulacrum changed the title [WIP] Improve VecCache under parallel frontend Improve VecCache under parallel frontend Jun 9, 2024
@Mark-Simulacrum (Member, Author):

@bors try @rust-timer queue

No major changes, but let's collect another, more recent run. Presuming the results are about the same, I'll mark this ready for review; the improvement in scalability IMO outweighs the slight regressions to single-threaded compilation.


@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Jun 9, 2024
bors added a commit to rust-lang-ci/rust that referenced this pull request Jun 9, 2024
Improve VecCache under parallel frontend

@bors (Contributor) commented Jun 9, 2024:

⌛ Trying commit a28ef6f with merge d447427...

@bors (Contributor) commented Jun 9, 2024:

☀️ Try build successful - checks-actions
Build commit: d447427 (d447427ec5ed8db4f1baad0135b5304baef82ce9)


@rust-timer (Collaborator):

Finished benchmarking commit (d447427): comparison URL.

Overall result: ❌✅ regressions and improvements - ACTION NEEDED

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is a highly reliable metric that was used to determine the overall result at the top of this comment.

|   | mean | range | count |
|---|------|-------|-------|
| Regressions ❌ (primary) | 0.7% | [0.2%, 2.6%] | 171 |
| Regressions ❌ (secondary) | 1.1% | [0.2%, 10.0%] | 99 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -2.0% | [-5.2%, -0.4%] | 8 |
| All ❌✅ (primary) | 0.7% | [0.2%, 2.6%] | 171 |

Max RSS (memory usage)

Results (primary -3.4%, secondary -8.1%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|   | mean | range | count |
|---|------|-------|-------|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 3.2% | [2.5%, 3.9%] | 2 |
| Improvements ✅ (primary) | -3.4% | [-8.2%, -0.9%] | 70 |
| Improvements ✅ (secondary) | -9.2% | [-14.0%, -3.3%] | 21 |
| All ❌✅ (primary) | -3.4% | [-8.2%, -0.9%] | 70 |

Cycles

Results (primary 2.2%, secondary 2.3%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|   | mean | range | count |
|---|------|-------|-------|
| Regressions ❌ (primary) | 2.2% | [1.5%, 3.1%] | 17 |
| Regressions ❌ (secondary) | 2.3% | [1.4%, 2.9%] | 22 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 2.2% | [1.5%, 3.1%] | 17 |

Binary size

This benchmark run did not return any relevant results for this metric.

Bootstrap: missing data
Artifact size: 319.77 MiB -> 319.42 MiB (-0.11%)

@rustbot rustbot removed the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Jun 10, 2024
@Mark-Simulacrum (Member, Author):

r? compiler

As noted in the PR description, this is a regression in cycles/time for non-parallel builds (there is more code to execute). That's largely offset by far better scalability to more threads in parallel compilers. I think the trade-off is worth it, particularly since this also appears to be a memory usage win.

```rust
// tests.
for power in 0..25 {
    let key = 1u32 << power;
    cache.complete(key, (), key);
```
Review comment (Contributor):

Can you also add a lookup here, to make sure lookup is also correct for later buckets?

This replaces the single Vec allocation with a series of progressively
larger buckets. With the cfg for parallel enabled but with -Zthreads=1,
this looks like a slight regression in i-count and cycle counts (<0.1%).

With the parallel frontend at -Zthreads=4, this is an improvement (-5%
wall-time from 5.788 to 5.4688 on libcore) over our current Lock-based
approach, likely due to reducing the bouncing of the cache line holding
the lock. At -Zthreads=32 it's a huge improvement (-46%: 8.829 -> 4.7319
seconds).
@Mark-Simulacrum (Member, Author):

> This would instead require us to adjust the tests to not reach the isize::MAX limit on 32-bit targets 🤔 I think an abort here is totally acceptable, as we already abort when the layout is small enough but we fail to allocate.

Yeah, that seems obvious now that you say it :)

Pushed up a revert of most of the 32-bit changes. We were already aborting/panicking on allocation failure, so I think most of those changes weren't even necessary in the first place; we just needed to avoid the tests hitting the LayoutError, which we now do by testing the full (sparse) range on 64-bit Linux and otherwise limiting to a smaller range to avoid wasting a bunch of memory.

@bors try @rust-timer queue


@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Nov 16, 2024
@Mark-Simulacrum (Member, Author):

@bors try

@Mark-Simulacrum (Member, Author):

@bors try

@Mark-Simulacrum (Member, Author):

@bors try

Hopefully this works, I think bors forgot about the PR...

bors added a commit to rust-lang-ci/rust that referenced this pull request Nov 17, 2024
Improve VecCache under parallel frontend


try-job: i686-gnu-nopt
try-job: dist-x86_64-linux
@bors (Contributor) commented Nov 17, 2024:

⌛ Trying commit da58efb with merge a185d27...

@bors (Contributor) commented Nov 17, 2024:

☀️ Try build successful - checks-actions
Build commit: a185d27 (a185d27b07d8de5f7a07525cef0701b8364b4862)


@rust-timer (Collaborator):

Finished benchmarking commit (a185d27): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

|   | mean | range | count |
|---|------|-------|-------|
| Regressions ❌ (primary) | 1.0% | [0.2%, 3.3%] | 214 |
| Regressions ❌ (secondary) | 1.1% | [0.2%, 3.0%] | 165 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -1.4% | [-3.1%, -0.1%] | 9 |
| All ❌✅ (primary) | 1.0% | [0.2%, 3.3%] | 214 |

Max RSS (memory usage)

Results (primary -2.9%, secondary -2.1%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|   | mean | range | count |
|---|------|-------|-------|
| Regressions ❌ (primary) | 2.3% | [1.6%, 2.8%] | 4 |
| Regressions ❌ (secondary) | 3.1% | [0.7%, 7.5%] | 24 |
| Improvements ✅ (primary) | -3.2% | [-6.9%, -0.8%] | 85 |
| Improvements ✅ (secondary) | -6.4% | [-12.5%, -1.7%] | 29 |
| All ❌✅ (primary) | -2.9% | [-6.9%, 2.8%] | 89 |

Cycles

Results (primary 2.1%, secondary 2.7%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|   | mean | range | count |
|---|------|-------|-------|
| Regressions ❌ (primary) | 2.1% | [1.0%, 4.5%] | 31 |
| Regressions ❌ (secondary) | 2.7% | [1.2%, 5.2%] | 23 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 2.1% | [1.0%, 4.5%] | 31 |

Binary size

This benchmark run did not return any relevant results for this metric.

Bootstrap: 787.86s -> 792.684s (0.61%)
Artifact size: 335.54 MiB -> 335.95 MiB (0.12%)

@rustbot rustbot removed the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Nov 17, 2024
@lcnr (Contributor) left a comment:

r=me if you're happy with perf

@Mark-Simulacrum (Member, Author):

@bors r=lcnr

Yeah, within reasonable bounds, though obviously still room for improvement.

@bors (Contributor) commented Nov 19, 2024:

📌 Commit da58efb has been approved by lcnr

It is now in the queue for this repository.

@bors bors added S-waiting-on-bors Status: Waiting on bors to run and complete tests. Bors will change the label on completion. and removed S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. labels Nov 19, 2024
@bors (Contributor) commented Nov 19, 2024:

⌛ Testing commit da58efb with merge 5926e82...

@bors (Contributor) commented Nov 19, 2024:

☀️ Test successful - checks-actions
Approved by: lcnr
Pushing 5926e82 to master...

@bors bors added the merged-by-bors This PR was explicitly merged by bors. label Nov 19, 2024
@bors bors merged commit 5926e82 into rust-lang:master Nov 19, 2024
13 checks passed
@rustbot rustbot added this to the 1.84.0 milestone Nov 19, 2024
@rust-timer (Collaborator):

Finished benchmarking commit (5926e82): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Our benchmarks found a performance regression caused by this PR.
This might be an actual regression, but it can also be just noise.

Next Steps:

  • If the regression was expected or you think it can be justified,
    please write a comment with sufficient written justification, and add
    @rustbot label: +perf-regression-triaged to it, to mark the regression as triaged.
  • If you think that you know of a way to resolve the regression, try to create
    a new PR with a fix for the regression.
  • If you do not understand the regression or you think that it is just noise,
    you can ask the @rust-lang/wg-compiler-performance working group for help (members of this group
    were already notified of this PR).

@rustbot label: +perf-regression
cc @rust-lang/wg-compiler-performance

Instruction count

This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

|   | mean | range | count |
|---|------|-------|-------|
| Regressions ❌ (primary) | 1.0% | [0.2%, 3.3%] | 214 |
| Regressions ❌ (secondary) | 1.0% | [0.2%, 2.9%] | 172 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -1.9% | [-3.0%, -0.4%] | 8 |
| All ❌✅ (primary) | 1.0% | [0.2%, 3.3%] | 214 |

Max RSS (memory usage)

Results (primary -3.0%, secondary -2.7%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|   | mean | range | count |
|---|------|-------|-------|
| Regressions ❌ (primary) | 2.8% | [2.0%, 4.1%] | 5 |
| Regressions ❌ (secondary) | 3.6% | [1.8%, 8.6%] | 16 |
| Improvements ✅ (primary) | -3.3% | [-7.5%, -1.2%] | 82 |
| Improvements ✅ (secondary) | -6.3% | [-12.4%, -1.5%] | 28 |
| All ❌✅ (primary) | -3.0% | [-7.5%, 4.1%] | 87 |

Cycles

Results (primary 2.0%, secondary 2.1%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|   | mean | range | count |
|---|------|-------|-------|
| Regressions ❌ (primary) | 2.1% | [1.0%, 4.1%] | 29 |
| Regressions ❌ (secondary) | 2.6% | [1.8%, 3.8%] | 27 |
| Improvements ✅ (primary) | -2.0% | [-2.0%, -2.0%] | 1 |
| Improvements ✅ (secondary) | -2.2% | [-3.1%, -1.4%] | 3 |
| All ❌✅ (primary) | 2.0% | [-2.0%, 4.1%] | 30 |

Binary size

This benchmark run did not return any relevant results for this metric.

Bootstrap: 792.184s -> 793.779s (0.20%)
Artifact size: 334.48 MiB -> 335.25 MiB (0.23%)

@pnkfelix (Member):

Visiting for rustc-perf triage

As a quick process note: I'm not sure it was a great idea for @Mark-Simulacrum to add the rustc-perf-triaged label back on June 15th (i.e., before this had landed, and in fact before it had been r+'ed), because in theory that could cause us to miss a case where new commits are added and the regression effectively becomes untriaged.

But having said that, ...

... as far as I can tell, the regression here remains roughly within the bounds that were anticipated back when this PR's description was first composed: back circa June 9th we were looking at a worst case of 2.6% and a mean of 0.7%, while now it is a worst case of 3.3% and a mean of 1.0%...

so that all seems ballpark within what was anticipated, and the expected gains for parallel compilation performance probably justify paying this cost as it stands.

So, I'll leave the rustc-perf-triaged label in place.

@Mark-Simulacrum Mark-Simulacrum deleted the lockless-cache branch November 23, 2024 15:49
Labels
- A-query-system: Area: The rustc query system (https://rustc-dev-guide.rust-lang.org/query.html)
- merged-by-bors: This PR was explicitly merged by bors.
- perf-regression: Performance regression.
- perf-regression-triaged: The performance regression has been triaged.
- S-waiting-on-bors: Status: Waiting on bors to run and complete tests. Bors will change the label on completion.
- T-compiler: Relevant to the compiler team, which will review and decide on the PR/issue.