Improve VecCache under parallel frontend #124780
Conversation
@bors try @rust-timer queue
[WIP] Improve VecCache under parallel frontend

This replaces the single Vec allocation with a series of progressively larger buckets. With the cfg for parallel enabled but with -Zthreads=1, this looks like a slight regression in i-count and cycle counts (<0.1%). With the parallel frontend at -Zthreads=4, this is an improvement (-5% wall-time, from 5.788 to 5.4688 seconds on libcore) over our current Lock-based approach, likely due to reducing the bouncing of the cache line holding the lock. At -Zthreads=32 it's a huge improvement (-46%: 8.829 -> 4.7319 seconds).

FIXME: Extract the internals to rustc_data_structures, safety comments, etc.

r? `@Mark-Simulacrum` -- opening for perf first.
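For readers skimming the thread, here is a rough sketch of the shape being described — a fixed array of lazily allocated, never-moving buckets in place of one growable Vec. The constants and field names below are illustrative assumptions, not the PR's exact code:

```rust
use std::sync::atomic::{AtomicPtr, AtomicU32};

// Assumed sizing: bucket 0 holds 2^12 slots; bucket n (n >= 1) holds
// 2^(11 + n) slots. 21 buckets then cover the entire u32 key space.
const BUCKETS: usize = 21;

struct Slot<V> {
    value: V,
    // Publication state for lock-free readers (see the diff further down:
    // 0 = empty, 1 = initializing, otherwise initialized).
    state: AtomicU32,
}

struct BucketedCache<V> {
    // Each pointer starts null and is installed with a compare-exchange on
    // first use. Because a bucket never reallocates once published, readers
    // can keep references to slots without holding any lock -- unlike a Vec,
    // which invalidates references whenever it grows.
    buckets: [AtomicPtr<Slot<V>>; BUCKETS],
}
```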
☀️ Try build successful - checks-actions
impl SlotIndex {
    #[inline]
    fn from_index(idx: u32) -> Self {
Have you tried to write a branchless version of this?
The result of this on x86 at least is branchless (with conditional moves): https://rust.godbolt.org/z/x9s3aexYh
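For anyone curious what such a mostly-branchless mapping can look like, here is a small self-contained sketch (assumed constants, not necessarily the PR's exact code). The bucket is derived from the index's highest set bit, which LLVM lowers to bsr/lzcnt plus conditional moves rather than a branch, as the godbolt link above shows for the real code:

```rust
#[derive(Debug, PartialEq, Eq)]
struct SlotIndex {
    bucket_idx: usize,
    entries: usize,
    index_in_bucket: usize,
}

impl SlotIndex {
    #[inline]
    fn from_index(idx: u32) -> Self {
        // Assumed layout (not necessarily the PR's exact constants):
        // bucket 0 spans indices 0..2^12, and bucket n >= 1 spans
        // 2^(11+n)..2^(12+n), so the bucket number is a function of the
        // index's highest set bit.
        let bucket = (idx.max(4095).ilog2() as usize) - 11;
        if bucket == 0 {
            SlotIndex { bucket_idx: 0, entries: 1 << 12, index_in_bucket: idx as usize }
        } else {
            // A bucket's first index equals its capacity, which makes the
            // in-bucket offset a single subtraction.
            let entries = 1 << (bucket + 11);
            SlotIndex { bucket_idx: bucket, entries, index_in_bucket: idx as usize - entries }
        }
    }
}

fn main() {
    assert_eq!(SlotIndex::from_index(0).bucket_idx, 0);
    let s = SlotIndex::from_index(4096);
    assert_eq!((s.bucket_idx, s.index_in_bucket), (1, 0));
}
```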
let index = match current {
    0 => return None,
    // Treat "initializing" as actually just not initialized at all.
    // The only reason this is a separate state is that `complete` calls could race and
Because we ensure that each query only executes once, `complete` will only be called once per key, so this race can't happen.
I would prefer to avoid a non-local safety condition like that. The performance gain from avoiding a true lock here isn't meaningful anyway.
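To make the state machine in the diff above concrete, here is one hypothetical encoding of a slot's state word — the exact values and the decoding are assumptions for illustration, not the PR's actual representation:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Hypothetical encoding: 0 = empty, 1 = initializing (a writer has claimed
// the slot but not yet published a value), anything else = initialized,
// carrying an index shifted past the two reserved states.
fn lookup(state: &AtomicU32) -> Option<u32> {
    let current = state.load(Ordering::Acquire);
    let index = match current {
        0 => return None,
        // Treat "initializing" as not initialized at all: the reader simply
        // misses, and the query machinery re-executes or blocks as usual.
        1 => return None,
        n => n - 2,
    };
    Some(index)
}
```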
Finished benchmarking commit (15ee677): comparison URL.

Overall result: ❌✅ regressions and improvements - ACTION NEEDED

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @bors rollup=never

Instruction count
This is a highly reliable metric that was used to determine the overall result at the top of this comment.

Max RSS (memory usage)
Results
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Cycles
Results
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Binary size
This benchmark run did not return any relevant results for this metric.

Bootstrap: 676.55s -> 678.643s (0.31%)
Force-pushed from 206251b to a28ef6f
@bors try @rust-timer queue

No major changes, but let's collect another, more recent run. Presuming the results are about the same, I'll mark this ready for review; the improvement in scalability IMO outweighs the slight regressions to single-threaded compilation.
Improve VecCache under parallel frontend

This replaces the single Vec allocation with a series of progressively larger buckets. With the cfg for parallel enabled but with -Zthreads=1, this looks like a slight regression in i-count and cycle counts (~1%). With the parallel frontend at -Zthreads=4, this is an improvement (-5% wall-time, from 5.788 to 5.4688 seconds on libcore) over our current Lock-based approach, likely due to reducing the bouncing of the cache line holding the lock. At -Zthreads=32 it's a huge improvement (-46%: 8.829 -> 4.7319 seconds).
☀️ Try build successful - checks-actions
Finished benchmarking commit (d447427): comparison URL.

Overall result: ❌✅ regressions and improvements - ACTION NEEDED

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @bors rollup=never

Instruction count
This is a highly reliable metric that was used to determine the overall result at the top of this comment.

Max RSS (memory usage)
Results (primary -3.4%, secondary -8.1%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Cycles
Results (primary 2.2%, secondary 2.3%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Binary size
This benchmark run did not return any relevant results for this metric.

Bootstrap: missing data
Force-pushed from a28ef6f to f73ad9b
r? compiler

As noted in the PR description, this is a regression for cycles/time on non-parallel builds (more code to execute). That's largely offset by far better scalability to more threads in parallel compilers. I think the trade-off is worth it, particularly since this also appears to be a memory usage win.
// tests.
for power in 0..25 {
    let key = 1u32 << power;
    cache.complete(key, (), key);
can you also add a lookup here to make sure lookup is also correct for later buckets?
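Something like the following, perhaps — a hedged sketch assuming `lookup` returns the completed `(value, index)` pair; the exact signature is a guess based on the snippet above:

```rust
// Complete one key per power of two so every bucket gets exercised, then
// read each key back to check that lookup addresses later buckets correctly.
for power in 0..25 {
    let key = 1u32 << power;
    cache.complete(key, (), key);
}
for power in 0..25 {
    let key = 1u32 << power;
    assert_eq!(cache.lookup(&key), Some(((), key)));
}
```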
This replaces the single Vec allocation with a series of progressively larger buckets. With the cfg for parallel enabled but with -Zthreads=1, this looks like a slight regression in i-count and cycle counts (<0.1%). With the parallel frontend at -Zthreads=4, this is an improvement (-5% wall-time, from 5.788 to 5.4688 seconds on libcore) over our current Lock-based approach, likely due to reducing the bouncing of the cache line holding the lock. At -Zthreads=32 it's a huge improvement (-46%: 8.829 -> 4.7319 seconds).
Force-pushed from 1961749 to da58efb
Yeah, that seems obvious now that you say it :)

Pushed up a revert of most of the 32-bit changes. We were already aborting/panicking on allocation failure, so I think most of those changes weren't even necessary in the first place; we just needed to avoid the tests hitting the LayoutError -- which we now do, by testing the full (sparse) range on 64-bit Linux and otherwise limiting to a smaller range to avoid wasting a bunch of memory.

@bors try @rust-timer queue
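The gating described above might look something like this inside the test — the exact cfg predicates and range cap are assumptions for illustration:

```rust
// Exercise the full sparse u32 range only where address space is cheap;
// elsewhere, cap the range so the test doesn't reserve gigabytes.
#[cfg(all(target_pointer_width = "64", target_os = "linux"))]
const MAX_POWER: u32 = 32;
#[cfg(not(all(target_pointer_width = "64", target_os = "linux")))]
const MAX_POWER: u32 = 25;

for power in 0..MAX_POWER {
    let key = 1u32 << power;
    cache.complete(key, (), key);
}
```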
@bors try
@bors try

Hopefully this works; I think bors forgot about the PR...
Improve VecCache under parallel frontend

This replaces the single Vec allocation with a series of progressively larger buckets. With the cfg for parallel enabled but with -Zthreads=1, this looks like a slight regression in i-count and cycle counts (~1%). With the parallel frontend at -Zthreads=4, this is an improvement (-5% wall-time, from 5.788 to 5.4688 seconds on libcore) over our current Lock-based approach, likely due to reducing the bouncing of the cache line holding the lock. At -Zthreads=32 it's a huge improvement (-46%: 8.829 -> 4.7319 seconds).

try-job: i686-gnu-nopt
try-job: dist-x86_64-linux
☀️ Try build successful - checks-actions
Finished benchmarking commit (a185d27): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @bors rollup=never

Instruction count
This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

Max RSS (memory usage)
Results (primary -2.9%, secondary -2.1%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Cycles
Results (primary 2.1%, secondary 2.7%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Binary size
This benchmark run did not return any relevant results for this metric.

Bootstrap: 787.86s -> 792.684s (0.61%)
r=me if you're happy with perf
@bors r=lcnr

Yeah, within reasonable bounds, though obviously still room for improvement.
☀️ Test successful - checks-actions
Finished benchmarking commit (5926e82): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Our benchmarks found a performance regression caused by this PR.

Next Steps:

@rustbot label: +perf-regression

Instruction count
This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

Max RSS (memory usage)
Results (primary -3.0%, secondary -2.7%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Cycles
Results (primary 2.0%, secondary 2.1%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Binary size
This benchmark run did not return any relevant results for this metric.

Bootstrap: 792.184s -> 793.779s (0.20%)
Visiting for rustc-perf triage.

As a quick process note: I'm not sure it was a great idea for @Mark-Simulacrum to add the rustc-perf-triaged label back on June 15th (i.e. before this had landed, and in fact before it had gotten r+'ed), because in theory that could cause us to miss a case where new commits are added and the regression effectively becomes untriaged.

But having said that... as far as I can tell, the regression here remains roughly within the bounds that were anticipated back when this PR's description was first composed, circa June. So that all seems ballpark within what was anticipated, and the expected gains for parallel compilation performance probably justify paying this cost as it stands. So, I'll leave the rustc-perf-triaged label in place.
This replaces the single Vec allocation with a series of progressively larger buckets. With the cfg for parallel enabled but with -Zthreads=1, this looks like a slight regression in i-count and cycle counts (~1%).
With the parallel frontend at -Zthreads=4, this is an improvement (-5% wall-time, from 5.788 to 5.4688 seconds on libcore) over our current Lock-based approach, likely due to reducing the bouncing of the cache line holding the lock. At -Zthreads=32 it's a huge improvement (-46%: 8.829 -> 4.7319 seconds).
try-job: i686-gnu-nopt
try-job: dist-x86_64-linux