CI: dist-s390x-linux build went from 40 min to 160 min with new LLVM pass manager #89609
This regression absolutely needs to be reported upstream, as LLVM developer discussions have mentioned completely removing the old pass manager within one or two releases.
Most time is apparently being spent compiling one crate (see https://github.com/rust-lang-ci/rust/runs/3708445190?check_suite_focus=true#step:26:12286). This crate usually takes less than 30 seconds to build.
The build timeout in #88379 (comment) might be caused by this as well.
That is very surprising, and I am wondering what we could be doing for that not to be O(n).
cc @rust-lang/wg-llvm. I am a little tempted to suggest a patch that passes -Zno-new-pass-manager (or whatever the exact flag is) on s390x, but that seems pretty unfortunate as well, given the relatively quick deprecation timeline.
Is there anything I can do to unblock #88379? Or will it require the patch suggested above?
Just checking, has the regression been reported upstream yet? I didn't see a report when I searched the LLVM Bugzilla. (I don't have an LLVM Bugzilla account, so I can't report it.)
I'm going to try to find the culprit, but in the meantime, to unblock folks, that hack could be an extra target condition in …
I have a sampling profiler running on the build. First it was sitting here at about 31% of the samples:
Then it moved away from that to a new hotspot at 28%:
Next it moved to this at 49%:
It's still going, but I'm not sure this is helpful enough to keep watching this way...
See also #89524, which is not s390x-specific.
Based on those results, it looks to me (albeit as someone who's not knowledgeable about LLVM internals) like maybe it has some really large data structures that are slow to process. Could that be it?
I have just tried …
…anager_on_s390x_take_two, r=nagisa: Default to disabling the new pass manager for the s390x arch targets. This hack disables the new LLVM pass manager by default for s390x arch targets until the performance issues are fixed (see rust-lang#89609). The command line option `-Z new-llvm-pass-manager=(yes|no)` continues to take precedence over this default.
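For reference, a minimal sketch of what that default-plus-override logic amounts to. This is illustrative Rust, not rustc's actual internals; the function and argument names are made up:

```rust
/// Hypothetical sketch: decide whether to use the new LLVM pass manager.
/// `user_opt` models `-Z new-llvm-pass-manager=(yes|no)`; `None` means the
/// flag was not passed on the command line.
fn use_new_llvm_pass_manager(user_opt: Option<bool>, target_arch: &str) -> bool {
    match user_opt {
        // An explicit -Z new-llvm-pass-manager=(yes|no) always wins.
        Some(choice) => choice,
        // Otherwise, default to the old pass manager on s390x until the
        // regression tracked in rust-lang#89609 is fixed.
        None => target_arch != "s390x",
    }
}

fn main() {
    // The explicit flag overrides the s390x default in both directions.
    assert!(use_new_llvm_pass_manager(Some(true), "s390x"));
    // With no flag, s390x falls back to the old pass manager...
    assert!(!use_new_llvm_pass_manager(None, "s390x"));
    // ...while other targets keep the new one.
    assert!(use_new_llvm_pass_manager(None, "x86_64"));
}
```

On a nightly toolchain, the override corresponds to passing `-Z new-llvm-pass-manager=no` (or `=yes`) to rustc.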
I managed to collect …
So it took almost 15 minutes from …
In that 4 million lines of LLVM IR, about 2.7 million are just in 2 instantiations of …
I cut them off with "etc.", but the full … The same blocks in …
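As background on why just two instantiations can account for most of the IR: Rust monomorphizes generics, so every concrete instantiation of a generic function gets its own complete LLVM IR body. A toy illustration, unrelated to the actual crate involved here:

```rust
use std::fmt::Debug;

/// A generic function is compiled once per concrete `T` it is used with,
/// so each instantiation duplicates the whole function body in LLVM IR.
fn process<T: Debug>(items: &[T]) {
    for item in items {
        println!("{:?}", item);
    }
}

fn main() {
    process(&[1_u32, 2, 3]);   // instantiation #1: process::<u32>
    process(&["a", "b", "c"]); // instantiation #2: process::<&str>
}
```

If the generic body is huge, two instantiations mean two huge copies for the pass manager to chew through.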
Here's the input bitcode before the pass manager: rustc_ast_lowering-cgu.0.rcgu.thin-lto-after-patch.bc.gz. Even …
cc @nikic -- this is another new-pm problem, although it appears to be specific to SystemZ.
Filed: https://bugs.llvm.org/show_bug.cgi?id=52146
Assigning priority as discussed in the Zulip thread of the Prioritization Working Group. @rustbot label -I-prioritize +P-high
If anyone is wondering why this one only occurs on SystemZ: https://github.com/llvm/llvm-project/blob/main/llvm/lib/Target/SystemZ/SystemZTargetTransformInfo.h#L39 Apparently this target just increases all inlining thresholds by a factor of three...
Aha, thanks! The only other targets that change that multiplier are NVPTX (5) and AMDGPU (11).
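To make the effect of that multiplier concrete, here is a toy cost model in Rust (not LLVM's actual C++ inliner, and the numbers are made up): tripling the threshold makes much larger callees eligible for inlining, which plausibly explains the IR blowup seen above.

```rust
/// Toy model of an inlining decision: a callee is inlined when its
/// estimated cost fits under the target-scaled threshold.
fn should_inline(callee_cost: u32, base_threshold: u32, multiplier: u32) -> bool {
    callee_cost <= base_threshold * multiplier
}

fn main() {
    let cost = 700; // a large callee
    let base = 325; // illustrative base threshold, not LLVM's exact default
    assert!(!should_inline(cost, base, 1)); // typical target: rejected
    assert!(should_inline(cost, base, 3));  // SystemZ-style 3x: accepted
}
```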
CI time quadrupled when the new pass manager was enabled:
Right now that does not really matter much, because GitHub's Apple CI runners use very old hardware and take about the same time; but once that is fixed, this will block a faster CI. No idea whether that platform has any significant userbase.
Maybe disable the new PM for that platform or report it upstream?