Replaced linear token counting macros with optimized implementation #59600
Conversation
Thanks for the pull request, and welcome! The Rust team is excited to review your changes, and you should hear from @KodrAus (or someone else) soon. If any changes to this PR are deemed necessary, please add them as extra commits. This ensures that the reviewer can see what has changed since they last reviewed the code. Due to the way GitHub handles out-of-date commits, this should also make it reasonably obvious what issues have or haven't been addressed. Large or tricky changes may require several passes of review and changes. Please see the contribution instructions for more information.
The job … failed. Click to expand the log.
I'm a bot! I can only do what humans tell me to, so if this was not helpful or you have suggestions for improvements, please ping or otherwise contact …
Should we deduplicate one of these macros?
I don't know if the size of the final generated code here matters in practice. The previous code being generated looks quite trivial to optimize, but the logarithmic variant probably also optimizes well. Out of curiosity, did you attempt to measure the expansion time itself of the old and new versions on large inputs?

Update: I don't want to give the wrong impression: I'm not objecting to this PR. I just was a little surprised to see it.
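For context, the linear pattern under discussion has this shape (a minimal sketch of the technique, not necessarily the exact macros in the tree):

```rust
// Counts token trees by peeling one off per recursion step. The
// expansion is `1usize + (1usize + (... + 0usize))`, so both the
// recursion depth and the expansion size grow linearly with the input.
macro_rules! count {
    () => (0usize);
    ($head:tt $($tail:tt)*) => (1usize + count!($($tail)*));
}
```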
(also, in case you hadn't noticed, travis is complaining about unneeded parentheses in the macro definitions here...)
Hello. I've removed the extra parentheses; let's see if the warnings are gone.

To me it seemed obvious that a linear counter was a weak solution that could cause trouble sooner or later, be it for the time taken to count things, for the memory used by macro recursion, or for Rust's default recursion limit, especially considering that a token counter could conceivably be used to implement initializers like …

I agree that it should be deduplicated in a standard place, but I don't know Rust's sources well enough to suggest the correct place. Also, there may be other instances of that linear counting algorithm elsewhere in the source tree that my grep-fu didn't turn up.

Anyway, I did an accurate measurement, using this source code. I did the various measurements by hand, changing the number of tokens in the macro call, commenting/uncommenting the two macros, and running each test several times in a Bash loop, with …
(*) Rust's default recursion limit does not allow the old linear macro to run on more than 61 tokens, so for the test's sake I artificially increased it with the `#![recursion_limit = "…"]` crate attribute.

Also, in case anybody is curious, this is the expansion for 10,000 tokens: …
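For a sense of that expansion's shape, a halving counter of the kind this PR describes (a sketch of the usual divide-and-conquer formulation; the committed code may differ in details) builds the count bit by bit:

```rust
// Counts token trees with logarithmic recursion depth. An even input
// keeps one token per pair and doubles the half-count (shift left);
// an odd input peels one token off and sets the low bit.
macro_rules! count {
    () => (0usize);
    ($one:tt) => (1usize);
    ($($pairs:tt $_p:tt)*) => (count!($($pairs)*) << 1usize);
    ($odd:tt $($rest:tt)*) => (count!($($rest)*) | 1usize);
}
```

With this shape, `count!(a b c d e)` expands to something like `((((1usize) << 1usize) << 1usize) | 1usize)`, which evaluates to 5; on this sketch, a 10,000-token input nests fewer than twenty levels deep instead of 10,000.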
Compare that to summing 1 to itself 10,000 times, in a non-tail-recursive call, needing 10,000 stack frames.
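As a usage sketch tying back to the initializer remark above (the `array_of!` helper is hypothetical, and it assumes the halving `count!` from the previous sketch is in scope):

```rust
// Hypothetical helper: builds a fixed-size array whose length is the
// number of elements, counted at compile time by `count!` (assumed to
// be the halving macro sketched above).
macro_rules! array_of {
    ($($x:expr),*) => {{
        let arr: [i32; count!($($x)*)] = [$($x),*];
        arr
    }};
}

fn main() {
    let a = array_of!(10, 20, 30);
    assert_eq!(a.len(), 3);
}
```

With the linear counter, a long enough element list would hit the default recursion limit; the halving version keeps the depth logarithmic.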
r? @pnkfelix
What's the status on this?
Ping from triage @pnkfelix
@bors r+ rollup
📌 Commit a4a07e0 has been approved by `pnkfelix`
This PR may affect parsing performance. Does it need to be a rollup?
@bors r+ rollup
💡 This pull request was already approved, no need to approve it again.
|
📌 Commit a4a07e0 has been approved by `pnkfelix`
I'm having some issues interacting with github at the moment (thus the duplicate approvals). I don't care one way or another whether this is rolled up, but I don't expect there to be much impact on performance either way (apart from the case of microbenchmarks like the ones given above).
Replaced linear token counting macros with optimized implementation

There are currently two distinct token-counting macros in the source. Both implement the trivial algorithm, with linear complexity. They may or may not be adequate for their use case, but considering that other people are probably going to copy and paste them whenever they need a token-counting macro, I replaced them with an optimized implementation with logarithmic complexity.
Rollup of 5 pull requests

Successful merges:
- #59600 (Replaced linear token counting macros with optimized implementation)
- #61501 (get rid of real_intrinsics module)
- #61570 (Fix issues with const argument inference)
- #61683 (Haiku: the maximum stack size is 16 MB)
- #61697 (submodules: update clippy from 71be6f6 to c0dbd34)

Failed merges:

r? @ghost