Meta: Performance benchmarks for sbt #3731
Comments
> Zinc != sbt, so in this case it would make sense to benchmark the real overhead of sbt by not only focusing on

I think this is a good idea and I'm happy someone is looking at it. I think that more profiling of sbt is required.
You are of course right, but at sbt/zinc#371 (comment) @eed3si9n says that it would be good to have a repeatable benchmark to see which scenario would benefit most from the improved hashing. This (admittedly simple) benchmark provides just that. I thought I'd start with a simple scenario that lets us explore ways of benchmarking and provides hard data. More test cases should of course be added.
I guess the logger fix would drastically improve the picture, though it's definitely time for profiling.
Exactly, the point of this benchmark is to take the guesswork out of figuring out whether a proposed performance improvement has any meaningful impact on real projects, although the sbt project under test is rather extreme in the size of its dependencies.
The recent weeks have seen a lot of talk about performance regressions and various ways to fix them. One thing that the core developers have asked for is a repeatable set of benchmarks that compares various versions of sbt and demonstrates clear improvements should a PR be merged.
Since nothing seemed to exist, I took it upon myself to write one.
As the first test I used @fommil's test repo with a huge classpath but almost no source files: https://github.com/cakesolutions/sbt-cake/tree/sbt-perf-regression
I wrote a little bit of tooling (see the accompanying blog post) around running some tasks in sbt 0.13.16 and 1.0.3 and timing the runs.
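For illustration, such tooling can be sketched as a small shell harness that runs the same tasks under each sbt version and records wall-clock time. This is a hypothetical sketch, not the author's actual scripts: the launcher names `sbt-0.13.16` and `sbt-1.0.3` and the task list are assumptions.

```shell
#!/bin/sh
# Hypothetical timing harness: run the same tasks under several sbt
# versions and print elapsed wall-clock seconds per cold run.
# Assumes launcher scripts named sbt-0.13.16 and sbt-1.0.3 are on PATH.

# time_cmd runs its arguments, discards their output, and prints
# the elapsed wall-clock time in whole seconds.
time_cmd() {
  START=$(date +%s)
  "$@" >/dev/null 2>&1
  END=$(date +%s)
  echo $((END - START))
}

for SBT in sbt-0.13.16 sbt-1.0.3; do   # assumed launcher names
  for RUN in 1 2 3; do                  # three cold runs per version
    SECS=$(time_cmd "$SBT" -batch clean update compile)
    echo "$SBT run $RUN: ${SECS}s"
  done
done
```

Second-granularity timing is crude but adequate here, since the regressions under discussion are on the order of tens of seconds; a finer-grained tool would be needed for sub-second effects.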
The results are sobering.
I'm going to run the benchmarks again once 1.0.4 is released and report back.
Nevertheless, I would like to solicit feedback on my methodology and stimulate a discussion about how to measure the performance of sbt itself.
I can even imagine running this as part of the sbt build just like scalac has been doing for a little while.
WDYT?