Meta: Performance benchmarks for sbt #3731

Open · leonardehrenfried opened this issue Nov 12, 2017 · 4 comments

@leonardehrenfried (Contributor)

Recent weeks have seen a lot of talk about performance regressions and various ways to fix them. One thing the core developers have asked for is a repeatable set of benchmarks that compares various versions of sbt and can demonstrate a clear improvement when a PR is merged.

Since nothing seemed to exist, I took it upon myself to write one.

As the first test I used @fommil's test repo with a huge classpath but almost no source files: https://github.com/cakesolutions/sbt-cake/tree/sbt-perf-regression

I wrote a little bit of tooling (see the accompanying blog post) that runs some tasks in sbt 0.13.16 and 1.0.3 and times the runs.
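
For reference, a minimal sketch of what such a timing harness could look like. This is not the actual tooling from the blog post; the launcher script names (`sbt-0.13.16`, `sbt-1.0.3`) and the exact command sequences are assumptions.

```scala
// Sketch of a benchmark driver, assuming launcher scripts named
// sbt-0.13.16 and sbt-1.0.3 are on the PATH and that this runs from
// the root of the test project. Not the actual tooling from the post.
import scala.sys.process._

object SbtBench {
  // Run one batch-mode sbt invocation (commands passed as arguments)
  // and return the wall-clock duration in seconds.
  def time(launcher: String, commands: Seq[String]): Double = {
    val start = System.nanoTime()
    val exit  = Process(launcher +: commands).!  // blocks until sbt exits
    require(exit == 0, s"'$launcher ${commands.mkString(" ")}' exited with code $exit")
    (System.nanoTime() - start) / 1e9
  }

  def main(args: Array[String]): Unit = {
    val scenarios = Seq(
      "startup"       -> Seq("exit"),
      "compile once"  -> Seq("clean", "compile"),
      "compile twice" -> Seq("clean", "compile", "compile")
    )
    for (launcher <- Seq("sbt-0.13.16", "sbt-1.0.3"); (name, commands) <- scenarios)
      println(f"$launcher%-12s | $name%-13s | ${time(launcher, commands)}%6.1f s")
  }
}
```

Each scenario starts a fresh JVM, so the timings include JVM startup and warm-up. That matches what a user experiences on the command line, but it is worth keeping in mind when interpreting the numbers.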

The results are sobering:

| command       | sbt 0.13.16 | sbt 1.0.3   |
| ------------- | ----------- | ----------- |
| startup       | 37 seconds  | 45 seconds  |
| compile once  | 89 seconds  | 129 seconds |
| compile twice | 107 seconds | 189 seconds |

I'm going to run the benchmarks again once 1.0.4 is released and report back.

Nevertheless, I would like to solicit feedback on my methodology and start a discussion about how to measure the performance of sbt itself.

I can even imagine running this as part of the sbt build itself, just as scalac has been doing for a while.

WDYT?
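
On the "run it as part of the sbt build" idea, here is a rough sketch of how a benchmark task could be wired in. The `runBenchmarks` key and the `bench/run.sh` script are made-up names for illustration, not anything that exists in sbt's build today.

```scala
// project/BenchmarkPlugin.scala -- sketch only; key and script names are invented.
import sbt._
import Keys._

object BenchmarkPlugin extends AutoPlugin {
  object autoImport {
    val runBenchmarks = taskKey[Unit]("Run the performance benchmark scenarios.")
  }
  import autoImport._

  override def projectSettings: Seq[Setting[_]] = Seq(
    runBenchmarks := {
      val log = streams.value.log
      log.info(s"Benchmarking sbt ${version.value}...")
      // Shell out to a benchmark script and fail the task on a non-zero exit code.
      val exit = scala.sys.process.Process(Seq("./bench/run.sh", version.value)).!
      if (exit != 0) sys.error(s"Benchmarks failed with exit code $exit")
    }
  )
}
```

A CI job could then run `sbt runBenchmarks` on each PR and compare against a stored baseline, roughly in the spirit of the compiler benchmarks scalac runs regularly against new commits.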

@jvican (Member) commented Nov 13, 2017

Zinc != sbt, so in this case it would make sense to benchmark the real overhead of sbt by not only focusing on compile. We can deal with Zinc regressions in the main sbt/zinc repository.
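
To make that concrete (purely an assumed extension of the scenario list sketched above, not a concrete proposal from this thread), the harness could also time scenarios that exercise sbt's own machinery without much compilation:

```scala
// Hypothetical extra scenarios that separate sbt's own overhead (build loading,
// dependency resolution, task engine) from Zinc compilation. They reuse the
// sketched harness above; the command names are standard sbt commands/tasks.
val overheadScenarios = Seq(
  "load build only"   -> Seq("exit"),               // startup + build loading, no tasks
  "dependency update" -> Seq("update"),             // resolution without compilation
  "no-op second run"  -> Seq("compile", "compile")  // the second compile should be a Zinc no-op
)
```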

I think this is a good idea and I'm happy someone is looking at it. I think that more profiling for sbt is required.

@leonardehrenfried (Contributor, Author) commented Nov 13, 2017

You are of course right, but at sbt/zinc#371 (comment) @eed3si9n says that it would be good to have a repeatable benchmark to see which scenario would benefit most from the improved hashing. This (admittedly simple) benchmark provides just that.

I thought I'd start with a simple scenario that lets us explore ways of benchmarking and provides hard data. More test cases should of course be added.

@pshirshov

> I'm going to run the benchmarks again once 1.0.4 is released and report back.

I guess the logger fix would drastically improve the picture. Still, it's definitely time for some profiling.

@leonardehrenfried (Contributor, Author)

> I guess the logger fix would drastically improve the picture. Still, it's definitely time for some profiling.

Exactly. The point of this benchmark is to take the guesswork out of figuring out whether a proposed performance improvement has any meaningful impact on real projects, although the project under test is rather extreme in the size of its dependencies.
