
-chooser failfirst is very slow #78

Open · Labels: bug

gildor478 opened this issue Oct 24, 2020 · 0 comments

@gildor478 (Owner) commented:

This bug has been migrated from artifact #1745 on forge.ocamlcore.org. It was assigned to user100.

user24543 posted on 2017-03-21 14:49:51:

I have an OUnit test program that generates OUnit tests from the contents of directories. These directories contain source files for testing a compiler. One of the directories contains a plethora of source files generated by a synthesis tool. Overall, my test suite has about 4,500 tests.
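
For context, a directory-driven OUnit2 suite of this shape can be written roughly as follows. This is a sketch, not the reporter's actual code: the directory name tests/cases and the compile_file stub are illustrative placeholders.

```ocaml
(* Sketch of a directory-driven OUnit2 suite. "tests/cases" and compile_file
   are placeholders: compile_file stands in for invoking the compiler under
   test on one source file. *)
open OUnit2

let compile_file (_path : string) : bool =
  true (* placeholder for the real compiler invocation *)

let test_of_source_file path =
  Filename.basename path >:: fun _ctxt ->
    assert_bool ("compilation failed for " ^ path) (compile_file path)

let suite =
  "compiler tests" >:::
    (Sys.readdir "tests/cases"
     |> Array.to_list
     |> List.map (Filename.concat "tests/cases")
     |> List.map test_of_source_file)

let () = run_test_tt_main suite
```

With a layout like this, every source file becomes its own test case up front, which is how the suite ends up with roughly 4,500 tests even when only the first failure is of interest.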

I often want to identify the first bug immediately so I can begin working on it. For this reason, I usually run the tester with "-chooser failfirst" so I can get information about the first failed test without waiting for the other tests to run. Unfortunately, even this is very slow. Instead of immediately stopping the run and printing the results of the failed test, it still walks through the thousands of remaining tests and marks each one as skipped. Skipping them takes about twenty times as long as it took for the first test to fail.

It would be helpful if "-chooser failfirst" immediately stopped the test run, discarding the remaining tests instead of "running" each of them and discovering that they are in a skipped state. This doesn't seem to be something I can accomplish with command-line arguments, since I don't know which of the 4,500 tests will fail first and so I can't group them more sensibly than I already have.

Thank you!

user102 replied on 2017-03-21 15:33:54:

So you have a LOT of very small tests.

How long do the 4,500 tests take to run in skipped mode?

user24543 replied on 2017-03-21 16:59:18:

Thanks for the quick turnaround! I just ran my tests again to get more precise data.

When using "-chooser failfirst" and the first test fails: 14.21 seconds.

When using "-test-only junk", skipping all tests: 3.38 seconds.

user102 replied on 2017-03-21 17:22:34:

This is bad!

Can you check the size of the test logs in _build/?

Maybe they are huge, and that's why it takes so long.

user24543 replied on 2017-03-21 17:28:43:

That seems possible. When I run -chooser failfirst, my log file in _build is 2.7 MB of formatted text.

I tried to set -no-output-file but it didn't prevent the logging. Is there a way to suppress logging?

I'm on an SSD and my OS has 4 GB of RAM currently allocated to caches and buffering, so I doubt the I/O itself is the bottleneck. It might be the string formatting, though?

user102 replied on 2017-03-21 18:10:14:

Can you try '-runner sequential -no-output-file -health-check-interval 30' ?

user24543 replied on 2017-03-22 12:42:29:

I just tried to recreate the circumstances, but I've run into a problem: -chooser failfirst isn't behaving quite like I expected. Here's my little test rig:

https://github.com/zepalmer/ounit-problem-example

When I run those tests with "-chooser failfirst", it doesn't stop running the tests at all! Is this a known bug? If not, I should probably open a report for it.

In any case, I hobbled my test functions to always assert failures and I got the following results:

$ time (./tests.byte -runner sequential -no-output-file -health-check-interval 30 -only-test nothing >&/dev/null)

real 0m2.734s
user 0m2.700s
sys 0m0.028s

$ time (./gradingTests.byte -runner sequential -no-output-file -health-check-interval 30 -chooser failfirst >&/dev/null)

real 0m13.588s
user 0m13.480s
sys 0m0.084s

So it doesn't seem that the sequential runner had any impact on the results.
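
For illustration, the "hobbled" always-failing tests described above could look roughly like the sketch below; this is hypothetical code, not taken from the linked repository. Every case fails immediately, so most of the ~13.6 seconds reported above presumably goes to skipping and logging the rest of the suite, which is the behavior the original report describes.

```ocaml
(* Hypothetical stand-in for the hobbled suite: each of the ~4,500 cases
   asserts failure immediately, so any runtime left under -chooser failfirst
   comes from skipping and logging the remaining tests. *)
open OUnit2

let failing_test i =
  Printf.sprintf "case_%04d" i >:: fun _ctxt ->
    assert_failure "forced failure"

let suite = "always failing" >::: List.init 4500 failing_test

let () = run_test_tt_main suite
```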

user102 replied on 2017-03-22 13:20:52:

OK, I think I have enough data. I'll fix that with the next release.

user24543 replied on 2017-03-22 13:55:23:

Thank you!

gildor478 added the bug label on Oct 24, 2020