
Add test count statistics at end of run #18

Open

clach04 opened this issue Mar 7, 2016 · 3 comments

clach04 commented Mar 7, 2016

At the end of a run, include counts for:

  • tests run
  • tests skipped
  • tests failed
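
For reference, stdlib unittest already keeps these numbers on the result object; a minimal sketch of printing them at the end of a run, assuming plain unittest and a hypothetical test module named testme:

import unittest

# 'testme' is a hypothetical module name, for illustration only.
suite = unittest.defaultTestLoader.loadTestsFromName('testme')
result = unittest.TextTestRunner(verbosity=0).run(suite)

# The TestResult object already carries the raw numbers we want.
print('tests ran:     %d' % result.testsRun)
print('tests skipped: %d' % len(result.skipped))
print('tests failed:  %d' % len(result.failures))
print('tests errored: %d' % len(result.errors))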
bodob commented May 25, 2016

@clach04, I'm not sure whether the statistics should be counted per test class, per module (application), or across all applications.
We can easily add statistics for each TestCase subclass, counting the individual test methods that ran/skipped/failed; the output would go to the trace window (see the sketch below).
Alternatively, we could write the stats to an output file (configurable, e.g. via an environment variable), e.g. as XML, collected per test class, per test application, or across all test applications.
Remember:
The or_tests.bash script executes all test applications; each test application may contain several test classes, and each class may contain several test methods.
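
To make that first option concrete, here is a minimal sketch against stdlib unittest; the StatsResult name, the OR_TEST_STATS environment variable, and the plain-text line format are all placeholder assumptions, not anything agreed on:

import collections
import os
import unittest

class StatsResult(unittest.TextTestResult):
    """Tally tests that ran/skipped/failed, per TestCase subclass."""

    def __init__(self, *args, **kwargs):
        super(StatsResult, self).__init__(*args, **kwargs)
        self.per_class = collections.defaultdict(collections.Counter)

    def _bump(self, test, outcome):
        self.per_class[test.__class__.__name__][outcome] += 1

    def startTest(self, test):
        super(StatsResult, self).startTest(test)
        self._bump(test, 'ran')

    def addFailure(self, test, err):
        super(StatsResult, self).addFailure(test, err)
        self._bump(test, 'failed')

    def addSkip(self, test, reason):
        super(StatsResult, self).addSkip(test, reason)
        self._bump(test, 'skipped')

    def dump_stats(self):
        # OR_TEST_STATS is a made-up name; append to that file if set,
        # otherwise print (the "trace window" case).
        path = os.environ.get('OR_TEST_STATS')
        out = open(path, 'a') if path else None
        for cls in sorted(self.per_class):
            c = self.per_class[cls]
            line = '%s: ran=%d skipped=%d failed=%d' % (
                cls, c['ran'], c['skipped'], c['failed'])
            if out:
                out.write(line + '\n')
            else:
                print(line)
        if out:
            out.close()

# Usage would be something like:
#   result = unittest.TextTestRunner(resultclass=StatsResult).run(suite)
#   result.dump_stats()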

So, what are your expectations?


clach04 commented May 25, 2016

I'll copy/paste a couple of Python unittest (stdout) outputs below, as those are easy for me to lay my hands on right now; this is basic unittest, nothing fancy with additional options. Stats on each test application / class would be interesting, but the statistic most commonly reported is the count of test methods.

Output below is from a single file (Python) unittest with 2 test classes:

................
----------------------------------------------------------------------
Ran 16 tests in 0.132s

OK

The "." signify test ran successfully

Same file (2x test classes) with a hack to make one test fail:

..............F.
======================================================================
FAIL: test_generate_compare_sql_list_mp3 (__main__.TestUtilSQL)
... NOTE I've omitted the failure dump ...
----------------------------------------------------------------------
Ran 16 tests in 0.130s

FAILED (errors=1)

And the same, but with the verbose flag:

test_check_for_dupe_filenames_more (__main__.TestFts) ... ok
test_check_for_dupe_filenames_more_exclude (__main__.TestFts) ... ok
test_check_for_dupe_filenames_more_include (__main__.TestFts) ... ok
test_check_for_dupes (__main__.TestFts) ... ok
test_check_for_dupes_more (__main__.TestFts) ... ok
test_check_for_missing_filenames (__main__.TestFts) ... ok
test_listings_non_ascii_CJK_into_sqlite3 (__main__.TestFts) ... ok
test_listings_non_ascii_latin1_into_sqlite3 (__main__.TestFts) ... ok
test_md5_zip_listings_into_sqlite3 (__main__.TestFts) ... ok
test_md5_zip_listings_into_sqlite3_md5deep_out (__main__.TestFts) ... ok
test_zip_listings_into_sqlite3 (__main__.TestFts) ... ok
test_zip_listings_into_sqlite3_table_already_exists (__main__.TestFts) ... ok
test_generate_compare_sql_exclude_tuple_mp3_ogg (__main__.TestUtilSQL) ... ok
test_generate_compare_sql_include_tuple_mp3_ogg (__main__.TestUtilSQL) ... ok
test_generate_compare_sql_list_mp3 (__main__.TestUtilSQL) ... FAIL
test_generate_compare_sql_tuple_mp3 (__main__.TestUtilSQL) ... ok
... NOTE I've omitted the failure dump ...
----------------------------------------------------------------------
Ran 16 tests in 0.130s

FAILED (errors=1)

To explain what test_generate_compare_sql_tuple_mp3 (__main__.TestUtilSQL) means: it's the test_generate_compare_sql_tuple_mp3() method of the TestUtilSQL class.

My instinct is to recommend we append something like the verbose output format to a text file, which can then be summarized (we are already relying on bash, grep and wc). This would be fairly quick and straightforward to implement; we can handle JUnit/Hudson/Jenkins-compatible XML output separately when we have a need.
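
For illustration, a rough sketch of that summarizing step in Python (results.log is a hypothetical file name; the same counts could equally come from grep -c / wc -l in or_tests.bash):

# results.log is a hypothetical name for the appended verbose output.
counts = {'ok': 0, 'FAIL': 0, 'ERROR': 0, 'skipped': 0}
with open('results.log') as log:
    for line in log:
        # Verbose unittest result lines look like:
        #   test_name (module.Class) ... ok
        for outcome in counts:
            if '... ' + outcome in line:
                counts[outcome] += 1
                break
print('ran=%d skipped=%d failed=%d errors=%d' % (
    sum(counts.values()), counts['skipped'], counts['FAIL'], counts['ERROR']))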


clach04 commented May 25, 2016

CC'ing @ravendaemon into the conversation.

clach04 added a commit that referenced this issue May 31, 2016