
Test case verbosity #11653

Merged · 15 commits · Feb 24, 2024
5 changes: 5 additions & 0 deletions changelog/11639.feature.rst
@@ -0,0 +1,5 @@
Added the new :confval:`verbosity_test_cases` configuration option for fine-grained control of test execution output verbosity.

See :ref:`Fine-grained verbosity <pytest.fine_grained_verbosity>` for more details.

For plugin authors, :attr:`config.get_verbosity <pytest.Config.get_verbosity>` can be used to retrieve the verbosity level for a specific verbosity type.
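
As an illustration, a plugin hook might consult the new verbosity type like this (a minimal
sketch; the hook body and the threshold of ``2`` are illustrative, not part of this change):

.. code-block:: python

    import pytest

    def pytest_configure(config: pytest.Config) -> None:
        # Resolves to verbosity_test_cases when it is set, and falls back to
        # the global -v level when it is "auto" (the default).
        if config.get_verbosity(pytest.Config.VERBOSITY_TEST_CASES) >= 2:
            ...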
4 changes: 3 additions & 1 deletion doc/en/how-to/output.rst
@@ -298,7 +298,9 @@ This is done by setting a verbosity level in the configuration file for the spec
``pytest --no-header`` with a value of ``2`` would have the same output as the previous example, but each test inside
the file is shown by a single character in the output.

-(Note: currently this is the only option available, but more might be added in the future).
+:confval:`verbosity_test_cases`: Controls how verbose the test execution output should be when pytest is executed.
+Running ``pytest --no-header`` with a value of ``2`` would have the same output as the first verbosity example, but each
+test inside the file gets its own line in the output.
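
For instance, the behavior described above could be configured like this (a minimal sketch;
the ini file name is assumed):

.. code-block:: ini

    # content of pytest.ini
    [pytest]
    verbosity_test_cases = 2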

.. _`pytest.detailed_failed_tests_usage`:

13 changes: 13 additions & 0 deletions doc/en/reference/reference.rst
@@ -1835,6 +1835,19 @@ passed multiple times. The expected format is ``name=value``. For example::
"auto" can be used to explicitly use the global verbosity level.


.. confval:: verbosity_test_cases

    Set a verbosity level specifically for test case execution related output, overriding the application-wide level.

    .. code-block:: ini

        [pytest]
        verbosity_test_cases = 2

    Defaults to the application-wide verbosity level (via the ``-v`` command-line option). A special value of
    "auto" can be used to explicitly use the global verbosity level.


.. confval:: xfail_strict

If set to ``True``, tests marked with ``@pytest.mark.xfail`` that actually succeed will by default fail the
2 changes: 2 additions & 0 deletions src/_pytest/config/__init__.py
@@ -1653,6 +1653,8 @@ def getvalueorskip(self, name: str, path=None):

    #: Verbosity type for failed assertions (see :confval:`verbosity_assertions`).
    VERBOSITY_ASSERTIONS: Final = "assertions"
    #: Verbosity type for test case execution (see :confval:`verbosity_test_cases`).
    VERBOSITY_TEST_CASES: Final = "test_cases"
    _VERBOSITY_INI_DEFAULT: Final = "auto"

    def get_verbosity(self, verbosity_type: Optional[str] = None) -> int:
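
The lookup this method performs can be sketched as follows (a hedged summary of the
documented semantics, not the exact implementation; ``config`` stands for any ``Config``
instance):

    global_level = config.get_verbosity()  # the global -v level
    per_type = config.get_verbosity(Config.VERBOSITY_TEST_CASES)
    # per_type equals the verbosity_test_cases ini value when one is set, and falls
    # back to global_level when the ini value is "auto" (the default).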
26 changes: 19 additions & 7 deletions src/_pytest/terminal.py
@@ -253,6 +253,14 @@
"progress even when capture=no)",
default="progress",
)
Config._add_verbosity_ini(
parser,
Config.VERBOSITY_TEST_CASES,
help=(
"Specify a verbosity level for test case execution, overriding the main level. "
"Higher levels will provide more detailed information about each test case executed."
),
)


def pytest_configure(config: Config) -> None:
@@ -415,7 +423,7 @@

    @property
    def showlongtestinfo(self) -> bool:
-        return self.verbosity > 0
+        return self.config.get_verbosity(Config.VERBOSITY_TEST_CASES) > 0

    def hasopt(self, char: str) -> bool:
        char = {"xfailed": "x", "skipped": "s"}.get(char, char)
@@ -593,7 +601,7 @@
markup = {"yellow": True}
else:
markup = {}
if self.verbosity <= 0:
if self.config.get_verbosity(Config.VERBOSITY_TEST_CASES) <= 0:
self._tw.write(letter, **markup)
else:
self._progress_nodeids_reported.add(rep.nodeid)
@@ -602,7 +610,7 @@
            self.write_ensure_prefix(line, word, **markup)
            if rep.skipped or hasattr(report, "wasxfail"):
                reason = _get_raw_skip_reason(rep)
-                if self.config.option.verbose < 2:
+                if self.config.get_verbosity(Config.VERBOSITY_TEST_CASES) < 2:
                    available_width = (
                        (self._tw.fullwidth - self._tw.width_of_current_line)
                        - len(" [100%]")
@@ -639,7 +647,10 @@

    def pytest_runtest_logfinish(self, nodeid: str) -> None:
        assert self._session
-        if self.verbosity <= 0 and self._show_progress_info:
+        if (
+            self.config.get_verbosity(Config.VERBOSITY_TEST_CASES) <= 0
+            and self._show_progress_info
+        ):
            if self._show_progress_info == "count":
                num_tests = self._session.testscollected
                progress_length = len(f" [{num_tests}/{num_tests}]")
@@ -819,8 +830,9 @@
            rep.toterminal(self._tw)

    def _printcollecteditems(self, items: Sequence[Item]) -> None:
-        if self.config.option.verbose < 0:
-            if self.config.option.verbose < -1:
+        test_cases_verbosity = self.config.get_verbosity(Config.VERBOSITY_TEST_CASES)
(Codecov warning on src/_pytest/terminal.py#L833: added line was not covered by tests.)
+        if test_cases_verbosity < 0:
+            if test_cases_verbosity < -1:
                counts = Counter(item.nodeid.split("::", 1)[0] for item in items)
                for name, count in sorted(counts.items()):
                    self._tw.line("%s: %d" % (name, count))
@@ -840,7 +852,7 @@
                stack.append(col)
                indent = (len(stack) - 1) * "  "
                self._tw.line(f"{indent}{col}")
-                if self.config.option.verbose >= 1:
+                if test_cases_verbosity >= 1:
                    obj = getattr(col, "obj", None)
                    doc = inspect.getdoc(obj) if obj else None
                    if doc:
120 changes: 120 additions & 0 deletions testing/test_terminal.py
@@ -2614,3 +2614,123 @@ def test_format_trimmed() -> None:

    assert _format_trimmed(" ({}) ", msg, len(msg) + 4) == " (unconditional skip) "
    assert _format_trimmed(" ({}) ", msg, len(msg) + 3) == " (unconditional ...) "


def test_fine_grained_test_case_verbosity(pytester: Pytester):
    p = pytester.makepyfile(_fine_grained_verbosity_file_contents())
    pytester.makeini(
        """
        [pytest]
        verbosity_test_cases = 2
        """
    )
    result = pytester.runpytest(p)

    result.stdout.fnmatch_lines(
        [
            f"{p.name}::test_ok PASSED [ 14%]",
            f"{p.name}::test_words_fail FAILED [ 28%]",
            f"{p.name}::test_numbers_fail FAILED [ 42%]",
            f"{p.name}::test_long_text_fail FAILED [ 57%]",
            f"{p.name}::test_parametrize_fail[hello-1] FAILED [ 71%]",
            f"{p.name}::test_parametrize_fail[world-987654321] FAILED [ 85%]",
            f"{p.name}::test_sample_skip SKIPPED (some",
            "long skip reason that will not fit on a single line with other content",
            "that goes on and on and on and on and on) [100%]",
        ],
        consecutive=True,
    )


def test_fine_grained_test_case_verbosity_collect_only_negative_2(pytester: Pytester):
    p = pytester.makepyfile(_fine_grained_verbosity_file_contents())
    pytester.makeini(
        """
        [pytest]
        verbosity_test_cases = -2
        """
    )
    result = pytester.runpytest("--collect-only", p)

    result.stdout.fnmatch_lines(
        [
            "collected 7 items",
            "",
            f"{p.name}: 7",
        ],
        consecutive=True,
    )


def test_fine_grained_test_case_verbosity_collect_only_positive_2(pytester: Pytester):
    p = pytester.makepyfile(_fine_grained_verbosity_file_contents())
    pytester.makeini(
        """
        [pytest]
        verbosity_test_cases = 2
        """
    )
    result = pytester.runpytest("--collect-only", p)

    result.stdout.fnmatch_lines(
        [
            "collected 7 items",
            "",
            f"<Module {p.name}>",
            "  <Function test_ok>",
            "    some docstring",
            "  <Function test_words_fail>",
            "  <Function test_numbers_fail>",
            "  <Function test_long_text_fail>",
            "  <Function test_parametrize_fail[hello-1]>",
            "  <Function test_parametrize_fail[world-987654321]>",
            "  <Function test_sample_skip>",
        ],
        consecutive=True,
    )


@nicoddemus (Member) commented on Dec 2, 2023:

    Not sure we need these complex tests; just a simple parametrized test would allow us
    to test the purpose of the flag just as well, right?

        @pytest.mark.parametrize("i", range(7))
        def test_ok(i): pass

    The rationale is that we are not testing the details of the output, only that it is
    more or less verbose according to the option, so the names and the lengths of the
    tests' docstrings do not play a role here, and they are already being tested elsewhere.

plannigan (Contributor, author) replied:

    I started working on this and found that I missed part of the negative verbosity case
    when executing tests. So I'm going to do some more work on adding more tests for this
    branch to increase the confidence that we have the correct behavior.

def _fine_grained_verbosity_file_contents() -> str:
    long_text = "Lorem ipsum dolor sit amet " * 10
    return f"""
import pytest
def test_ok():
    '''
    some docstring
    '''
    pass


def test_words_fail():
    fruits1 = ["banana", "apple", "grapes", "melon", "kiwi"]
    fruits2 = ["banana", "apple", "orange", "melon", "kiwi"]
    assert fruits1 == fruits2


def test_numbers_fail():
    number_to_text1 = {{str(x): x for x in range(5)}}
    number_to_text2 = {{str(x * 10): x * 10 for x in range(5)}}
    assert number_to_text1 == number_to_text2


def test_long_text_fail():
    long_text = "{long_text}"
    assert "hello world" in long_text


@pytest.mark.parametrize(["foo", "bar"], [
    ("hello", 1),
    ("world", 987654321),
])
def test_parametrize_fail(foo, bar):
    long_text = f"{{foo}} {{bar}}"
    assert "hello world" in long_text


@pytest.mark.skip(
    "some long skip reason that will not fit on a single line with other content that goes"
    " on and on and on and on and on"
)
def test_sample_skip():
    pass
"""
"""