(I recommend copy-pasting the XML content from this issue into files and then opening those in a web browser. The tag nesting will be much more apparent.)
Take this code:
import pytest


@pytest.mark.parametrize('n', [0, 2, 4, 0, 3, 6, 0, 5, 10])
class TestClass:
    def test_func(self, n):
        print(n)
        self.run_single(n)

    def run_single(self, n):
        if n == 2 or n == 10:
            pytest.skip()
        assert n % 2 == 0, 'n is odd'
When run with pytest pytest-regular.py --junitxml=out-regular.xml, the XML file it produces is the following:
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
<testsuite name="pytest" errors="0" failures="2" skipped="2" tests="9" time="0.041" timestamp="2023-02-22T13:10:30.982095" hostname="stefano-XPS">
<testcase classname="pytest-regular.TestClass" name="test_func[00]" time="0.001" />
<testcase classname="pytest-regular.TestClass" name="test_func[2]" time="0.000"><skipped type="pytest.skip" message="Skipped">/home/stefano/git/neo4j-experiments/pytest-regular.py:11: Skipped</skipped></testcase>
<testcase classname="pytest-regular.TestClass" name="test_func[4]" time="0.000" />
<testcase classname="pytest-regular.TestClass" name="test_func[01]" time="0.000" />
<testcase classname="pytest-regular.TestClass" name="test_func[3]" time="0.001"><failure message="AssertionError: n is odd assert (3 % 2) == 0">self = <pytest-regular.TestClass object at 0x7f7770985300>, n = 3
def test_func(self, n):
print(n)
> self.run_single(n)
pytest-regular.py:7:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <pytest-regular.TestClass object at 0x7f7770985300>, n = 3
def run_single(self, n):
if n == 2 or n == 10:
pytest.skip()
> assert n%2 == 0, 'n is odd'
E AssertionError: n is odd
E assert (3 % 2) == 0
pytest-regular.py:13: AssertionError</failure></testcase>
<testcase classname="pytest-regular.TestClass" name="test_func[6]" time="0.000" />
<testcase classname="pytest-regular.TestClass" name="test_func[02]" time="0.000" />
<testcase classname="pytest-regular.TestClass" name="test_func[5]" time="0.001"><failure message="AssertionError: n is odd assert (5 % 2) == 0">self = <pytest-regular.TestClass object at 0x7f77709854b0>, n = 5
def test_func(self, n):
print(n)
> self.run_single(n)
pytest-regular.py:7:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <pytest-regular.TestClass object at 0x7f77709854b0>, n = 5
def run_single(self, n):
if n == 2 or n == 10:
pytest.skip()
> assert n%2 == 0, 'n is odd'
E AssertionError: n is odd
E assert (5 % 2) == 0
pytest-regular.py:13: AssertionError</failure></testcase>
<testcase classname="pytest-regular.TestClass" name="test_func[10]" time="0.000"><skipped type="pytest.skip" message="Skipped">/home/stefano/git/neo4j-experiments/pytest-regular.py:11: Skipped</skipped></testcase>
</testsuite>
</testsuites>
I tweaked that code to run the exact same test cases, but split into 3 tests of 3 subtests each:
import pytest


@pytest.mark.parametrize('start', [2, 3, 5])
class TestClass:
    def test_func(self, subtests, start):
        print(start)
        for multiplier in range(3):
            with subtests.test():
                n = start * multiplier
                print(n)
                self.run_single(n)

    def run_single(self, n):
        if n == 2 or n == 10:
            pytest.skip()
        assert n % 2 == 0, 'n is odd'
The resulting XML is:
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
<testsuite name="pytest" errors="0" failures="2" skipped="2" tests="12" time="0.041" timestamp="2023-02-22T13:10:24.166299" hostname="stefano-XPS">
<testcase classname="pytest-subtest.TestClass" name="test_func[2]" time="0.007"><skipped type="pytest.skip" message="Skipped">/home/stefano/git/neo4j-experiments/pytest-subtest.py:15: Skipped</skipped></testcase>
<testcase classname="pytest-subtest.TestClass" name="test_func[3]" time="0.017"><failure message="AssertionError: n is odd assert (3 % 2) == 0">self = <pytest-subtest.TestClass object at 0x7f8123d1e530>
subtests = SubTests(ihook=<_pytest.config.compat.PathAwareHookProxy object at 0x7f8124cf4190>, suspend_capture_ctx=<bound method ...te='started' _in_suspended=False> _capture_fixture=None>>, request=<SubRequest 'subtests' for <Function test_func[3]>>)
start = 3
def test_func(self, subtests, start):
print(start)
for multiplier in range(3):
with subtests.test():
n = start*multiplier
print(n)
> self.run_single(n)
pytest-subtest.py:11:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <pytest-subtest.TestClass object at 0x7f8123d1e530>, n = 3
def run_single(self, n):
if n == 2 or n == 10:
pytest.skip()
> assert n%2 == 0, 'n is odd'
E AssertionError: n is odd
E assert (3 % 2) == 0
pytest-subtest.py:17: AssertionError</failure></testcase>
<testcase classname="pytest-subtest.TestClass" name="test_func[5]" time="0.004"><failure message="AssertionError: n is odd assert (5 % 2) == 0">self = <pytest-subtest.TestClass object at 0x7f8123d1e410>
subtests = SubTests(ihook=<_pytest.config.compat.PathAwareHookProxy object at 0x7f8124cf4190>, suspend_capture_ctx=<bound method ...te='started' _in_suspended=False> _capture_fixture=None>>, request=<SubRequest 'subtests' for <Function test_func[5]>>)
start = 5
def test_func(self, subtests, start):
print(start)
for multiplier in range(3):
with subtests.test():
n = start*multiplier
print(n)
> self.run_single(n)
pytest-subtest.py:11:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <pytest-subtest.TestClass object at 0x7f8123d1e410>, n = 5
def run_single(self, n):
if n == 2 or n == 10:
pytest.skip()
> assert n%2 == 0, 'n is odd'
E AssertionError: n is odd
E assert (5 % 2) == 0
pytest-subtest.py:17: AssertionError</failure><skipped type="pytest.skip" message="Skipped">/home/stefano/git/neo4j-experiments/pytest-subtest.py:15: Skipped</skipped></testcase>
</testsuite>
</testsuites>
In the subtest version, the XML lacks any information about passed subtests. There is info about failures/skips as nested tags inside a testcase, but while the non-subtests version lists every test in its own testcase tag, the subtests one only lists tests and subtests with a special status. This can throw off CI tools that count testcase tags (see the counting sketch below). We have a few tens of tests that each spawn hundreds of subtests (in a scenario that makes sense, unlike the stupid example here), and
- we get a full test fail/skip if one subtest fails/is skipped;
- we get a single pass if all subtests of one test pass.
If we run 3 tests with 3 subtests each, and 2 subtests are skipped and one fails, my expectation would be for the CI to report 1 failure, 2 skips, 7 passes (or 9 if we also consider the tests, I don't care). Instead, depending a bit on where the tests fail/skip, I can now get 1 failure, 2 skips, 0 passes.
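To make the skew concrete, here is a rough sketch of the kind of tallying a testcase-counting CI tool might do over the two files above (assuming they were saved as out-regular.xml and out-subtest.xml, as in the commands used here; real parsers are of course more elaborate):

import xml.etree.ElementTree as ET

def summarize(path):
    # Count <testcase> elements the way a naive JUnit consumer would.
    failed = skipped = passed = 0
    for case in ET.parse(path).getroot().iter('testcase'):
        if case.find('failure') is not None or case.find('error') is not None:
            failed += 1
        elif case.find('skipped') is not None:
            skipped += 1
        else:
            passed += 1
    return failed, skipped, passed

for path in ('out-regular.xml', 'out-subtest.xml'):
    print(path, summarize(path))

On the XML above this prints (2, 2, 5) for the regular run, since every parametrized case has its own testcase tag, but (2, 1, 0) for the subtest run: passing subtests leave no testcase at all, and test_func[5], which contains both a failure and a skip, is counted only once.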
Is there scope for improving on this?
I did not take a deep look, but I believe that would need tighter integration with the builtin junitxml plugin, which is not easy for an external plugin to do. This can probably be addressed more easily once we integrate pytest-subtests into the core, as then SubTestReport will be an official report which the other plugins can handle accordingly.
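As a rough illustration of why this really needs the builtin plugin's cooperation: an external conftest.py can already see and count the subtest reports, but that only gets you a custom terminal summary line, not extra testcase entries in the JUnit XML. A sketch (detecting subtest reports by class name pokes at pytest-subtests internals, so treat it as an assumption, not a supported API):

# conftest.py -- illustration only
import collections

subtest_outcomes = collections.Counter()

def pytest_runtest_logreport(report):
    # pytest-subtests emits an extra report per subtest during the call phase;
    # identifying them by class name is an implementation detail.
    if report.when == "call" and type(report).__name__ == "SubTestReport":
        subtest_outcomes[report.outcome] += 1

def pytest_terminal_summary(terminalreporter):
    if subtest_outcomes:
        terminalreporter.write_line(
            "subtest outcomes: "
            + ", ".join(f"{k}={v}" for k, v in sorted(subtest_outcomes.items()))
        )

Getting those same reports into the XML would mean teaching the junitxml plugin about them, which is what the core integration would allow.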
For clarification:
my expectation would be for the CI to report 1 failure, 2 skips, 7 passes (or 9 if we also consider the tests, I don't care). Instead, depending a bit on where the tests fail/skip, I can now get 1 failure, 2 skips, 0 passes.
Here you mean the report generated by your CI from reading the XML file, not from pytest's output summary, correct?