
Show full repr with assert a==b and -vv #4607

Merged: 1 commit into pytest-dev:master on Jan 10, 2019

Conversation

oscarbenjamin (Contributor)

Fixes #2256

Adds a new explanation handler to show reprs of arbitrary objects. The handler shows the full repr in test failure output for assert x==y when -vv is given on the command line.
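A minimal sketch of the new handler (the helper name matches the diff excerpts quoted in the review below; the body here is an approximation rather than the exact patch):

def _compare_eq_verbose(left, right):
    keepends = True
    left_lines = repr(left).splitlines(keepends)
    right_lines = repr(right).splitlines(keepends)

    # Show both full reprs as a "-left / +right" explanation; pytest's usual
    # truncation step later decides how much of this is actually displayed.
    explanation = []
    explanation += ["-" + line for line in left_lines]
    explanation += ["+" + line for line in right_lines]
    return explanation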

Here's a test case:

class Nums:
    def __init__(self, nums):
        self.nums = nums
    def __repr__(self):
        return str(self.nums)

def test_f():
    x = list(range(5000))
    y = list(range(5000))
    y[len(y)//2] = 3
    x = Nums(x)
    y = Nums(y)
    assert x == y

Running on master this gives:

$ pytest test_f.py -vv
==================================================================== test session starts =====================================================================
platform darwin -- Python 3.7.1, pytest-4.0.3.dev19+ge8152207, py-1.7.0, pluggy-0.8.0 -- /Users/enojb/current/sympy/venv/bin/python3
cachedir: .pytest_cache
rootdir: /Users/enojb/current/sympy/pytest, inifile: tox.ini
plugins: xdist-1.24.1, forked-0.2, doctestplus-0.3.0.dev0
collected 1 item                                                                                                                                             

test_f.py::test_f FAILED                                                                                                                               [100%]

========================================================================== FAILURES ==========================================================================
___________________________________________________________________________ test_f ___________________________________________________________________________

    def test_f():
        x = list(range(5000))
        y = list(range(5000))
        y[len(y)//2] = 3
        x = Nums(x)
        y = Nums(y)
>       assert x == y
E       assert [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,...4980, 4981, 4982, 4983, 4984, 4985, 4986, 4987, 4988, 4989, 4990, 4991, 4992, 4993, 4994, 4995, 4996, 4997, 4998, 4999] == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,...4980, 4981, 4982, 4983, 4984, 4985, 4986, 4987, 4988, 4989, 4990, 4991, 4992, 4993, 4994, 4995, 4996, 4997, 4998, 4999]

test_f.py:13: AssertionError
================================================================== short test summary info ===================================================================
FAIL test_f.py::test_f
================================================================== 1 failed in 0.09 seconds ==================================================================

With this patch, a shorter output is given by:

$ pytest test_f.py -v
==================================================================== test session starts =====================================================================
platform darwin -- Python 3.7.1, pytest-4.0.3.dev19+ge8152207, py-1.7.0, pluggy-0.8.0 -- /Users/enojb/current/sympy/venv/bin/python3
cachedir: .pytest_cache
rootdir: /Users/enojb/current/sympy/pytest, inifile: tox.ini
plugins: xdist-1.24.1, forked-0.2, doctestplus-0.3.0.dev0
collected 1 item                                                                                                                                             

test_f.py::test_f FAILED                                                                                                                               [100%]

========================================================================== FAILURES ==========================================================================
___________________________________________________________________________ test_f ___________________________________________________________________________

    def test_f():
        x = list(range(5000))
        y = list(range(5000))
        y[len(y)//2] = 3
        x = Nums(x)
        y = Nums(y)
>       assert x == y
E       AssertionError: assert [0, 1, 2, 3, ...7, 4998, 4999] == [0, 1, 2, 3, 4...7, 4998, 4999]
E         -[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136...
E         
E         ...Full output truncated (2 lines hidden), use '-vv' to show

test_f.py:13: AssertionError
================================================================== short test summary info ===================================================================
FAIL test_f.py::test_f
================================================================== 1 failed in 0.10 seconds ==================================================================

Running with -vv shows the whole repr (which is very long so I won't paste it here).

@blueyed (Contributor) commented Jan 6, 2019

Thanks! Please add a test for this (maybe only by adjusting the now-failing test(s)?).

@blueyed added labels on Jan 6, 2019: type: enhancement (new feature or API change, should be merged into features branch), topic: reporting (related to terminal output and user-facing messages and errors)
@RonnyPfannschmidt (Member) left a comment

I'd also note that this was introduced to prevent various denial-of-service issues triggered by broken reprs; I suggest using a higher verbosity level to trigger the behaviour.

left_lines = repr(left).splitlines(keepends)
right_lines = repr(right).splitlines(keepends)

explanation = []
Review comment (Member):

unified_diff imported but not used

Reply (Contributor Author):

Okay, I've removed that now.

@blueyed (Contributor) commented Jan 6, 2019

It might be worth controlling this through some additional setting.

I, for example, would like to always see full diffs, without making pytest verbose in general.

@oscarbenjamin (Contributor Author)

What verbosity level should I use?

It's already easy to make arbitrarily long output with -vv using strings. I can also make pytest hang in ndiff easily enough if I want:

import random

def test_f():
    linesa = '\n'.join('foobar' + random.choice('wer') for n in range(10000))
    linesb = '\n'.join('foobar' + random.choice('wer') for n in range(10000))
    assert linesa == linesb

(ndiff is very slow: https://bugs.python.org/issue6931)

The behaviour I've gone for here is intended to match what already happens with strings.
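(Not part of the patch: a standalone sketch showing the ndiff slowdown directly with difflib, using the same kind of input as the test above.)

import difflib
import random
import time

# Two ~10000-line inputs that differ on many randomly chosen lines.
a = ["foobar" + random.choice("wer") for _ in range(10000)]
b = ["foobar" + random.choice("wer") for _ in range(10000)]

start = time.time()
diff = list(difflib.ndiff(a, b))  # the slow call; this can take a very long time
print(len(diff), "diff lines in", round(time.time() - start, 1), "seconds")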

@codecov (bot) commented Jan 7, 2019

Codecov Report

Merging #4607 into master will decrease coverage by <.01%.
The diff coverage is 100%.

@@            Coverage Diff             @@
##           master    #4607      +/-   ##
==========================================
- Coverage   95.75%   95.74%   -0.01%     
==========================================
  Files         111      111              
  Lines       24678    24706      +28     
  Branches     2446     2449       +3     
==========================================
+ Hits        23630    23655      +25     
  Misses        740      740              
- Partials      308      311       +3
Flag Coverage Δ
#docs 29.53% <3.57%> (+0.04%) ⬆️
#doctesting 29.53% <3.57%> (+0.04%) ⬆️
#linting 29.53% <3.57%> (+0.04%) ⬆️
#linux 95.57% <100%> (-0.01%) ⬇️
#nobyte 92.38% <100%> (ø) ⬆️
#numpy 93.19% <92.85%> (ø) ⬆️
#pexpect 42.08% <3.57%> (-0.06%) ⬇️
#py27 93.78% <100%> (-0.01%) ⬇️
#py34 91.87% <100%> (+0.07%) ⬆️
#py35 91.89% <100%> (+0.07%) ⬆️
#py36 91.91% <100%> (+0.07%) ⬆️
#py37 93.93% <100%> (+0.02%) ⬆️
#trial 93.19% <92.85%> (ø) ⬆️
#windows 93.93% <100%> (+0.01%) ⬆️
#xdist 93.78% <100%> (-0.01%) ⬇️
Impacted Files Coverage Δ
testing/test_assertion.py 97.59% <100%> (+0.08%) ⬆️
src/_pytest/assertion/util.py 97.63% <100%> (+0.09%) ⬆️
src/_pytest/cacheprovider.py 95.75% <0%> (-1.42%) ⬇️

Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@nicoddemus (Member)

> It might be worth controlling this through some additional setting.

I have the same feeling. How about --full-diff?
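(A hypothetical sketch only; --full-diff was not added in this PR. Registering such a flag in a plugin or conftest.py would look roughly like this.)

def pytest_addoption(parser):
    parser.addoption(
        "--full-diff",
        action="store_true",
        default=False,
        help="always show full diffs and reprs in assertion failure output",
    )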

@nicoddemus (Member)

We also need a new CHANGELOG entry.

@nicoddemus (Member) commented Jan 7, 2019

Adding a new option will require this to target the features branch, and would also address @RonnyPfannschmidt's concern.

@asmeurer commented Jan 7, 2019

I agree there should be a separate option, but I would argue it should still be controlled by -v by default. It's very ambiguous to a user such as myself what "verbosity" means, and I always found it surprising that it controlled the size of the diff output for containers but not for arbitrary reprs.

@oscarbenjamin (Contributor Author)

I think I've addressed all issues (removed unused import, added changelog, fixed it so no tests fail, added a new test) apart from the question of the command line option to use.

I don't understand why there should be a separate option unless it is intended to apply to all the cases. The behaviour of this patch matches what already happens for lists, sets, strings, etc.

If you try this you can see the effect of -v and -vv for all the other types:

def test_f():
    class Nums:
        def __init__(self, nums):
            self.nums = nums
        def __repr__(self):
            return str(self.nums)

    x = list(range(500))
    y = list(range(500))
    y[len(y)//2] = 3
    x = Nums(x)
    y = Nums(y)
    assert x == y

def test_g():
    x = list(range(500))
    y = list(range(500))
    y[len(y)//2] = -1
    assert x == y

def test_h():
    x = set(range(500))
    y = set(range(500))
    y.remove(200)
    assert x == y

def test_i():
    lines = ['qwerty'*5 for n in range(20)]
    x = '\n'.join(lines)
    lines[10] += 'zxc'
    y = '\n'.join(lines)
    assert x == y

def test_j():
    x = list(range(500))
    y = list(range(500))
    y[len(y)//2] = 3
    x = str(x)
    y = str(y)
    assert x == y

You can see that verbosity=0 (no -v arg) gives one short line of output, -v (verbosity=1) gives a few longer lines, and -vv (verbosity=2) gives the full output. This is true in all cases except test_f, which on current master does not give full output but does with this patch.

If there is to be a separate command line option why should it only apply to this case and not the others?

Likewise I don't see how this case can cause a new DOS issue given that it matches the other cases.
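(As an aside, not part of this PR: a user can also hook the comparison output themselves from a conftest.py via the documented pytest_assertrepr_compare hook; the sketch below is illustrative only.)

# conftest.py: return the full reprs of both sides for any failing ==
# comparison when -vv is given; otherwise fall back to pytest's default.
def pytest_assertrepr_compare(config, op, left, right):
    if op == "==" and config.getoption("verbose") >= 2:
        return [
            "full reprs of both sides:",
            "left:  " + repr(left),
            "right: " + repr(right),
        ]
    return None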

@nicoddemus (Member)

Well, as a first iteration we can keep it as -vv then. If we decide to introduce a new option, we can do it later and apply it to all cases, as you mention.

@@ -151,6 +151,8 @@ def isiterable(obj):
    elif type(left) == type(right) and (isdatacls(left) or isattrs(left)):
        type_fn = (isdatacls, isattrs)
        explanation = _compare_eq_cls(left, right, verbose, type_fn)
    elif verbose:
        explanation = _compare_eq_verbose(left, right)
Review comment (Contributor):

This uses it with -v already, no?

Reply (Contributor Author):

Yes. That's also what happens in the case of e.g. strings: https://github.com/pytest-dev/pytest/blob/master/src/_pytest/assertion/util.py#L176

The filtering based on -vv happens down the line somewhere else (in all cases).

So with -v this function will generate full output but pytest won't ultimately show the whole of it unless -vv is given.

Reply (Contributor):

That's confusing, especially when looking at the test.
Can this be fixed here (in this PR) also?

Reply (Contributor Author):

The test is showing the output of call_equal, which calls these functions directly. Shortening (without -vv) takes place here, I think: https://github.com/pytest-dev/pytest/blob/master/src/_pytest/assertion/truncate.py

That looks like a deliberate design and seems reasonable to me, so I'm not sure what needs fixing.
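(A rough, self-contained paraphrase of the rule applied in truncate.py, not the actual pytest source: the full explanation is built first and only trimmed afterwards unless verbosity is at least 2.)

MAX_LINES = 8  # pytest keeps roughly this many lines by default

def truncate_if_required(explanation_lines, verbose):
    if verbose >= 2 or len(explanation_lines) <= MAX_LINES:
        return explanation_lines  # -vv, or already short: show everything
    kept = explanation_lines[:MAX_LINES]
    hidden = len(explanation_lines) - MAX_LINES
    kept.append(
        "...Full output truncated (%d lines hidden), use '-vv' to show" % hidden
    )
    return kept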

Reply (Member):

I think it makes sense that truncation happens later down the line; otherwise every implementation would need to implement or call truncation (IIUC).

Reply (Member):

But yeah, I was also confused for the same reason; I had to look at the rest of the code to understand what was going on.

Reply (Contributor):

@nicoddemus
I think we should create a follow-up issue/PR for this.
Have you looked closer at it already?

@oscarbenjamin (Contributor Author)

So is this patch okay?

Happy to make changes but I think I've addressed all requests so far...

@nicoddemus (Member) left a comment

LGTM, thank you @oscarbenjamin, we appreciate the PR and your patience. 👍

@nicoddemus merged commit 71a7452 into pytest-dev:master on Jan 10, 2019
@nicoddemus (Member)

Thanks again @oscarbenjamin for the PR!

@oscarbenjamin (Contributor Author)

Thanks all!

@blueyed (Contributor) commented Oct 20, 2019

For reference: this caused a regression (#5192, #5932), addressed in #5933.

Labels: topic: reporting (related to terminal output and user-facing messages and errors), type: enhancement (new feature or API change, should be merged into features branch)

Successfully merging this pull request may close these issues.

-vvv doesn't remove the ... from the assertion error
5 participants