Using pytest-pyvista to improve tests of geovista #429

Closed
tkoyama010 opened this issue Sep 6, 2023 · 18 comments · Fixed by #470

@tkoyama010 (Collaborator)

pytest-pyvista is a pytest plugin for testing PyVista plot outputs.
We could use it to add image-based tests of geovista's plotting.
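
For concreteness, a geovista test under this plugin might look like the following sketch. This is a hypothetical test body, not an existing geovista test; pytest-pyvista injects the verify_image_cache fixture and compares any plot rendered during the test against a cached baseline image.

import geovista as gv

def test_coastlines(verify_image_cache):
    # The fixture intercepts the render and compares it to the baseline.
    plotter = gv.GeoPlotter()
    plotter.add_coastlines()
    plotter.show()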

@tkoyama010 tkoyama010 added the new: issue Highlight a new community raised "generic" issue label Sep 6, 2023
@tkoyama010 tkoyama010 self-assigned this Sep 6, 2023
@bjlittle (Owner) commented Sep 7, 2023

@tkoyama010 Cool.

Ultimately I'm aiming to use matplotlib/pytest-mpl#150.

This may complement the testing framework you're suggesting?

@tkoyama010 (Collaborator, Author)

Nice. It might be a good idea to encourage PyVista to move from pytest-pyvista to pytest-mpl once that lands.

@bjlittle (Owner) commented Sep 7, 2023

@tkoyama010 We've been using the perceptual image hash approach on SciTools/iris for a few years now, so I'm pretty satisfied that it gives the stability and control to detect image differences and to quantify levels of image similarity.

I'm keen to get my PR merged on pytest-mpl (it's well overdue), and there is also talk of pytest-mpl becoming capable of testing images that are not generated by matplotlib. I'm definitely interested in helping make that happen on pytest-mpl, as it will allow me to use that testing infrastructure with geovista.

I'll take a peek at pytest-pyvista to see how it works too, out of interest 👍
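
For readers unfamiliar with the approach: a perceptual hash condenses an image into a short bit string, and similarity is judged by the Hamming distance between hashes. A minimal sketch of the idea, assuming the third-party imagehash and Pillow packages (this is not the SciTools/iris implementation):

from PIL import Image
import imagehash

def assert_images_similar(baseline_path, result_path, tolerance=2):
    # Subtracting two ImageHash objects yields the Hamming distance in bits.
    baseline = imagehash.phash(Image.open(baseline_path))
    result = imagehash.phash(Image.open(result_path))
    assert baseline - result <= tolerance, f"hashes differ by {baseline - result} bits"

A small tolerance absorbs rendering noise such as anti-aliasing differences, while larger distances flag genuine regressions.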

@tkoyama010 (Collaborator, Author)

> there is also talk of pytest-mpl becoming capable of testing images that are not generated by matplotlib.

Great. Could you point me to that discussion thread? I'm really interested in it.

@bjlittle (Owner) commented Sep 7, 2023

See matplotlib/pytest-mpl#201

But I've talked to other pytest-mpl core devs about this capability. I think it would be pretty darn awesome, and open up the use of pytest-mpl elsewhere within the scientific python ecosystem.

@tkoyama010 (Collaborator, Author)

I would expect pytest-pyvista to be a better fit than pytest-mpl for GeoVista, but I can't make that case well, given my limited knowledge of pytest-pyvista.
@MatthewFlamm, I would like to hear your opinion, if you don't mind.

@MatthewFlamm

I'm not super familiar with pytest-mpl, but these packages look very similar. One problem with using pytest-mpl, even if it becomes more generic, is that it will require a lot of boilerplate and tinkering to get testing to work with pyvista.

pytest-pyvista hooks natively into PyVista, so you can do:

import pyvista as pv

def test_my_plot(verify_image_cache):
    pv.Sphere().plot()

and it will do the image comparison.

In an ideal world, the scientific software field would have a meta-level comparison tool that is agnostic to how the images are generated. Each tool would then provide a wrapper for generating those images.

But practically, and selfishly, I think it benefits the pyvista community more if we make pytest-pyvista more robust. If we can offload parts of that pipeline to pytest-mpl, I think that would be great; it gets slightly closer to my ideal state above. pytest-pyvista, in my opinion, has a more flexible interface. Because it uses a fixture, it allows you to do more custom things, such as performing multiple image comparisons in one test, or turning image capture off and then back on inside a test (see the sketch below).
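
As a sketch of that flexibility, assuming each rendered plot is compared against its own cached baseline and that the skip attribute (shown in the comments below) toggles the comparison:

import pyvista as pv

def test_multiple_plots(verify_image_cache):
    pv.Sphere().plot()               # first image comparison
    verify_image_cache.skip = True   # turn image capture off...
    pv.Cube().plot()                 # ...this plot is not compared
    verify_image_cache.skip = False  # ...and back on
    pv.Cylinder().plot()             # compared again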

@MatthewFlamm

After glancing at your PR in pytest-mpl, I get the sense that it adds the ability to do different kinds of comparison, but it doesn't really touch the meta-level problem I highlighted above? I need to learn more though, so please educate me.

I think there is a strong opportunity to replace the homegrown image comparison in pytest-pyvista with a better-supported comparison tool. This will not be a tiny effort, as there are a lot of tests in pyvista that generate images with a lot of variance based on architecture, etc.

@MatthewFlamm commented Sep 22, 2023

Can you point to the discussion on making pytest-mpl hook better into other image generation tools?

Edit: Sorry for the noise; somehow I missed that it is linked above.

@tkoyama010 (Collaborator, Author)

Thank you for your valuable input!
I have gained insights I was not aware of, and I will try to create a pull request for GeoVista adding a test that uses pytest-pyvista.

> After glancing at your PR in pytest-mpl, I get the sense that it adds the ability to do different kinds of comparison, but it doesn't really touch the meta-level problem I highlighted above? I need to learn more though, so please educate me.

@bjlittle We would appreciate it if you have any insight on this.

@tkoyama010 tkoyama010 linked a pull request Sep 22, 2023 that will close this issue
@bjlittle (Owner)

@MatthewFlamm Seems like you found it and chipped into the conversation on matplotlib/pytest-mpl#201 👍

@bjlittle (Owner)

@MatthewFlamm I'm also going to make time to better understand pytest-pyvista, but can you confirm the technique it uses for performing the image comparison / diffing?

Also, would you guys be open to extending its support to optionally use perceptual image hashing? I'd happily contribute that functionality if you wanted it.

@MatthewFlamm

I am 100% in favor of opening it up to more kinds of image hashing, provided it doesn't make things too complicated. I slightly worry that we would have to maintain those methods in pytest-pyvista even though we don't use them for pyvista itself, so we should avoid implementing too many custom things. If we could leverage widely used image comparison tools (either directly from pytest-mpl or from some other package), the downsides would be smaller.

One thing we'd need to think about is that the success criteria in pytest-pyvista are geared heavily towards the home-grown tool, so this might require a bit of an overhaul of how they are specified, for example if other image comparison tools have different kinds of criteria, or multiple criteria. But making it more usable for more packages is the reason we split it out of pyvista in the first place, so I would support this sort of re-work.

To your question about what technique it uses:

https://github.com/pyvista/pytest-pyvista/blob/597da3865bb9f548fb66242ef37ed72cfa4a4af7/pytest_pyvista/pytest_pyvista.py#L208

which is from

https://github.com/pyvista/pyvista/blob/c7e301c35558e18afda76be9668f9b0d71522ae9/pyvista/plotting/utilities/regression.py#L142

True to form, we use vtk itself to do the comparison.
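
For reference, pyvista exposes that VTK-based diff publicly as pyvista.compare_images, which returns an error metric where lower means more similar. A minimal sketch; the 500.0 threshold here is an assumption mirroring pytest-pyvista's default error value, so check the linked source for the exact criteria:

import pyvista as pv

error = pv.compare_images("baseline.png", "result.png")
assert error < 500.0, f"images differ: error={error:.1f}"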

@bjlittle (Owner) commented Sep 22, 2023

@MatthewFlamm Yes, it's interesting that pytest-pyvista uses a fixture as opposed to the decorator approach adopted by pytest-mpl. In your experience, do you consider such pytest fixtures more flexible and powerful than decorators? If so, how? (Just curious and keen to understand better)

Also, take a look at the issue matplotlib/pytest-mpl#146 where I argued the case for pytest-mpl to adopt perceptual image hash.

With regards to metadata, do you mean that on pytest-pyvista you have different base images cached for each test, depending on the architecture and the like, when performing the image comparison between the base image and the test output image?

If so, I believe that perceptual image hash solves this problem. But maybe I'm misunderstanding you, sorry.

@MatthewFlamm

It is simple to apply the fixture to all the tests in a module, test suite, or test class; I'm not sure that is as easy with a decorator. One interesting thing about a fixture is that it can be interacted with inside the test itself in a flexible way. For example, we do something like this in our tests:

import pyvista as pv

def test_something(verify_image_cache):
    # Run the plotting code, but skip the image comparison on newer VTK versions.
    verify_image_cache.skip = pv.vtk_version_info > (9, 1)

This is a really compelling type of usage. You might be able to do the same with a decorator for this exact case, but imagine that the test has two parts and only the first part depends on the vtk version. Or what if you want to use a higher image regression threshold for the first image in a test and a different threshold for the second image in the same test? All of these things are straightforward with the fixture approach (see the sketch below).
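
For instance, a sketch of the two-threshold case, assuming the fixture exposes a mutable error threshold (the error_value attribute name is illustrative, not confirmed pytest-pyvista API):

import pyvista as pv

def test_two_stage(verify_image_cache):
    verify_image_cache.error_value = 1000.0  # looser threshold for a noisier first image
    pv.Sphere().plot()
    verify_image_cache.error_value = 500.0   # stricter threshold for the second image
    pv.Cube().plot()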

@MatthewFlamm commented Sep 22, 2023

On the question of architecture: we do not save multiple images for different architectures. I think this is due to the overhead involved, but the current structure of pytest-pyvista would allow for it (see below). We have selected tests where we skip the image check entirely on a different architecture; in other cases we specify different success criteria to account for differences in architecture or vtk version.

This is another reason the fixture approach is very nice. You can create a wrapper fixture that overrides the baseline image path depending on the architecture, set autouse=True, and it will automatically apply to all your tests. This would be cumbersome with decorators. If this pattern becomes more widely used, we should build helpers into the internal pytest-pyvista fixture to enable it.

Edit:

Pseudo-code to illustrate this better:

import platform

import pytest

@pytest.fixture(autouse=True)
def verify_image_cache_arch(verify_image_cache):
    # Point the cache at per-platform baseline images.
    sys_arch = platform.system().lower()
    if sys_arch == "windows":
        verify_image_cache.cache_dir = "path/to/windows_baseline_images/"
    elif sys_arch == "linux":
        verify_image_cache.cache_dir = "path/to/linux_baseline_images/"

    yield verify_image_cache

Edit 2: You can also see how you could implement this sort of split in baseline images for only certain classes of tests in a downstream package. It is really flexible this way.

@bjlittle (Owner)

Awesome, thanks @MatthewFlamm for taking the time to clarify and explain, much appreciated 🍻
