Add testing.check_figures_equal to avoid storing baseline images #555
Conversation
@seisman, hope you don't mind if I continue some work on this. Will try to see if it's possible to mimic the pytest-mpl paths, or look at matplotlib to see how they do it. It should be easy enough to use GMTTempFile so that the images are cleaned up after the test, but we'll need to find a way to keep the images if the test fails so that they can be examined.
Also moved test_check_figures_* to a doctest under check_figures_equal, using the same logic that was implemented in matplotlib/matplotlib#16800.
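A rough sketch of the GMTTempFile idea (a hypothetical helper, not the code in this PR): the temporary PNGs vanish automatically when the context exits, so the open question is copying them somewhere persistent when the comparison fails.

```python
from matplotlib.testing.compare import compare_images

from pygmt.helpers import GMTTempFile


def save_and_compare(fig_ref, fig_test, tol=0.0):
    """
    Hypothetical helper: rasterize both figures into temporary PNGs and
    compare them. The PNGs are deleted when the 'with' block exits, which
    is fine on success, but a failing comparison would need to copy the
    files elsewhere first so they can be examined afterwards.
    """
    with GMTTempFile(suffix=".png") as ref_png, GMTTempFile(suffix=".png") as test_png:
        fig_ref.savefig(ref_png.name)
        fig_test.savefig(test_png.name)
        # Returns None if the images match within 'tol', otherwise a dict
        # with the RMS error and the paths of the compared images.
        return compare_images(
            expected=ref_png.name, actual=test_png.name, tol=tol, in_decorator=True
        )
```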
Ok, this is pretty much ready for review. After stress testing this with pytest, I think the check_figures_equal function will have to end up pretty much exactly like matplotlib's https://matplotlib.org/3.3.1/_modules/matplotlib/testing/decorators.html#check_figures_equal, except that we're using fig=pygmt.Figure() instead of matplotlib's plt.subplot. If only we could monkeypatch it!
It might be worth raising an issue/PR on matplotlib to see if there's a way to reduce code duplication (for the sake of long-term maintainability), as they've clearly put months/years of thought and effort into this one check_figures_equal function. This would involve some sort of subclassing (or, I don't know, pluggability?), so that we can swap pygmt.Figure into fig_ref and fig_test while getting all of the image_compare goodness.
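For readers skimming the thread, the overall shape being discussed is roughly the following. This is a simplified sketch, not the code in this PR: build fig_ref and fig_test as pygmt.Figure() objects instead of matplotlib subplots, run the test body, rasterize both figures, and hand them to matplotlib's compare_images.

```python
import functools
import os

from matplotlib.testing.compare import compare_images

import pygmt


def check_figures_equal(*, tol=0.0, result_dir="result_images"):
    """Simplified sketch of the decorator discussed in this thread."""

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            os.makedirs(result_dir, exist_ok=True)
            ref_png = os.path.join(result_dir, f"{func.__name__}-expected.png")
            test_png = os.path.join(result_dir, f"{func.__name__}.png")
            # The key difference from matplotlib's version: the two figures
            # handed to the test body are pygmt.Figure() objects.
            fig_ref, fig_test = pygmt.Figure(), pygmt.Figure()
            func(*args, fig_ref=fig_ref, fig_test=fig_test, **kwargs)
            fig_ref.savefig(ref_png)
            fig_test.savefig(test_png)
            err = compare_images(ref_png, test_png, tol=tol, in_decorator=True)
            if err is not None:
                raise AssertionError(
                    f"images not equal (RMS {err['rms']:.4f}): {err['actual']}"
                )

        return wrapper

    return decorator
```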
pygmt/helpers/decorators.py (Outdated)

```python
parameters = [
    param
    for param in old_sig.parameters.values()
    if param.name not in {"fig_test", "fig_ref"}
]
new_sig = old_sig.replace(parameters=parameters)
wrapper.__signature__ = new_sig
```
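To illustrate what the hunk above achieves (a standalone, hypothetical example): pytest decides which fixtures to inject by inspecting the wrapper's visible signature, so fig_test and fig_ref have to be stripped from it while real fixtures such as grid remain.

```python
import functools
import inspect


def hide_fig_params(func):
    """Hypothetical decorator that hides fig_ref/fig_test from introspection."""

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)

    old_sig = inspect.signature(func)
    parameters = [
        param
        for param in old_sig.parameters.values()
        if param.name not in {"fig_test", "fig_ref"}
    ]
    # pytest (via inspect) now only sees the remaining parameters, so only
    # those are treated as fixtures to inject.
    wrapper.__signature__ = old_sig.replace(parameters=parameters)
    return wrapper


@hide_fig_params
def test_something(grid, fig_ref, fig_test):
    ...


print(inspect.signature(test_something))  # prints: (grid)
```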
Figured out how to make our PyGMT check_figures_equal decorator work with pytest fixtures (e.g. grid=xr.DataArray(...)) in 3e0d3fb. This is basically just copying what was done in matplotlib at matplotlib/matplotlib#16800.
```python
@check_figures_equal()
def test_grdimage_central_longitude(grid, fig_ref, fig_test):
    """
    Test that plotting a grid centred at different longitudes/meridians work.
    """
    fig_ref.grdimage("@earth_relief_01d_g", projection="W120/15c", cmap="geo")
    fig_test.grdimage(grid, projection="W120/15c", cmap="geo")
```
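(For context, the grid fixture used above lives elsewhere in the test module; presumably something along these lines, with the exact arguments being an assumption on my part:)

```python
import pytest

from pygmt.datasets import load_earth_relief


@pytest.fixture(scope="module", name="grid")
def fixture_grid():
    """Load the 1 arc-degree earth relief grid as an xarray.DataArray."""
    return load_earth_relief(resolution="01d")
```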
Suggested change (replacing the test above):

```python
@pytest.mark.parametrize("meridian", [0, 33, 120, 180])
@check_figures_equal()
@pytest.mark.parametrize("proj_type", ["H", "Q", "W"])
def test_grdimage_different_central_meridians_and_projections(
    grid, proj_type, meridian, fig_ref, fig_test
):
    """
    Test that plotting a grid centred on different meridians using different
    projection systems work.
    """
    fig_ref.grdimage(
        "@earth_relief_01d_g", projection=f"{proj_type}{meridian}/15c", cmap="geo"
    )
    fig_test.grdimage(grid, projection=f"{proj_type}{meridian}/15c", cmap="geo")
```
I'll update this test in #560 later 😄. The problem with using this fancy pytest.mark.parametrize is that it would complicate the check_figures_equal code (see matplotlib/matplotlib#15199 and matplotlib/matplotlib#16693) and make this PR even harder to review.
pygmt/helpers/decorators.py (Outdated)

```python
import numpy as np
from matplotlib.testing.compare import compare_images
```
> `from matplotlib.testing.compare import compare_images`

As I understand it, this means that matplotlib now becomes a required dependency, even for users who never run the tests, right? Although PyGMT already requires matplotlib for testing and most users usually have matplotlib installed, I still don't want to add another dependency to PyGMT.
When I wrote the first commit (8b78614), I put the code in pygmt/helpers/testing.py. Done that way, I think matplotlib is still optional, although I haven't tested it.
Good point, I think you're right here. I also encountered issues with circular imports when moving the code to decorators.py, hence this line (pygmt/pygmt/helpers/decorators.py, line 459 in 04b3f41):

`from ..figure import Figure  # pylint: disable=import-outside-toplevel`

Probably should move it back under pygmt/helpers/testing.py then. As an aside, I've opened up a feature request at matplotlib/pytest-mpl#94, and we might be able to do all this from pytest-mpl in the future.
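A sketch of the deferred-import layout being discussed, assuming the helper ends up in pygmt/helpers/testing.py (the function body is elided; see the decorator sketch earlier in the thread for the full shape):

```python
def check_figures_equal(*, tol=0.0, result_dir="result_images"):
    """
    Sketch: keep the matplotlib and Figure imports inside the function so
    that importing pygmt itself neither requires matplotlib nor triggers a
    circular import between pygmt.figure and pygmt.helpers.
    """
    # Deferred imports: matplotlib stays a test-only, optional dependency,
    # and Figure is resolved lazily to break the import cycle.
    from matplotlib.testing.compare import compare_images  # pylint: disable=import-outside-toplevel
    from ..figure import Figure  # pylint: disable=import-outside-toplevel

    ...  # build and return the decorator as sketched above
```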
```python
from ..figure import Figure


def check_figures_equal(*, tol=0.0, result_dir="result_images"):
```
One more thing about result_dir: the check_figures_equal decorator generates images in the result_images directory, while pytest.mark.mpl_image_compare generates images in directories like results/tmpjtnnwqt4.
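Since result_dir is just a keyword argument of the decorator (see its signature above), the output location can be overridden per test if we ever want the two mechanisms to write to the same place. A hypothetical example (the import path follows the testing.py location discussed above):

```python
from pygmt.helpers.testing import check_figures_equal  # assumed location


@check_figures_equal(result_dir="tmp-test-dir-with-unique-name")
def test_region_as_list_or_string(fig_ref, fig_test):
    """The list and string forms of 'region' should draw identical frames."""
    fig_ref.basemap(region="0/10/0/10", projection="X10c", frame=True)
    fig_test.basemap(region=[0, 10, 0, 10], projection="X10c", frame=True)
```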
Yes, which was partly why I opened up the issue at matplotlib/pytest-mpl#94: to get all of that pytest-mpl goodness (e.g. not having a hardcoded result_dir). I'll try to make a pull request to pytest-mpl for that, so we can just use a proper @pytest.mark.mpl_check_equal decorator in the future (will open a new issue after this one is merged). For now though, since we don't have many tests using check_figures_equal yet, we can probably just leave it like so.
Yes, looks good to me.
> Writing an image-based test is only slightly more difficult than a simple test. The main consideration is that you must specify the "baseline" or reference image, and compare it with a "generated" or test image. This is handled using ...
Added some notes here to CONTRIBUTING.md, adapted from https://matplotlib.org/3.3.1/devel/testing.html#writing-an-image-comparison-test.
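For comparison, the baseline-image workflow described in those notes looks roughly like this with pytest-mpl: the decorated test returns the figure, and the PNG stored in the baseline directory serves as the reference (the test body here is illustrative only):

```python
import pytest

import pygmt


@pytest.mark.mpl_image_compare
def test_coast_shorelines():
    """Plot shorelines and compare the result against a stored baseline PNG."""
    fig = pygmt.Figure()
    fig.coast(region=[-30, 30, -40, 40], shorelines=True, frame=True)
    return fig
```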
Looks good to me, but I can't approve it because I opened this PR.
Ah yes, I'll approve it then 😆
Tell git to ignore PNG files in the 'result_images' folder, and add it to the list of folders cleaned by make clean. Patches #555.
Tests with @pytest.mark.mpl_image_compare and @check_figures_equal (#555) will return an image diff on test failures in the 'tmp-test-dir-with-unique-name' directory, and we can upload those files as a GitHub artifact for inspection purposes.
* Use GitHub Action to upload diff images on test failure
* Set a unique artifact name for each OS/Python version
* Add upload-artifact to GMT Latest tests

Co-Authored-By: Dongdong Tian <[email protected]>
Description of proposed changes

This PR adds a new function check_figures_equal to check whether two pygmt.Figure() objects are the same, mostly inspired by comment #522 (review) and the matplotlib decorator check_figures_equal.

What the function check_figures_equal does is very simple: it takes a test function with two figures (fig_ref and fig_test) as arguments, then uses the compare_images function from matplotlib to calculate the RMS value between the two generated images.

I add three new tests:

- test_check_figures_equal checks the case when two images are equal.
- test_check_figures_unequal checks the case when two images are unequal. The exception is correctly caught by pytest.raises.
- test_grdimage_central_longitude is a real test to check the images generated by passing a matrix or a grid to grdimage (we always assume that passing a netCDF grid to GMT gives the correct/reference image). The test helped me find a GMT bug which is usually very difficult to detect (see "Matrix as grid with changing central meridian doesn't work well for gridline grids" gmt#3844 and "Add special check for non-rotated global grids" gmt#3849).

Some known issues/limitations:

- test_check_figures_unequal() checks if the function check_figures_equal correctly raises the GMTImageComparisonFailure exception when two images are unequal, and I use pytest.raises to catch the exception so that the test passes (a sketch of this pattern follows below). ~~However, the two images are not deleted after the test.~~ They are now.

Fixes #
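A minimal sketch of the test_check_figures_unequal pattern described above (the import paths and the figure contents are assumptions on my part):

```python
import pytest

from pygmt.exceptions import GMTImageComparisonFailure  # assumed location
from pygmt.helpers.testing import check_figures_equal  # assumed location


@check_figures_equal()
def mismatched_figures(fig_ref, fig_test):
    """Deliberately draw two different figures."""
    fig_ref.basemap(region="0/5/0/5", projection="X5c", frame=True)
    fig_test.basemap(region="0/3/0/3", projection="X5c", frame=True)


def test_check_figures_unequal():
    """check_figures_equal should raise when the two figures differ."""
    with pytest.raises(GMTImageComparisonFailure):
        mismatched_figures()
```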
Reminders

- Run make format and make check to make sure the code follows the style guide.
- Add new public functions/methods/classes to doc/api/index.rst.