
Create visual test system #6604

Merged: 2 commits into processing:main on Jan 12, 2024
Conversation

davepagurek
Contributor

Resolves #6576

Changes

This adds a visual test suite to p5. It involves adding test files to test/unit/visual/cases that look like normal-ish test cases, but where you call screenshot() at various points to save an image of the sketch's current state.
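
For example, a test case in that directory looks roughly like this (the visualSuite/visualTest helpers follow the PR's test files, but treat the details as an illustrative sketch rather than the final API):

visualSuite('Shape drawing', function() {
  visualTest('Draws a red line', function(p5, screenshot) {
    p5.createCanvas(50, 50);
    p5.stroke('red');
    p5.line(0, 0, p5.width, p5.height);
    // Save/compare an image of the canvas in its current state
    screenshot();
  });
});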

Some of the constraints this is designed around:

  • We may or may not want to have these run in CI. Currently, the same test code runs in both a manual test runner and in CI, but a visual test file can be added to just the manual test runner list if we don't want it in CI. This also means the suite uses Mocha functions under the hood, and provides alternate implementations of them for the manual visual test runner.
  • Antialiasing differences shouldn't fail a test. After taking the difference between the actual and expected images, I run the ERODE filter to shrink away single-pixel artifacts. After that, I apply a threshold of 10/255 in each color channel to decide whether a test case has failed (see the sketch after this list).
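
A rough sketch of that comparison, written in p5's global mode (the PR's actual implementation may differ in its details):

function imagesMatch(actual, expected) {
  // Take the per-pixel difference of the two images
  const diff = actual.get();
  diff.blend(
    expected,
    0, 0, expected.width, expected.height,
    0, 0, diff.width, diff.height,
    DIFFERENCE
  );

  // Shrink away single-pixel antialiasing artifacts
  diff.filter(ERODE);

  // Fail if any remaining difference exceeds 10/255 in a color channel
  diff.loadPixels();
  for (let i = 0; i < diff.pixels.length; i += 4) {
    for (let c = 0; c < 3; c++) {
      if (diff.pixels[i + c] > 10) return false;
    }
  }
  return true;
}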

I've also added some instructions to the unit testing contributor doc on how to use these tests and add more.

Screenshots of the change

[image]

Hover over images to see what they represent:
[image]

PR Checklist

  • npm run lint passes
  • Inline documentation is included / updated
  • Unit tests are included / updated

@limzykenneth
Member

@davepagurek I have the exploration fork passing quite a few tests with Vitest here, if you want to review whether this system can work with Vitest. I can also merge this at this stage if you want it incorporated at this point.

@davepagurek
Contributor Author

Thanks for the link! I was testing it out, and have made a bit of progress getting it to run after some minor changes (importing scripts instead of relying on script tags in HTML, using beforeAll instead of setup, etc.), although I'm currently stuck on fetching the expected images inside Vitest. I gather it doesn't launch a local web server of any kind, is that right? Our existing image-loading tests may have the same issue; would we have to manually spawn a server in order for those to pass?

@limzykenneth
Member

As I understand it, it would spin up a server since the tests run in the browser, but it won't be directly usable (i.e. made to serve static assets, though I may be wrong). The idea is to use mocks and stubs to get around that. I found a suggestion in the documentation to use Mock Service Worker, which acts like a server but could be more suitable for a test environment.
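
For illustration, intercepting those requests with Mock Service Worker could look something like this (this uses msw's v1 API; the route and the loadMockImage helper are hypothetical, not part of the PR):

import { setupWorker, rest } from 'msw';

// Serve expected screenshots from a mock handler instead of relying on
// the test runner providing a static file server.
const worker = setupWorker(
  rest.get('/unit/visual/screenshots/:name', async (req, res, ctx) => {
    const image = await loadMockImage(req.params.name); // hypothetical helper
    return res(ctx.set('Content-Type', 'image/png'), ctx.body(image));
  })
);

await worker.start();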

@davepagurek
Contributor Author

An alternate approach that doesn't need a web server would be to find something like the plugin we use to import shaders, but for importing image data, maybe as a base64 string. It looks like that's what happens when you import an image while Vite is in library mode, so that could maybe work here if it's preferable to also spawning a server?
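
As a sketch, a hypothetical Vite plugin along those lines (entirely illustrative, not part of the PR) could look like:

import fs from 'node:fs';

// Turn imports of .png files into base64 data URL strings, in the spirit
// of the shader-import plugin mentioned above.
export default function base64Images() {
  return {
    name: 'base64-images',
    load(id) {
      if (!id.endsWith('.png')) return;
      const data = fs.readFileSync(id).toString('base64');
      return `export default ${JSON.stringify('data:image/png;base64,' + data)};`;
    }
  };
}

A test could then import an expected image directly, e.g. import expected from '../screenshots/some-test/000.png', and decode it without any server.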

The other thing that looks like it'll need updating is how we save expected images to files when none have been generated yet. Currently, it's using Puppeteer's API to pass the data to Node.js. Do you know if there's any equivalent of that for the Vitest runner?

@limzykenneth
Member

I can look into the static file serving bit to see what's possible.

> The other thing that looks like it'll need updating is how we save expected images to files when none have been generated yet. Currently, it's using Puppeteer's API to pass the data to Node.js. Do you know if there's any equivalent of that for the Vitest runner?

I'm not entirely sure what this refers to?

@davepagurek
Contributor Author

> I'm not entirely sure what this refers to?

This bit of code in the PR registers a function on the Puppeteer page's window, and then passes the data to fs.writeFileSync:

const fs = require('fs');
const path = require('path');

// Expose functions on the page so browser-side test code can hand
// screenshots and metadata to Node, which writes them to the filesystem.
await page.exposeFunction('writeImageFile', function(filename, base64Data) {
  fs.mkdirSync('test/' + path.dirname(filename), { recursive: true });
  // Strip the data URL prefix before decoding the base64 payload
  const prefix = /^data:image\/\w+;base64,/;
  fs.writeFileSync(
    'test/' + filename,
    base64Data.replace(prefix, ''),
    'base64'
  );
});
await page.exposeFunction('writeTextFile', function(filename, data) {
  fs.mkdirSync('test/' + path.dirname(filename), { recursive: true });
  fs.writeFileSync('test/' + filename, data);
});

Right now, that's how the visual test runner saves a screenshot to the filesystem when one does not yet exist, which is the way initial screenshots for new tests get generated. In Vitest, we'd have to use some other method of passing data to Node.js to get write access to the filesystem, I think.

@limzykenneth
Member

It might be possible, but we'd need to look into the API and options for WebdriverIO (the Vitest equivalent of Puppeteer). Although if we can avoid that, it would be my preference; ideally each test always runs idempotently.

@davepagurek
Contributor Author

Another option could be to have a script that spawns a Puppeteer window to run the same test code, giving it access to the filesystem similar to how it works with Puppeteer right now. It means the test code would need to be runnable in two different contexts, but I think that would be doable.

@limzykenneth
Member

limzykenneth commented Dec 20, 2023

That would more or less mean having two different test runners, though, which would complicate things a bit. Playwright is the other browser test driver that Vitest supports, and it has an API more similar to Puppeteer's, if desired.

However, if the plan is to automatically generate the initial screenshot that future tests will compare against (if I understand this feature correctly), we could possibly use Vite and a small HTTP server as part of a slightly more manual step: serve the visual test code, wrap the generated screenshot in a POST request to the small HTTP server, and have the server save it to a file accordingly (see the sketch below).

Since this feature is not meant to be run in CI, it could possibly be this more manual step. This would also keep the tests idempotent, with test cases whose initial screenshot does not yet exist reported as "Pending" instead of pass or fail, if that makes sense.
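
For concreteness, a minimal sketch of that kind of server (the route, port, and payload shape are all illustrative, not part of the PR):

import http from 'node:http';
import fs from 'node:fs';
import path from 'node:path';

// Accept POSTed screenshots from the browser-side tests and write them
// to the test directory, mirroring what page.exposeFunction does today.
http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/screenshot') {
    let body = '';
    req.on('data', chunk => (body += chunk));
    req.on('end', () => {
      const { filename, base64Data } = JSON.parse(body);
      const target = path.join('test', filename);
      fs.mkdirSync(path.dirname(target), { recursive: true });
      fs.writeFileSync(
        target,
        base64Data.replace(/^data:image\/\w+;base64,/, ''),
        'base64'
      );
      res.end('ok');
    });
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(9000); // arbitrary port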

@davepagurek
Contributor Author

That also works! Yeah, it's just for test writers who are creating the original reference image, so there's no need for it to work in CI. (I haven't looked into this aspect of Vitest yet, but if the case that would generate a new screenshot is encountered in CI, it should fail the test. Currently I'm checking an env var that's set in CI; presumably there's some equivalent we can look at in the new system.)
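
That guard might look roughly like this (expectedImageExists, filename, and actualBase64 are hypothetical names; writeImageFile is the exposed function from the code quoted earlier):

// In CI, a missing reference screenshot should fail the test rather than
// silently generating a new one; locally, generate it for the test writer.
if (!expectedImageExists) {
  if (process.env.CI) {
    throw new Error('No expected image found; generate one locally first.');
  }
  await window.writeImageFile(filename, actualBase64);
}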

@davepagurek
Contributor Author

@limzykenneth since we have a 2.0 branch now, how do you feel about merging this to main so that we can start using visual tests, and then I can make a PR into the 2.0 branch to add a compatible version of the tests into the new test runner?

@dhowe
Contributor

dhowe commented Jan 4, 2024

This is great -- is the idea to port over some (or all) of the tests currently in https://github.com/processing/p5.js/tree/main/test/manual-test-examples ?

@davepagurek
Contributor Author

@dhowe I think so! Some of them are performance tests, which don't need to be converted. Others would be great to port into this system, since I don't think we're in the habit of checking manual tests for each PR (at least I'm not in the habit 😅). That porting process probably includes scoping down the sketches to a smaller canvas size so they can be tested more quickly in CI than, e.g., a fullscreen sketch.

@dhowe
Contributor

dhowe commented Jan 4, 2024

> @dhowe I think so! Some of them are performance tests, which don't need to be converted. Others would be great to port into this system, since I don't think we're in the habit of checking manual tests for each PR (at least I'm not in the habit 😅). That porting process probably includes scoping down the sketches to a smaller canvas size so they can be tested more quickly in CI than, e.g., a fullscreen sketch.

excellent -- very useful for the typography stuff

@limzykenneth
Member

@davepagurek You can merge this whenever you are ready and we can look into how this can work in 2.0

@davepagurek davepagurek merged commit bf0a48d into processing:main Jan 12, 2024
2 checks passed
@davepagurek davepagurek deleted the feat/visual-tests branch January 12, 2024 21:14