Create visual test system #6604
Conversation
@davepagurek I have the exploration fork passing quite a few tests with Vitest here, if you want to review whether this system can work with Vitest. I can merge this at this stage as well if you want it incorporated at this point.
Thanks for the link! I was testing it out, and have made a bit of progress getting it to run after some minor changes.
As I understand it, it would spin up a server since the tests run in the browser, but it won't be directly usable (i.e. made to serve static assets, though I may be wrong). The idea is to use mocks and stubs to get around it. I found a suggestion in the documentation to use Mock Service Worker, which acts like a server but could be more suitable for a test environment.
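For example, something roughly like this could stand in for a static file server (a sketch using the msw v1-style API; the route and the fixture import are placeholders, not anything in this PR):

```js
// Sketch only: intercept static-asset requests inside the browser test environment
// with Mock Service Worker (msw v1-style API). The route and fixture are hypothetical.
import { setupWorker, rest } from 'msw';
// Vite can resolve a local fixture to a servable URL with the ?url suffix.
import catUrl from '../fixtures/cat.png?url';

const worker = setupWorker(
  rest.get('/unit/assets/:file', async (req, res, ctx) => {
    // Answer the request with the bundled fixture instead of hitting a real server.
    const asset = await fetch(catUrl);
    const buffer = await asset.arrayBuffer();
    return res(ctx.set('Content-Type', 'image/png'), ctx.body(buffer));
  })
);

// Start intercepting before the tests run; let unrelated requests through untouched.
await worker.start({ onUnhandledRequest: 'bypass' });
```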
An alternate approach that doesn't need a web server would be to find something like the plugin we use to import shaders, but to import image data as maybe a base64 string. It looks like that's what happens when you …

The other thing that looks like it'll need updating is how we save expected images to files if none have been generated yet. Currently, it uses puppeteer's API to pass the data to Node.js. Do you know if there's any equivalent of that for the Vitest runner?
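For illustration, the base64 idea could be a small Vite plugin along these lines (the plugin name and the `?base64` suffix are made up for this sketch):

```js
// Sketch only: a Vite plugin, analogous to the shader-import plugin mentioned above,
// that turns `import img from './foo.png?base64'` into a base64 data URL string.
// The ?base64 convention is hypothetical, not an existing feature.
import { readFile } from 'node:fs/promises';
import { extname } from 'node:path';

export function imageAsBase64() {
  return {
    name: 'image-as-base64',
    enforce: 'pre', // run before Vite's built-in asset handling
    async load(id) {
      if (!id.endsWith('?base64')) return null;
      const file = id.slice(0, -'?base64'.length);
      const data = await readFile(file);
      const mime = extname(file) === '.png' ? 'image/png' : 'image/jpeg';
      // Export the image as a data URL that browser-side tests can use directly.
      return `export default "data:${mime};base64,${data.toString('base64')}";`;
    }
  };
}
```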
I can look into the static file serving bit to see what's possible.
I'm not entirely sure what this refers to?
This bit of code in the PR registers a function within the puppeteer window, and then passes the data to Node.js; see p5.js/tasks/test/mocha-chrome.js, lines 37 to 52 at 5f89e23.
Right now that's how the visual test runner can save a screenshot to the filesystem when one does not yet exist, used as a way to generate initial screenshots for new tests. In Vitest, we'd have to use some other method of passing data to Node.js to get write access to the filesystem, I think.
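Roughly, the shape of that bridge looks like this (a simplified sketch, not the actual code in mocha-chrome.js; the function and path names are illustrative):

```js
// Sketch only: a node-side script exposing a filesystem writer to the page, which is
// how a browser test can persist a missing reference screenshot.
import puppeteer from 'puppeteer';
import { writeFile, mkdir } from 'node:fs/promises';
import { dirname } from 'node:path';

const browser = await puppeteer.launch();
const page = await browser.newPage();

// Registered on `window` inside the page, but runs in node, so it can write files.
await page.exposeFunction('writeImageFile', async (relativePath, base64Data) => {
  await mkdir(dirname(relativePath), { recursive: true });
  await writeFile(relativePath, Buffer.from(base64Data, 'base64'));
});

// ...then load the test page; in the browser a test can call
// window.writeImageFile('test/unit/visual/screenshots/.../000.png', base64);
```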
It might be possible, but we'd need to look into the API and options for WebdriverIO (the Vitest equivalent of Puppeteer). I'd prefer to avoid that if possible, though; ideally each test always runs idempotently.
Another option could be to have a script that we run that spawns a puppeteer window running the same test code, and gives it access to the filesystem similar to how it works with puppeteer right now. It means the test code would need to be runnable in two different contexts that way, but I think that would be doable.
That would more or less mean having two different test runners though, which complicates things a bit. Playwright is the other browser test driver that Vitest supports, and it has an API more similar to Puppeteer's if desired.

However, if the plan is to automatically generate the initial screenshot that future tests will compare against (if I understand this feature correctly), we could possibly use Vite and a small HTTP server as part of a slightly more manual step: Vite serves the visual test code, the test wraps the screenshot in a POST request to the small HTTP server, and the server then saves it to a file accordingly. Since this feature is not meant to be run in CI, it can possibly be this more manual step. This would also keep the tests idempotent, with test cases whose initial screenshot doesn't yet exist reported as "Pending" instead of passing or failing, if that makes sense.
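As a rough sketch of that small HTTP server (the port, endpoint, and target directory here are assumptions, not part of this PR):

```js
// Sketch only: a tiny node server that accepts a base64 screenshot via POST and
// writes it to disk, for the manual "generate the initial reference image" step.
import http from 'node:http';
import { writeFile, mkdir } from 'node:fs/promises';
import { dirname, join, normalize } from 'node:path';

http.createServer(async (req, res) => {
  // Allow the browser test page (served by Vite on another port) to call us.
  res.setHeader('Access-Control-Allow-Origin', '*');
  if (req.method === 'OPTIONS') {
    res.writeHead(204, { 'Access-Control-Allow-Headers': 'Content-Type' }).end();
    return;
  }
  if (req.method !== 'POST' || req.url !== '/screenshot') {
    res.writeHead(404).end();
    return;
  }
  let body = '';
  for await (const chunk of req) body += chunk;
  const { name, data } = JSON.parse(body);
  // Keep writes inside the screenshots directory.
  const target = normalize(join('test/unit/visual/screenshots', name));
  await mkdir(dirname(target), { recursive: true });
  await writeFile(target, Buffer.from(data, 'base64'));
  res.writeHead(200).end('saved');
}).listen(12345);
```

On the browser side, a test whose expected image doesn't exist yet would just `fetch('http://localhost:12345/screenshot', { method: 'POST', body: JSON.stringify({ name, data }) })` and report itself as pending.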
That also works! Yeah, it's just for test writers who are creating the original reference image, so there's no need for it to work in CI. (I haven't looked into this aspect of Vitest yet, but if the case that would generate a new screenshot is hit in CI, it should fail the test. Currently I'm checking for an env var that's set in CI; presumably there's some equivalent we can look at in the new system.)
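Something like this guard is the rough idea (a sketch only; how the flag gets exposed to the browser runner, e.g. via Vite's `define` option, is still an open question, and the names are hypothetical):

```js
// Sketch only: refuse to generate a brand-new reference screenshot when running in CI.
// import.meta.env.CI being populated is an assumption about the eventual Vite config.
export function assertCanGenerateScreenshot(testName, isCI = Boolean(import.meta.env.CI)) {
  if (isCI) {
    throw new Error(
      `No expected screenshot exists for "${testName}". ` +
      'Generate and commit it locally; new reference images should not be created in CI.'
    );
  }
}
```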
@limzykenneth Since we have a 2.0 branch now, how do you feel about merging this to main so that we can start using visual tests, and then I can make a PR into the 2.0 branch to add a compatible version of the tests into the new test runner?
This is great -- is the idea to port over some (or all) of the tests currently in https://github.com/processing/p5.js/tree/main/test/manual-test-examples?
@dhowe I think so! Some of them are performance tests, which don't need to be converted. Others would be great to port into this system, since I don't think we're in the habit of checking manual tests for each PR (at least I'm not in the habit 😅), and that porting process probably includes scoping the sketches down to a smaller canvas size so they can be tested more quickly in CI than e.g. a fullscreen sketch.
excellent -- very useful for the typography stuff
@davepagurek You can merge this whenever you are ready, and we can look into how this can work in 2.0.
Resolves #6576
Changes
This adds a visual test suite to p5. It involves adding test files to `test/unit/visual/cases` that look like normal-ish test cases, but where you call `screenshot()` at various points to save an image of its current state.

Some of the constraints this is designed around:

I've also added some instructions to the unit testing contributor doc on how to use these and add more.
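For a sense of the shape of a case file, here's a hypothetical example (the suite/test helper names are illustrative; `screenshot()` is the call described above):

```js
// Hypothetical example of a file in test/unit/visual/cases. Only screenshot() is
// named in this description; the surrounding helper names are placeholders.
visualSuite('Shape drawing', function() {
  visualTest('draws a square', function(p5, screenshot) {
    p5.createCanvas(50, 50);
    p5.background(200);
    p5.rect(10, 10, 30, 30);
    // Saves a reference image the first time, and compares against it afterwards.
    screenshot();
  });
});
```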
Screenshots of the change
Hover over images to see what they represent:
PR Checklist
- `npm run lint` passes