Adds an automated test suite that runs JS tests using Tape. The tests are run in Firefox using the remote code execution capabilities of Marionette. Marionette's only well-maintained client is in Python, so we also have to add Python dependencies, including a Pipfile. bin/run_tests.py contains most of the plumbing for getting the test JS running, including bundling it with Webpack, launching Firefox, and formatting the output.
npm install
pipenv run mozdownload --version latest --destination $HOME/firefox.tar.bz2
pipenv run mozinstall --destination $HOME $HOME/firefox.tar.bz2
CircleCI browser images have a ludicrously old version of Firefox (like, 48). Maybe there are more recent versions elsewhere in the image that I didn't find, but this ensures we test using the latest version.
Edit: this was answered in slack, but I'll leave it here for posterity.
Looks like we are downloading and installing the latest Firefox for each run; could we add an "update" step or make our own image?
I see that the Shield Studies Addon Template uses "latest-browsers" for their docker image, though I haven't found a similar one that also has Python.
Looks like we may also be able to cache the Firefox binary, so we don't have to install/download all the time?
Osmose's reply in slack:
Downloading the latest firefox kinda sucks, but it also avoids having to maintain a docker image with the latest Firefox, and specifically one that also has Python and node installed
And it's really fast, like the download and install per test run takes like 10 seconds
We know that the docker image's Firefox version was old because Marionette outputs the Firefox version to the terminal, right (on `client.start_session()`)?

mozversion INFO | application_version: 61.0.1
@@ -1,3 +1,4 @@
node_modules
web-ext-artifacts
build
gecko.log
This is generated by Firefox during the Marionette run. It has useful info if Firefox fails in some way during the run.
"test": "bin/run_tests.py" | ||
}, | ||
"config": { | ||
"firefox_bin": "" |
Per-project npm config is my favorite npm trick.
I thought this would get populated in the case that I am running the tests locally when I call this from the README:

`npm config set webext-commerce:firefox_bin <PATH_TO_FIREFOX_BINARY>`

But I don't see it updated. This makes sense in part, since we don't want a dev's local Firefox binary path written in our `package.json`, but why do we need this in `package.json` at all if we are just updating the user's `.npmrc` file?
The value in `package.json` is the default value if it is not configured. The value in `.npmrc` is the user-set value. One overrides the other. We need it defined so that npm knows that the config value even exists and should be set as an environment variable.

Actually, I've never tested what happens if you set the config value without having it present in `package.json`, but it's better to have an empty default anyway to make it clear that the config value exists at all.
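As a quick illustration of the mechanism: npm exposes each `config` key to child processes as an environment variable named `npm_package_config_<key>`, which is the same variable the test runner reads. A minimal sketch (the error handling here is an assumption, not code from this PR):

```python
# Sketch: how an npm "config" value reaches a script launched by `npm test`.
# npm exposes package.json config keys (and any .npmrc overrides) to child
# processes as environment variables named npm_package_config_<key>.
import os

firefox_bin = os.environ.get("npm_package_config_firefox_bin", "")
if not firefox_bin:
    # The empty default in package.json means "not configured yet".
    raise SystemExit(
        "firefox_bin is not set; run "
        "`npm config set webext-commerce:firefox_bin <PATH_TO_FIREFOX_BINARY>`"
    )
print(f"Using Firefox at {firefox_bin}")
```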
{ | ||
"rules": { | ||
"import/no-extraneous-dependencies": ["error", { | ||
"devDependencies": true, |
This avoids lint errors when the test scripts import from development-only dependencies.
module.exports = {
mode: 'development',
devtool: 'eval-source-map',
Inline source maps cause a syntax error in scripts run via Marionette. I'm not entirely sure why.
Comparing `inline-source-map` to `eval-source-map`, the only difference seems to be that the latter wraps the former in an eval statement? I don't totally follow what's happening here...
I didn't either, and still don't, really, but comparing the output for background.js with `inline-source-map` vs `eval-source-map` gives some hints. `eval-source-map` wraps the code for each module in an `eval` call, with its own `sourceMappingURL` comment. `inline-source-map`, on the other hand, has a single `sourceMappingURL` comment for the entire file. In other words, `eval` allows for a separate sourcemap for each module bundled together.

Webpack says `eval-source-map` is faster than `inline-source-map`, but they both show original source for each line executed. Which kinda makes sense; `eval` doesn't have to do the work of combining the separate source maps for each bundled file into a single map.

I still don't know why inline source maps cause a syntax error. I don't really care why, either, since `eval` works just as well.
test.onFailure(() => failures++);

// Import all files within the tests directory that have the word "test"
// in their filename.
Good ole webpack context magic that I don't entirely understand.
Overall this looks great, and I'm definitely stealing this in the future. I have a couple comments, but nothing that needs to block the PR.
@@ -2,13 +2,16 @@ version: 2
jobs:
build:
docker:
- image: circleci/node:10-browsers
- image: circleci/python:2.7-node-browsers
2.7? 😢
This comment recommends using Geckodriver or similar (at least for non-Mozilla projects). Why wouldn't we want to do that here? (granted this IS a Mozilla project).
Geckodriver translates the Webdriver protocol into Marionette commands. While Webdriver does include methods for executing JS in the browser, it does not have support for switching to a chrome context and executing privileged code. Since that is one of the benefits of Marionette/Firefox based testing (testing privileged chrome code), we don't want to use Geckodriver since we'd lose that ability.
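For illustration, a minimal sketch of that context switch with the `marionette_driver` Python client (the binary path and the privileged snippet are assumptions for the sketch, not code from this PR):

```python
# Sketch: executing privileged (chrome-context) JS via Marionette, which the
# Webdriver protocol -- and therefore Geckodriver -- has no equivalent for.
from marionette_driver.marionette import Marionette

client = Marionette(bin="/path/to/firefox")  # launches its own Firefox instance
client.start_session()

# Content context: ordinary page-level JS, same as Webdriver's executeScript.
title = client.execute_script("return document.title;")

# Chrome context: the script runs with browser privileges.
with client.using_context("chrome"):
    # Assumes `Services` is available in the chrome scope; adjust for the
    # privileged APIs your test actually needs.
    version = client.execute_script("return Services.appinfo.version;")

print(title, version)  # teardown omitted for brevity
```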
},
plugins: [
new webpack.BannerPlugin({
banner: 'const marionetteScriptFinished = arguments[0];',
What does this do?
BannerPlugin adds a string to the top of all emitted bundles. The extra arguments here ensure it's not wrapped in a comment, since it's JS we want to run.

`Marionette().execute_async_script` can pass arguments to the script being executed, and they're available in a top-level `arguments` object. One argument (the last one in the list) is always passed: a callback that signals when the async script is finished executing. Since Webpack wraps all the processed scripts, I added this so that we could call the callback from `src/tests/index.js` once the test suite is finished running.
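To make that concrete, here's a minimal sketch of the Python side (the result shape is an assumption; only the `arguments[0]` callback convention comes from the PR):

```python
# Sketch: execute_async_script blocks until the script calls the callback it
# receives as its final argument -- arguments[0] here, since no other
# script_args are passed. The BannerPlugin line just gives that callback a name.
from marionette_driver.marionette import Marionette

client = Marionette(bin="/path/to/firefox")
client.start_session()

result = client.execute_async_script("""
    const marionetteScriptFinished = arguments[0];  // what the banner injects
    // ... the bundled test suite would run here ...
    marionetteScriptFinished({failures: 0, output: "TAP output here"});
""")
print(result["output"])
```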
Brain. explode.
'--firefox_bin',
required=True,
envvar='npm_package_config_firefox_bin',
help='Path to Firefox binary',
If someone doesn't read the directions thoroughly enough (like me), and forgets to set this via npm package config, it just hangs for a while, and eventually times out. It would be nice if this could detect if the path is the default npm config (the empty string), or a path that does not exist, and give a nicer error message.
Good idea. I think click has something built-in for this.
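Something like `click.Path` covers both the empty default and a nonexistent path. A sketch, reusing the option definition quoted above (the command body is hypothetical):

```python
# Sketch: click.Path(exists=True) rejects both the empty-string default and a
# typo'd path with a readable error before Firefox is ever launched.
import click

@click.command()
@click.option(
    "--firefox_bin",
    required=True,
    envvar="npm_package_config_firefox_bin",
    type=click.Path(exists=True),
    help="Path to Firefox binary",
)
def main(firefox_bin):
    click.echo(f"Launching Firefox from {firefox_bin}")

if __name__ == "__main__":
    main()
```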
Updated with more human-friendly error messages when Firefox is misconfigured.
Wow this is a really dense PR for me, not having any Python knowledge. :X Seems to work though, and I am following most of it. :D Lots of questions for you, including some overarching ones I'll ask here:

- Why do we have to run the Python test script through `npm test`? I thought `pipenv` only changes the path for where to execute a Python script.
- Why do we have to write our `run_tests` script in Python? Is there not a Marionette client for JavaScript?
- Why aren't we using Geckodriver?
- Could we talk over `src/tests/index.jsx` in our 1:1 this afternoon?
"test": "bin/run_tests.py" | ||
}, | ||
"config": { | ||
"firefox_bin": "" |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I thought this would get populated in the case that I am running the tests locally when I call this from the README:
npm config set webext-commerce:firefox_bin <PATH_TO_FIREFOX_BINARY>
But I don't see it updated. This makes sense in part, since we don't want a dev's local Firefox binary path written in our package.json
, but why do we need this in package.json
at all if we are just updating the user's .npmrc
file?
With these installed, you can set up the test suite:

1. Install Python dependencies:
I needed to `pip install pipenv` first, so maybe add a check for that?
The "Prerequisites" section above lists Pipenv as required, and links to the Pipenv website with instructions on how to download. I figured that should be enough to indicate that you need Pipenv installed.
npm install
pipenv run mozdownload --version latest --destination $HOME/firefox.tar.bz2
Is there any reason why you chose `pipenv run` instead of `pipenv shell`?
It's less stateful, as `pipenv shell` would leave you in a pipenv-enabled shell afterwards. Easier to not have to remember that if we add more commands to the script later.
2. Save the path to your Firefox binary with `npm`:

```sh
npm config set webext-commerce:firefox_bin <PATH_TO_FIREFOX_BINARY>
```
I might add an example path, since on Mac it's a pain to find the path to Firefox:

Example: `/Applications/Firefox.app/Contents/MacOS/firefox-bin`
This is called out earlier in the README.
{
test: /\.css$/,
use: {
loader: 'null-loader',
This is for style mocks, to use Jest's terminology?
Pretty much. I was hoping to avoid having to do this but whatever. It's fine. It's not worth the effort trying to make it work.
/**
 * Entry point for the automated test suite. This script is run inside a
 * content-scoped sandbox in Firefox by Marionette. See bin/run_tests.py for
 * more info.
So for every test .jsx file, run it through Marionette (where does this happen?), concatenate the outputs for each test into a single output string and tally the failures across all tests?
- Webpack processes this file (tests/index.jsx), which bundles every test file together into a single JS file.
- `run_tests.py` launches a browser and sends that bundled JS to it via Marionette to be executed.
- This JS, which is at this point running inside the browser, executes the test suite and returns the output and failures to `run_tests.py`.

The important bit being that this file does not execute each individual test via Marionette; it imports them so that, when Webpack bundles it, all the tests get bundled as well and run when the bundled script is run via Marionette.
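Condensed into a sketch (the bundle path and the result shape are assumptions based on the description above):

```python
# Sketch of the overall flow: webpack has already produced a single bundle from
# src/tests/index.jsx; run_tests.py ships it to Firefox and waits for results.
import os

from marionette_driver.marionette import Marionette

firefox_bin = os.environ["npm_package_config_firefox_bin"]  # set by npm config

with open("build/tests.bundle.js") as f:  # hypothetical bundle path
    bundled_js = f.read()

client = Marionette(bin=firefox_bin)
client.start_session()

# Blocks until the bundle calls marionetteScriptFinished(...).
results = client.execute_async_script(bundled_js)

print(results["output"])  # the tape TAP stream
raise SystemExit(1 if results["failures"] else 0)
```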
},
node: {
fs: 'empty',
},
From this or a similar bug?
Once upon a time the B2G teams were maintaining a Marionette client in JS, but that project has long since died. The Python client is actually maintained in-tree and is what we use for running Mochitests and Marionette tests in Firefox itself, so it's pretty reliably maintained.
Thanks for the review!
This patch will run Fathom against the page (not distinguishing a product from a non-product page) and log the extracted price value and page URL to the console via 'background.js'. Failing that, it will fall back to extraction via CSS selectors if any exist for the site in 'product_extraction_data.json', and failing that, it will try extraction via Open Graph meta tags. This is heavily based on [Swathi Iyer](https://github.com/swathiiyer2/fathom-products) and [Victor Ng's](https://github.com/mozilla/fathom-webextension) prior work. Currently, there is only one ruleset with one naive rule for one product feature, price. This initial commit is intended to cover Fathom integration into the web extension. A later commit will add rules and take training data into account.

Note: The 'runRuleset' method in 'productInfo.js' returns 'NaN' if it doesn't find any elements for any of its rules.

Performance observations: Originally, I had dumped Swathi's three rulesets (one each for product title, image, and price) and tried to run them against any page, similar to Victor Ng's web extension. However, that was [freezing up the tab](#36 (comment)). After profiling the content script Fathom was running in, before and after replacing Swathi's rulesets with a single ruleset with only one rule for one attribute, I did not see any warnings from Firefox, nor detect any significant performance hits in the DevTools profiler due to Fathom. It would therefore appear the performance hit was related to the complex rulesets and not Fathom itself.

Webpack observations: While [`jsdom`](https://www.npmjs.com/package/jsdom) is a `fathom-web` dependency, it is used only for running `fathom-web` in the Node context for testing. To avoid build errors associated with `jsdom` and its dependencies, I added a `null-loader` for that `require` call, which mocks the module as an empty object. This loader is also used in webpack.config.test.js, from PR #32.
Finally got this working. I've been idly wanting to make this work for two years! Hopefully it'll turn out useful.
This uses Marionette/Firefox for running tests to ensure that we're testing in the environment our add-on will actually run in. This will also let us test any privileged webextension experiment code by switching to a chrome context when running the test JavaScript.
The base command for running tests is `pipenv run test`, but it's layered over a few different commands and is a bit complex: `pipenv run test` immediately calls `npm test`, which immediately calls `bin/run_tests.py`.

The `pipenv` layer runs the test scripts inside of a Python virtualenv with the Python libraries installed. This is how `run_tests.py` dependencies are pulled in. The `npm test` layer adds all the binaries from `node_modules` to the path, making it easier for `run_tests.py` to launch `webpack` and `tap-mocha-reporter`. `run_tests.py` does the actual work of running a test: bundling the test JS with `webpack.config.test.js` to a temporary file and reading its contents, launching Firefox, running the bundled JS via Marionette, and formatting the output.

I chose to use tape because it's fairly well supported, can be bundled by Webpack, and can output the results of a test run to a stream. I originally was using mocha, but had issues bundling it and capturing the output in a way that Firefox could send back to `run_tests.py`.

@mythmon I added you to help review the Python bits, particularly the Pipfile stuff, which I'm not terribly familiar with. I'm pretty sure it's fine for running tests, but a second opinion would be useful. Feel free to review the whole PR if you want.