This repository has been archived by the owner on Dec 3, 2020. It is now read-only.

Add automated test suite. #32

Merged
merged 2 commits into master from marionette
Jul 30, 2018

Conversation

Osmose
Contributor

@Osmose Osmose commented Jul 23, 2018

Finally got this working. I've been idly wanting to make this work for two years! Hopefully it'll turn out useful.

This uses Marionette/Firefox for running tests to ensure that we're testing in the environment our add-on will actually run. This will also let us test any privileged webextension experiment code by switching to a chrome context when running the test JavaScript.

The base command for running tests is pipenv run test, but it's layered over a few different commands and is a bit complex:

  1. pipenv run test, which immediately calls
  2. npm test, which immediately calls
  3. bin/run_tests.py

The pipenv layer runs the test scripts inside of a Python virtualenv with the Python libraries installed. This is how run_tests.py dependencies are pulled in. The npm test layer adds all the binaries from node_modules to the PATH, making it easier for run_tests.py to launch webpack and tap-mocha-reporter. run_tests.py does the actual work of running a test by:

  1. Building the test JavaScript with Webpack using webpack.config.test.js to a temporary file and reading its contents.
  2. Using the Marionette client to launch Firefox and run the test bundle in a content context, getting TAP-formatted results.
  3. Piping the results through a formatter to make it more human-readable, and outputting it.
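A minimal model of step 3 — counting failures in the TAP stream so the runner can exit non-zero — might look like this (a hypothetical helper; the actual run_tests.py pipes the stream through tap-mocha-reporter instead):

```python
import re

def count_tap_failures(tap_output: str) -> int:
    """Count 'not ok' result lines in TAP output, so the test
    runner can exit with a non-zero status when any assertion failed."""
    failures = 0
    for line in tap_output.splitlines():
        # TAP failure lines start with "not ok"; passing lines with "ok".
        if re.match(r"not ok\b", line.strip()):
            failures += 1
    return failures

tap = """TAP version 13
ok 1 - extracts price
not ok 2 - extracts title
ok 3 - extracts image
"""
print(count_tap_failures(tap))  # → 1
```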

I chose to use tape because it's fairly well supported, can be bundled by Webpack, and can output the results of a test run to a stream. I was originally using mocha, but I had issues bundling it and capturing the output in a way that Firefox could send back to run_tests.py.

@mythmon I added you to help review the Python bits, particularly the Pipfile stuff, which I'm not terribly familiar with. I'm pretty sure it's fine for running tests, but a second opinion would be useful. Feel free to review the whole PR if you want.

@Osmose Osmose added the [ENG]: Do not merge label Jul 23, 2018
@Osmose Osmose force-pushed the marionette branch 8 times, most recently from 1d5aa5e to aac9247 Compare July 23, 2018 17:48
Adds an automated test suite that runs JS tests using Tape. The tests are
run in Firefox using the remote code execution capabilities of Marionette.

Marionette's only well-maintained client is in Python, so we also have to add
Python dependencies, including a Pipfile.

bin/run_tests.py contains most of the plumbing for getting the test JS running,
including bundling it with Webpack, launching Firefox, and formatting the
output.
@Osmose Osmose requested review from biancadanforth and mythmon July 24, 2018 01:42
@Osmose Osmose removed the [ENG]: Do not merge label Jul 24, 2018
npm install
pipenv run mozdownload --version latest --destination $HOME/firefox.tar.bz2
pipenv run mozinstall --destination $HOME $HOME/firefox.tar.bz2
Contributor Author

CircleCI browser images have a ludicrously old version of Firefox (like, 48). Maybe there are more recent versions elsewhere in the image that I didn't find, but this ensures we test against the latest version.

Collaborator

Edit: this was answered in slack, but I'll leave it here for posterity.

Looks like we are downloading and installing the latest Firefox for each run; could we add an "update" step or make our own image?

I see that the Shield Studies Addon Template uses "latest-browsers" for their docker image, though I haven't found a similar one that also has Python.

Looks like we may also be able to cache the Firefox binary, so we don't have to install/download all the time?


Osmose's reply in slack:

Downloading the latest firefox kinda sucks, but it also avoids having to maintain a docker image with the latest Firefox, and specifically one that also has Python and node installed
And it's really fast, like the download and install per test run takes like 10 seconds

Collaborator

We know that the docker image's Firefox version was old because Marionette outputs the Firefox version to the terminal (on client.start_session()), right?

mozversion INFO | application_version: 61.0.1

@@ -1,3 +1,4 @@
node_modules
web-ext-artifacts
build
gecko.log
Contributor Author

This is generated by Firefox during the Marionette run. It has useful info if Firefox fails in some way during the run.

"test": "bin/run_tests.py"
},
"config": {
"firefox_bin": ""
Contributor Author

Per-project npm config is my favorite npm trick.

Collaborator

I thought this would get populated in the case that I am running the tests locally when I call this from the README:

npm config set webext-commerce:firefox_bin <PATH_TO_FIREFOX_BINARY>

But I don't see it updated. This makes sense in part, since we don't want a dev's local Firefox binary path written in our package.json, but why do we need this in package.json at all if we are just updating the user's .npmrc file?

Contributor Author

The value in package.json is the default value if it is not configured. The value in .npmrc is the user-set value. One overrides the other. We need it defined so that npm knows that the config value even exists and should be set as an environment variable.

Actually, I've never tested what happens if you set the config value without having it present in package.json, but it's better to have an empty default anyway to make it clear that the config value exists at all.
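Concretely, npm exposes each package config value to scripts as an environment variable, so the value above arrives as npm_package_config_firefox_bin. A sketch of reading it from Python, treating the empty package.json default as "not configured" (hypothetical helper name, not the actual run_tests.py code):

```python
import os

def get_firefox_bin(env=None):
    """Read the Firefox path that npm passes through to scripts via
    package config. An empty string is the package.json default,
    i.e. the user never configured it."""
    if env is None:
        env = os.environ
    path = env.get("npm_package_config_firefox_bin", "")
    if not path:
        raise RuntimeError(
            "firefox_bin is not set; run "
            "`npm config set webext-commerce:firefox_bin <path>`"
        )
    return path

print(get_firefox_bin({"npm_package_config_firefox_bin": "/usr/bin/firefox"}))
# → /usr/bin/firefox
```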

{
"rules": {
"import/no-extraneous-dependencies": ["error", {
"devDependencies": true,
Contributor Author

This avoids lint errors when the test scripts import from development-only dependencies.


module.exports = {
mode: 'development',
devtool: 'eval-source-map',
Contributor Author

Inline source maps cause a syntax error in scripts run via Marionette. I'm not entirely sure why.

Collaborator

Comparing inline-source-map to eval-source-map, the only difference seems to be that the latter wraps the former in an eval statement? I don't totally follow what's happening here...

Contributor Author

I didn't either, and still don't, really, but comparing the output for background.js with inline-source-map vs eval-source-map gives some hints. eval-source-map wraps the code for each module in an eval call, with its own sourceMappingURL comment. inline-source-map, on the other hand, has a single sourceMappingURL comment for the entire file. In other words, eval allows for a separate sourcemap for each module bundled together.

Webpack says eval-source-map is faster than inline-source-map but they both show original source for each line executed. Which kinda makes sense; eval doesn't have to do the work combining the separate source maps for each bundled file into a single map.

I still don't know why inline source maps cause a syntax error. I don't really care why, either, since eval works just as well.

test.onFailure(() => failures++);

// Import all files within the tests directory that have the word "test"
// in their filename.
Contributor Author

Good ole webpack context magic that I don't entirely understand.

Contributor

@mythmon mythmon left a comment

Overall this looks great, and I'm definitely stealing this in the future. I have a couple comments, but nothing that needs to block the PR.

@@ -2,13 +2,16 @@ version: 2
jobs:
build:
docker:
- image: circleci/node:10-browsers
+ image: circleci/python:2.7-node-browsers
Contributor

2.7? 😢

Collaborator

This comment recommends using Geckodriver or similar (at least for non-Mozilla projects). Why wouldn't we want to do that here? (granted this IS a Mozilla project).

Contributor Author

Geckodriver translates the Webdriver protocol into Marionette commands. While Webdriver does include methods for executing JS in the browser, it does not have support for switching to a chrome context and executing privileged code. Since that is one of the benefits of Marionette/Firefox based testing (testing privileged chrome code), we don't want to use Geckodriver since we'd lose that ability.

},
plugins: [
new webpack.BannerPlugin({
banner: 'const marionetteScriptFinished = arguments[0];',
Contributor

What does this do?

Contributor Author

BannerPlugin adds a string to the top of all emitted bundles. The extra arguments here ensure it's not wrapped in a comment, since it's JS we want to run.

Marionette().execute_async_script can pass arguments to the script being executed, and they're available in a top-level arguments object. One argument (the last one in the list) is always passed: a callback that signals when the async script is finished executing. Since Webpack wraps all the processed scripts, I added this so that we could call the callback from src/tests/index.js once the test suite is finished running.
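As a toy model of what the banner buys us (plain string prepending, not webpack's real implementation): the raw banner line lands above webpack's bundle wrapper, so marionetteScriptFinished captures Marionette's done-callback before any bundled module runs:

```python
# The banner webpack.config.test.js prepends (raw JS, no comment wrapper).
BANNER = "const marionetteScriptFinished = arguments[0];"

def add_banner(bundle_js: str, banner: str = BANNER) -> str:
    """Prepend a raw line of JS to a bundle, modeling what
    BannerPlugin does when the banner is emitted uncommented."""
    return banner + "\n" + bundle_js

bundled = "/* webpack wrapper */ (function () { /* tests */ })();"
print(add_banner(bundled).splitlines()[0])
# → const marionetteScriptFinished = arguments[0];
```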

Collaborator

Brain. explode.

'--firefox_bin',
required=True,
envvar='npm_package_config_firefox_bin',
help='Path to Firefox binary',
Contributor

If someone doesn't read the directions thoroughly enough (like me), and forgets to set this via npm package config, it just hangs for a while, and eventually times out. It would be nice if this could detect if the path is the default npm config (the empty string), or a path that does not exist, and give a nicer error message.

Contributor Author

Good idea. I think click has something built-in for this.
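click does indeed have this built in (type=click.Path(exists=True) on the option); the equivalent fail-fast check in stdlib argparse might look like this sketch (the option name matches the snippet above; the rest is illustrative):

```python
import argparse
import os
import sys

def parse_args(argv):
    """Fail fast with a readable error when the Firefox path is
    missing or does not exist, instead of hanging at launch."""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--firefox_bin",
        default=os.environ.get("npm_package_config_firefox_bin", ""),
        help="Path to Firefox binary",
    )
    args = parser.parse_args(argv)
    if not args.firefox_bin:
        # The empty string is the package.json default, i.e. unset.
        parser.error("firefox_bin is not set; see the README for npm config")
    if not os.path.exists(args.firefox_bin):
        parser.error("firefox_bin path does not exist: %s" % args.firefox_bin)
    return args
```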

@Osmose
Contributor Author

Osmose commented Jul 24, 2018

Updated with more human-friendly error messages when Firefox is misconfigured.

Collaborator

@biancadanforth biancadanforth left a comment

Wow this is a really dense PR for me, not having any Python knowledge. :X Seems to work though, and I am following most of it. :D Lots of questions for you, including some overarching ones I'll ask here:

  • Why do we have to run the python test script through npm test? I thought pipenv only changes the path for where to execute a Python script.
  • Why do we have to write our run_tests script in Python? Is there not a Marionette for JavaScript?
  • Why aren't we using Geckodriver?
  • Could we talk over src/tests/index.jsx in our 1:1 this afternoon?


"test": "bin/run_tests.py"
},
"config": {
"firefox_bin": ""
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I thought this would get populated in the case that I am running the tests locally when I call this from the README:

npm config set webext-commerce:firefox_bin <PATH_TO_FIREFOX_BINARY>

But I don't see it updated. This makes sense in part, since we don't want a dev's local Firefox binary path written in our package.json, but why do we need this in package.json at all if we are just updating the user's .npmrc file?

With these installed, you can set up the test suite:

1. Install Python dependencies:

Collaborator

I needed to pip install pipenv first, so maybe add a check for that?

Contributor Author

The "Prerequisites" section above lists Pipenv as required, and links to the Pipenv website with instructions on how to download. I figured that should be enough to indicate that you need Pipenv installed.

npm install
pipenv run mozdownload --version latest --destination $HOME/firefox.tar.bz2
Collaborator

Is there any reason why you chose pipenv run instead of pipenv shell?

Contributor Author

It's less stateful, as pipenv shell would leave you in a pipenv-enabled shell afterwards. Easier to not have to remember that if we add more commands to the script later.

2. Save the path to your Firefox binary with `npm`:

```sh
npm config set webext-commerce:firefox_bin <PATH_TO_FIREFOX_BINARY>
Collaborator

@biancadanforth biancadanforth Jul 24, 2018

I might add an example path, since on Mac it's a pain to find the path to Firefox.
Example: "/Applications/Firefox.app/Contents/MacOS/firefox-bin"

Contributor Author

This is called out earlier in the README.

{
test: /\.css$/,
use: {
loader: 'null-loader',
Collaborator

This is for style mocks, to use Jest's terminology?

Contributor Author

Pretty much. I was hoping to avoid having to do this but whatever. It's fine. It's not worth the effort trying to make it work.

/**
* Entry point for the automated test suite. This script is run inside a
* content-scoped sandbox in Firefox by Marionette. See bin/run_tests.py for
* more info.
Collaborator

So for every test .jsx file, run it through Marionette (where does this happen?), concatenate the outputs for each test into a single output string and tally the failures across all tests?

Contributor Author

  1. Webpack processes this file (tests/index.jsx), which bundles every test file together into a single JS file.
  2. run_tests.py launches a browser, and sends that bundled JS via Marionette to it to be executed.
  3. This JS, which is at this point running inside the browser, executes the test suite and returns the output and failures to run_tests.py.

The important bit is that this file does not execute each individual test via Marionette; it imports them so that, when Webpack bundles it, all the tests get bundled as well and run when the bundled script is executed via Marionette.
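A rough runtime analogy in Python for what the require.context call does statically at bundle time — discover every file with "test" in its name and execute it, so importing the entry point pulls in the whole suite (hypothetical helper; webpack resolves this at build time, not at run time):

```python
import pathlib
import runpy
import tempfile

def load_test_files(test_dir):
    """Find and execute every file with 'test' in its name, mimicking
    what require.context('.', true, /test/) bundles statically."""
    loaded = []
    for path in sorted(pathlib.Path(test_dir).glob("*test*.py")):
        runpy.run_path(str(path))
        loaded.append(path.name)
    return loaded

with tempfile.TemporaryDirectory() as d:
    root = pathlib.Path(d)
    (root / "price.test.py").write_text("RESULT = 1 + 1\n")
    (root / "helpers.py").write_text("raise SystemExit('not a test')\n")
    print(load_test_files(root))  # → ['price.test.py']
```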




@Osmose
Contributor Author

Osmose commented Jul 30, 2018

Why do we have to run the python test script through npm test? I thought pipenv only changes the path for where to execute a Python script.

npm test adds the binaries from node_modules/.bin to the PATH, which is why we can run webpack and tap-mocha-reporter without a path in run_tests.py. It's not strictly necessary but it makes things a little easier.
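What npm test does to the environment can be emulated directly (a sketch, useful if you ever invoke run_tests.py without going through npm):

```python
import os

def with_node_bin_path(project_root, env=None):
    """Return a copy of the environment with node_modules/.bin
    prepended to PATH, like `npm test` does for its child process."""
    env = dict(env if env is not None else os.environ)
    node_bin = os.path.join(project_root, "node_modules", ".bin")
    env["PATH"] = node_bin + os.pathsep + env.get("PATH", "")
    return env

env = with_node_bin_path("/project", {"PATH": "/usr/bin"})
print(env["PATH"])
```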

Why do we have to write our run_tests script in Python? Is there not a Marionette for JavaScript?

Once upon a time the B2G teams were maintaining a Marionette client in JS, but that project has long since died. The Python client is actually maintained in-tree and is what we use for running Mochitests and Marionette tests in Firefox itself, so it's pretty reliably maintained.

@Osmose Osmose merged commit 2473c13 into master Jul 30, 2018
@Osmose Osmose deleted the marionette branch July 30, 2018 21:56
@Osmose
Contributor Author

Osmose commented Jul 30, 2018

Thanks for the review!

biancadanforth added a commit that referenced this pull request Aug 1, 2018
This patch will run Fathom against the page (not distinguishing a product from a non-product page) and log the extracted price value and page URL to the console via 'background.js'. Failing that, it will fall back to extraction via CSS selectors if any exist for the site in 'product_extraction_data.json', and failing that, it will try extraction via Open Graph meta tags.

This is heavily based on [Swathi Iyer](https://github.com/swathiiyer2/fathom-products) and [Victor Ng’s](https://github.com/mozilla/fathom-webextension) prior work. Currently, there is only one ruleset with one naive rule for one product feature, price. This initial commit is intended to cover Fathom integration into the web extension. A later commit will add rules and take training data into account.

Note: The 'runRuleset' method in 'productInfo.js' returns 'NaN' if it doesn't find any elements for any of its rules.

Performance observations:
Originally, I had dumped Swathi's three rulesets (one each for product title, image and price) and tried to run them against any page, similar to Victor Ng's web extension. However, that was [freezing up the tab](#36 (comment)), and after profiling the content script Fathom was running in before and after replacing Swathi's rulesets with a single ruleset with only one rule for one attribute, I did not see any warnings from Firefox, nor detect any significant performance hits in the DevTools profiler due to Fathom. It would therefore appear the performance hit was related to the complex rulesets and not Fathom itself.

Webpack observations:
While [`jsdom`](https://www.npmjs.com/package/jsdom) is a `fathom-web` dependency, it is used only for running `fathom-web` in the Node context for testing. To avoid build errors associated with `jsdom` and its dependencies, I added a `'null-loader'` for that `require` call, which mocks the module as an empty object. This loader is also used in webpack.config.test.js, from PR #32.
biancadanforth added a commit that referenced this pull request Aug 2, 2018