Implement test suite runner #588

Merged
23 commits merged into PrismJS:gh-pages on Aug 13, 2015

Conversation

apfelbox
Contributor

apfelbox commented Jun 1, 2015

I finally got around to implementing a simple (?) test suite runner. For now, the test suite runs in node, so only the language definitions are tested, not the integration with the browser. We won't catch things like unsupported methods in old browsers for now. (So if your language definition uses [].forEach, the test suite will pass, as node supports it, but old IE will fail because the method is unsupported there.)

Test case location

All tests are located in /tests/languages/${language}/${file}. ${language} must match the name of the language definition in components.js. ${file} is just a file name and is only used to provide information to the test runner.

So if the directory looks like this:

.
└── languages
    ├── css
    │   └── blabla.css
    └── javascript
        └── testcase1.js

your output will look like this:
[screenshot: example test runner output]

As you can see, the file name doesn't matter; it is only included in the test description (so that you can quickly find your failing tests).

Test case specification

Please provide feedback / better ideas than mine here.

An example test case file looks like this:

var a = 5;

--------

[
    ["keyword", "var"],
    ["operator", "="],
    ["number", "5"],
    ["punctuation", ";"]
]

It consists of

... language snippet...

/^----*\w*$/m                        <-- yep, that's the exact regex that splits the test case file

... JSON of expected token stream ...
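
For illustration, here is a minimal sketch (plain Node JavaScript, not the PR's actual code) of parsing such a file with that regex; the names parseTestCase, testSource and expectedTokenStream are made up for this example:

var fs = require("fs");

function parseTestCase(filePath) {
    var content = fs.readFileSync(filePath, "utf8");

    // split on the separator line and trim the surrounding whitespace
    var parts = content.split(/^----*\w*$/m).map(function (part) {
        return part.trim();
    });

    // a test case file must contain exactly one snippet and one JSON part
    if (parts.length !== 2) {
        throw new Error("Could not parse test case file: " + filePath);
    }

    return {
        testSource: parts[0],
        expectedTokenStream: JSON.parse(parts[1]) // the JSON part must be valid JSON
    };
}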

Please consider that the test case specification uses some simplifications / assumptions. I tried to strike a reasonable balance between "too much magic" and "way too much to write and manage". Please be lenient with me here.

  1. The expected token stream is not an array of objects (like the result of Prism.tokenize()), but an array of two-element arrays. The only reason for this is to reduce the amount of text you need to write for a test case (see the sketch after this list).
  2. Only token.type and token.content are compared; something like token.alias is simply ignored.
  3. The JSON part must be valid JSON. Although a little uncomfortable, this is my take against reinventing the wheel.
  4. If the test case file can't be properly parsed (the split produces more or fewer than two parts), the test case fails.
  5. All non-matched parts (strings in the compiled token stream) are simply ignored.
  6. An entry in the expected token stream consists of [${type}, ${content}].
  7. The Prism test instances are isolated.
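
A minimal sketch of the simplification described in points 1, 2 and 5 above – Prism.tokenize() output reduced to [type, content] pairs, plain strings dropped, token.alias ignored. The function name is illustrative, not the runner's actual API:

function simplifyTokenStream(tokens) {
    return tokens
        .filter(function (token) {
            // 5.) non-matched parts are plain strings in the compiled stream – drop them
            return typeof token !== "string";
        })
        .map(function (token) {
            // recurse into nested token streams (e.g. tokens with an "inside" grammar)
            var content = Array.isArray(token.content)
                ? simplifyTokenStream(token.content)
                : token.content;
            // 1.) + 2.) keep only type and content, ignore alias and everything else
            return [token.type, content];
        });
}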

Implementation

The implementation looks like this:

  1. Load the complete test suite (= a nested array of all test case files).
  2. Define a test for every test case.
  3. Run the tests using mocha, chai and an isolated Prism instance.

As the language definitions (and Prism itself) don't use proper encapsulation (Prism is a global singleton), some hoop-jumping is required to make this work: we need to run the source code in a vm in node (natively supported via require("vm"), docs) and pass the context in manually. This way we can build completely fresh and isolated Prism instances for each test (so that they won't interfere with each other).
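
As a rough illustration (not the PR's actual loader; the globals handed to the context are an assumption), creating an isolated instance with require("vm") could look like this:

var fs = require("fs");
var vm = require("vm");

function createIsolatedPrism(prismSourceFile) {
    var source = fs.readFileSync(prismSourceFile, "utf8");

    // every call gets a completely fresh context, so instances cannot
    // interfere with each other across tests
    var context = vm.createContext({});
    vm.runInContext(source, context);

    // assumes prism.js registers itself as a global in the context it runs in
    return context.Prism;
}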

This hurts performance, but as we can't just use new Prism(), it is required to ensure that the tests run in isolation (even different tests in the same language should ideally use different instances – as they currently do).

This is not exactly ideal for performance... but let's write tests and worry about testing performance later! 🎉

ToDo

  • test it / review it
  • discuss the test case file format
  • write tests for the languages (PRs welcome 😎)
  • add travis integration
  • write documentation explaining the testing procedure

Intended to replace/refresh #355

@Golmote
Contributor

Golmote commented Jun 4, 2015

I'll definitely try to take a look at this soon! Hopefully this weekend

@Golmote
Contributor

Golmote commented Jun 12, 2015

I finally found some time to pull your branch and test it, although a week later than I thought. Sorry about that.
I'm currently having some trouble making it work... (weird error... I'm trying to update node.js right now)

I reviewed your code in the meantime, though, and I must say that I like the way you implemented it. The code is clean and straightforward. Well done!

I also realised it was meant to run only one test per file. Wouldn't it be useful to write several tests in a single file? For example, a test file that tests string highlighting might need to check single and double quotes. Using two files for this feels strange to me. I would have thought of one file per pattern or something like that. What do you think?

Also, I'd like the test files to be suffixed with some extension (like .test?) (and we could remove that extension when displaying the filename in the test results) so that my IDE stops complaining about syntax errors:
http://puu.sh/imsyq/e2154c9638.png

@Golmote
Contributor

Golmote commented Jun 12, 2015

Wow, it took me so much time to understand that the testing is supposed to be done with the command npm test. ><

Regarding what I said earlier about the "only one test per file is not enough" stuff, I realised that multiple tests could actually be run simply by merging them into one...

"test"
'test'
"te\"st"
'te\'st'

----------------------------------------------------

[
    ["string", "\"test\""],
    ["string", "'test'"],
    ["string", "\"te\\\"st\""],
    ["string", "'te\\'st'"]
]

so I guess this is not really an issue.

@apfelbox
Contributor Author

Thanks for taking a look at it! 🎉

I also realised it was meant to run only one test per file. Wouldn't it be useful to write several tests in a single file?

As you already mentioned, you can just make the test case longer so that it tests multiple cases. We just need to find a balance between small, self-contained test cases and a myriad of too-small test cases.
But I guess grouping them by topic (like "all string variants") might work well. We can also adopt the naming scheme from the PHP core test suite ("bug453.phpt") to directly guard against specific regressions (and add a hint on why a specific test case exists and what it should guard against).

Also, I'd like the test files to be suffixed with some extension (like .test?)

I also thought about the issue with broken syntax highlighting and have no real answer. Splitting the test and the expected result into two files is not a good idea, and we won't find an extension that avoids breaking syntax highlighting while at the same time working with a lot of different languages. So using a "neutral" file extension is probably the best way to go.

(and we could remove that extension when displaying the filename in the test results)

That is a good idea, I will add that.


Another thought: should we also add the possibility to optionally add another section with a comment to a test case? It might be good for self-documenting the test case.

Something like this:

"test"
'test'
"te\"st"
'te\'st'

----------------------------------------------------

[
    ["string", "\"test\""],
    ["string", "'test'"],
    ["string", "\"te\\\"st\""],
    ["string", "'te\\'st'"]
]

----------------------------------------------------

This is a test for testing all possible string variants.
It also checks for escaped delimiters inside the strings.

See #588 for related info.

(still open for suggestions about the test case file layout)

@Golmote
Contributor

Golmote commented Jun 12, 2015

I like the idea of the comment part. The test case file layout looks fine to me. It is both easy to create and easy to read.

I kinda like the Phpt tests naming convention:

Phpt tests follow a very strict naming convention. This is done to easily identify what each phpt test is for. Tests should be named according to the following list:

Tests for bugs
    bug<bugid>.phpt (bug17123.phpt)
Tests for a function's basic behaviour
    <functionname>_basic.phpt (dba_open_basic.phpt)
Tests for a function's error behaviour
    <functionname>_error.phpt (dba_open_error.phpt)
Tests for variations in a function's behaviour
    <functionname>_variation.phpt (dba_open_variation.phpt)
General tests for extensions
    <extname><no>.phpt (dba_003.phpt)

It could be simplified for Prism like this, for example:

  • Tests for issues:
    issue<issueid> (issue590)
  • Tests for features:
    <featurename>_feature (string_feature, string_interpolation_feature)
  • Tests for specific usages:
    <usagename>_usage (css_in_markup_usage)
    (I can't think of anything but language inclusion that could fit in this last category... so maybe the name is not well chosen...)

BTW, is it possible to test the inclusion of one language inside another? It requires both components to be loaded, even if they don't depend on each other... Maybe the test case file could indicate that another component is required?

Also, do we need to write tests for known failures too? Those would be tests that we expect to fail, so that we will know if some change fixes them? (This sounds a bit weird...)

Jannik Zschiesche added 3 commits June 13, 2015 15:12
## All test case files are now required to have the ".test" extension.
This prevents issues with syntax highlighting in common IDEs and also prevents the test system from choking on non-testcase files in these directories (like `.DS_Store` or `Thumbs.db`).

## Hide the ".test" extension in the description of the test cases
The message for a testcase `blabla.test` will now just read:

> 1) Testing language 'css' – should pass test case 'blabla':
@apfelbox
Contributor Author

I just pushed comment support:

[screenshot: example test output with comment support]

I just use the message parameter from chai.

I just pushed the requirement for all test cases to have the .test file extension:
This should both prevent highlighting issues in common IDEs and prevent the test runner from choking on non-test-related files like .DS_Store or Thumbs.db.
Also, the file extension is now hidden in the test description (you can see that in the screenshot above).
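
For illustration only (the helper name and return shape are made up, not the runner's actual code), discovering .test files and hiding the extension could look like this:

var fs = require("fs");
var path = require("path");

function getTestCases(languageDir) {
    return fs.readdirSync(languageDir)
        .filter(function (file) {
            // only pick up *.test files, ignoring things like .DS_Store or Thumbs.db
            return path.extname(file) === ".test";
        })
        .map(function (file) {
            return {
                filePath: path.join(languageDir, file),
                // "blabla.test" is shown as just "blabla" in the test description
                name: path.basename(file, ".test")
            };
        });
}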

By using composed language names "language+language2+language3" you can test language inclusion or do integration tests.
@apfelbox
Contributor Author

I just pushed support for language inclusion / language integration tests.

By using composed language names like javascript+css or markup+php both languages will be loaded. This way you can test whether two languages interact nicely with each other and more importantly you can test language inclusions.

The first language (javascript for javascript+css) will be passed to Prism for highlighting, but the additional languages will be loaded, too. (For that I extended the language loader, as it now needs to keep track of the already loaded languages so that it doesn't load languages twice (so for bash+javascript the clike requirement will only be loaded once, etc…).)
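
A minimal sketch of that idea; the loadLanguage callback and the bookkeeping object are illustrative stand-ins for the PR's actual loader:

function loadComposedLanguages(composedName, loadLanguage) {
    var languages = composedName.split("+");
    var alreadyLoaded = {};

    languages.forEach(function (language) {
        // loadLanguage is expected to record the language and its requirements
        // (e.g. "clike") in alreadyLoaded, so nothing is ever loaded twice
        if (!alreadyLoaded[language]) {
            loadLanguage(language, alreadyLoaded);
        }
    });

    // the first entry is the one passed to Prism for highlighting
    return languages[0];
}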

@apfelbox
Contributor Author

If the implementation looks good, I will write some documentation. As soon as the docs are finished, this is ready for merging.

@Golmote
Contributor

Golmote commented Jun 13, 2015

Thanks for all those updates! This looks good to me. @LeaVerou, did you have time to take a look at this PR? What do you think of it?

@LeaVerou
Member

Thank you both! Looks good to me from a quick look.

@Golmote
Contributor

Golmote commented Jun 28, 2015

Great! :) Then, @apfelbox, tell us when the PR is ready for merging.

@apfelbox
Contributor Author

apfelbox commented Jul 5, 2015

@Golmote all right. I am currently moving (as in moving to another apartment) but may get some free time next week.

var compiledTokenStream = Prism.tokenize(testCase.testSource, mainLanguageGrammar);
var simplifiedTokenStream = this.transformCompiledTokenStream(compiledTokenStream);

assert.deepEqual(simplifiedTokenStream, testCase.expectedTokenStream, testCase.comment);
Contributor

@apfelbox: I think the first two parameters here should be swapped.

Currently, the "expected" and "actual" values look like they are inverted.

Contributor Author

See comment in your PR:

The docs say .deepEqual(actual, expected, [message]), so the current implementation is correct.

@Golmote
Contributor

Golmote commented Jul 10, 2015

@apfelbox: I ran into an issue related to language inclusion with the "lang1+lang2" feature.
There are cases in our grammars where the secondary language needs to be loaded before the primary language (see XML inside ActionScript, filters in HAML, filters in Jade...)

Actually, it seems that CSS and JS are the only exceptions here: they modify another grammar to add themselves into it.

IMO, this is not the right way to do it. Currently, the CSS component adds itself inside Markup if it is available. I think it should be the opposite: Markup should add CSS inclusion if available, because it's much easier to know which languages can be embedded inside another than to know which other languages a language can be embedded into. (I hope the previous sentence makes some sense...) What's your opinion on this, @LeaVerou?

@Golmote
Contributor

Golmote commented Jul 10, 2015

If we want to keep both ways of doing this, then we need a way to specify the main language to use in the test, that is not related to the order of the languages in the folder name.

Maybe something like markup+!actionscript? (Markup should be loaded before ActionScript, but the latter is actually the main language to be used for highlighting)

@Golmote
Contributor

Golmote commented Jul 10, 2015

@apfelbox

5. All non-matched parts (strings in the compiled token stream) are simply ignored.

This might be an issue for patterns using the inside feature.

Consider the following test, which checks for strings in the apacheconf component.

"foo bar"
'foo bar'
"%{REMOTE_HOST}"

Because the string pattern has an inside key to match variables, the simplified token streams produced by this test are:

[
    ["string", []],
    ["string", []],
    ["string", [
        ["variable", "%{REMOTE_HOST}"]
    ]]
]

The first two are quite useless, don't you think?

@apfelbox
Contributor Author

Actually, it seems that CSS and JS are the only exceptions here: they modify another grammar to add themselves into it.

I think we should change that. We should unify the way languages extend each other.
(This would be a BC break, however, as you would need to reorder these language definitions.)

Because the string pattern has an inside key to match variables, the simplified token streams produced by this test are:

No, the simplified token stream would look like this:

[
    ["string", ["\"foo bar\""]]
    ["string", ["'foo bar'"]]
    ...
]

As the strings are not empty.

My description is misleading.

All non-matched parts (strings in the compiled token stream) are simply ignored.

“Strings” here means the plain character sequences left in the token stream, not the actual string tokens of the language.
If you have a language that only matches a and you highlight the text:

a b a

Then you would get

[
   ["my-token", "a"]
   ["my-token", "a"]
]

b is unknown and therefore ignored – that is what is meant by 5.)

Does this clear things up?

@Golmote
Contributor

Golmote commented Jul 14, 2015

No, the simplified token stream would look like this:

[
    ["string", ["\"foo bar\""]]
    ["string", ["'foo bar'"]]
    ...
]

Are you sure about this? Because I can't reproduce this behaviour with your current code...

With the following pattern in the grammar (here apacheconf):

'string': {
    pattern: /("|').*\1/,
    inside: {
        'variable': /(\$|%)\{?(\w\.?(\+|\-|:)?)+\}?/
    }
},

and the following test:

"foobar"
----------
[
    ["string", ["foobar"]]
]
----------

Here is the result I get:

  1) Testing language 'apacheconf' - should pass test case 'test':

      AssertionError: expected [ [ 'string', [ 'foobar' ] ] ] to deeply equal [ [ 'string', [] ] ]
      + expected - actual

       [
         [
           "string"
      -    [
      -      "foobar"
      -    ]
      +    []
         ]
       ]

@Golmote
Contributor

Golmote commented Jul 14, 2015

Actually, it seems that CSS and JS are the only exceptions here: they modify another grammar to add themselves into it.

I think we should change that. We should unify the way languages extend each other.
(This would be a BC break, however, as you would need to reorder these language definitions.)

@LeaVerou Would you be okay with this? (moving CSS inclusion and JS inclusion into the Markup component)

@apfelbox
Contributor Author

Should we filter the strings at all?

We could include them. But I would remove all strings that contain only whitespace, like "\r\n" and "\r\n\t".

@Golmote
Contributor

Golmote commented Jul 14, 2015

Sounds good to me. Just make sure to do this at each level of recursion.
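
A minimal sketch of such a filter, applied at every level of the simplified token stream; the function name is made up for this example:

function stripWhitespaceTokens(tokens) {
    return tokens
        .filter(function (token) {
            // drop plain strings that contain nothing but whitespace, e.g. "\r\n" or "\r\n\t"
            return typeof token !== "string" || token.trim() !== "";
        })
        .map(function (token) {
            // recurse into nested token streams so the filter is applied at each level
            return Array.isArray(token) && Array.isArray(token[1])
                ? [token[0], stripWhitespaceTokens(token[1])]
                : token;
        });
}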

Jannik Zschiesche added 4 commits July 15, 2015 19:29
Pull the token stream transformation out of the test case into its own component.
It now strips empty values
We are at a point where we probably should test the test runner (especially the token stream transformer) itself.
@apfelbox
Contributor Author

I just pushed some updates.

I think I got it all – I even managed to reduce code complexity a bit.

If you want to learn the details about the token stream transformation you can check the newly created test file testrunner-tests.js (as we are at a point where we should test our test runner).

@Golmote
Contributor

Golmote commented Jul 15, 2015

I think you forgot to filter the blank strings (that contain just whitespace) you were talking about.

[EDIT: Oh, sorry, you actually did filter them. I got confused again with those colors and the meaning of actual and expected in the diff...]

Also, since we haven't heard from @LeaVerou regarding the BC-breaking change for language inclusions, would you mind allowing both behaviours for now? I.e. we assume the main language is the last one listed, because this is the most common case, and we need a way to override this behaviour for the CSS and JS components (I suggested tagging the main language with some character like "!", but any solution would do).

@Golmote
Contributor

Golmote commented Jul 15, 2015

Also, I think your example testcase1.test does not pass right now, since the variable name " a " is missing in the JSON.

@Golmote
Contributor

Golmote commented Jul 15, 2015

Oh, and one last thing: since you committed a fix about quote style consistency, I noticed you are not consistent in the position of the curly braces (sometimes at the end of the line, sometimes on their own line). ^^ Prism mostly uses braces at the end of the line.

Jannik Zschiesche added 3 commits July 26, 2015 14:08
You can add an exclamation mark anywhere in the name to mark it as the main language. If you do not specify anything, the first entry is used as the main language.

css+markup! --> markup is main language
css+markup  --> css is main language
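
For illustration, resolving the main language from such a composed name could look roughly like this (the function name is made up; this is not necessarily how the loader implements it):

function getMainLanguage(composedName) {
    var languages = composedName.split("+");

    // an exclamation mark marks the main language explicitly
    var marked = languages.filter(function (language) {
        return language.indexOf("!") !== -1;
    });

    if (marked.length > 1) {
        throw new Error("More than one main language marked in: " + composedName);
    }

    // without a marker, the first entry is the main language
    var main = marked.length === 1 ? marked[0] : languages[0];
    return main.replace("!", "");
}

// getMainLanguage("css+markup!") === "markup"
// getMainLanguage("css+markup")  === "css"
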
@apfelbox
Contributor Author

@Golmote sorry for the delay. All done.

@apfelbox
Contributor Author

Just added Travis support. You can see the result of the Travis build of my fork here: https://travis-ci.org/apfelbox/prism

@apfelbox
Contributor Author

And I finally pushed a documentation page explaining the test runner, the test suite and everything in it (I am not a native speaker, so please correct me if anything doesn't make sense 😉).

@apfelbox
Contributor Author

This is now ready to be merged :shipit:

@apfelbox changed the title from "[WIP] Implement test suite runner" to "Implement test suite runner" on Jul 26, 2015
@apfelbox
Contributor Author

apfelbox commented Aug 6, 2015

A final note:

We could probably move the Prism loader to a separate helper directory outside of /tests/, as it seems to be useful for multiple use cases and not just tests.

What do you think?

@Golmote
Contributor

Golmote commented Aug 7, 2015

It might be useful indeed, but maybe we can wait for an actual use case. ^^

I'll merge the PR next week, as I'm still away from home right now. I already wrote a bunch of tests for a few components, so I will also submit PRs for them.

Golmote added a commit that referenced this pull request Aug 13, 2015
Implement test suite runner
@Golmote merged commit 956cd85 into PrismJS:gh-pages on Aug 13, 2015
@Golmote
Contributor

Golmote commented Aug 13, 2015

Here we go! :)

@apfelbox
Contributor Author

🎉 thanks

@apfelbox deleted the mocha branch on August 13, 2015, 07:18
@apfelbox mentioned this pull request on Aug 13, 2015