
document what things the perl-based tester tests #16

Open
michielbdejong opened this issue May 21, 2019 · 7 comments

@michielbdejong (Collaborator)

It reports that there are 6 tests; what are they?

@kjetilk (Contributor) commented May 22, 2019

The documentation for each test script is (or will be) at https://github.com/kjetilk/p5-web-solid-test-basic#implemented-tests

How the test scripts are actually run is defined by the RDF-based fixture tables. Thus, I prefer to think of them as "RDF-based tester" rather than "Perl-based tester". :-)
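
For readers who haven't seen one, here is a minimal sketch of what a fixture-table entry can look like. Apart from test:purpose (discussed further down in this thread), the prefix IRIs, class names, and predicate names are illustrative assumptions, not the actual vocabulary:

```turtle
# Minimal fixture-table sketch. Only test:purpose is confirmed in this
# thread; the prefix IRIs and the other names are illustrative assumptions.
@prefix test: <http://example.org/test-fixtures#> .
@prefix :     <http://example.org/fixtures#> .

:fixture_table a test:FixtureTable ;
    test:fixtures :public_read_unauthenticated .

:public_read_unauthenticated a test:AutomatedTest ;
    test:purpose "Check that a public resource can be read without authentication"@en ;
    test:test_script <http://example.org/scripts#http_unauthenticated_read> ;
    test:params [ test:url <http://localhost:3000/public/test.txt> ] .
```

The point of the design is that the Turtle file carries only data (which script to run, with which parameters), while the executable logic lives in the test script modules.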

@michielbdejong (Collaborator, Author)

Ah cool! https://github.com/kjetilk/p5-web-solid-test-basic/blob/master/tests/data/basic.ttl lists 4 tests, so how does it get from 4 to 6?

@michielbdejong (Collaborator, Author)

And comparing https://github.com/kjetilk/p5-web-solid-test-basic/blob/master/tests/data/basic.ttl#L10-L14 to https://github.com/kjetilk/p5-web-solid-test-basic/blob/master/tests/data/basic.ttl#L16-L20, what makes the first an unauthorized read and the second a write with a bearer token? Which HTTP verb does that 'write' send? It feels like this is not where the actual code of each test is; can you point to where that is?

@kjetilk (Contributor) commented May 23, 2019

Yes, that's in https://github.com/kjetilk/p5-web-solid-test-basic/blob/master/lib/Web/Solid/Test/Basic.pm
So, the tests in there are basically meant as illustrations of how you'd write tests: in the first two cases with few parameters, in the third case with many.
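
As a sketch of that contrast on the fixture-table side, a test with few parameters and one with many might differ only in the size of their parameter block (the test: prefix, predicate names, and IRIs here are illustrative assumptions):

```turtle
# Illustrative contrast only; predicate names and IRIs are assumptions.
@prefix test: <http://example.org/test-fixtures#> .

<#simple_test> test:params [
    test:url <http://localhost:3000/public/a.txt>
] .

<#richer_test> test:params [
    test:url          <http://localhost:3000/private/b.txt> ;
    test:method       "PUT" ;
    test:body         "Hello, world!" ;
    test:content_type "text/plain"
] .
```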

kjetilk added this to the Cleanup and Integration milestone Feb 25, 2020
@kjetilk (Contributor) commented Mar 13, 2020

I have taken this issue as the over-arching documentation improvement issue. :-) There are several aspects to it, and several other open issues are relevant to it.

There are two major axes: one is to understand what a test does by looking at the test formulation; the other is to understand why a test fails when it does. An example of a state-of-the-art fixture table is https://github.com/solid/test-suite/blob/b288766c92e8c7f1f8afecc9c24d91f2ca42bfa8/testers/rdf-fixtures/fixture-tables/operations_post_with_slug.ttl

First of all, the primary answer to the question is that a triple with the test:purpose predicate is required for every test. The object of that triple appears in the test output, both in the console-bound TAP output and in the EARL output. So, I could respond "look at test:purpose" and claim that fully answers your question.
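
As a sketch, such a triple can look like this in Turtle (the prefix IRI, subject IRI, and literal text are invented for illustration; only the test:purpose predicate itself is the real requirement):

```turtle
@prefix test: <http://example.org/test-fixtures#> .

# The object of test:purpose is what shows up in both the TAP and the
# EARL output; subject IRI and literal text are illustrative.
<#post_with_slug_creates_resource>
    test:purpose "Check that POST with a Slug header creates a new resource"@en .
```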

Those texts have been written using the well-known, rigorous "off-the-top-of-my-head" method. ;-) Which is to say, they are not necessarily that understandable to others. I have #86 open to improve them, but I'm not sure that one person working in total isolation is the right way to do that.

Then, there are more texts in the source files. I use an rdfs:comment to describe the fixture table as a whole. Much of the information about what a test actually does goes into each request and expected response, and I use an rdfs:comment to describe each of these in textual terms too (unless it is obvious). These comments aren't currently exposed anywhere beyond the fixture tables. They could be; in particular, they could be exposed as subtest descriptions. I have #87 open for that, but it is a fairly big change, as it influences the framework and test scripts too. It is also debatable whether it is better to keep rdfs:comment as a comment for the test writer and introduce a different predicate to expose to the subtest.
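
A sketch of how rdfs:comment can appear at both levels (the subject IRIs and texts are invented; only the use of rdfs:comment itself is described above):

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Table-level description (illustrative subjects and texts).
<#fixture_table> rdfs:comment "Tests for POST requests carrying a Slug header"@en .

# Per-request and per-response descriptions.
<#request_1>  rdfs:comment "POST a Turtle document with Slug: foo"@en .
<#response_1> rdfs:comment "Expect 201 Created and a Location header ending in /foo"@en .
```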

Then, some users may want a more detailed understanding of the test internals. That does not belong in the fixture tables, but I have attempted to document it in the test script modules, as referenced above. I can imagine that these texts could be improved too, but it is hard to do in isolation, because you don't know what's unclear when you're working alone.

Then, I have some issues open around detailing test failures. This is mainly a problem in the formatter, as the explanation is in the TAP output; the issue is perlrdf/p5-tap-formatter-earl#2. I did note, though, that the TAP parser wasn't terribly helpful in this area: it didn't create classes for most of this, so it would take quite some effort to do.

There are also some issues where better tooling on the Linked Data side of things would help, like perlrdf/p5-tap-formatter-earl#1 and perlrdf/p5-test-fitesque-rdf#5, which would link and thus gather more information, so that tooling could make test failures easier to understand. I'm not sure we should prioritize this.

Prioritization of these issues is key. I think I understand pretty well what could be done, and how detailed it could be, but I don't know whether it should be prioritized.

So, that's the long way of saying "look at test:purpose" :-)

@csarven (Contributor) commented Mar 19, 2020

@kjetilk (Contributor) commented Mar 19, 2020

Yes, that's right. These are tests of the test scripts, meta-tests, not the actual Solid tests. Everything needs to be tested. :-)
