
Consider reusing assertions and sharing them across tests #295

Closed
mfairchild365 opened this issue Aug 13, 2020 · 5 comments
Labels
Agenda+Community Group To discuss in the next workstream summary meeting (usually the last teleconference of the month)

Comments

@mfairchild365
Contributor

Currently, assertions are defined for each test. I think there is an opportunity to abstract the assertions away from tests. This will have the following benefits:

  • More consistent assertions across tests when the same assertions are found in many tests
  • More efficient to write tests, less likely to introduce inconsistencies and typos
  • Easier to update many tests at once as our shared understanding of what assertions should be evolves

I think it would be better to write assertions for each role and attribute, then pull those into tests that use those roles and attributes as needed.

For example, we could write a single set of assertions associated with the dialog role, then include them in tests for each of the different dialog examples.
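A rough sketch of this sharing idea in Python (all names and assertion wordings here are hypothetical, invented for illustration, not part of any existing test format):

```python
# Hypothetical sketch: shared assertions keyed by ARIA role,
# pulled into individual tests by reference instead of copied.

DIALOG_ASSERTIONS = [
    "Role 'dialog' is conveyed",
    "Name of the dialog is conveyed",
    "Text inside the dialog is readable",
]

def build_test(title, roles, assertion_library):
    """Assemble a test's assertion list from the shared library."""
    assertions = []
    for role in roles:
        assertions.extend(assertion_library.get(role, []))
    return {"title": title, "assertions": assertions}

library = {"dialog": DIALOG_ASSERTIONS}

# Two different dialog examples reuse the exact same assertion set:
modal_test = build_test("Open a modal dialog", ["dialog"], library)
datepicker_test = build_test("Open a date picker dialog", ["dialog"], library)
```

Because both tests reference the same list, updating `DIALOG_ASSERTIONS` would update every test that uses the dialog role at once.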

This could even help us to divide up work further. We could have one group of community members focus on writing assertions while a separate group focuses on writing tests (and using those assertions). We could even get feedback on the assertions from AT developers before we pull them into tests and potentially save time and effort downstream.

What are your thoughts @mcking65? Would this be a good goal for V2?

@mfairchild365 mfairchild365 added the Agenda+Community Group To discuss in the next workstream summary meeting (usually the last teleconference of the month) label Aug 26, 2020
@mcking65
Contributor

I strongly support testing the assertion that we can abstract assertions! I like this issue.

Some things to consider.

We will need multiple assertions for some ARIA attributes. For example, the assertions for aria-haspopup and aria-expanded will be worded differently depending on the type of element to which they are applied, or, stated more abstractly, depending on the context in which they are used.

Some assertions will be identical for reading mode, interaction mode, and screen readers that do not have a mode, and will apply to all 3 mode conditions, while other assertions will apply only to screen readers that have a reading mode, and only in reading mode. So, it would be good to capture the mode conditions in a way that enables reuse in all mode conditions when applicable and constrains an assertion to only specific modes when appropriate.

Some assertions will need to be written with information about the specific example to which they apply. For example, we may want to include the accessible name in the wording of the assertion. To do this, we will need a way to put tokens for those values into the assertion wording. Those tokens should be very easy to read and understand so people reviewing the assertions can easily understand them.

So, an assertion object would need fields for:

  • ARIA Attribute
  • Context: could be "All" or something specific, e.g., "On Button", "In menu", "In dialog" ...
  • Mode: perhaps just a list of specific modes if constrained, or "All" if not, or "None" if the assertion applies only to AT that does not have a mode.
  • Assertion Phrase: The wording of the assertion including any tokens to include values from the specific example
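The fields above could be captured in a small record type. This is only an illustrative sketch, with invented field values and a `{accName}` token style that is an assumption, not an existing convention:

```python
from dataclasses import dataclass

@dataclass
class Assertion:
    attribute: str  # ARIA attribute or role, e.g. "aria-expanded"
    context: str    # "All" or a specific context, e.g. "On Button"
    modes: list     # e.g. ["reading"], ["All"], or ["None"]
    phrase: str     # wording, may contain tokens like {accName}

expanded_on_button = Assertion(
    attribute="aria-expanded",
    context="On Button",
    modes=["All"],
    phrase="State of the {accName} button, 'expanded', is conveyed",
)

def applies_in_mode(assertion, mode):
    """True if the assertion should be checked under the given mode."""
    return "All" in assertion.modes or mode in assertion.modes
```

An assertion constrained to `modes=["reading"]` would then be skipped automatically when assembling an interaction-mode test.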

@mfairchild365
Contributor Author

It would be good to get an estimate of how much work would be required to implement this. It would also be good to get an idea of how much time this might save during authoring (or whether it would add time). Would a tool be built to make searching for and including references easier and to aid with test authoring?

@mfairchild365
Contributor Author

We could track this in an Excel sheet with the following columns:

  • unique ID (a combination of an attribute and its assertion): something like aria-checked-convey-value-true or aria-checked@convey-value-true
  • attribute/role name
  • assertion title
  • assertion description
  • rationale for the assertion
  • examples of support
  • applies to which categories of AT (screen reader, voice control, etc)
  • etc

A single attribute may have many assertions. For example, it could have an assertion for each of its possible values. If it's a state, it could have another assertion for convey state change, etc.
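One attribute mapping to several assertions, each with a unique ID in the proposed `attribute@assertion` style, might look like this (row contents invented for illustration):

```python
# Hypothetical rows from the proposed tracking sheet, keyed by unique ID.
ASSERTIONS = {
    "aria-checked@convey-value-true": {
        "attribute": "aria-checked",
        "title": "Convey checked state",
        "description": "The checked state (true) is conveyed.",
        "at_categories": ["screen reader"],
    },
    "aria-checked@convey-value-false": {
        "attribute": "aria-checked",
        "title": "Convey unchecked state",
        "description": "The checked state (false) is conveyed.",
        "at_categories": ["screen reader"],
    },
    "aria-checked@convey-state-change": {
        "attribute": "aria-checked",
        "title": "Convey state change",
        "description": "A change of checked state is conveyed.",
        "at_categories": ["screen reader"],
    },
}

def assertions_for(attribute):
    """All assertion IDs defined for a given attribute."""
    return [k for k, v in ASSERTIONS.items() if v["attribute"] == attribute]
```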

As I've mentioned previously, I've done something similar in the past. The following are JSON files for a different project that accomplish a similar goal. I'm sharing here in hopes that it can better inform how we proceed.

@jscholes
Contributor

I'd love to make this a priority for further discussion going into 2021. The comments in this thread already shine a light on many of the concerns to keep in mind, and there are likely to be edge cases. But overall I feel there's a great opportunity to increase the efficiency of test writing while also resulting in more robust, consistent tests.

@mcking65
Contributor

mcking65 commented Feb 7, 2024

Test Format Definition V2 is now implemented and accomplishes this for a single test plan where the scope of testing is restricted to a specific test case. All assertions for a test plan are written only once in a single file of assertions.

The V2 approach is not as comprehensive as abstracting to the level of all tests. Because many assertions include values specific to the test case, e.g., "Name 'Regular Crust' is conveyed", abstracting at that level would require an additional form of tokenization, adding another layer of complexity.
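The extra tokenization layer described here might look something like the following sketch (purely illustrative; the token syntax and function are assumptions, not part of the actual V2 format):

```python
import re

def render_assertion(phrase, values):
    """Substitute {token} placeholders with test-case-specific values."""
    def sub(match):
        key = match.group(1)
        if key not in values:
            raise KeyError(f"No value supplied for token {{{key}}}")
        return values[key]
    return re.sub(r"\{(\w+)\}", sub, phrase)

# A test-case-agnostic phrase plus per-test values:
phrase = "Name '{accName}' is conveyed"
print(render_assertion(phrase, {"accName": "Regular Crust"}))
# Name 'Regular Crust' is conveyed
```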

Perhaps after experience with the V2 format, we may discover utility in a higher level of abstraction and we could revisit this issue. For now, I am closing this issue as complete.

@mcking65 mcking65 closed this as completed Feb 7, 2024