Consider reusing assertions and sharing them across tests #295
Comments
I strongly support testing the assertion that we can abstract assertions! I like this issue. Some things to consider:

We will need multiple assertions for some ARIA attributes. For example, the assertions for aria-haspopup and aria-expanded will be worded differently depending on the type of element to which they are applied, or, said more abstractly, depending on the context in which they are used.

Some assertions will be identical for reading mode, interaction mode, and for screen readers that do not have a mode, and they will apply to all three mode conditions. Other assertions will apply only to screen readers that have a reading mode, and only in reading mode. So, it would be good to capture the mode conditions in a way that enables reuse across all mode conditions when applicable and constrains an assertion to specific modes when appropriate.

Some assertions will need to be written with information about the specific example to which they apply. For example, we may want to include the accessible name in the wording of the assertion. To do this, we will need a way to put tokens for those values into the assertion wording. Those tokens should be very easy to read and understand so people reviewing the assertions can easily understand them.

So, an assertion object would need fields for:
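To make the considerations above concrete, here is a minimal sketch of what such an assertion object might look like, written as a Python dict. Every field name and value here is an assumption for illustration only, not the project's actual schema:

```python
# A hypothetical assertion object (all field names are illustrative assumptions,
# not the project's actual schema).
assertion = {
    "id": "aria-expanded-true",           # stable identifier for reuse across tests
    "appliesTo": ["button", "combobox"],  # element contexts where this wording is valid
    "modes": ["reading", "interaction"],  # mode conditions; could be omitted to apply to all
    # A token in braces stands in for a value specific to each test case:
    "wording": "State 'expanded' is conveyed for {accessibleName}",
}

# At test-authoring time, tokens would be filled in with example-specific values:
rendered = assertion["wording"].format(accessibleName="Regular Crust")
print(rendered)  # State 'expanded' is conveyed for Regular Crust
```

The `{accessibleName}` token keeps the shared wording readable while still letting each test supply its own value.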
It would be good to get an estimate of how much work would be required to implement this. It would also be good to get an idea of how much time this might save during authoring (or whether it would add time). Would a tool be built to make searching and including references easier and to aid test authoring?
We could track this in an Excel sheet with the following columns:
A single attribute may have many assertions. For example, it could have an assertion for each of its possible values. If it's a state, it could have additional assertions as well.

As I've mentioned previously, I've done something similar in the past. The following are JSON files from a different project that accomplish a similar goal. I'm sharing them here in hopes that they can better inform how we proceed.
I'd love to make this a priority for further discussion going into 2021. The comments in this thread already shine a light on many of the concerns to keep in mind, and there are likely to be edge cases. But overall I feel there's a great opportunity to increase the efficiency of test writing while also resulting in more robust, consistent tests.
Test Format Definition V2 is now implemented and accomplishes this for a single test plan, where the scope of testing is restricted to a specific test case. All assertions for a test plan are written only once, in a single file of assertions. The V2 approach is not as comprehensive as abstracting to the level of all tests. Because many assertions include values specific to the test case, e.g., "Name 'Regular Crust' is conveyed", abstracting at that level would require an additional form of tokenization, adding another layer of complexity. Perhaps after gaining experience with the V2 format, we may discover utility in a higher level of abstraction, and we could revisit this issue. For now, I am closing this issue as complete.
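The additional tokenization described above could be sketched as follows. This is only an illustration of the idea, using Python's `string.Template`; the template names and token names are assumptions, not part of any actual test format:

```python
import string

# Hypothetical shared assertion templates, abstracted above the level of a
# single test plan (all names here are illustrative assumptions).
templates = {
    "name-conveyed": string.Template("Name '$accessibleName' is conveyed"),
    "role-conveyed": string.Template("Role '$role' is conveyed"),
}

# Each test case supplies only its specific values:
test_case = {"accessibleName": "Regular Crust", "role": "combobox"}

assertions = [t.substitute(test_case) for t in templates.values()]
print(assertions[0])  # Name 'Regular Crust' is conveyed
```

This is the extra layer of complexity the comment refers to: every test-case-specific value must be pulled out of the wording and supplied separately.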
Currently, assertions are defined for each test. I think there is an opportunity to abstract the assertions away from tests. This will have the following benefits:
I think it would be better to write assertions for each role and attribute, then pull those into tests that use those roles and attributes as needed.
For example, we could write a single set of assertions associated with the dialog role, then include them in tests for each of the different dialog examples.
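The reuse described above could work something like the following sketch, where one shared set of dialog-role assertions is pulled into tests for multiple dialog examples. All names here (the assertion wordings, the example identifiers, the helper function) are hypothetical illustrations, not an actual API:

```python
# Hypothetical shared assertions for the dialog role (wordings are illustrative).
DIALOG_ASSERTIONS = [
    "Role 'dialog' is conveyed",
    "Name of the dialog is conveyed",
]

def make_test(example, extra_assertions=()):
    """Build a test for one example, pulling in the shared dialog assertions."""
    return {
        "example": example,
        "assertions": [*DIALOG_ASSERTIONS, *extra_assertions],
    }

# The same role-level assertions are reused across different dialog examples:
modal_test = make_test("modal-dialog")
datepicker_test = make_test("datepicker-dialog")

print(modal_test["assertions"] == datepicker_test["assertions"])  # True
```

Reviewing and refining the shared `DIALOG_ASSERTIONS` list once would then improve every test that includes it.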
This could even help us divide up the work further. We could have one group of community members focus on writing assertions while a separate group focuses on writing tests (and using those assertions). We could even get feedback on the assertions from AT developers before pulling them into tests, potentially saving time and effort downstream.
What are your thoughts @mcking65? Would this be a good goal for V2?