Test plan

Approach

To keep up with the fast pace of development on the data catalogue, automated tests are used to ensure the solution operates as expected. Automated tests save time because they can be run over and over again with little effort, which greatly reduces the time needed for repeated manual testing.

Automated tests are designed to determine whether a feature does what it is supposed to do (unit testing) and to validate that the feature does not introduce unintended consequences for other site functionality (integration testing).

Automated tests do not evaluate how well a feature meets business needs. That is determined through the agile process of sprint review and planning.

Types of tests

The tests written for the data catalogue use the PHPUnit framework as recommended by Drupal[^1].

[^1]: Automated testing on Drupal.org

Functional tests

This is the preferred testing method.

These tests are written in code as features are developed. Functional tests create a fresh instance of Drupal to run in, and they are the fastest of the test types described here. The site that is created contains no content other than what the test creates itself.

You can find the functional tests in code at /html/modules/custom/bc_dc/tests/src/Functional.
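
For illustration, a minimal functional test might look like the sketch below. The class name, test method, and theme are made up for this example; only the bc_dc module name comes from the path above.

```php
<?php

namespace Drupal\Tests\bc_dc\Functional;

use Drupal\Tests\BrowserTestBase;

/**
 * Example functional test: checks that the front page loads.
 *
 * @group bc_dc
 */
class ExampleFunctionalTest extends BrowserTestBase {

  /**
   * {@inheritdoc}
   */
  protected $defaultTheme = 'stark';

  /**
   * Modules to install in the test site.
   *
   * @var array
   */
  protected static $modules = ['bc_dc'];

  /**
   * The test site starts empty, so create anything the test needs here.
   */
  public function testFrontPage(): void {
    $this->drupalGet('<front>');
    $this->assertSession()->statusCodeEquals(200);
  }

}
```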

Site tests

Occasionally, a feature cannot be tested with a functional test. For example, a test may depend on a user role that exists only in the site's configuration, imported via config sync, which the fresh site created by a functional test does not have. In these circumstances, the test has to be run against the existing site. These tests are written in code, though in a different directory from the functional tests.

You can find the site tests in code at /html/modules/custom/bc_dc/tests/src/ExistingSite.
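
Assuming the project uses the common weitzman/drupal-test-traits package for ExistingSite tests (a typical setup, not confirmed here), such a test might be sketched like this; the data_custodian role name is hypothetical:

```php
<?php

namespace Drupal\Tests\bc_dc\ExistingSite;

use Drupal\user\Entity\Role;
use weitzman\DrupalTestTraits\ExistingSiteBase;

/**
 * Example site test: checks config that exists only on the real site.
 *
 * @group bc_dc
 */
class ExampleExistingSiteTest extends ExistingSiteBase {

  /**
   * A role imported via config sync is absent from a fresh test site.
   */
  public function testRoleFromConfigExists(): void {
    // No throwaway Drupal instance is created; this reads the live site.
    $role = Role::load('data_custodian');
    $this->assertNotNull($role);
  }

}
```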

Manual tests

Manual tests are a last resort. When there is no way to properly run a functional or site test, steps for a manual test must be written and added to the Manual tests wiki page.

Writing manual tests

Manual tests should be written in a way that identifies the feature being tested, the steps to take to perform the test, and what the user should see to validate the test[^2]. The Manual tests wiki page has a template to use for writing tests.

[^2]: Behat has a good example of how to write a test.
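
For illustration, a manual test entry might look something like the following; the feature and role names are made up, and the actual template to follow is on the Manual tests wiki page.

Feature: Restricted block visibility
Steps:
  1. Log in as a user who does not have the Data custodian role.
  2. Visit the home page.
Expected result: The restricted block does not appear on the page.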

What to test

Test any code and configuration created specifically for the project.

No matter what type of test you are writing, follow the same process. Make sure to add comments to the tests so other people can understand what is happening at each step of the test.

Note: Do not write tests for upstream code such as contributed modules or Drupal core.

Test the feature

Write the steps required to test that the feature meets the definition of done (DoD). The DoD is on the issue page for the feature.

Test circumstances where the feature should fail

Write steps to test the feature under circumstances where it should not work. For example, if you have a block of content that should only be visible to people with a specific role, test that users who do not have the role cannot see the block.
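
Continuing the role-restricted block example, a failure-case functional test might be sketched as follows; the block text, role setup, and class name are illustrative only:

```php
<?php

namespace Drupal\Tests\bc_dc\Functional;

use Drupal\Tests\BrowserTestBase;

/**
 * Example failure-case test: the block must stay hidden without the role.
 *
 * @group bc_dc
 */
class RestrictedBlockTest extends BrowserTestBase {

  /**
   * {@inheritdoc}
   */
  protected $defaultTheme = 'stark';

  protected static $modules = ['bc_dc'];

  public function testBlockHiddenWithoutRole(): void {
    // Log in as a user who lacks the role that grants access to the block.
    $account = $this->drupalCreateUser([]);
    $this->drupalLogin($account);

    // The block content must not be rendered for this user.
    $this->drupalGet('<front>');
    $this->assertSession()->pageTextNotContains('Restricted block content');
  }

}
```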

Running tests

When you run tests, all previously written tests are also run; this is what makes the suite useful for integration testing.

  1. In a terminal, go to the root of the project.
  2. Run vendor/bin/phpunit html/modules/custom/bc_dc
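
To run a subset, PHPUnit also accepts the path to a single test file, or you can select tests by name with its --filter option, for example vendor/bin/phpunit html/modules/custom/bc_dc --filter testFrontPage (the test name here is illustrative).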

QUESTION FOR LIAM: How would you run this in ddev? I tried

ddev vendor/bin/phpunit html/modules/custom/bc_dc
Error: unknown command "vendor/bin/phpunit" for "ddev"

and

vendor/bin/phpunit html/modules/custom/bc_dc
env: php: No such file or directory

Note that I don't have PHP installed on this computer; I can only use what's available to ddev, and the client may be in the same situation.
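
One likely answer, not yet verified here: ddev exec runs a command inside the project's web container, where PHP is available, so ddev exec vendor/bin/phpunit html/modules/custom/bc_dc should work. Opening a shell in the container with ddev ssh and running the command from there is another option.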

Evaluating failures

If a test fails, debug it to find out why. Once you know what the issue is, use the flowchart below to decide how to proceed.

flowchart TD

A(START <br> run test) --> B((fa:fa-circle-check pass <br> END))
A --> C(fa:fa-circle-xmark fail)
C --> D[debug]
D --> E(error in <br> config/code)
E --> F[fix error]
F --> A
D --> G(has to be tested <br> on site)
G --> H[write site test]
H --> A
D --> I(error in contrib)
I --> J[submit issue]
J --> K[report to <br> project team]
K --> L[team decision]
L --> M(no reasonable <br> alternative to contrib)
L --> O(there is a reasonable <br> alternative to contrib)
O --> P[implement new solution]
P --> A
M --> N[write manual test]
N --> A