true integration testing #261
It would be good to separate the actual operation (the pulling of data and updating of files) from the pushing back to repos. |
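A rough sketch of that separation, in which a dry-run flag skips the push phase entirely; all names and the flag are illustrative assumptions, not the bot's actual CLI:

```python
# Illustrative two-phase design: phase 1 pulls data and updates files locally,
# phase 2 pushes the results back to the repos. The --dry-run flag (assumed,
# not an existing option) stops after phase 1, which is what tests would use.
import argparse


def run_operation() -> list[str]:
    """Pull data and update files locally; return names of changed feedstocks."""
    return []  # placeholder


def push_changes(changed: list[str]) -> None:
    """Push commits and open PRs for the given feedstocks."""
    # placeholder


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--dry-run",
        action="store_true",
        help="run the update phase but skip pushing back to repos",
    )
    args = parser.parse_args()
    changed = run_operation()
    if not args.dry_run:
        push_changes(changed)


if __name__ == "__main__":
    main()
```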
@CJ-Wright Indeed, we should really rethink this! Are there any new developments? I can think about this if possible (I could add it to my GSoC program as a form of debugging/test code). |
There haven't been any recent developments on this front, to the best of my knowledge. |
I want to revive this issue and came up with the following concept:

GitHub Accounts and Repositories

For a proper integration test strategy, we must mimic the relevant GitHub accounts and repositories with which the bot interacts. I propose the following accounts ("test accounts") and repositories ("test repositories"):
I am aware this requires us to manage three additional GitHub entities. However, since production also uses three accounts this way, we should stick to this architecture and mirror it as closely as possible, preventing future headaches.

Integration Test Definition

To define test cases, we use the following directory structure:
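One possible layout, with placeholder feedstock and case names (the exact structure below is an illustrative assumption):

```text
tests_integration/
└── definitions/
    ├── my-feedstock/
    │   ├── test_version_update.py
    │   └── test_migration.py
    └── other-feedstock/
        └── test_version_update.py
```

A test case module in this layout might then look as follows; the imported helpers (setup_feedstock_repo, set_cf_graph_node, mock_http, assert_pr_opened) belong to the proposed, yet-to-be-written helper library, so their names and signatures are pure assumptions:

```python
# Hypothetical test case module, e.g. definitions/my-feedstock/test_version_update.py.
# All imported helpers are assumed parts of the proposed helper library.
from tests_integration.lib import (
    assert_pr_opened,
    mock_http,
    set_cf_graph_node,
    setup_feedstock_repo,
)

FEEDSTOCK = "my-feedstock"  # illustrative name


def prepare():
    """Bring the test repositories and cf-graph into the desired starting state."""
    setup_feedstock_repo(FEEDSTOCK, recipe_version="1.0.0")
    set_cf_graph_node(FEEDSTOCK, attrs={"version": "1.0.0"})
    # Pretend upstream released 2.0.0 so the bot should propose a version bump.
    mock_http(
        "https://pypi.org/pypi/my-package/json",
        json={"info": {"version": "2.0.0"}},
    )


def check_after():
    """Assert on the resulting state after the bot has run."""
    assert_pr_opened(FEEDSTOCK, title_contains="v2.0.0")
```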
As shown, there are different test cases for different feedstocks. Each test case is represented by a Python module (file). Each test case module must define a prepare and a check_after method. The prepare method uses our yet-to-be-implemented integration test library (see below) to set up the test case by defining how the feedstock repo, the forked repo of the bot account (does it even exist?), the cf-graph data relevant to the feedstock, and possibly needed HTTP mocks (see below) look. Setting up the repositories includes preparing PRs that might be open. The check_after method is called after the bot is run and can run several assertions against the test state (e.g., files present in the forked repository, a specific git history, cf-graph data). Helper functions provided by our integration test helper library make writing those assertions easy.

Integration Test Workflow

We run the integration tests via a single GitHub workflow in this repo. It consists of the following steps:
To emphasize, we test multiple feedstocks together in each test scenario. This speeds up test execution (because the bot works in a batch-job fashion) and might uncover bugs that only occur when multiple feedstocks or their cf-graph metadata interact with each other in some way.

HTTP Mocks

The version update tests, especially, will require us to mock HTTP responses to return the latest versions we want them to report.

Pytest Integration

The test scenarios are generated by dynamically parametrizing a pytest test case. This pytest test case runs once per test scenario, dynamically importing the correct Python modules (test cases) for each feedstock that is part of the test scenario and then executing them.

Integration Test Helper Library

The integration test helper library provides helper functions for the prepare method described above. For check_after, we could provide a helper function for checking that a GitHub PR with the correct title has been opened on the test feedstock repository, or another function for checking that the bot's fork has the expected contents. The integration test library must offer an option to run […]. Practice will show which exact helper functions are necessary.

Let me know what you think! Cc @0xbe7a |
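For the HTTP mocks described above, a library such as responses could intercept the version lookups; a minimal sketch, assuming a PyPI-style JSON endpoint (the URL and version number are illustrative):

```python
# Minimal HTTP-mock sketch using the `responses` library; the URL and
# version number are illustrative, not the bot's real update source.
import requests
import responses


@responses.activate
def demo_version_mock():
    responses.add(
        responses.GET,
        "https://pypi.org/pypi/my-package/json",
        json={"info": {"version": "2.0.0"}},
    )
    # Any code under test that fetches this URL now sees the mocked payload.
    latest = requests.get("https://pypi.org/pypi/my-package/json").json()
    assert latest["info"]["version"] == "2.0.0"
```

And a sketch of the dynamic pytest parametrization, assuming the hypothetical definitions/ layout from earlier; _load_case, _scenarios, and run_bot are placeholders, not existing APIs:

```python
# Hedged sketch: one pytest run per scenario, each scenario bundling one
# test case module per feedstock directory. Assumes every feedstock
# directory contains at least one test_*.py module.
import importlib.util
from pathlib import Path

import pytest

DEFINITIONS = Path(__file__).parent / "definitions"  # hypothetical layout


def run_bot():
    """Placeholder for invoking the bot against the test accounts."""
    raise NotImplementedError


def _load_case(path: Path):
    """Dynamically import a single test case module (defines prepare/check_after)."""
    spec = importlib.util.spec_from_file_location(path.stem, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module


def _scenarios():
    """Yield scenarios; scenario i combines the i-th test case of every feedstock."""
    cases = {
        d.name: sorted(d.glob("test_*.py"))
        for d in DEFINITIONS.iterdir()
        if d.is_dir()
    }
    for i in range(max(len(paths) for paths in cases.values())):
        yield {name: paths[i % len(paths)] for name, paths in cases.items()}


@pytest.mark.parametrize("scenario", list(_scenarios()))
def test_scenario(scenario):
    modules = [_load_case(path) for path in scenario.values()]
    for module in modules:
        module.prepare()
    run_bot()
    for module in modules:
        module.check_after()
```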
I need to read more, but the idea of a custom integration test library sounds painful. It is entirely possible to do this within pytest and we should do so. |
We might have misunderstood each other here. The goal of the "integration test helper library" is not to replace pytest or parts of it. It is simply a collection of practical helper functions for setting up the test environment, including configuring the external git repos. It also offers custom functions for assertions we need in our use case, e.g., validating the contents of a pull request. Since this is very domain-specific, it cannot be done by an already-available public library. |
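As one concrete illustration of such a domain-specific assertion, a PR-validation helper might be built on top of PyGithub roughly like this; the function name, signature, and error message are assumptions, not an existing API:

```python
# Sketch of a possible PR assertion helper using PyGithub; everything about
# this function (name, arguments, behavior) is illustrative.
from github import Github


def assert_pr_opened(token: str, repo_full_name: str, title_contains: str) -> None:
    """Fail unless an open PR on `repo_full_name` has `title_contains` in its title."""
    repo = Github(token).get_repo(repo_full_name)
    open_prs = repo.get_pulls(state="open")
    assert any(title_contains in pr.title for pr in open_prs), (
        f"no open PR with {title_contains!r} in its title on {repo_full_name}"
    )
```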
We may need to run true integration tests (with a dev graph to boot) so we don't blow up the graph by accident and can run CI with close to 100% coverage.