
specifying explicit dependency of a test case on a file #12

Open · tarpas opened this issue Mar 31, 2015 · 4 comments

@tarpas (Owner) commented Mar 31, 2015

This would be needed for data files or other files that influence the execution of tests but don't contain normal Python code.

One option for specifying the dependency of a test would be to precede it with a decorator.

@testmon.depends_on('../testmon/plugin.py')
def test_example():
    ...

Another possibility would be a pragma.
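
Purely for illustration, such a pragma could be a comment placed next to the test; the syntax below is made up and not implemented:

# testmon: depends_on ../testmon/plugin.py
def test_example():
    ...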

testmon would have to merge this explicit information with the dependency data acquired from coverage.py.

In the first phase, whole-file granularity would have to suffice: whenever the file's modification time changes, the dependent tests would be re-executed.
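
A rough sketch of that whole-file, mtime-based check (the names and data structures here are illustrative only, not testmon's actual internals):

import os

def select_stale_tests(dependencies, recorded_mtimes):
    """dependencies: {test_nodeid: [file_path, ...]} declared explicitly,
    recorded_mtimes: {file_path: mtime stored after the previous run}."""
    stale = set()
    for nodeid, files in dependencies.items():
        for path in files:
            current = os.path.getmtime(path) if os.path.exists(path) else None
            if recorded_mtimes.get(path) != current:
                # the file changed (or disappeared) since the last run,
                # so every test that depends on it gets re-executed
                stale.add(nodeid)
                break
    return stale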

tarpas changed the title from "quick hack: specifying explicit dependency on a file" to "specifying explicit dependency of a test case on a file" on Nov 5, 2016
@tarpas (Owner, Author) commented Nov 5, 2016

See one of the use cases: #49

@ktosiek (Contributor) commented Nov 5, 2016

I assumed it's not a duplicate of #49, because in #49 I want to add that dependency dynamically (when a test retrieves some data through a specific ORM class), whereas here it's about setting it statically for each test. But now I'm thinking that a hook for "add a dependency on file X to the currently running test" would work for both the test decorator and my use case.
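
A toy sketch of the shape such a hook could take, written as a plain pytest fixture; add_file_dependency and _extra_dependencies are invented names, and a real implementation would have to write the recorded paths into testmon's database:

# conftest.py
import pytest

_extra_dependencies = {}  # test nodeid -> set of extra file paths

@pytest.fixture
def add_file_dependency(request):
    """Record an additional file dependency for the currently running test."""
    def _add(path):
        _extra_dependencies.setdefault(request.node.nodeid, set()).add(path)
    return _add

A test would call add_file_dependency("data/customers.json") directly (the static case discussed here), and library code such as an ORM wrapper could call the same hook to cover the dynamic case from #49.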

@AbdealiLoKo commented Oct 28, 2022

I think something like this would be helpful for me too.

Providing a programmatic way to:

  1. mark a test as "dirty" so that it needs to be rerun
  2. check whether a test is "dirty" and hence needs to be rerun

would be great. (I'm not sure if this exists already.)

That way I could write logic in my conftest to decide when to mark a test to be run. For example:

# conftest.py
# uses_a_file_that_is_modified(), mark_item_for_run(), is_dirty() and
# item.depends_on are hypothetical helpers that testmon would need to expose

def pytest_collection_modifyitems(session, config, items):
    for item in items:
        if uses_a_file_that_is_modified(item):  # Case 1: a resource file changed
            mark_item_for_run(item)
        if any(is_dirty(dep) for dep in item.depends_on):  # Case 2: a prerequisite test is dirty
            mark_item_for_run(item)

I have some use cases where this could be helpful for me.

Case 1: Depending on resource files (#178)
I have a fixture for resources, so I can detect whether the fixture is being used and whether my resource folder has been "modified", and re-run the test if so.
I don't mind writing this logic out myself, because I understand testmon does not want to support this directly.

Case 2: With pytest-order
Some of my projects use pytest-order's @pytest.mark.order(before="test_first") to enforce dependencies across tests.
So I could mark test_two to be run whenever test_one is marked as dirty.
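
A rough sketch of Case 2 in a conftest, reusing the hypothetical is_dirty() and mark_item_for_run() helpers from the snippet above and reading pytest-order's "order" marker (the dependency direction is an assumption: the test named in before= is treated as depending on the marked test):

# conftest.py

def pytest_collection_modifyitems(session, config, items):
    by_name = {item.name: item for item in items}
    for item in items:
        marker = item.get_closest_marker("order")
        before = marker.kwargs.get("before") if marker else None
        if before and before in by_name and is_dirty(item):
            # the test named in before= depends on this one,
            # so re-run it whenever this test is dirty
            mark_item_for_run(by_name[before])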

@alexrudd2 commented

This would be useful to me as well. The use case is an integration test where there's a 1-1 mapping between module/file and test, but Coverage can't trace it since the module is executed as a subprocess.
