Add integration tests focusing on installing sdists via pip #2863
Conversation
conftest.py (outdated):

```python
if config.option.integration:
    # Assume unit tests and flake already run
    config.option.flake8 = False
```
Running flake8 again during integration would take more time.
Instead, you can pass `-p no:flake8` to the pytest run.
Actually, rather than install the plugin and then disable it, you can instead provide separate sets of extras for the integration tests and just not include the unwanted plugins when doing integration tests.
Thanks @jaraco. I added a `testing-integration` extra to `extras_require`. There is some small amount of duplication between `testing-integration` and `testing`, but hopefully it will not add a lot of double bookkeeping.
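As an illustration, a separate extra along these lines could look as follows in `setup.cfg` (the extra names are from this thread; the exact package lists here are assumed, not the PR's actual contents):

```ini
[options.extras_require]
testing =
    pytest
    pytest-cov
    pytest-flake8
testing-integration =
    pytest
    virtualenv
    build
```

Installing with `pip install -e .[testing-integration]` then simply never pulls in the linting/coverage plugins, so there is nothing to disable at runtime.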
pytest.ini (outdated):

```diff
@@ -34,3 +34,6 @@ filterwarnings=
 	# https://github.com/pypa/setuptools/issues/2497
 	ignore:.* is an invalid version and will not be supported::pkg_resources
+
+	# This will result in an error in the integration tests if not ignored:
+	ignore:.*Coverage disabled via --no-cov switch!.*
```
`pytest-cov` emits a warning if it is disabled. That warning is converted to an error by the strict `pytest` configuration.
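Roughly, the interplay can be sketched with the stdlib `warnings` module (a simplified analogy of the filter semantics, not pytest's actual implementation):

```python
import warnings

# pytest's strict configuration is roughly equivalent to promoting every
# warning to an error, then adding "ignore" filters for known cases.
warnings.filterwarnings("error")  # like `filterwarnings = error`
warnings.filterwarnings(
    "ignore", message=r".*Coverage disabled via --no-cov switch!.*"
)

# The ignored pattern passes silently:
warnings.warn("Coverage disabled via --no-cov switch!")

# ...while any other warning is raised as an error:
try:
    warnings.warn("some other warning")
    outcome = "no error"
except UserWarning:
    outcome = "raised"
```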
Instead, you can pass `-p no:cov` to the pytest run.
tox.ini (outdated):

```ini
    {[testenv]passenv}
    DOWNLOAD_PATH
commands =
    pytest --integration {posargs:-vv --durations=10 --no-cov setuptools/tests/integration}
```
By pointing to the specific `setuptools/tests/integration` directory by default we avoid spending time collecting/analysing the other tests (and also avoid displaying a lot of skips in the pytest output).

According to the CI logs in abravalheri/experiment-setuptools-integration-tests, the integration tests as implemented in this PR take approx. 16 min to run.
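Since `{posargs:...}` only supplies defaults, everything after `--` on the command line replaces the whole default block, e.g. (the environment name `integration` is assumed here, not taken from the PR):

```
# run the default selection: -vv --durations=10 --no-cov setuptools/tests/integration
tox -e integration

# override the defaults: run a single test, still with --integration enabled
tox -e integration -- -k numpy -vv --no-cov setuptools/tests/integration
```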
```python
PIP = (sys.executable, "-m", "pip")
SDIST_OPTIONS = (
    "--ignore-installed",
    "--no-build-isolation",
```
I'd prefer to use build isolation, as that's the most common scenario. Is there another option that can be passed that means "if requested, get setuptools from the current directory"?
I asked about this possibility in the pip discord, and based on the answers there, my conclusion is that if we want to force a specific version of setuptools we have to keep the `--no-build-isolation` flag. To simulate build isolation we can create a separate virtualenv and install the build deps there.

An alternative suggestion was to try `PIP_CONSTRAINTS`. To test this suggestion, I locally tried to:

1. Build a wheel of the development version of setuptools via `python -m build --wheel` (`PIP_CONSTRAINTS` does not accept editable installs)
2. Create a `constraints.txt` file with:

   ```
   setuptools @ file:///home/<...>/setuptools/dist/setuptools-58.5.3.post20211113-py3-none-any.whl
   ```

3. Run:

   ```shell
   # (1)
   DISTUTILS_DEBUG=1 pip --verbose install -c constraints.txt dist/Brotli-1.0.9.zip
   # (2)
   DISTUTILS_DEBUG=1 pip --verbose install -c constraints.txt --use-pep517 dist/Brotli-1.0.9.zip
   # (3)
   DISTUTILS_DEBUG=1 PIP_CONSTRAINTS=./constraints.txt pip --verbose install --use-pep517 dist/Brotli-1.0.9.zip
   ```

For (1), pip does not seem to respect the isolation very much; also there is nothing in the stdout/stderr that could be used to assert during the tests that the correct version of setuptools is being used. For both (2) and (3), pip ignores the constraint and re-downloads `setuptools-58.5.3-py3-none-any.whl` from PyPI.
The reason why I went with `pip install -t` instead of creating a virtual environment was purely to avoid the overhead and try to speed up the integration tests a little bit.

With that in mind, something I can do is to create a virtualenv for each test case and install the current version of setuptools under development in it. This way we can simulate pip's isolation, despite using the `--no-build-isolation` flag.
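A minimal sketch of that per-test virtualenv approach (the helper name and layout are hypothetical, not taken from the PR):

```python
import sys
import venv
from pathlib import Path


def make_venv(venv_dir: Path) -> list:
    """Create a fresh virtualenv and return the command prefix for its pip."""
    venv.EnvBuilder(with_pip=True).create(venv_dir)
    bin_dir = "Scripts" if sys.platform == "win32" else "bin"
    return [str(venv_dir / bin_dir / "python"), "-m", "pip"]


# Usage inside a test (not executed here): first install the in-development
# setuptools into the venv, then install the sdist with --no-build-isolation,
# so that exact setuptools version is the one exercised by the build:
#
#   pip_cmd = make_venv(tmp_path / "venv")
#   run([*pip_cmd, "install", "-Ie", str(SETUPTOOLS_ROOT)])
#   run([*pip_cmd, "install", *SDIST_OPTIONS, sdist])
```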
This is looking quite good. Nice work. I made a few comments, none critical. Have a look, make any changes you'd like to make now, and we can get it merged and iterate.
Force-pushed from 4529c2c to 3c55e74 (compare)
Hi @jaraco, thank you very much for the review and suggestions. This is a summary of the changes:

When using virtualenvs I had to change the tests in abravalheri/experiment-setuptools-integration-tests a little bit, mostly:

```diff
- correct_setuptools = os.getenv("PROJECT_ROOT") or SETUPTOOLS_ROOT
+ correct_setuptools = (
+     "git+https://github.com/pypa/setuptools@main#egg=setuptools"
+ )

  run([*venv_pip, "install", "-Ie", correct_setuptools])
  run([*venv_pip, "install", *SDIST_OPTIONS, sdist])
```

Please feel free to revert to any of the preceding commits. I don't have a strong preference regarding using virtualenv vs `pip install -t`.
Force-pushed from 1db94de to 79a251d (compare)
The selection of packages used in the integration test is arbitrary and can be changed. The main criteria used were the time to build and the number of "non-Python" dependencies. The only exception was numpy, due to its significance to the ecosystem.
Instead of disabling pytest plugins, simply don't install them
This is more reliable than simply globbing (and will also work on other operating systems)
Force-pushed from 56747de to d246d8a (compare)

Force-pushed from d246d8a to 6f48cb7 (compare)
Let's try to see if this works: @Mergifyio update

✅ Branch has been successfully updated
Motivation
As shown in #2849, sometimes it is tricky to identify when a change introduces backward-compatibility problems, especially considering that a lot of packages still rely on `distutils`. Integration tests able to identify when such rare error scenarios happen are very useful.
Summary of changes
This PR introduces one specific type of integration test: installing sdists using `pip`.

Although very specific, this class of tests works very well for setuptools, since it exercises the API used by packages that not only include binary extensions, but also customise their compilation process.

The configurations for pytest, tox and GitHub Actions were changed in such a way that integration tests will only run before a release (not on regular merge or push).

The selection of Python packages used for the test was deliberately kept small, to minimise the "cost" of the test (in terms of time and resources). I tried to choose popular packages that would cover most parts of the setuptools API; however, there is always room for improvement, and the list can be changed to better suit the needs of the project/community.
Closes #2862
Evidence that the proposed changes work
Testing tests is always difficult. In order to provide evidence that the proposed tests and tox/pytest/CI configuration changes work, I have created the following repository:

This repository was created by taking the important parts from setuptools/this PR and organising them in a way that GitHub Actions would run as expected, as shown in the CI logs. More information is available in the README.
Pull Request Checklist
- News fragment added in `changelog.d/` (see documentation for details)