ci: optional tests fail in CI #505

Open · wants to merge 1 commit into main
1 change: 1 addition & 0 deletions .github/workflows/pull-request.yaml
@@ -16,6 +16,7 @@ env:
   RUSTC_WRAPPER: "sccache"
   # A constant location for the uv cache
   UV_CACHE_DIR: /tmp/.uv-cache
+  CI: "true"


 jobs:
25 changes: 18 additions & 7 deletions tests/integration/conftest.py
@@ -4,8 +4,11 @@
 from pathlib import Path
 import pytest
 import subprocess
+import os


+IN_CI = len(os.getenv("CI", "")) > 0

 @pytest.fixture(scope="session")
 def export_test_cases_dir(request):
     r = request.config.getoption("--export-test-cases")
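
A side note on the IN_CI check (an observation, not part of this PR): it only tests whether the variable is non-empty, so an unset or empty CI reads as a local run, while any non-empty value, including CI=false, counts as CI. A quick illustration of that behaviour:

    def in_ci(environ: dict) -> bool:
        # Mirrors the IN_CI check above: a non-empty "CI" value means we are in CI.
        return len(environ.get("CI", "")) > 0

    assert not in_ci({})             # CI unset: treated as a local run
    assert not in_ci({"CI": ""})     # empty value: treated as a local run
    assert in_ci({"CI": "true"})     # the value pinned by the workflow change
    assert in_ci({"CI": "false"})    # any non-empty value still counts as CI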
@@ -32,9 +35,12 @@ def validate(request, export_test_cases_dir: Path):
     # Check if the validator is installed
     validator = get_validator()
     if validator is None:
-        pytest.fail("Run `cargo build -p release` to install the validator")
-    else:
-        pytest.skip("Skipping validation tests as requested")
+        if IN_CI:
+            pytest.fail("Validator not installed")
+        else:
+            pytest.skip(
+                "Skipping validation: Run `cargo build` to install the validator"
+            )

     def validate_json(hugr: str):
         # Executes `cargo run -p validator -- validate -`

Contributor (review comment):

You can also just run `pytest --no-validation`, following #472. So this is a different take on the same idea. If we go with this, we might want to revert #472? Or we might instead want to do something like that but for execution tests too?
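
For reference, a minimal sketch of how a `--no-validation` flag like the one mentioned above is typically wired up in a conftest.py; the option name and the `skip_validation` fixture are assumptions here, and #472 may register it differently:

    # Hypothetical sketch, not part of this PR.
    import pytest

    def pytest_addoption(parser):
        parser.addoption(
            "--no-validation",
            action="store_true",
            default=False,
            help="Skip hugr validation in integration tests",
        )

    @pytest.fixture(scope="session")
    def skip_validation(request) -> bool:
        # True when the user opted out of validation on the command line.
        return request.config.getoption("--no-validation")

Combined with IN_CI, the validate fixture could then fail in CI and skip locally only when this flag is passed.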
@@ -74,15 +80,20 @@ def f(hugr: Package, expected: int, fn_name: str = "main"):
             import execute_llvm

             if not hasattr(execute_llvm, "run_int_function"):
-                pytest.skip("Skipping llvm execution")
+                if IN_CI:
+                    pytest.fail("run_int_function not available in CI")
+                else:
+                    pytest.skip("Skipping llvm execution")

             hugr_json: str = hugr.modules[0].to_json()
             res = execute_llvm.run_int_function(hugr_json, fn_name)
             if res != expected:
                 raise LLVMException(
                     f"Expected value ({expected}) doesn't match actual value ({res})"
                 )
-        except ImportError:
-            pytest.skip("Skipping llvm execution")
+        except ImportError as e:
+            if IN_CI:
+                pytest.fail(f"run_int_fn failed in CI: {e}")
+            else:
+                pytest.skip("Skipping llvm execution")
     return f

Collaborator (review comment on the new pytest.fail message):

This should be something like "execute_llvm not installed in CI".
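
If that suggestion is adopted, the except branch might read roughly as follows (a sketch of the suggested wording only; the final message is the author's call):

    except ImportError as e:
        if IN_CI:
            pytest.fail(f"execute_llvm not installed in CI: {e}")
        else:
            pytest.skip("Skipping llvm execution")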