jobs could depend on the prior upload of specific published files #46
Comments
This would be very handy for stuff like oxidecomputer/propolis#609 (comment).
@jclulow I'd like to take a crack at implementing this, as it would be useful for work I'm doing on Propolis' test framework. If you have the time, it would be great to get some pointers on how I might start going about that?
hawkw added a commit to oxidecomputer/propolis that referenced this issue on Jan 21, 2024:
In order to ensure that changes to `propolis` don't break instance migration from previous versions, we would like to add automated testing of migrating an instance from the current `master` branch of `propolis` to run on PR branches. This PR adds an implementation of such a test to `phd`.

To implement this, I've built on top of my change from PR #604 and modified the `phd` artifact store to introduce a notion of a "base" Propolis server artifact. This artifact can then be used to test migration from the "base" Propolis version to the revision under test. I've added a new test case in `migrate.rs` that creates a source VM using the "base" Propolis artifact and attempts to migrate that instance to a target VM running on the "default" Propolis artifact (the revision being tested). In order to add the new test, I've factored out test code from the existing `migrate::smoke_test` test.

How `phd` should acquire a "base" Propolis artifact is configured by several new command-line arguments. `--base-propolis-branch` takes the name of a Git branch on the `propolis` repo; if this argument is provided, PHD will download the Propolis debug artifact from the HEAD commit of that branch from Buildomat. Alternatively, the `--base-propolis-commit` argument accepts a Git commit hash to download from Buildomat. Finally, the `--base-propolis-cmd` argument takes a local path to a binary to use as the "base" Propolis. All these arguments are mutually exclusive, and if none of them are provided, the migration-from-base tests are skipped.

When the "base" Propolis artifact is configured from a Git branch name (i.e. the `--base-propolis-branch` CLI argument is passed), we use the Buildomat `/public/branch/{repo}/{branch-name}` endpoint, which returns the Git hash of the HEAD commit of that branch. Then, we attempt to download an artifact from Buildomat for that commit hash. An issue here is that Buildomat's branch endpoint will return the latest commit hash for that branch as soon as it sees a commit, but the artifact for that commit may not have been published yet, so downloading it will fail.

Ideally, we could resolve this sort of issue by configuring the `phd-run` job for PRs to depend on the `phd-build` job for `master`, so that the branch's test run isn't started until any commits that just merged to `master` have published artifacts. However, this isn't currently possible in Buildomat (see oxidecomputer/buildomat#46). As a temporary workaround, I've added code to the PHD artifact store to retry downloading Buildomat artifacts with exponential backoff, for up to a configurable duration (defaulting to 20 minutes). This allows us to wait for an in-progress build to complete, with a limit on how long we'll wait.

Depends on #604
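As a rough illustration of that workaround, a retry loop with exponential backoff and an overall time budget might look like the sketch below. This is a minimal, self-contained sketch; the function and parameter names are invented here and do not correspond to PHD's actual code.

```rust
use std::thread;
use std::time::{Duration, Instant};

/// Retry `fetch` with exponential backoff until it succeeds or the overall
/// `max_wait` budget is exhausted. `fetch` stands in for a Buildomat
/// artifact download; the names here are illustrative, not PHD's API.
fn download_with_backoff<T, E>(
    mut fetch: impl FnMut() -> Result<T, E>,
    max_wait: Duration,
) -> Result<T, E> {
    let start = Instant::now();
    let mut delay = Duration::from_secs(1);
    loop {
        match fetch() {
            Ok(artifact) => return Ok(artifact),
            Err(e) => {
                // Give up if sleeping again would exceed the overall budget.
                if start.elapsed() + delay > max_wait {
                    return Err(e);
                }
                thread::sleep(delay);
                // Double the delay each attempt, capping individual waits so
                // they stay reasonable over a long (e.g. 20-minute) budget.
                delay = (delay * 2).min(Duration::from_secs(60));
            }
        }
    }
}

fn main() {
    // Example: pretend the artifact becomes available on the third attempt.
    let mut attempts = 0;
    let result = download_with_backoff(
        || {
            attempts += 1;
            if attempts < 3 { Err("not published yet") } else { Ok("artifact bytes") }
        },
        Duration::from_secs(20 * 60),
    );
    println!("{result:?}");
}
```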
Today, a job created through the GitHub integration can depend on other jobs (by name) from the same CI run on the same commit.
If a job on a branch (e.g., for a pull request) needs access to a published file from the latest commit on another branch (e.g., master) for some sort of comparison, one would currently have to poll for those published files. When the files are not available, there is no particularly detailed feedback available as to why: perhaps a job is running that will eventually publish them, or perhaps the job has completed but did not publish the expected files.
It would be good if a job could specify, as a new sort of input dependency, a specific published file; e.g., one published from the latest commit on another branch.
At a minimum, we could hold the job in the waiting state until such a file is published. We may (now or eventually) also be able to determine, with some level of certainty, whether such a published file can still emerge later, though this will require sorting out a few other things around file publishing that are still a bit nascent today (e.g., see #10).
We could also conceivably arrange to download the file automatically, as we do with output artefacts from dependent jobs today.
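To make the idea concrete, the sketch below shows roughly how such a dependency might be declared in a job's frontmatter. The first dependency follows the existing job-name form; the `published_file` table and all of its fields are invented here purely for illustration and are not part of Buildomat today.

```bash
#!/bin/bash
#:
#: name = "phd-run"
#: variety = "basic"
#:
#: # Existing form: depend on another job, by name, from the same check run.
#: [dependencies.build]
#: job = "phd-build"
#:
#: # Hypothetical form (not supported today): depend on a specific file
#: # published from the latest commit on another branch, holding this job
#: # in the waiting state until that file exists.
#: [dependencies.base]
#: published_file = { branch = "master", series = "image", name = "propolis-server.tar.gz" }
```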