Add test runner #1013
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master    #1013      +/-   ##
==========================================
- Coverage   42.63%   42.57%    -0.06%
==========================================
  Files          71       71
  Lines        3209     3239       +30
==========================================
+ Hits         1368     1379       +11
- Misses       1711     1727       +16
- Partials      130      133        +3
Continue to review full report at Codecov.
@@ -3,6 +3,10 @@ kind: Config
 build:
   artifacts:
   - imageName: gcr.io/k8s-skaffold/skaffold-example
+test:
I can see pros and cons to listing the tests here. My first intent would have been to put them next to the artifacts. I think it makes the common case easier to write and avoids duplicating the image names. However, if one uses profiles, it's more complicated to write.
I originally had them specified at the artifact level like you said, but changed it because I liked having an explicit "build/test/deploy" structure in the config. I agree, though, that even though the image name is required for the tests, doing it this way makes it less clear that the tests are artifact-specific, so it's a trade-off.
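For illustration, the two layouts under discussion might look something like this (field names other than imageName are made up for the sketch, not taken from the PR):

```yaml
# Option A: tests nested under the artifact, so the image name isn't repeated
build:
  artifacts:
  - imageName: gcr.io/k8s-skaffold/skaffold-example
    structureTests:
    - ./test/structure-test.yaml

# Option B: an explicit top-level test phase, mirroring build/test/deploy
build:
  artifacts:
  - imageName: gcr.io/k8s-skaffold/skaffold-example
test:
- imageName: gcr.io/k8s-skaffold/skaffold-example
  structureTests:
  - ./test/structure-test.yaml
```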
Hmm... we could also think about it as a spectrum:
- artifact level: container-level tests and unit tests. I think of these as "user-specified compilation verification", so quick, small, and focused
- build/test/deploy makes sense too, but that would probably point at multi-container integration tests using Docker only
- integration tests/smoke tests could go in a verification phase after deploy
Hmm, this is an interesting way to think about it. I'm definitely of the mind that container-level tests and application-level integration tests should be separate, so in that sense I like the idea of a "verification" phase after the deploy. The "test" phase should probably only operate at an artifact/single-container level.
By putting the test phase in between build and deploy, I wanted to make it more explicit that builds will always happen, but tests will only run if the builds pass, and deploys will only happen if the tests pass. Verification, in turn, would only happen if the deploys pass.
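The ordering described here is plain error gating; a minimal, self-contained sketch of the idea (not skaffold's actual code):

```go
package main

import (
	"errors"
	"fmt"
)

// run models the ordering above: build always runs, tests run only if
// the build passed, and deploy runs only if the tests passed.
func run(build, test, deploy func() error) error {
	if err := build(); err != nil {
		return fmt.Errorf("build failed: %v", err)
	}
	if err := test(); err != nil {
		return fmt.Errorf("tests failed, skipping deploy: %v", err)
	}
	return deploy()
}

func main() {
	err := run(
		func() error { return nil },                     // build passes
		func() error { return errors.New("bad layer") }, // a structure test fails
		func() error { fmt.Println("deploying"); return nil },
	)
	fmt.Println(err) // tests failed, skipping deploy: bad layer
}
```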
Hi there 👋
Just for your information, here is my "user" goal: I want to be able to test a newly built image.
To do that at the artifact level, I think the main idea is to not deploy a bad container and to get feedback on what I'm doing... So thank you for enabling "Container Structure Test"!
At this point, I don't have any preference between putting the test section inside the build section or outside. If it avoids repeating the image name, that's cool 👍
Two other important features:
- I don't think we want to run the test phase for an image that didn't change (to improve my Developer eXperience)
- I think we want to use wildcards (like I do in the deploy.kubectl.manifests section)
I'm happy to help!
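For example, wildcards for test files could mirror the globbing already used for manifests (the test fields below are hypothetical):

```yaml
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml
test:
- imageName: gcr.io/k8s-skaffold/skaffold-example
  structureTests:
  - ./test/*.yaml
```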
Hey @jlandure, thanks for your feedback! This is exactly what I'm looking for :)
I agree, repeating the image name isn't great from a user standpoint, so I'll play around with specifying the tests at the artifact level (as opposed to through a separate test section) and see if it looks better.
Totally agree about not testing an image that didn't change, that should be covered by this PR. Only images that get rebuilt will trigger their specific tests.
Wildcards are definitely doable, I'll look into adding that.
pkg/skaffold/runner/runner.go
Outdated
// Watch test configuration
if err := watcher.Register(
	func() ([]string, error) { return r.TestDependencies(), nil },
	func() { changed.needsRedeploy = true },
Will this trigger a retest?
yep, I added a call to r.Test here
pkg/skaffold/runner/runner_test.go
Outdated
@@ -210,7 +232,7 @@ func TestNewForConfig(t *testing.T) {

 		testutil.CheckError(t, test.shouldErr, err)
 		if cfg != nil {
-			b, d := WithTimings(test.expectedBuilder, test.expectedDeployer)
+			b, _, d := WithTimings(test.expectedBuilder, test.expectedTester, test.expectedDeployer)
Should we check the tester too?
ah yeah, I did this originally just to get it to compile while I was writing other tests and forgot to fix this. I'll update
deploy/skaffold/Dockerfile
Outdated
@@ -52,6 +52,10 @@ RUN echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8
 RUN apt-get update \
     && apt-get install -y bazel

+RUN curl -LO https://storage.googleapis.com/container-structure-test/latest/container-structure-test-linux-amd64 \
pin this to a version
done
color.Default.Fprintln(out, "Starting test...")

err := w.Tester.Test(out, builds)
if err == nil {
Reverse this logic
if err != nil {
	return errors.Wrap(...)
}
Fprintln
return nil
I copied this logic from the other timing methods in this file so this would match. Do you want me to change them all?
args = append(args, tr.testFiles...)
cmd := exec.Command("container-structure-test", args...)

_, err := util.RunCmdOut(cmd)
If you're ignoring output, just use util.RunCmd. I think we actually want to show the user the broken test output.
I should have put a comment here. RunCmdOut automatically logs the output if the command fails, which is why I'm ignoring the output here (it's already been logged, so we don't need to log it again). So if tests fail, the entire output gets logged, but if they pass it doesn't. I considered changing RunCmdOut to not log by default so it's on the caller to deal with the output, WDYT?
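If RunCmdOut stopped logging by default, the caller would surface the output itself. A self-contained sketch of that pattern, using os/exec directly rather than skaffold's util package (the container-structure-test flags shown are illustrative, not verified against the version in this PR):

```go
package main

import (
	"fmt"
	"os/exec"
)

// runQuiet swallows the command's output on success and surfaces it only
// on failure, matching the behavior discussed above.
func runQuiet(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s failed: %v\noutput:\n%s", name, err, out)
	}
	return nil
}

func main() {
	if err := runQuiet("container-structure-test",
		"test",
		"--image", "gcr.io/k8s-skaffold/skaffold-example",
		"--config", "structure-test.yaml",
	); err != nil {
		fmt.Println(err)
	}
}
```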
force-pushed from 4e5c34b to 4e578ac
@nkubala Sorry, this needs to be rebased
force-pushed from 785a4a4 to 571afde
This PR adds a test phase to the runner. If any tests are specified,
they will always be executed after the build phase but before the deploy
phase, and if any tests fail the deploy phase will be skipped. In the
dev loop, only newly built images will be tested; any artifacts that
were not modified by new changes will be skipped.
A 'test' section is added to the config, where users can add all of
their test configuration. Each test is specified at the artifact level,
matched up by image names specified in the build artifacts.
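For illustration, matching tests to artifacts by image name might look like this (field names are a sketch, not the PR's final schema):

```yaml
build:
  artifacts:
  - imageName: gcr.io/k8s-skaffold/app
  - imageName: gcr.io/k8s-skaffold/worker
test:
- imageName: gcr.io/k8s-skaffold/app   # matched to the first artifact above
  structureTests:
  - ./test/app/*.yaml
```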
The test executor is designed to be pluggable, similar to the builders
and deployers. The main exception is that any number of testers can be
executed in a single run; a failure in any of them counts as a failure
of the entire test run. This PR implements running structure tests,
but adds the infrastructure to easily add more test types in the future.
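To make the pluggable design concrete, here is a rough sketch of the tester surface implied by the snippets in the review above (the package path and parameter types are assumptions, not taken from the PR):

```go
package test

import (
	"io"

	"github.com/GoogleContainerTools/skaffold/pkg/skaffold/build"
)

// Tester sketches the pluggable test-runner surface: implementations
// (structure tests today, other test types later) run in sequence, and a
// failure in any of them fails the entire test run.
type Tester interface {
	// Test runs every configured test against the freshly built artifacts.
	Test(out io.Writer, builds []build.Artifact) error

	// TestDependencies returns the files the test configuration depends on,
	// so `skaffold dev` can watch them and re-run tests on change.
	TestDependencies() []string
}
```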
Fixes #984 and #992