
Add test runner #1013

Merged
merged 9 commits into GoogleContainerTools:master from test-runner
Oct 3, 2018
Conversation

@nkubala (Contributor) commented Sep 21, 2018

This PR adds a test phase to the runner. If any tests are specified,
they will always be executed after the build phase but before the deploy
phase, and if any tests fail the deploy phase will be skipped. In the
dev loop, only newly built images will be tested; any artifacts that
were not modified by new changes will be skipped.

A 'test' section is added to the config, where users can add all of
their test configuration. Each test is specified at the artifact level,
matched up by image names specified in the build artifacts.

The test executor is designed to be pluggable, similar to the builders
and deployers. The main exception is that any number of testers can be
executed in a single run; a failure in any of them counts as a failure
of the entire test run. This PR implements running structure tests,
but adds the infrastructure to easily add more test types in the future.

Fixes #984 and #992
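
To make the shape of the new section concrete, here is a minimal sketch of the schema types it implies. The package, field names, and yaml tags below are illustrative assumptions, not necessarily the exact ones added in this PR.

package schema

// TestCase ties a set of tests to a single build artifact, matched by image
// name as described above. Names here are assumptions for illustration only.
type TestCase struct {
	ImageName      string   `yaml:"imageName"`      // must match a build artifact's image name
	StructureTests []string `yaml:"structureTests"` // paths to container-structure-test config files
}

// Test is the new top-level 'test' section: one entry per artifact to test.
type Test []TestCase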

@codecov-io commented Sep 21, 2018

Codecov Report

Merging #1013 into master will decrease coverage by 0.05%.
The diff coverage is 42.42%.


@@            Coverage Diff             @@
##           master    #1013      +/-   ##
==========================================
- Coverage   42.63%   42.57%   -0.06%     
==========================================
  Files          71       71              
  Lines        3209     3239      +30     
==========================================
+ Hits         1368     1379      +11     
- Misses       1711     1727      +16     
- Partials      130      133       +3
Impacted Files Coverage Δ
pkg/skaffold/schema/profiles.go 92.53% <100%> (+0.11%) ⬆️
pkg/skaffold/runner/timings.go 14.63% <30%> (-0.52%) ⬇️
pkg/skaffold/runner/runner.go 53.21% <45.45%> (-1.11%) ⬇️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@@ -3,6 +3,10 @@ kind: Config
build:
artifacts:
- imageName: gcr.io/k8s-skaffold/skaffold-example
test:
Contributor:

I can see pros and cons to listing the tests here. My first instinct would have been to put them next to the artifacts. I think that makes the common case easier to write and avoids duplicating the image names. However, if one uses profiles, it's more complicated to write.

Contributor Author:

I originally had them specified at the artifact level like you said, but changed it because I liked having an explicit build/test/deploy structure in the config. I agree, though, that even though the image name is required for the tests, doing it this way makes it less clear that the tests are artifact-specific, so it's a trade-off.

Contributor:

Hmm... we could also think about it as a spectrum:

  • artifact level: container-level tests and unit tests. I think of these as "user-specified compilation verification" - quick, small, and focused
  • build / test / deploy makes sense too - but that would probably point at multi-container integration tests using Docker only?
  • integration tests / smoke tests could live in a verification phase after deploy

Contributor Author:

Hmm, this is an interesting way to think about it. I'm definitely of the mind that container-level tests and application-level integration tests should be separate, so in that sense I like the idea of a "verification" phase after the deploy. The "test" phase should probably only operate at an artifact/single-container level.

By putting the test phase in between build and deploy, I wanted to make it more explicit that builds will always happen, but tests will only run if the builds pass, and similarly deploys will only happen if tests pass. Verification would similarly only happen if the deploys pass.

@jlandure:

Hi there 👋

Just for your information, here is my "user" goal: I want to be able to test a newly built image.
To do that at the artifact level, I think the main idea is to not deploy a bad container and to get feedback on what I'm doing... so thank you for enabling "Container Structure Test"

At this point, I don't have a preference between putting the test section inside the build section or outside it. If it avoids repeating the image name, that's cool 👍

Other important features:

  • I don't think we want to run the test phase for an image that didn't change (to improve my Developer eXperience).
  • I think we want to be able to use wildcards (like I do in the deploy.kubectl.manifests section).

I'm happy to help!

Contributor Author:

Hey @jlandure, thanks for your feedback! This is exactly what I'm looking for :)

I agree, repeating the image name isn't great from a user standpoint, so I'll play around with specifying the tests at the artifact level (as opposed to through a separate test section) and see if it looks better.

Totally agree about not testing an image that didn't change; that's covered by this PR. Only images that get rebuilt will trigger their specific tests.

Wildcards are definitely doable, I'll look into adding that.

// Watch test configuration
if err := watcher.Register(
	func() ([]string, error) { return r.TestDependencies(), nil },
	func() { changed.needsRedeploy = true },
Contributor:

Will this trigger a retest?

Contributor Author:

yep, I added a call to r.Test here
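
For readers following along, here is a rough, self-contained sketch of the dev-loop ordering being discussed: rebuild only dirty artifacts, retest only those rebuilt images, then redeploy. The type and function names below are stand-ins for illustration, not the actual runner code.

package main

import (
	"fmt"
	"io"
	"os"
)

type artifact struct{ image string }

type changes struct {
	dirtyArtifacts []artifact
	needsRedeploy  bool
}

// runner holds stand-in build/test/deploy steps; the real SkaffoldRunner
// wires its Builder, Tester, and Deployer here instead.
type runner struct {
	build  func(out io.Writer, dirty []artifact) ([]artifact, error)
	test   func(out io.Writer, built []artifact) error
	deploy func(out io.Writer, built []artifact) error
}

// onChange sketches the ordering discussed in this thread: rebuild only what
// changed, retest only those freshly built images, and redeploy only if the
// earlier phases succeeded.
func (r *runner) onChange(out io.Writer, changed changes) error {
	var built []artifact
	if len(changed.dirtyArtifacts) > 0 {
		var err error
		if built, err = r.build(out, changed.dirtyArtifacts); err != nil {
			return fmt.Errorf("build: %w", err)
		}
		if err := r.test(out, built); err != nil {
			// A failed test skips the redeploy entirely.
			return fmt.Errorf("test: %w", err)
		}
	}
	if changed.needsRedeploy {
		if err := r.deploy(out, built); err != nil {
			return fmt.Errorf("deploy: %w", err)
		}
	}
	return nil
}

func main() {
	r := &runner{
		build:  func(out io.Writer, dirty []artifact) ([]artifact, error) { return dirty, nil },
		test:   func(out io.Writer, built []artifact) error { return nil },
		deploy: func(out io.Writer, built []artifact) error { return nil },
	}
	changed := changes{dirtyArtifacts: []artifact{{image: "gcr.io/k8s-skaffold/skaffold-example"}}, needsRedeploy: true}
	if err := r.onChange(os.Stdout, changed); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}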

@@ -210,7 +232,7 @@ func TestNewForConfig(t *testing.T) {

testutil.CheckError(t, test.shouldErr, err)
if cfg != nil {
b, d := WithTimings(test.expectedBuilder, test.expectedDeployer)
b, _, d := WithTimings(test.expectedBuilder, test.expectedTester, test.expectedDeployer)
Contributor:

Should we check the tester too?

Contributor Author:

Ah yeah, I did this originally just to get it to compile while I was writing other tests and forgot to fix it. I'll update.

@@ -52,6 +52,10 @@ RUN echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8
RUN apt-get update \
&& apt-get install -y bazel

RUN curl -LO https://storage.googleapis.com/container-structure-test/latest/container-structure-test-linux-amd64 \
Contributor:

pin this to a version

Contributor Author:

done

color.Default.Fprintln(out, "Starting test...")

err := w.Tester.Test(out, builds)
if err == nil {
Contributor:

Reverse this logic

Contributor:

if err != nil {
	return errors.Wrap(...)
}

Fprintln
return nil

Contributor Author:

I copied this logic from the other timing methods in this file so this would match. Do you want me to change them all?
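
For reference, a minimal, self-contained illustration of the reversed control flow being suggested, using only the standard library; the real timings.go wraps skaffold's own Tester and uses its color and errors helpers, so the names here are stand-ins.

package main

import (
	"fmt"
	"io"
	"os"
	"time"
)

type tester interface {
	Test(out io.Writer, builds []string) error
}

type withTimings struct{ tester }

func (w withTimings) Test(out io.Writer, builds []string) error {
	start := time.Now()
	fmt.Fprintln(out, "Starting test...")

	// Reversed logic: handle the error first and return early,
	// so the success path reads straight down.
	if err := w.tester.Test(out, builds); err != nil {
		return fmt.Errorf("running tests: %w", err)
	}

	fmt.Fprintln(out, "Test complete in", time.Since(start))
	return nil
}

type noopTester struct{}

func (noopTester) Test(io.Writer, []string) error { return nil }

func main() {
	w := withTimings{noopTester{}}
	_ = w.Test(os.Stdout, []string{"gcr.io/k8s-skaffold/skaffold-example"})
}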

args = append(args, tr.testFiles...)
cmd := exec.Command("container-structure-test", args...)

_, err := util.RunCmdOut(cmd)
Contributor:

If you're ignoring output, just use util.RunCmd. I think we actually want to show the user the broken test output.

Contributor Author:

I should have put a comment here. RunCmdOut automatically logs the output if the command fails, which is why I'm ignoring the output here: it's already been logged, so we don't need to log it again. So if the tests fail, the entire output gets logged, but if they pass it doesn't. I considered changing RunCmdOut to not log by default, so it's on the caller to deal with the output. WDYT?
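
One possible middle ground, sketched below with the standard library only: stream the tool's output straight to the user's writer, so both passing and failing runs are visible without any extra logging in the helper. The container-structure-test flags shown are an assumption and may differ between versions; this is not the code in this PR.

package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

// runStructureTests pipes container-structure-test's output directly to the
// user's writer, so failing test output is shown without relying on a helper
// that logs only on error.
func runStructureTests(out io.Writer, image string, testFiles []string) error {
	args := []string{"test", "--image", image}
	for _, f := range testFiles {
		args = append(args, "--config", f)
	}
	cmd := exec.Command("container-structure-test", args...)
	cmd.Stdout = out
	cmd.Stderr = out
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("running structure tests: %w", err)
	}
	return nil
}

func main() {
	// Placeholder image and test file, for illustration only.
	if err := runStructureTests(os.Stdout, "gcr.io/k8s-skaffold/skaffold-example", []string{"structure-test.yaml"}); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}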

@dgageot (Contributor) commented Oct 2, 2018

@nkubala Sorry, this needs to be rebased.

@nkubala force-pushed the test-runner branch 5 times, most recently from 785a4a4 to 571afde on October 2, 2018 at 19:33
@dgageot dgageot merged commit 95df153 into GoogleContainerTools:master Oct 3, 2018
@nkubala nkubala deleted the test-runner branch June 12, 2019 21:40