
A Test Case can fail even if all Test Steps have passed #912

Closed
brasmusson opened this issue Sep 6, 2015 · 11 comments
Labels
⌛ stale Will soon be closed by stalebot unless there is activity

Comments

@brasmusson
Contributor

The central model of Cucumber 2 and Cucumber-Core is that a feature file is compiled (and then further processed) into a set of Test Cases, each built up from a sequence of Test Steps.

This model is also present in the new Formatter Api:

  before_test_case(test_case)
  after_test_case(test_case, result)
  before_test_step(test_step)
  after_test_step(test_step, result)
  done

However there is one type of "step" that is executed but not modelled at all above: the Around Hooks. This creates some problems:

  • It makes it hard to correctly implement a formatter using the new Formatter Api. Will the ordinary developer of a custom formatter realise the special case that a Test Case can fail even if all Test Steps have passed? It is handled here, deep inside the LegacyApi Adapter.
  • For all other Step Definitions, Before Hooks, After Hooks and After Step Hooks executed, the formatters can pinpoint the source location they come from (see example from spec), but even when an Around Hook fails (which is the only time the formatters can detect at all that an Around Hook has been executed), the formatter knows nothing of its source location.
  • It does not seem likely that "a Test Case can fail even if all Test Steps have passed" is compatible with the ideas for reporting discussed for the future (with Gherkin3 and beyond).

I think that Around Hooks should be modelled as Test Steps, at least to the extent that they can be and are passed to the formatters like all other Test Steps. To keep the new Formatter Api simple, I would suggest that both before_test_step and after_test_step are sent to the Formatters after the Around Hook has executed. Then the model presented to Formatters is still that a Test Case consists of a sequence of Test Steps, even when the reality is that an Around Hook wraps all the other Test Steps of the Test Case (or wraps another Around Hook that has wrapped the other Test Steps).
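The proposed event sequence can be sketched with a toy formatter. The class and the symbols below are illustrative stand-ins, not the real Cucumber-Core API; the point is only that the around hook is reported as one more test step, with both of its events fired back-to-back after it has finished wrapping all the ordinary steps:

```ruby
# Toy formatter that just records which events it receives, in order.
class RecordingFormatter
  attr_reader :events

  def initialize
    @events = []
  end

  def before_test_case(test_case)
    @events << :before_test_case
  end

  def after_test_case(test_case, result)
    @events << :after_test_case
  end

  def before_test_step(test_step)
    @events << :before_test_step
  end

  def after_test_step(test_step, result)
    @events << :after_test_step
  end
end

formatter = RecordingFormatter.new
formatter.before_test_case(:scenario)

# The ordinary test steps, reported as usual.
[:step_one, :step_two].each do |step|
  formatter.before_test_step(step)
  formatter.after_test_step(step, :passed)
end

# Proposed addition: report the around hook last, as if it were one
# more test step, after it has actually finished executing.
formatter.before_test_step(:around_hook)
formatter.after_test_step(:around_hook, :passed)

formatter.after_test_case(:scenario, :passed)
```

A formatter written against this stream never needs a special case: every result it is told about belongs to some test step, including the around hook's.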

There has been some discussion related to this in #909.

@mattwynne
Member

Thanks for raising this in a separate ticket Bjorn.

The way I think about Around hooks, they're almost more a part of the overall runtime mechanism than of individual scenarios. I don't see how we can model them as test steps, because they might even run the scenario more than once.

Imagine this example:

Around do |scenario, block|
  # run scenario once
  block.call

  # change environment
  ENV['ANDROID'] = 'true' # ENV values must be Strings in Ruby

  # run scenario again
  block.call
end

In that example you effectively have three test steps - the bit before the scenario runs for the first time, the bit in-between, and the bit after the scenario runs for the second time. Any of those bits could potentially fail.

I think this kind of thing would be better done using a filter, or even suites, once they're implemented. So maybe Around hooks could just be removed?

Before we do anything drastic I'd like to collect more examples of how they're being used in practice.

Can I ask people to please post their examples up as comments in this ticket?

@brasmusson
Contributor Author

I think the main point is that the formatters need to know about the around hooks. Suppose you are investigating two test runs from last week: one passed and one failed, and you do not understand why. If you have the output of the Json formatter you can see the location (file:line) for:

  • the gherkin steps
  • the matched step definitions
  • the executed before hooks
  • the executed after step hooks
  • the executed after hooks

You may suspect that the problem has something to do with the executed around hooks not being the same in the two runs, but you do not know, and you cannot know.

As far as I can tell the execution of the around hooks happens inside the Runner#test_case method, so I would rephrase the comments in the Around hook above to:

  # run all steps of the scenario once
  block.call

  # change environment
  ENV['ANDROID'] = 'true'

  # run all steps of the scenario again
  block.call

As soon as the Around hooks quack like test steps, the only thing needed to let the formatters know about them is to add two lines to the end of the Runner#around_hook method:

  report.before_test_step hook
  report.after_test_step hook, result
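A minimal sketch of that duck typing, with assumed message names (`name`, `location`) rather than the real Cucumber-Core test-step interface:

```ruby
# An ordinary test step exposes a name and a source location.
TestStep = Struct.new(:name, :location)

# An around hook "quacks" like a test step: it answers the same
# messages, so a formatter needs no special case for it.
AroundHook = Struct.new(:location) do
  def name
    'Around hook'
  end
end

steps = [
  TestStep.new('Given a passing step', 'features/step_definitions/steps.rb:12'),
  AroundHook.new('features/support/hooks.rb:3')
]

# The formatter can now pinpoint source locations uniformly.
report = steps.map { |s| "#{s.name} (#{s.location})" }
```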

@mattwynne
Member

OK, so in that model, the around hook becomes a kind of meta-test-step that contains all of the other test steps. Right?

@brasmusson
Contributor Author

From the perspective of the formatters, the around hooks become a hook type that is executed after the after hooks. So the formatters do not see that part of the code of the around hook is executed before all the other test steps (and only another part is executed after the after hooks). From the perspective of the Cucumber runtime and the Test::Runner, the around hooks will not change, they just need to also support the test step interface.

@sthulb

sthulb commented Sep 7, 2015

@mattwynne asked me to comment here. https://twitter.com/mattwynne/status/640859862243647488

@bbcnews use around to switch drivers for a particular scenario, switching the driver back after completion.
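That pattern might look roughly like this. To keep the sketch self-contained, `Around` and the `Driver` module below are stubs standing in for Cucumber's hook registration and the real driver layer (e.g. Capybara):

```ruby
# Stub for Cucumber's Around hook registration.
HOOKS = []
def Around(&hook)
  HOOKS << hook
end

# Stub driver registry standing in for the real driver layer.
module Driver
  class << self
    attr_accessor :current
  end
  self.current = :default
end

Around do |scenario, block|
  previous = Driver.current
  Driver.current = :android # switch driver for this scenario
  block.call                # run the scenario with the new driver
ensure
  Driver.current = previous # always switch back after completion
end

# Simulate the runtime wrapping a scenario body with the hook.
driver_during_scenario = nil
HOOKS.first.call(:scenario, -> { driver_during_scenario = Driver.current })
```

The `ensure` is what makes "switching the driver back after completion" reliable even when the scenario fails.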

@mattwynne
Member

@brasmusson OK so you're proposing that the around hook before/after test step events would fire immediately after one another (right at the end), just really to artificially record the result of running the hook?

@brasmusson
Contributor Author

@mattwynne The after_test_step would be sent at the correct time with the correct result, but the before_test_step would be sent at the wrong time - in order to let the formatters keep the illusion that a test case consists of a strict sequence of test steps. If the around hook passes, the formatters know that it was executed; if the around hook fails, the formatters will be notified the same way as for any other failed test step. There is one twist when an around hook fails before the block.call: none of the other test steps are executed at all, whereas when other test steps fail, the remaining steps are either skipped (gherkin test steps) or still executed (after hooks).

@svandeven

We use the around hook to decide whether the scenario should run at all (we are using a device cloud and retry only tests that failed on the specific device they failed on)

Around do |scenario, block|
  if should_run_scenario?(scenario)
    debug_scenario('Executing scenario', scenario)
    block.call
  end
end

Edited to add - we would use them to run the scenario more than once, but on the version of cucumber that we still have to use, it doesn't work.
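Stripped of the Cucumber machinery, retrying a scenario from a hook boils down to calling the block again when the first run fails. A standalone sketch with hypothetical names (`run_with_retry` and the flaky lambda are not Cucumber API):

```ruby
# Run the given block, retrying it up to max_attempts times on failure.
# Returns the number of attempts actually used.
def run_with_retry(block, max_attempts: 2)
  attempts = 0
  begin
    attempts += 1
    block.call
  rescue RuntimeError
    retry if attempts < max_attempts
    raise # give up after the last attempt
  end
  attempts
end

# A scenario body that fails on the first run and passes on the second,
# mimicking a flake on a cloud device.
calls = 0
flaky_scenario = -> { calls += 1; raise 'device flake' if calls < 2 }

attempts_used = run_with_retry(flaky_scenario)
```

This is also why modelling the hook as a single wrapping test step is awkward: the wrapped steps may run twice, as @mattwynne points out above.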

@stale

stale bot commented Nov 8, 2017

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs.

@stale stale bot added the ⌛ stale Will soon be closed by stalebot unless there is activity label Nov 8, 2017
@stale

stale bot commented Nov 15, 2017

This issue has been automatically closed because of inactivity. You can support the Cucumber core team on opencollective.

@stale stale bot closed this as completed Nov 15, 2017
@lock

lock bot commented Nov 15, 2018

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@lock lock bot locked as resolved and limited conversation to collaborators Nov 15, 2018