
Annotation error while running via worker #5

Closed
osteenbergen opened this issue Jan 13, 2015 · 16 comments

@osteenbergen
Contributor

Logfile

A, [2015-01-13T12:04:34.507688 #13548]   ANY -- : Logging to #<IO:0x000000011f5c10> initialized with level .
I, [2015-01-13T12:04:34.507831 #13548]  INFO -- : [#<IO:0x000000011f5c10>] "Starting Scenario: features_annotation.feature-annotation_01_-_scenario_outline-foo_bar"

      Scenario: | foo | bar |                        # features/annotation.feature:15
        Given I annotate a step with foo                                        # features/step_definitions/interaction_steps.rb:143
      undefined method `[]' for nil:NilClass (NoMethodError)
      /mnt/data/Code/lapis-lazuli/lib/lapis_lazuli/world/annotate.rb:37:in `annotate'
      ./features/step_definitions/interaction_steps.rb:144:in `/^I annotate a step with (.*?)$/'
      features/annotation.feature:9:in `Given I annotate a step with <data1>'
        And I annotate a step with bar                                          # features/step_definitions/interaction_steps.rb:143
        Then the report should include foo and bar in the correct place         # features/step_definitions/interaction_steps.rb:147
D, [2015-01-13T12:04:38.528534 #13548] DEBUG -- : [#<IO:0x000000011f5c10>] "Screenshot saved: ./screenshots/150113_120438__foo_bar_.png"

JSON embeddings for this scenario outline:

[
  {
    "mime_type": "application/json",
    "data": "eyJmZWF0dXJlc19hbm5vdGF0aW9uLmZlYXR1cmVfMTVfaW5fZm9vX2JhciI6W3sic2NyZWVuc2hvdCI6Ii4vc2NyZWVuc2hvdHMvMTUwMTEzXzEyMDQzOF9fZm9vX2Jhcl8ucG5nIn1dfQ=="
  },
  {
    "mime_type": "application/json",
    "data": "eyJmZWF0dXJlc19hbm5vdGF0aW9uLmZlYXR1cmVfMTZfaW5fZm9vIjpbImZvbyJdfQ=="
  }
]
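
These decode with plain Ruby; for example, assuming the array above is saved to a (hypothetical) embeddings.json:

require 'base64'
require 'json'

embeddings = JSON.parse(File.read('embeddings.json'))
embeddings.each do |emb|
  puts Base64.decode64(emb['data'])
end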

Decoded

1: {"features_annotation.feature_15_in_foo_bar":
  [{"screenshot":"./screenshots/150113_120438__foo_bar_.png"}]
}

2: {"features_annotation.feature_16_in_foo":["foo"]}

So it also seems that the annotation for line 16 gets embedded together with the one for line 15.

@jfinkhaeuser
Contributor

That's useful to understand. The scope for each annotation is a problem, then. I'll look into it.

@jfinkhaeuser jfinkhaeuser self-assigned this Jan 13, 2015
@jfinkhaeuser
Contributor

So, a few observations.

  • Annotate uses embed, so stuff will be embedded where embed puts it.
  • The name ("features_annotation.feature_15_in_foo_bar") is generated per scenario. In the example case, the scenario is a scenario outline, so the actual line reference (15) is the example line in the feature file.

The way the name is generated depends on cucumber; cucumber re-creates World for each scenario, which provides us with a pre-scenario hook. A pre-line hook would require writing a custom formatter, which is more fragile.
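
For illustration only (this is not lapis-lazuli's actual code, and it assumes a cucumber version where the scenario object passed to Before responds to location and name), a per-scenario key along these lines could be built in such a hook:

# hypothetical sketch: derive a per-scenario annotation key once per scenario
Before do |scenario|
  loc  = scenario.location                          # e.g. features/annotation.feature:15
  name = scenario.name.to_s.downcase.gsub(/\W+/, '_')
  @annotation_key = "#{loc.file}_#{loc.line}_in_#{name}".tr('/', '_')
  @annotations    = []                              # fresh store per scenario
end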

Per scenario, annotations are an array. That allows for plenty of annotate() calls without overwriting data.

So for this:

# the feature file:
Scenario: something # say it's line 10
  Given I foo
  Then I bar

# the step definitions:
Then(/^I foo$/) do
  annotate("foo")
end

Then(/^I bar$/) do
  annotate("bar")
end

The expectation would be that for the key referencing line 10, there is one annotation, and it's an array that is ["foo", "bar"].

The test case actually shows the value to be [["foo"], ["bar"]], which gives a lead on one of the things that are going wrong here.

@jfinkhaeuser
Contributor

Reconsidering, this is actually as expected. You can call

annotate "foo", "bar"
annotate "baz"

In which case the contents would be [["foo", "bar"], ["baz"]].

In other words, it's a record of the argument lists passed to the annotate function, which is the most useful form for any later processing.
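
As a rough sketch of that recording behaviour (hypothetical code, not the real implementation; per the first observation above, the collected data ultimately goes through cucumber's embed):

# each call appends its full argument list, so
#   annotate "foo", "bar"
#   annotate "baz"
# leaves [["foo", "bar"], ["baz"]] for the current scenario
def annotate(*args)
  @annotations ||= []
  @annotations << args
  # ...serialized and handed to embed() at some later point
end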

@jfinkhaeuser
Contributor

FWIW:

# This works:
bundle exec cucumber features/annotation.feature:15

# This contains the error above
bundle exec cucumber --format json features/annotation.feature:15

@jfinkhaeuser
Contributor

Alright. Long story short, the JSON formatter bundled with cucumber can't accept embeds in the first step of a scenario.

I'll reduce the test case and file an issue upstream. Then we can also consider a workaround for our case.

@jfinkhaeuser
Contributor

It's a little more complex. A test scenario for pure cucumber works.

The JSON formatter has two hooks:

  1. One for creating a step context
  2. One for embedding data into the step context

Turns out that in the simple cucumber test case, the above is the order in which they're called. For our LL test case, it's the other way around.

@jfinkhaeuser
Contributor

It is reproducible with stock cucumber and the -x parameter.
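
Roughly, such a reproduction looks like this (illustrative file and step names, not the exact upstream test case):

# features/embed_repro.feature
Feature: embed repro
  Scenario Outline: embed in the first step
    Given I embed <data>

    Examples:
      | data |
      | foo  |

# features/step_definitions/embed_steps.rb
Given(/^I embed (.*)$/) do |data|
  embed(data, 'application/json')   # cucumber's built-in embed
end

# works without -x, breaks with -x plus the JSON formatter:
bundle exec cucumber -x --format json features/embed_repro.feature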

@osteenbergen
Contributor Author

Do you have a link to the upstream bug or are you still creating testcases?

@jfinkhaeuser
Contributor

Not yet. I called it a day shortly after figuring out where the problems lie.

@jfinkhaeuser
Contributor

cucumber/common#804

@jfinkhaeuser
Contributor

TBH I don't think they're going to fix this. The JSON output - even when you don't use embed - is vastly different with or without -x; in fact, specifying -x seems to provide less data.

Not specifying -x works perfectly, but the embed information is associated with the scenario outline rather than any example line, much as you report.

Basically, we have some options (if you have more, let me know):

  • Patch cucumber and hope the patch gets accepted
  • Force not using -x and figure out how to reference the actual example line in annotate as a key (tricky).
  • Find some other way outside of cucumber to store annotations, but they still need to reference the actual example line somehow.
  • Don't use the JSON formatter.

None of them really appeals.

@jfinkhaeuser
Contributor

There's some good news.

We don't actually need annotations, since screenshots are embedded in the JSON anyway.

My best proposal currently is:

  1. Drop generic annotation support.
  2. Add a post-processing step (sketched below) that:
    • Extracts the screenshots from the JSON.
    • Uploads them as assets, referencing the exact step where they occurred.
    • Returns only the JSON, without the screenshot data.

We could upload the entire JSON with screenshots, and let the reporting API sort it out, but real-time-ish reporting would be easier without that data. The downside is that the post-processing has to be in the same task, or use only local storage and be pinned to the same worker.
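
A minimal sketch of that post-processing, assuming the usual cucumber JSON layout (features → elements → steps → embeddings) and a hypothetical upload_asset helper standing in for the reporting API:

require 'base64'
require 'json'

# strip image embeddings from a cucumber JSON report, replacing each one
# with a small text embedding that carries the uploaded asset's URL
def strip_screenshots(report_path)
  report = JSON.parse(File.read(report_path))
  report.each do |feature|
    (feature['elements'] || []).each do |scenario|
      (scenario['steps'] || []).each do |step|
        (step['embeddings'] || []).map! do |emb|
          next emb unless emb['mime_type'].to_s.start_with?('image/')
          url = upload_asset(Base64.decode64(emb['data']))   # hypothetical
          { 'mime_type' => 'text/plain', 'data' => Base64.strict_encode64(url) }
        end
      end
    end
  end
  report
end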

@osteenbergen
Contributor Author

Screenshots can be embedded, but this increases the JSON output by a few MB. Post-processing should remove the actual image data and replace it with an asset URL; that is effectively a generic annotation again, just handled outside cucumber.

I'd like to keep support for vanilla cucumber screenshot embedding so that it's easier to handle repositories that don't use LL.

@jfinkhaeuser
Contributor

The cucumber fix should be released in 1.3.20: https://github.com/cucumber/cucumber/blob/v1.3.x-bugfix/History.md

@jfinkhaeuser
Contributor

I can confirm that embedding screenshots into JSON is not treated specially, i.e. this scenario also fails for screenshots.

IMHO the most sensible solution is to require cucumber >= 1.3.20 with LL, once that has been released.
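
In the gemspec that would amount to something like this (illustrative; the actual constraint may end up looser):

spec.add_dependency 'cucumber', '>= 1.3.20'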

jfinkhaeuser added a commit that referenced this issue Mar 27, 2015
cucumber-version independent choices possible
jfinkhaeuser added a commit that referenced this issue Mar 27, 2015
@jfinkhaeuser
Contributor

Should work with cucumber 2.x, or with 1.3.x once the fixed 1.3.20 release is out.
