Jest: variables in an outer scope, whose values are set in another test become undefined after a test re-runs #939
Wallaby.js was designed from day one to run a single test if only that test was affected by your code changes. This also promotes the good practice of keeping your tests orthogonal/isolated from each other. There are hundreds of articles about why tests should be independent of each other and about the ugly consequences when they are not. There are even testing libraries that deliberately shuffle unit tests randomly to enforce the practice.

Performance considerations aside, the issue with your second, third, and fourth tests is that they are just parts of one test - these tests are actually a single test, artificially divided into separate 'it' functions. If I were you, I would ask myself: why are these multiple tests?

Depending on the answer, I would consider two solutions. The first solution is to merge them into a single test, like:

describe('MyComponent', () => {
it('...', () => {
// a
let T = snap(MyComponent, store)
// b
T.dispatch(action1)
T.snap()
// c
T.dispatch(action1)
T.dispatch(action2)
T.snap()
// d
T.dispatch(action1)
T.dispatch(action2)
T.tree().props.onChange()
T.snap()
})
})

Comments do the same, if not a nicer, job of describing what's going on, and there's also less unnecessary code. One may argue that the test is now not testing one single thing - well, it wasn't doing that when it was split into 4 'it' functions either.

The second solution was mentioned by you: to set up the context for each test in beforeEach:

describe('MyComponent', () => {
let T
beforeEach(() => {
T = snap(MyComponent, store)
})
it('a', () => {
// T is truthy
})
it('b', () => {
T.dispatch(action1)
T.snap()
})
describe('when actions dispatched', () => {
beforeEach(() => {
// maybe also a function extracted from test 'b'
T.dispatch(action1)
T.dispatch(action2)
})
it('c', () => {
T.snap()
})
it('d', () => {
T.tree().props.onChange()
T.snap()
})
})
})
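For readers unfamiliar with the snap(MyComponent, store) helper used in both examples: it comes from the author's jest-redux-renderer library, whose actual API is not shown in this thread. The following is only a rough, plain-JavaScript approximation of the idea (a store paired with serialized render snapshots); every name and behavior here is an assumption for illustration, not the real library.

```javascript
// Rough, illustrative approximation of the thread's snap(Component, store)
// helper: it pairs a pure render function with a store and records
// serialized "snapshots" after dispatches. Not the real jest-redux-renderer.
function snap(render, store) {
  const snapshots = [];
  return {
    dispatch: (action) => store.dispatch(action),
    snap: () => {
      // Serialize the rendered tree for the current state and record it.
      const serialized = JSON.stringify(render(store.getState()));
      snapshots.push(serialized);
      return serialized;
    },
    tree: () => render(store.getState()),
    snapshots,
  };
}

// Tiny hand-rolled store standing in for Redux's createStore.
function makeStore(reducer, initial) {
  let state = initial;
  return {
    getState: () => state,
    dispatch: (action) => { state = reducer(state, action); },
  };
}

// Example reducer + "component" (a pure function of state).
const reducer = (state, action) =>
  action.type === 'INC' ? { count: state.count + 1 } : state;
const MyComponent = (state) => ({ type: 'div', props: { children: state.count } });

const store = makeStore(reducer, { count: 0 });
const T = snap(MyComponent, store);
T.dispatch({ type: 'INC' });
console.log(T.snap()); // serialized tree after one dispatch
```

The point of the sketch is only that T closes over a mutable store, which is exactly why re-creating T (or not re-running the code that created it) changes what later tests observe.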
I'm quite surprised to hear that something I have always thought of as an anti-pattern is becoming a widespread practice. I don't have a lot of React/Jest testing experience outside of wallaby.js-related work, so could you perhaps share some articles/docs/etc. where the practice is described/discussed/encouraged?

Having said all that, as tool developers (even though willing to encourage best practices), we can definitely consider adding the setting you're requesting. But I'd really like to hear back from you on the whole thing, as I sincerely think this way of writing tests may hurt in the longer term: costly maintenance of long, interdependent tests, difficulty migrating between testing frameworks, and so on.
@ArtemGovorov I like your point:
I think you're actually right - this is a non-issue, since both Jest and Wallaby will show you multiple expectation failures within a single test. The only bad thing about it is that it doesn't let you keep neat labels for all these expectations, for browsing in Wallaby or seeing as a checklist in other test runners. I think that's enough to argue in favor of re-running the whole describe block.

Otherwise, here's one quick post from Jest's GitHub issues that highlights using code in the describe block:
That obviously isn't arguing in favor of having code in one test that affects another, but it's similar. Basically, I see no way around this if you want helpful labels for your expectations. Well, there is one: the test runner (Wallaby in this case) labels each expectation with its line of code, or whatever descriptive info you want to provide.

It seems we both understand the problem - you mentioned comments as the solution. I think Wallaby itself stands to lose a lot in that case. You want as much information displayed in your IDE/test panel as possible; it gives the tool more value, and that value is obviously passed on to the developer. Fine-grained results across 300 tests are a lot more informative than the 50 tests they would collapse into if refactored into single tests as above. Not to mention that comments are less maintainable - there's something about green titles next to checkmarks, for real. They are a lot more likely to stay up to date just because of the weight bestowed on them by the green color, the checkmark, and amazing test runners like Wallaby that put them on a pedestal. It's psychological, but true (at least for me). In addition, coverage percentages are more precise.

The Wallaby experience is so great only because the amount of information can be large. If it was just 5 tests, I'm not sure Wallaby would be as useful. So the more tests you have, the more valuable Wallaby gets. So, as always, why not leave it up to the developer to decide how dependent their tests are on each other? At the end of the day, you always know the first broken test in the sequence is the one to fix, and oftentimes fixing it will fix all the following ones. I'm not too concerned about the reduced isolation of the tests if in return you get to think of your React component as truly alive and able to be tweaked and poked at. From my standpoint it's definitely worth the tradeoff. It's really what you want to do here with React.
It may not be the traditional way, but the past wasn't as reactive. Perhaps that's the clincher in favor of this argument: you're now dealing with a reactive component, and the developer and their code should "live" alongside it. Not to get too poetic, but basically you should be able to take the component somewhere. It's the difference between looking at components as photos vs. videos. The reality is that they are moving; just because we don't want to move along with them doesn't stop them from moving. The train has already left the station.

The fact that tests will break in groups is of no consequence. It's the same as if one test broke anyway. So what? Think of it as "the group broke" - except now you have more details on where to look, what to expect, and what the purpose of the tests was in the first place. In short, the same can be said about putting all the expectations in one test, except then you don't get any of the positive aspects of the tradeoff.

Anyway, components are alive, reactive creatures. It's a very natural experience to poke and prod at the same component without having to re-create it. Just as I was saying yesterday that Wallaby is a natural fit as a snapshot browser + snapshot-approval IDE for Jest, keeping your reactive component instances alive across tests is also a natural fit and a progression of the testing game when it comes to reactive components. From time to time the rules of any game evolve and change; Wallaby can be at the forefront of what React testing is evolving into.

On a side note, I'm shocked that Wallaby's name isn't mentioned as much as React, Jest, Redux, and Flow among this crowd. It's literally the missing piece. Not just the perfect match, THE MISSING PIECE. Why? Because React (and especially Redux) does make you do a lot more work and boilerplate in a few places, but what you get for it is predictability.
There is no better place to showcase and put that predictability on a pedestal THAN FRIGGIN WALLABY! Seriously, there is no better place to prove the hypothesis that React is more predictable than through testing. If you're not testing your React project, you're missing the point of React. That's where it shines; that's where the hypothesis of explicitness it's built on is fully proven. Pass "GO," collect $200. The stars align. Game over. Case closed. The only issue is that without Wallaby you're looking at a very scrollable terminal window full of snapshot comparisons.

From a business standpoint, I would love to see you guys become the de facto leader of a market in which, from my research, you're the only game in town. It's a market more important than ever for the React community. This product is so needed. I think you came out a year and a half ago? Well, today your product should be in headlines everywhere Jest is mentioned, even everywhere React is mentioned. Your product makes more sense today than it did when it first came out, and by far. Facebook should straight-up buy Wallaby, or at least try ;) like they did with the Redux and Inferno folks. You know what I'm talking about. This is more valuable than Jest, Redux, and Flow; the only part of the stack that beats it is React itself. Everybody should be developing like this. It puts you in full control of your codebase at all times, at least in combination with Jest's reactive rendering capabilities + snapshots. ...OK, I'm out. Just thought you might like to hear how good your cooking is, plus a few ideas for future recipes :)
One other thing: I think the most effective use of Jest + React + Redux is to think of your tests as integration tests between your components and Redux (reducers + actions). Unit tests with Jest are business as usual. But as that article you linked was saying (and I only just started reading it):
So if you're writing idiomatic React, primarily relying on stateless pure-function-based components and using Redux for state changes, you have a very predictable environment for the complete client side of your application.

Create fixtures for actions that make async requests, and through Jest's reactive component instances you have the whole app at your fingertips. No need for webdrivers like Selenium. That's the predictable part of React: you really don't need to concern yourself with the underlying DOM anymore. So what happens is you can easily do integration tests between your components and what Redux is doing. Of course, write unit tests as well (and it's very nice because you're dealing with 90-100% pure functions), but you don't end up in the situation described in the article where you don't have a tool to uncover regressions. You absolutely do. Any changes to actions or reducers will surface as changes to your components, and any changes to components (when you're doing integration testing like in my examples above) will surface when actions are not called or are called incorrectly. The feedback loop is very nicely completed. It's not an integration test as far as the server is concerned, but it's the easiest pure client-side integration testing we've ever been able to write. And snapshots are the cherry on top: after all, your components are just pure functions, so you can subtract a lot of development time otherwise spent writing assertions by hand.

So you're not primarily writing unit tests if you're doing this all correctly; you're writing integration tests. You want to ensure the interplay between your React components and Redux does not break. AND, to do this correctly for complex apps, you can't think of your app and its tests as a snapshot of one state. YOU HAVE TO THINK OF IT AS A VIDEO RECORDING OF MULTIPLE STATE CHANGES (DISPATCHES AND THE EVENT HANDLERS THAT DISPATCH THEM). Very much like you would with end-to-end testing through something like Selenium.
The result is a best-of-all-worlds scenario where you don't have to leave a typical unit-test-runner environment and concern yourself with unpredictable and annoying tools like PhantomJS and Selenium that have to open browser windows and the like. Yet you basically get end-to-end testing for your app, minus the server. So in this world where reactive component instances have a life of their own, it makes a whole lot of sense to keep them alive, just as you would keep the whole client-side application alive when using webdrivers.

Wallaby stands to gain a lot by doing what it takes to become the de facto testing platform for the React world, and by optimizing for features just entering the development world, such as reactive component instances and (though this may not be known outside of the JavaScript world) snapshots. Align Wallaby with that and you guys are not just the king of JavaScript/React testing, you're more than that. Have you heard of the "last mile" delivery strategy? It's the idea that the company that owns that part of the chain takes the biggest percentage of the profits: they are the ones that connect with the customers at the end of the day, whereas the players earlier in the supply chain face more competition and, as a result, slim margins. Well, Wallaby is the last-mile provider, my friend. It's the last-mile provider of the React stack. It's not the IDE; besides, IDEs have way more competition. Wallaby also does something that IDEs don't do: it validates the core React assumption that explicit, functional, predictable, testable components (and reducers) are what apps should be built on. Wallaby puts the final checkmark on that assumption, proving it's true. Anything Wallaby can do, and any news you can generate, about deep React support will go a very long way toward getting you more customers. After all, the web-app news sites are at least 75% about the React ecosystem, maybe 85%, with only about 15% for Angular.
I'm surprised I haven't seen anything about Wallaby on echo.js, Hacker News, /r/javascript, JavaScript Weekly, etc. this past year. With the right deep React features, Wallaby can be just as important as Jest, and garner the same kind of people writing reviews of their experience with it, just like they did with Redux all throughout 2016. I know I have an article in the works.
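The "fixtures for actions that make async requests" idea mentioned above can be sketched in plain Node, with no Jest and no real network. Every name here (fetchUser, userReducer, UserCard, userFixture) is invented for illustration; this is not the author's library or any real API.

```javascript
// Sketch: instead of letting an action creator hit the network, resolve a
// fixture, so component/reducer integration can be tested without a webdriver.
const userFixture = { id: 1, name: 'Ada' };

// Production code might call fetch(); the test swaps in a resolved fixture.
const fetchUser = () => Promise.resolve({ type: 'USER_LOADED', user: userFixture });

// Plain Redux-style reducer.
const userReducer = (state = { user: null }, action) =>
  action.type === 'USER_LOADED' ? { user: action.user } : state;

// Stateless "component": a pure function of store state.
const UserCard = (state) =>
  state.user ? '<div>' + state.user.name + '</div>' : '<div>loading</div>';

// Drive the integration: initial render, dispatch the fixture action, re-render.
const initial = userReducer(undefined, { type: '@@INIT' });
console.log(UserCard(initial)); // <div>loading</div>

fetchUser().then((action) => {
  const next = userReducer(initial, action);
  console.log(UserCard(next)); // <div>Ada</div>
});
```

Because both the reducer and the "component" are pure functions, asserting on the rendered string after each dispatch is effectively the client-side integration test the comment describes.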
Thanks for joining the discussion and clarifying your position! I understand your point about more granular reporting, though, as you've mentioned, in Wallaby's case it's less relevant because it reports issues inline. But I can see why one would want to add more granular labels.
How so?
I honestly don't think it's a matter of reactivity, or of any new/emerging patterns. In the past, present, and future, there were, are, and will be cases where reusing an SUT object across multiple tests may seem desirable/beneficial.
This would not be a bad thing if I were editing related app/component code, but having my other tests break when I'm editing a single test would be quite a surprise and is not helpful. As a newcomer to a codebase, I would need to learn the hidden dependencies between your tests in order to edit an existing test or add new ones, instead of just assuming isolation and following the existing, easy-to-understand patterns.
Can't agree more. Who cares what the size of the unit is, or how many app layers the tests exercise (as long as they don't cross tiers and become slow or hard to set up)?
This totally makes sense. So I have added an opt-in setting to run all tests in a test file affected by code changes. With the setting, your example should work. Here's the example:

module.exports = function () {
return {
...
runAllTestsInAffectedTestFile: true
};
};
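For context, that snippet would sit inside a full wallaby.js configuration file. Here is a hedged sketch of what such a file might look like; the files/tests globs and env settings are placeholder assumptions for illustration, and only the runAllTestsInAffectedTestFile flag comes from this thread.

```javascript
// wallaby.js - illustrative config sketch; globs and env are placeholders.
module.exports = function () {
  return {
    files: ['src/**/*.js'],          // application code (assumed layout)
    tests: ['test/**/*.test.js'],    // test files (assumed layout)
    env: { type: 'node' },
    testFramework: 'jest',
    // Opt-in: when any test in a file is affected by a change,
    // re-run every test in that file (restores shared describe state).
    runAllTestsInAffectedTestFile: true
  };
};
```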
Thank you, I really appreciate your kind words and encouraging feedback! Lots of valuable information to digest and think over, not just in the context of wallaby.js but in the context of the React ecosystem and testing in general. You should blog it somewhere, like a Medium post :)
I will... and I appreciate the addition - wasn't expecting it anytime soon. That's awesome! Does Wallaby just update itself in my editor? Should I clone a repo and set it up from scratch? Or wait for the next release?
Not to nitpick, but don't you think it should be added too? I.e., as a more performant halfway option.
@faceyspacey The new version is published, and Wallaby updates automatically, but you may force the update.
@faceyspacey Single-action/shortcut Jest snapshot updates are now available in the latest core. You may update specific snapshot(s) right from your editor with the new action.
@ArtemGovorov Utterly amazing!!! Put it in the web panel though, with a big beautiful button - that's where it shines the most. And rename it in all places to UPDATE SNAPSHOT (i.e. add a new command). How else are you supposed to know that snapshots will be updated without having read your blog post?
@faceyspacey Thanks for your feedback. As always very insightful. I'll address the comment you've left in the article here as well.
We'll document it better; however, the name of the editor command/action has to stay what it was. The action existed way before the Jest integration was added, and it does what it says in other testing frameworks (as well as in Jest): it manually runs a specific test. The original action is well documented in all of the tutorials for all editors that we support. It's just that for Jest we have added one more behavior to it: updating the snapshot. We could have added a Jest-specific action, but it would show up in all editors for other testing frameworks too, and that is not desirable.
There's a plan to add the feature to the Wallaby App as well.
In both cases it's the currently active line (the line with the cursor). As you can see in the GIFs (and in the docs), the line is always focused before triggering the action. Many other wallaby.js actions work the same way (as do many actions in other extensions/plugins). I have always thought it is a very keyboard-friendly way to select the line as the context for some action; I'm not sure how we could further improve the UX (in the editor, that is - the Wallaby App is a separate story).
At the moment Wallaby doesn't even run Flow, or any kind of static analysis that is not necessary to run tests. While it's technically possible to run various linters (ESLint/TSLint), Flow, or TypeScript type checking, considering the above, it may be a better idea for a different project/app, with an option to run outside of the unit-testing context.
As promised, done. You'll need the latest core to use the feature.
@ArtemGovorov Sick! The snapshot button in the app was the clincher. For less keyboard-focused developers, that sort of thing goes a long way. The Jest developers really should check this out and possibly highlight the Wallaby integration in their documentation. @cpojer check this out: https://twitter.com/wallabyjs/status/827059959728795648 - notice the camera icon in the top right. Jest + Wallaby is really crushing it right now.
I finally got around to testing the new feature, and here's what I found when using the camera icon in the Wallaby App:

- with runAllTestsInAffectedTestFile: true it does not work
- with that turned off, it updated all my snapshots, not just the snapshot corresponding to the camera icon clicked

Re: runAllTestsInAffectedTestFile - yes, the setting will make it update all snapshots in the file, because with the setting on it runs all tests in the file. This is how Jest works: it updates snapshots for the tests that it runs with the 'u' flag.

Re: updating all snapshots - please share a sample repo where I can reproduce it; it's working as expected in all of my tests.
Here's a test repo of a smaller open-source project where I encountered the same issues: https://github.com/celebvidy/pure-redux-router - you can just git clone that and run the tests. I think you are updating the snapshot just for the single, precisely matched test. As for the other issue, I got the same message in both apps I've tested.
@faceyspacey Thanks for the repo! It was a bug in the Wallaby App (which we couldn't reproduce because we were missing an important test case). It's fixed now in the latest Wallaby App (you may just refresh the page).
@ArtemGovorov Confirmed: only individual snapshots are updated now. I've been trying to think of more Jest/snapshot-oriented features to take this even farther, and for now all I have is what I previously mentioned about showing snapshots even when the test isn't failing, i.e. instead of an empty box.

Final thing: for each log, it might be nice to see what was passed to it.

P.S. I LOVE HOW THE APP CHANGES THE FOCUSED TEST AS YOU USE YOUR EDITOR. AND I DIDN'T EVEN MEAN TO MAKE THIS ANOTHER FEATURE REQUEST, BUT PERHAPS IF IT CHANGED THE FOCUSED TEST JUST BY PUTTING THE CURSOR WITHIN A GIVEN TEST IN YOUR EDITOR, IT WOULD BE EVEN BETTER.
@faceyspacey Thanks a lot for the feedback! I really like all of the ideas and will add them to our tasks.
Thanks! One other thing: I've added ...
Just landed in the latest core: https://twitter.com/wallabyjs/status/829923244308205568 - and of course in the Wallaby App.
@ArtemGovorov Very nice!
@ArtemGovorov Another small text improvement for the app: in these columns (https://www.dropbox.com/s/m72lv0mhiyf6x7s/Screenshot%202017-02-10%2013.14.40.png?dl=0), display the number of passing tests in green and the number of failing tests (if any) in red. Then, in a second phase, make the milliseconds and the two new columns sortable.
@kamesh2111 Your issue doesn't look related to Wallaby. |
Issue description or question
Using Jest, variables in an outer scope whose values are set in another test become undefined after the test you're editing re-runs.

For example:

In this example T is set in a previous, typically synchronously/sequentially executed test. In the regular Jest test runner you can depend on variables set in previous tests. In fact, making use of the closure created by the describe block is recommended and idiomatic for such things in the Jest community. But with Wallaby you can't, if you intend on editing a test, which ultimately leads to two different unpredictable behaviors with Wallaby: one when it first runs all your tests, and another when you actually edit a test, each delivering different results. Re-running the entire describe block would simply (or perhaps not so simply) solve this.

So in Wallaby, to make the above call to console.log work, I have to edit the outer describe scope as a workaround. Also note: editing test 1 won't give T a value again either, and editing test 2 definitely won't.

I'm assuming this is probably a very hard bug/issue, and likely not possible to fix given the async nature of Wallaby. To give importance/weight to this issue, and to counter the only solution being to change my code: it's highly useful when snapshotting "alive" React components to be able to snapshot a component and build a "live" instance (which in this case is contained within T), and then to be able to assert against state changes in SEPARATE SEQUENTIAL TESTS.

beforeEach as a solution is also sub-par, because the idea is that you have components that change state in a sequence, so you would have to re-do the previous steps/actions/state changes in subsequent tests anyway. That sub-par workflow basically looks like this:

When really the last test, for example, should only be this:

So you are left repeating steps you did in previous tests to achieve a state that, in this example, would even have onChange attached to a component instance (i.e. onChange only appears conditionally for the given state). And perhaps the previous dispatches only create the correct state if dispatched in order.

So it's highly useful to be able to think of one group of tests as a sequential set of events. I've built a library called jest-redux-renderer that nicely keeps your components alive and reactive with Redux (out of the box, Jest only keeps components reactive if you call setState, which isn't even idiomatic anymore). I think the concept of "live" components that continue to reactively update is the secret sauce that makes Jest so great for React.

Now, if you're able to reactively update in response to Redux state, you're going to be doing a lot of tests against a store or component created in a previous test. You benefit HUGELY from being able to manipulate that component idiomatically by dispatching actions and calling attached event handlers. But there is no "manipulating", i.e. "updating", in Wallaby while editing tests: everything can only be created, or you can use beforeEach, but that only gets you 10% of the way. So that's the case for this perhaps not previously typical way of testing. In short, it makes a lot of sense for a component instance to stay alive and reactive across multiple sequential tests.

Basically the only solution I can see as an option (perhaps not enabled by default) is to re-run all tests in a describe group, including any code in the outer describe block, if any single test changes, not just the individually changed tests. That would 100% solve it. I know performance is one of Wallaby's key selling points, so perhaps optionally being able to toggle this on for scenarios where it's important to you and you're editing tests makes this palatable from a performance perspective. But I see it as a must for the emerging idiomatic Jest + React testing workflow, where there is much to gain if your component instances live across tests.

Wallaby.js configuration file
Code editor or IDE name and version
Visual Studio Code v1.8
OS name and version
OSX