Use a faster (more time efficient) data structure for clock.timers #159
Comments
Actually, `timers` is a hashmap (well, an object), and we iterate the hashmap to find the first one. If it were an array and it was kept sorted, then inserts would be pretty fast and `firstTimer` would be similarly fast.
My bad, yes, `timers` is a hashmap (object). Even if you change `timers` to a sorted array, while first/last and removing a timer could be fast, insertion into a sorted array would still take O(n). Hence I suggest using a priority queue structure, such as a heap.
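For illustration, here is a minimal sketch of what a heap-backed timer store could look like. The `callAt` field mirrors what lolex stores on each timer; the class itself is hypothetical, not lolex code:

```js
// Minimal binary min-heap keyed on each timer's scheduled time (callAt).
// Illustrative sketch only - not lolex's actual implementation.
class TimerHeap {
  constructor() {
    this.items = [];
  }

  // O(log n) insert: append, then bubble up toward the root.
  push(timer) {
    this.items.push(timer);
    let i = this.items.length - 1;
    while (i > 0) {
      const parent = (i - 1) >> 1;
      if (this.items[parent].callAt <= this.items[i].callAt) break;
      [this.items[parent], this.items[i]] = [this.items[i], this.items[parent]];
      i = parent;
    }
  }

  // O(1) equivalent of firstTimer(): the root is always the earliest timer.
  peek() {
    return this.items[0];
  }

  // O(log n) removal of the earliest timer: move the last leaf to the
  // root, then sift it down.
  pop() {
    const top = this.items[0];
    const last = this.items.pop();
    if (this.items.length > 0) {
      this.items[0] = last;
      let i = 0;
      for (;;) {
        const l = 2 * i + 1;
        const r = l + 1;
        let min = i;
        if (l < this.items.length && this.items[l].callAt < this.items[min].callAt) min = l;
        if (r < this.items.length && this.items[r].callAt < this.items[min].callAt) min = r;
        if (min === i) break;
        [this.items[min], this.items[i]] = [this.items[i], this.items[min]];
        i = min;
      }
    }
    return top;
  }
}
```

With a structure like this, `firstTimer()` becomes an O(1) `peek()`, and inserting a new timer drops from the O(n) of a sorted array to O(log n).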
@akhilkedia any interest in working on this? I can point you to the right places in the code if you'd like :) If not, I can probably take a stab next month.
@benjamingr do you mean that this https://github.com/sinonjs/lolex/blob/master/src/lolex-src.js#L345 is an error and it must be […]? And we may stop using ugly […]. And about […]
Yeah, though given how JS works it doesn't really matter.
Also true.
I think there is some confusion there: "delete being slow" is a thing because of hidden classes - but for what we're doing, the hashmap version generated by the […]. Using a heap would make more sense anyway.
@benjamingr I'm kinda occupied with other projects at the moment - I needed only a small subset of lolex's features, so I made a fork of my own. The critical parts are using […]. This implementation assumes a code flow that is valid for this subset of Lolex features, and will probably need to be amended for the full Lolex. (Depending on the APIs required from the heap, you might also consider some other heap.)
@benjamingr Yes, you are right, the way it's used […]
@akhilkedia do you have any code we can use to benchmark lolex? This seems like a fun task, but as it complicates the implementation, we need to make sure it's worth it in terms of an actual performance improvement. I know the Chai projects are really focusing on performance budgets and constantly monitoring performance, so it might be worth having a look at that project. This is probably worth an issue of its own.
@fatso83 I actually have a branch that uses the […]
I definitely think a benchmark is the right way to go about this at this point, to prove the problem. The potential for breakage in code paths we don't have tested and the lack of motivating benchmarks make this a hard sell at the moment, IMO.
Agreed. No changing of fundamentals until it's proven we are missing a huge performance opportunity (verified using a benchmark).
@fatso83 I can provide you with sample code which roughly corresponds to my use case and will show the majority of the time being spent in Lolex's `firstTimer()`. Whether this issue also occurs while testing the Peer5 test suite (or for some other benchmark), I cannot comment. In general, the larger the number of pending timeouts (and the smaller the processing inside each individual timeout callback), the worse this issue becomes.
Sample code would be great. Just comment with a gist or repo link (or email - it's in my profile).
Hi! Vanilla lolex takes 20 seconds for this sample on my PC, and a sample modified queue-based lolex takes 0.5 seconds.
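For context, a stress case in the same spirit might look like the sketch below: many pending timers with trivial callbacks, advanced through fake time. The timer count and delays here are made up for illustration; the 20s/0.5s figures above come from the linked sample, not from this script:

```js
// Stress case: a large pool of pending timers with cheap callbacks.
// Illustrative only - the numbers quoted above are from the linked sample.
const lolex = require("lolex");

const clock = lolex.createClock();
const TIMERS = 5000;

// Each timer re-arms itself, so the pending set stays large while the
// clock advances and the first timer is looked up before every firing.
for (let i = 0; i < TIMERS; i++) {
  const rearm = () => clock.setTimeout(rearm, 100);
  clock.setTimeout(rearm, 100);
}

console.time("tick");
clock.tick(1000); // ~50,000 firings; vanilla lolex scans all timers each time
console.timeEnd("tick");
```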
@akhilkedia thank you for doing this! It is very useful. I'm not sure the actual benchmark in https://github.com/akhilkedia/lolex-queue/blob/master/index.js provides a use case for a typical lolex app. That said, it is much smaller than what I had in my branch (which actually transformed […]). I'm not sure why it would be that much faster, since […]. This works for […]. If you are willing to pursue this, then I'm definitely willing to put in the work to review it and help get it merged back into lolex, if we can guarantee there is no breakage in existing code.
I would suggest simply changing the entire […]. Whenever […]. If yes, replacing […]
Well, what I did in my fork was shift items off the queue until one was in range, and then push them back onto the queue when I was done with them. Typically it doesn't shift anything at all, because the range always checks the closest timers. I outlined some of the challenges here: #159 (comment). The unfortunate part is that changing the underlying data structure can break millions of monthly downloads if not done very carefully - that doesn't mean we can't do it, but it means we have to be all the more careful.
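In rough JavaScript, that loop might look like the following. The `heap`, `fireTimer`, and `deadline` names are assumptions for illustration (using the `peek`/`pop`/`push` heap interface sketched earlier), not the fork's actual code:

```js
// Fire every timer scheduled up to `deadline`, earliest first.
// Sketch of the "shift items while they are in range" idea above.
function runTimersUpTo(clock, heap, deadline) {
  while (heap.peek() && heap.peek().callAt <= deadline) {
    const timer = heap.pop();           // earliest pending timer
    clock.now = timer.callAt;           // advance the fake clock to it
    fireTimer(clock, timer);            // run the callback (assumed helper)
    if (timer.interval !== undefined) { // re-arm interval timers by
      timer.callAt += timer.interval;   // pushing them back onto the heap
      heap.push(timer);
    }
  }
  clock.now = deadline;
}
```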
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
Has anyone made any progress on this? Lolex is spending way too much time iterating over arrays, and I would love some kind of solution... Note: Using https://github.com/akhilkedia/lolex-queue/blob/master/index.js worked PERFECTLY and exploded my performance.
@arthurwolf Not anyone we know of 😉 As you can see from this discussion, we are more than willing to accept PRs, and the discussion here already outlines the general solution. We just need to test it thoroughly.
Summary

Right now, `clock.timers` in Lolex is an array, which is looped through completely every time in `firstTimer()` or `lastTimer()`. This is extremely inefficient, and perhaps other data structures, such as a heap, might be much faster.

Background Details

Lolex is great for running simulations - but in one of my recent projects, I attempted to use Lolex to run code over a simulated period of 1 week. The code made a LOT of `setTimeout`s, and these were all handled by Lolex. Profiling the code showed the majority of my simulation runtime was spent in Lolex's `firstTimer()` function, because it kept looping through a long array of timers every time. Changing `clock.timers` to a simple heap fixed this performance bottleneck.

Since Lolex is targeted primarily towards running simulations/tests, being able to run them faster would be great.
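For reference, the linear scan described above boils down to something like this simplified sketch (the real code is at the lolex-src.js link earlier in the thread; this is a conceptual rendering, not a copy):

```js
// Conceptually, firstTimer() walks every pending timer to find the
// smallest callAt. With t pending timers and f total firings, the
// simulation pays O(t * f) overall; a heap brings that down to O(f * log t).
function firstTimer(clock) {
  let first = null;
  for (const id in clock.timers) {
    const timer = clock.timers[id];
    if (first === null || timer.callAt < first.callAt) {
      first = timer;
    }
  }
  return first;
}
```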