Remove memory resizing from the MVP #294
Followup to #285, #288.

The docs currently say that memory can be resized in the MVP. But as mentioned in those issues, we know that memory resizing in asm.js is not fast on Chrome and Safari.

Since we want the MVP to contain things that polyfill, and polyfill well, to JS, it makes sense to remove memory resizing, just as we did with some other existing asm.js features. After the MVP, we move on to features that diverge from JS and cannot be polyfilled, and memory resizing makes more sense to add at that point.

The discussion in the prior issues appears to have ended with @lukewagner asking @titzer and @pizlonator for further clarification about the performance of asm.js memory resizing in their respective browsers.
Comments
We should solve this issue (moving resizing out of MVP) by also figuring out how heap size is specified in MVP.
Do you have anything in mind for that? A few obvious options seem to be
IMO memory issues were kind of a big deal for asm.js usability, and I really don't like the idea of punting on resizable memory for MVP, since the capability is already there to alleviate this. I especially don't like doing this without a compelling reason. And relegating this to a future feature due to a couple perf bugs that we have at least half a year to fix (that I admittedly don't know the exact complexity of) is pretty disappointing.
Heap re-sizing is important for a bunch of scenarios so I also don't like the idea of punting on it for the MVP. I'd be okay with an approach that constrains it somehow if that will make it possible for it to run well in Chrome and Safari, but I'd rather we make applications run great in the mid-term even if it means JS engines have to adapt a bit. I think it's plausible that the necessary JS work could happen to make the polyfill work well in resize scenarios before we'd actually get to it if we punt. The polyfill will also be in use for the foreseeable future, so either way this feature will need to work right in native JS. The fact that re-sizing is also a desirable feature for emscripten/asm.js can help motivate JS engine teams to deliver it so we can use it.
I agree memory resizing is important. Hopefully @titzer and @pizlonator can provide information on the current slowdown and status of plans to remedy that. But unless things have improved since last we heard, the slowdowns are significant, and we've known about them for quite some time now. Emscripten has not turned memory resizing on by default because of them. I think it's also important to mention that asm.js without memory resizing is still very useful. It started without it, and was still fine for many games and applications.
Possible compromises:
@qwertie: all of those create large differences between the polyfill and the native implementation. I think our goal is to minimize, or avoid entirely, any such differences at the MVP point in time.
Meanwhile I found the v8 bug. Status is still 'assigned'. I don't see a WebKit bug; there might not be one filed.
Re-ping @titzer and @pizlonator for information on the state of asm.js heap resizing performance in their JITs.
Hi Alon,

Having looked at the ArrayBuffer.transfer ideas, which I understand to be the proposed mechanism for asm.js heap growth, I’ve decided that I don’t think that this is a good idea. It would require WebKit to recognize the “use asm” pragma, which I don’t really want to do, because it’s kind of ugly. Our existing optimizations for JavaScript happen to do the right thing for asm.js code without the pragma. But when you do ArrayBuffer.transfer, our monotonic approach to the inference of constant variables causes us to believe that the HEAP variables are truly variables. I believe that it’s good for our inference to believe that a variable is not a constant when it is overwritten.

If you have some proposal for how to make memory resizing fast that doesn’t require recognizing “use asm” and still makes things run fast without breaking JavaScript optimizations, then we would be happy to consider it.

Anyway, it’s probably an oversight that there is no bugs.webkit.org bug for this. You should file one! ;-)

-Filip
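For context on the mechanism being debated, here is a minimal sketch of how asm.js heap growth is expressed with a changeHeap function plus the proposed ArrayBuffer.transfer. The byteLength import and the specific size checks are taken loosely from the asm.js spec and Emscripten output, so treat them as illustrative rather than exact.

```js
// A minimal sketch of the asm.js heap-growth pattern under discussion, based
// on the changeHeap convention from the asm.js spec; sizes and checks are
// illustrative, not authoritative.
function AsmModule(stdlib, foreign, buffer) {
  "use asm";
  var byteLength = foreign.byteLength;           // bound ArrayBuffer byteLength getter
  var HEAP32 = new stdlib.Int32Array(buffer);

  // Swap in a larger buffer. Because this exists, engines can no longer treat
  // the HEAP* views (or `buffer`) as constants, which is the crux of the thread.
  function changeHeap(newBuffer) {
    if (byteLength(newBuffer) & 0xffffff ||
        byteLength(newBuffer) <= 0xffffff ||
        byteLength(newBuffer) > 0x80000000)
      return false;
    HEAP32 = new stdlib.Int32Array(newBuffer);
    buffer = newBuffer;
    return true;
  }

  function load(i) {
    i = i | 0;
    return HEAP32[i >> 2] | 0;
  }

  return { changeHeap: changeHeap, load: load };
}

// Growth on the outside would transfer the old contents into a bigger buffer:
//   var bigger = ArrayBuffer.transfer(oldBuffer, newByteLength);  // proposed API
//   module.changeHeap(bigger);
```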
Thanks for the info, I filed a bug. Is the slowdown significant when memory growth makes the VM infer that the HEAP variables are truly variables? My position here is that if asm.js memory growth is inefficient on WebKit and Blink, then we might as well remove it from the MVP, as we'll have a proper memory resizing solution after the MVP that works for WebAssembly across all browsers. (And having it in the MVP, when it's slow in >50% of the market, just makes the MVP look bad.) It's a shame to remove it from the MVP, but in the long term, no big loss.
@pizlonator FWIW, asm.js heap resizing doesn't explicitly depend on … But the main question wasn't whether JSC/V8 would add special asm.js optimizations for heap resizing but, rather, does the loss of the singleton property destroy performance so terribly that the polyfill wouldn't be viable. We can just measure this, though. Alon, perhaps you could send me some benchmark builds w/ and w/o heap resizing? Also, if the only big problem is TurboFan-asm mode and not Crankshaft or JSC, we can just do what Ben suggested and have "use asm 2" which stays in Crankshaft.
I haven’t measured it, but I would assume it to be pretty bad.
Whether WebKit has fast asm.js heap resizing may be a moot point since our WebAssembly implementation is coming along swimmingly. Even if someone took action on the bug you filed, it seems unlikely that they would finish that any sooner than the WebAssembly implementation. -Filip
Also note that, if we keep the max heap size declaration (which I've argued elsewhere is independently valuable), then the polyfill can avoid emitting resizable asm.js when initial = max, putting this under developer control, just like today with the …
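To illustrate that point, here is a hypothetical sketch of the polyfill's decision; the helper names are invented for illustration and are not real polyfill code.

```js
// Made-up stubs standing in for the polyfill's two code-generation paths.
function emitFixedHeapModule(mem)     { return "/* 'use asm', constant HEAP views */"; }
function emitResizableHeapModule(mem) { return "/* changeHeap pattern as sketched above */"; }

// Only pay for resizability when the module can actually grow.
function emitAsmjsHeap(mem) {
  return mem.initial === mem.max
    ? emitFixedHeapModule(mem)
    : emitResizableHeapModule(mem);
}

// e.g. a module declaring initial == max gets the fast, fixed-heap output:
//   emitAsmjsHeap({ initial: 16 * 1024 * 1024, max: 16 * 1024 * 1024 });
```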
I know. My point is that we will not implement special hacks for changeHeap(). Currently, it will make us assume that the HEAP variables are all really variable, which means that:
This basically means that the typical heap access will probably have to load the HEAP variable from the closure, and then load the length, then load the backing store pointer, and then do the access. I’m assuming that this will be pretty bad for perf, but I haven’t measured it. The only way to avoid having changeHeap() cause all HEAP variables to become truly variable would be to either: (a) recognize “use asm” and then add hacks to specialize for the changeHeap() function, or (b) revisit our current type/value inference policies, in particular, the generally profitable policy of having the inferred abstract values of heap locations monotonically converge to TOP.
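A conceptual illustration of the access sequence described above, with the engine's work spelled out in comments; this is a sketch of the cost model, not actual JavaScriptCore behavior or output.

```js
// Stand-in for the module-level heap view; in a real module this is the
// HEAP32 created from the incoming buffer.
var HEAP32 = new Int32Array(new ArrayBuffer(1 << 20));

function sum(n) {
  n = n | 0;
  var i = 0, acc = 0;
  for (; (i | 0) < (n | 0); i = (i + 4) | 0) {
    // If HEAP32 is known to be constant: roughly a bounds check plus a load
    // from a baked-in base pointer.
    // If changeHeap() forces HEAP32 to be a true variable: load HEAP32 from
    // the enclosing scope, load its length, load its backing-store pointer,
    // then bounds check and load. Several dependent loads per heap access.
    acc = (acc + (HEAP32[i >> 2] | 0)) | 0;
  }
  return acc | 0;
}
```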
@lukewagner: the WebKit bug I filed now has fannkuch benchmarks with and without memory growth. Let me know if you want any others.
@pizlonator Yes, that's similar to IonMonkey. Fortunately, a lot of those loads tend to be removed in practice by GVN/LICM. For example...

@kripken Fannkuch is probably too synthetic to give an accurate picture but, for reference, after subtracting the time spent in enlargeMemory() from the total time in _main(), Safari shows no performance difference between fannkuch.js and fannkuch_growth.js. Chrome shows a 5.3x slowdown, but after removing "use asm" there is again no difference. Happy to test on something bigger.
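A hedged sketch of the hoisting being referred to: for a hot loop that contains nothing that could reach changeHeap(), the per-access loads are loop-invariant, so LICM/GVN can lift them out. The "after" form below is hand-written pseudo-IR for illustration, not actual IonMonkey output.

```js
var HEAP32 = new Int32Array(1024);   // stand-in for the module's heap view

// Source loop: naively, every store would re-load the view, its length, and
// its base pointer once HEAP32 is no longer a known constant.
function fill(n, v) {
  for (var i = 0; i < n; i++) {
    HEAP32[i] = v;
  }
}

// Roughly what it behaves like after optimization (illustrative pseudo-IR):
//   view = load HEAP32            ; hoisted by LICM
//   len  = load view.length       ; hoisted by LICM, merged by GVN
//   base = load view.base         ; hoisted by LICM
//   loop:
//     if i >= len then bail
//     store base[i], v
// so the per-iteration cost shrinks back to a bounds check plus a store.
```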
We've explored the icky strategy of matching/finding changeHeap the Firefox …
@titzer It seems that the TF (TurboFan) code is currently much worse than the analogous CS (Crankshaft) code. Is this just a temporary issue until TF gets general typed array optimizations? (That is, even without singleton info, it seems like you'd still want inline specialized paths for typed arrays.) Anyhow, I think we can work around this issue; the polyfill could either generate "use asm 2" or UA-sniff and just not generate "use asm" at all for Chrome.
@lukewagner: here is the full emscripten benchmark suite, with memory growth and normal versions: https://www.dropbox.com/s/645ozoyfu9vfpgr/benchmarks.zip?dl=0
I'm kind of puzzled by the motivation to keep memory resizing in the MVP. What do we really gain by that? "use asm 2" and UA sniffing seem to just make the MVP story uglier, even if they could work around the perf issues that exist in over half the browser market. And we know that asm.js was useful in many ways even without heap resizing. It's fine if the MVP has that limitation as well, since we intend to move beyond the MVP very quickly anyhow. An alternative, perhaps, is to forgo or redefine the MVP: to consider skipping a step where we expect the polyfill to work very well. Maybe we think all of our wasm implementations will ship soon enough that a slow polyfill wouldn't be that bad?
Could we make the MVP limited to heap shrinkage only and define it as optional on the VM's part? I.e., you can tell it you're not using a portion of your address space, and an implementation can shrink the allocation, but you can never grow beyond your initial allocation. This lets us solve the most significant heap/address space exhaustion issues without requiring the polyfill to be Super Slow(tm), and it can be implemented with madvise instead of necessarily having to resize the underlying arraybuffers.
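A purely hypothetical sketch of the shrink-only idea from a module's point of view; the wasmMemoryDiscard hook is invented here for illustration and exists nowhere. The point is only that lengths and addressing never change while physical pages may be released.

```js
// Reserve pessimistically once; the address space claim never grows.
var INITIAL_BYTES = 256 * 1024 * 1024;
var heap = new ArrayBuffer(INITIAL_BYTES);

// Discovered after startup/asset loading.
var bytesActuallyNeeded = 48 * 1024 * 1024;

// Optional hint: an implementation could madvise()/decommit the unused tail
// pages, but heap.byteLength and the module's addressing stay the same.
// wasmMemoryDiscard is a made-up name, guarded so this snippet still runs.
if (typeof wasmMemoryDiscard === "function") {
  wasmMemoryDiscard(heap, bytesActuallyNeeded, INITIAL_BYTES - bytesActuallyNeeded);
}
```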
@kripken Assuming good results from the linked test suite, I think we can make it "Just Work" so it's just an internal polyfill impl detail and not really an issue for the user. Also, as you're suggesting, it is sounding like we won't be stuck in the polyfill state for long, so if there is a small % (not factor, though) perf drop in some cases, it's not a big deal.

@kg Growth (in particular, growth based on dynamic usage, like which files are opened, which levels are loaded) is the main use case for resizing.
Source? :-) Native code is used a lot in libraries, including libraries that might be consumed by javascript. If heap growth isn't available in the MVP, but we allow libraries to start out with a Sufficiently Large allocation and immediately shrink it, that will enable people to ship MVP wasm that at least doesn't devour physical memory even if it devours address space. Multiple independent wasm modules in one page is a realistic scenario so I think we need to have some sort of answer for them, even if the performance isn't as good as it could be.
Users, requesting it over several years. Shrinking is not significantly different than no resizing. If we want to decommit touched pages, …
I'm referring to the claim that growth is the main use case. Growth being the loudest request from asm.js users thus far (based on your observations) doesn't mean:
…
or
…
Furthermore, my POV is that if we ship without a good solution for heap sizes we will end up paying for it, because the average and worst-case memory usage for wasm without shrink/grow is a lot worse than for equivalent JS.
@lukewagner: I don't think good results from that test suite would be enough, even if it would be a starting point. We'd need to see good results on more real-world code like Unity, and perhaps also a memory-growth version of Massive. And I just don't see cause for optimism. We have in this thread JIT experts from the two VMs where we know there are issues. They both describe very bad things that happen when asm.js memory growth is used, and they are pessimistic about performance in that case. I don't think there is much chance of benchmarks proving them wrong, much as I'd like this to not be an issue.
@kg Again, the use case that growth addresses is a module that has highly variable heap use but doesn't want to have to pessimistically pick a giant size (since that increases probability of OOM and plays poorly with other components/resources in the same process). I don't see how shrinking is relevant to this use case.

@kripken Could you send me resizable/non-resizable builds of Massive then? Let's not speculate.
@lukewagner: I ran Massive's lua-scimark and poppler-throughput tests on Chrome (I can't test Safari because I don't have an OS X machine). Memory growth makes lua-scimark go from 1755 to 610, and poppler-throughput from 2290 to 843. In other words, 2.9x slower on one, 2.7x on the other. And this is on a benchmark where Chrome is already significantly slower than Firefox and Edge (when Edge has asm.js opts on), so adding an almost 3x slowdown is quite bad.
@kripken That sounds like you didn't comment out the "use asm" on Chrome (which, again, is what the polyfill could do).
Right, sorry, I forgot that additional proposal. Interestingly, when I try to check that, I seem to be hitting a Chrome bug: poppler emits "Bogus memory allocation size". It works fine in Firefox, and works fine in Chrome if I restore the "use asm" so it hits TurboFan, so I suppose this is a Crankshaft problem. @titzer, should I file a bug?
Anyhow, it seems clear I have not managed to convince anyone on this matter, so I'll close this.
@kripken Yes, please file a bug about the Crankshaft issue.
@titzer: I can't reproduce it on Dev (46), so it looks like it's no longer an issue.