Anyref toolchain story? #122
I agree that writing code that uses anyrefs in small linkable files externally to LLVM, and then calling into these from C/C++/Rust, could really simplify implementing support for them. But I bet that if you have more complex patterns of using anyref, having direct support in the source language may be essential. Certainly this approach can be tried in parallel with trying to add them directly to LLVM. Both can be useful.
My suggestion for this would be a non-integral address space pointer. That should have all the right semantics, and the optimizer treats such pointers mostly as opaque references. The biggest question is what to do about the table get/set intrinsics. One option would be to just use load/store from a global of the appropriate element type (i.e. a pointer in that address space). Of course that raises the question of what the pointer type of that table pointer is. It probably has to be non-integral as well (so getelementptrs get preserved through the backend). It would probably be fine to use the same address space and pattern-match in the backend, or we could use a different address space. Another option would be wasm-specific intrinsics for table get/set. On the julia side, I'm planning to just have the codegen emit these directly.

From our perspective, while we do need anyref support in LLVM, I don't think we care about anyref support in clang, since our runtime operates exclusively on boxed objects [1]. Any shims we'd probably be happy to write in LLVM IR or some other sufficiently low-level representation.

[1] From the julia perspective, obviously it'll still be a boxed reference to the JS heap unless the VM can prove something else later.
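To make the address-space idea a bit more concrete, here is a rough C++-level sketch using clang's address_space extension; the address-space number and the table accessor names are made up for illustration, and whether table get/set ultimately lowers to loads/stores from a table global or to wasm-specific intrinsics is exactly the open question above.

```cpp
// Sketch only: model an opaque host reference as a pointer in a dedicated
// (non-integral) address space, so the optimizer treats it as opaque data.
// Address space 1 and the extern function names are assumptions, not an ABI.
typedef void __attribute__((address_space(1))) *anyref_t;

// Table accesses could lower to loads/stores from a table global or to
// wasm-specific intrinsics; here they are simply opaque external calls.
extern "C" anyref_t anyref_table_get(int table_index);
extern "C" void anyref_table_set(int table_index, anyref_t value);

// Example use: copy a reference from one table slot to another.
inline void copy_ref(int from, int to) {
  anyref_table_set(to, anyref_table_get(from));
}
```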
On the AssemblyScript side, I imagine an explicit anyref type.
I can speak to at least a Rust perspective on this issue of anyref.

I've brainstormed with @fitzgen and we've come up with possible semantics to actually represent an anyref. Failing first-class language support, the next best thing we could think of was the model we have implemented today in wasm-bindgen.

Our current thinking is that this is likely good enough for the near (and possibly far?) future. We can't figure out a compelling use case where actually passing around anyref values is needed.

All that's to say that I think, convention-wise, it'd be great if C/C++/Rust could all use the same strategy for managing these table indices that translate to values at boundaries (if C/C++ folks agree that this is a reasonable strategy to take there as well, of course). We haven't really put any thought into what it might look like to actually stabilize or implement this functionality in an official manner; it's pretty ad-hoc today. I think that we've got a good grasp on the goals to solve, just not how to model it in LLVM IR, for example :)

One final thing we've thought about is that tools like binaryen/wabt/etc as mentioned will all have full support for anyref.
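For concreteness, a minimal C++ sketch of that index-based model (the class and the host_* import names are hypothetical, not an existing ABI): the host keeps the actual JS values in a table/slab on its side, and the module only ever holds integer indices, cloning or releasing slots through imported functions.

```cpp
#include <cstdint>

// Assumed host-side imports that manage a JS-side slab of values.
extern "C" uint32_t host_clone_ref(uint32_t index); // duplicate a slot
extern "C" void host_drop_ref(uint32_t index);      // release a slot

// Module-side wrapper: just an integer handle with ownership semantics.
class JsHandle {
 public:
  explicit JsHandle(uint32_t index) : index_(index) {}
  JsHandle(const JsHandle &other) : index_(host_clone_ref(other.index_)) {}
  JsHandle &operator=(const JsHandle &other) {
    if (this != &other) {
      host_drop_ref(index_);
      index_ = host_clone_ref(other.index_);
    }
    return *this;
  }
  ~JsHandle() { host_drop_ref(index_); }

  // The raw index is what actually crosses the wasm/JS boundary.
  uint32_t index() const { return index_; }

 private:
  uint32_t index_;
};
```

The appeal of this approach is that it needs no new LLVM or clang support today; the cost is that every live reference occupies a table slot on the host side until it is explicitly released.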
@alexcrichton Yeah, that basically matches my own thinking about C/C++ - to use integer indexes, and that anything more may be very hard and of unclear benefit. The wasm-bindgen approach you mention is basically the same approach emscripten uses too (e.g. the WebGL bindings have tables for Textures, Buffers, etc.); and all of this is basically the same as how file resources are typically handled (integer file descriptor), GPU resources in OpenGL (again, integer id), etc. - so I think table indexes are a natural model for languages using linear memory like C/C++/Rust. (I do think non-linear memory languages can do better here! Both in terms of full cycle collection and avoiding table indirection overhead. But that's a separate issue.) I agree that binaryen could help optimize away some of the table overhead using inlining + specific table-aware passes. I have some vague ideas about this.
I'm curious what you see as the benefit of using the same strategy in these languages? E.g. right now emscripten uses separate tables for different WebGL resources, but other use cases may want a single table for everything, etc. Another difference is that emscripten doesn't plan to replace much of the WebGL bindings with wasm+anyref anytime soon, since that glue code needs to do things like subarray a buffer and other JS stuff, but other use cases may be in wasm. Overall, it's hard for me to see a single approach being enough?
I think with the only-table-on-the-side approach we are missing some of the advantages of anyref.
Rooting has a place for limited-lifetime references, but we should have some solution for data that has indefinite lifetime and which may participate in cycles across the JS/wasm boundary. For me, a robust solution for a C++ library with "interesting" JS interaction (e.g. DOM-like) looks like this. I think these arguments apply to Rust as well, fwiw.
This approach complements the always-handles approach, as you need handles anyway. It is also similar to @kripken's original thoughts above. But from a where-are-we-going point of view I think we should keep the broader horizon in view, in order to have a reasonable answer to the cycle problem. It's not clear that we'll be able to make it there, but I think we should try.
Oh sorry, I should probably clarify what I mean by this. Right now in Rust we have no way to use anyref.

That's to say that when I say that C/C++/Rust should probably all converge around a similar strategy, I mean we should take the common denominator (LLVM + LLD) and make sure it's supported there somehow. We could add a rustc-specific pass, and Clang (and/or Emscripten) could do similar, but it'd be awesome if we could centralize the implementation in LLVM+LLD, which everything would share. I would suspect that the language-level support and even the intrinsics used across the languages would probably differ, so I definitely don't think we should try to shoehorn everything into the same hole, just shoehorning into the same code generator!
Agreed, yeah. I have some hope of optimizations in Binaryen being able to help (figuring out that an anyref is loaded more than once from the table, and reusing it, sending it as a param, etc.), but as mentioned above this is speculative.
I agree. I'm working with @aardappel atm on one approach for solving that, that may be useful under the assumption that cross-VM cycles are rare and do not need urgent collection. I think we can do that with JS WeakRefs and no other new APIs in JS or wasm (but with significant work in the compiled VM). I hope to have a prototype soon. Otherwise I think your approach is very interesting - I mostly worry about the complexity of adding anyref to LLVM and to C. But you may know more than me about the difficulty there. Thanks, I see now. Yeah, agreed we should share as much code in a central place as possible! |
I agree that binaryen can claw a fair amount of the performance back, if the rooting-table management happens on the wasm side and not a JS wrapper. Many uses of anyref will be ephemeral and will never need to be rooted, and binaryen should be able to find that out. I look forward to seeing the WeakRef work :) Sounds interesting. Many manual solutions with WeakRef can work, but a robust general mechanism wasn't apparent to my ignorant eyes. |
We'll probably only really know once there's usage in the field, but I'm curious: what's the reasoning leading to this assumption? |
It seems to me the design of I guess we really need to turn this upside down and first ask: what does it look like for a C++ or Rust program to have general collectable cycles with a JS program (again, other languages may be involved, but this is a prototypical example). Once we agree there is a solid implementation possible there, we can see which parts of toolchain and engine should represent these these references, and how. This of course becomes more interesting/complicated if the C++/Rust program actually contains an implementation for a language that does GC or other memory management in linear memory, which is something me & @kripken have been looking at. |
Not sure if it was designed that way. IIRC it originated from the GC proposal, as a Top type for the whole type hierarchy. We then split it out early, not because linear memory-based languages would be able to use it easily and naturally, but so linear memory languages would be able to reason about external references at all. So even if C++/Rust can't make natural use of it, anyref still makes sense N years from now with the full GC proposal, in the same way anyfunc isn't deprecated by typed funcrefs. |
That covers our Erlang/Elixir implementation too - we're using Rust to do the implementation and some LLVM IR directly, but we're doing our own GC on top, so we don't get |
I know this is a thread about LLVM, but @aardappel seems to be questioning the value of the anyref feature at all. Allow me to chime in with a defense and a use case :) It is perfectly possible to fix the cycle problem now with anyref. You simply represent the parts of your data that should be garbage-collected using garbage-collected memory from the host, and allow the host to GC. The Schism compiler does this and I am confident that it will never leak memory, even in the presence of cycles. Currently this solution is less than optimal given that anyref values are opaque and so you need to call out to the runtime for field access, constructors, and type predicates, but that will be fixed in the mid-term with the GC proposal, which grants you access to all these from wasm. I think it's going to be the long-term solution for all languages that need garbage collection. If in the short-term, we can't get LLVM to have this property -- I think we agree there -- then a rooting table is a perfectly good stopgap for the general case, for C++ or Rust. But a table of handles is not a great plan for languages with GC, whether they use LLVM or not. |
It's less of an assumption and more of: this is what we think we can actually solve ;) That is, I don't think C++/Rust can have optimal cycle collection with JS (without radical work). But we do think there may be use cases with few cross-VM links and where it is important to not let cycles accumulate infinitely, even if they are collected slowly. Interesting! How does that work, though - how does a schism object refer to a JS object, and vice versa? Are schism objects actually on the JS side, with maybe only their linear memory data inside wasm, something like that? |
@kripken ah, thanks for the clarification. I certainly agree with that characterization :) |
No, I am simply suggesting a different way of evaluating it. If it turns out that in practice we a) can't rely on LLVM to understand/propagate these values and b) need indices to them anyway, then their value is certainly reduced. I am not saying that that is the case, merely that we should find out.
If you're suggesting that a highly tuned strongly typed linear memory GC should simply defer all its GC work to the host by copying things into JS objects, I don't think this is an acceptable solution. Schism effectively doesn't do its own memory management. That's not solving any problem, that is moving it (outside of Wasm). Schism is also dynamically typed, so it is not a bad fit for JS. In fact, Schism would probably run significantly faster if it simply compiled itself entirely to JS instead of Wasm.
Yes, I can't see how that would be remotely acceptable for any language that cares about performance. Can you imagine how a large performance sensitive Java/C#/Go/Kotlin/.. program would run when compiled this way?
The time where every Wasm implementation on every users machine comes with built-in GC is potentially still very far away, and even once we have it, there will still be languages that may choose to do their own GC, because what they do simply differs to much from what a generic GC can offer, or how it can interact with other linear memory data and runtime code. We need anyref & cycles to work efficiently with that scenario for the foreseeable future. If we don't solve that problem, what you'll get by default is just tons of languages doing linear memory GC and where cycles just become a user (of the language) problem. Saying "wait until host GC is available or until then just use JS objects" is not going to work. |
I've taken a crack at starting support in LLVM here: https://reviews.llvm.org/D66035. I haven't been table to test this fully yet due to bugs elsewhere, but I figured having the WIP may be useful to somebody. |
@kripken -- Schism uses JS allocations to represent its objects. It has to call out to the runtime to allocate, then the runtime returns an anyref. Sadly field access and type predicates are also via the run-time, for the time being: calls to functions taking anyref. Currently Schism actually doesn't have linear memory any more, though it's probably coming back later. @aardappel -- We seem to be talking past each other; I don't know how this became an argument. Perhaps I miscommunicated. A note first on Schism to clear up some misunderstandings, then a note on other languages. Schism used to implement a simple semi-space GC in linear memory. It performed well but had some drawbacks. One problem was that we couldn't know within wasm when to GC, because it didn't maintain a shadow stack of live values. Schism also had the problem that we couldn't interop with JS in any sensible way. When Schism switched to anyref, we got the following benefits:
Performance is suboptimal, for the time being. But we see it as a temporary condition, that things will be optimal when more pieces of the GC proposal land. For a microbenchmark of raw throughput of allocation of short-lived small objects, I measure the performance diff relative to a production Scheme implementation to be on the order of 3x slower. It is acceptable for Schism's use case.
Though indeed some languages will make one of these choices, I didn't say this, and these aren't the only two options. I have colleagues that are using weakrefs manually to solve this problem for one use-case. You seem to have another approach that looks promising; great! What I would say is that I don't think it's possible for a GC implemented in terms of linear memory to have the same performance as the GC proposal in terms of mutator utilization, raw allocation throughput, peak memory use, or pause times. The host GC has too many advantages: parallel marking, ability to find roots from the stack without a shadow stack, a global vision of memory use, etc. GC on linear memory can be a good solution in the short and perhaps mid term, but I think in the long term, few languages will find it useful, and it's useful to keep the long term in mind (without blocking work in the meantime of course). |
@wingo I am not arguing there's anything wrong with Schism, I am sure for what it does it works great. I am arguing that that strategy however won't translate well to other languages that may need GC. And yes, the GC proposal may end up being the most efficient GC for manu languages. But a) we don't have it yet and b) it one size fits all GC model invariably won't fit all languages. |
We recently implemented partial Cheerp is a C++ to JS/Wasm compiler. Unlike Emscripten/upstream clang, it supports both a linear memory mode (used for Wasm) and an object memory model (used to compile to JS). We can make use of I wrote a blog post about it here. Maybe our approach could be interesting for other languages too. |
I don't think we have a full plan for anyref yet. One issue is how to implement it in LLVM - do we need a new LLVM IR type? There are also questions about how source code for using it would be written in source langues like C, C++, and Rust. Opening this issue for more discussion on this topic.
The use case I'm most familiar with is the glue code in emscripten, like the WebGL glue: Compiled C does a
glDrawArrays
or other GL call, which goes into the JS glue which holds on to WebGL JS objects like the context, textures, etc., and it does the WebGL call using those, after mapping the C texture index (an integer) into the JS object, etc. In that use case, I don't think we have immediate plans to use anyref - wasm+anyref can't do all the stuff the current JS glue does (like, say, subarray-ing a Typed Array).But for glue code that could be done in wasm (which eventually should be all of it, but that may take a while), I'm not sure we necessarily need clang and LLVM support. It would be nice, but if it's hard, another option might be to write such code in AssemblyScript or another close-to-wasm language. It's easy and natural to express anyrefs there. Then that would be compiled to wasm and linked to the LLVM output.
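Going back to the WebGL glue: for reference, a small sketch of the C/C++-side view using standard GLES2 calls (the function itself is just an illustration). The compiled code only ever handles integer ids; the JS glue owns the actual WebGL objects and looks them up by id when the call crosses into JS.

```cpp
#include <GLES2/gl2.h>

// The compiled side sees only integer handles (GLuint ids); the JS glue
// keeps the real WebGLTexture/WebGLBuffer objects in side tables and maps
// each id to its object before making the WebGL call.
void draw_textured_triangle(GLuint texture_id, GLuint vertex_buffer_id) {
  glBindTexture(GL_TEXTURE_2D, texture_id);        // id -> JS object in glue
  glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer_id); // id -> JS object in glue
  glDrawArrays(GL_TRIANGLES, /*first=*/0, /*count=*/3);
}
```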
Curious to hear of more use cases, and whether there is a more immediate goal for using anyrefs in emscripten, LLVM, clang, etc. (for binaryen and wabt, there is the obvious immediate goal of having full anyref support).
cc @Keno @wingo @dcodeIO @aardappel