Suggestion: Look into porting to Rust and using Tokio #205
One blocker is that Deno uses Protobufs for communication, and they are not officially supported in Rust, though there are third-party libraries. I think Rust would be a great fit; it seems Mozilla is putting a lot of work into WebAssembly 😄
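(For anyone curious what the third-party libraries have to implement: Protobuf's wire format is small enough to sketch by hand. Below is an illustrative, stdlib-only Rust sketch of its base-128 varint encoding, the building block of that format. The function names are mine, not from any existing crate.)

```rust
// Sketch of Protobuf's base-128 varint encoding. Illustrative only;
// names are not from any protobuf crate.

fn encode_varint(mut value: u64, out: &mut Vec<u8>) {
    // Emit 7 bits per byte, setting the high bit while more bytes follow.
    while value >= 0x80 {
        out.push((value as u8 & 0x7f) | 0x80);
        value >>= 7;
    }
    out.push(value as u8);
}

fn decode_varint(bytes: &[u8]) -> Option<(u64, usize)> {
    let mut result: u64 = 0;
    for (i, &b) in bytes.iter().enumerate() {
        result |= ((b & 0x7f) as u64) << (7 * i);
        if b & 0x80 == 0 {
            return Some((result, i + 1)); // decoded value and bytes consumed
        }
    }
    None // input ended in the middle of a varint
}

fn main() {
    let mut buf = Vec::new();
    encode_varint(300, &mut buf);
    // 300 encodes as [0xAC, 0x02]: the canonical example from the encoding docs.
    assert_eq!(buf, vec![0xAC, 0x02]);
    assert_eq!(decode_varint(&buf), Some((300, 2)));
    println!("ok");
}
```

The point is only that the wire format itself is simple; the hard part a library adds is schema parsing and code generation.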
I am a huge Rust fan, and I agree with the advantages you listed. But I don't think now is the best time for deno to invest in Futures/Tokio, considering:
These are not deal breakers, and the Rust core team is resolving them quickly. It's just not sensible to add these complexities to deno now, which is itself an experimental project.
I've been playing around with Rust and wrote this wrapper of
The biggest problem is not going to be making the native part of deno fast - there are tons of fast native servers. What will determine everything is how lightweight the JS wrapper of this server is. Clearly Node.js completely failed at this, even though its native components are not all that slow. A fast native server pushing 5 million pipelined hello world HTTP requests per second can be completely butchered by an inefficient JS wrapper, capping out at only a few thousand req/sec (see Node.js), while a well-thought-out wrapper with a minimal amount of dynamic JS resources per request may very well keep up to 20% of the native performance, landing at 1 million req/sec from within JS (see "uws" for Node.js and Japronto for Python). The JS wrapping is by far the most perf. sensitive component.
What I would like to see as a highly prioritized item on the agenda is a benchmark of Deno as is, against Golang. Because if Deno cannot properly retain even the perf. of the built-in Golang HTTP module, then it will make no difference at all to spend a lot of time rewriting the native parts in Rust. We really need to see a benchmark of Deno in action as soon as possible. This should be priority nr. 1 before every other decision. Otherwise we end up with the same story as Node.js: an overly optimized HTTP parser written in C that does 50 billion req/sec on paper but, when put into Node.js, caps out at barely 18k req/sec.
@alexhultman I've been running a few benchmarks on
@matiasinsaurralde Your PR displays exactly what I feared: Deno will end up clogging up at the JS wrapper, performing with horrible throughput. It's not even beating Node.js; swapping to Tokio is completely pointless until that wrapper is fixed. The entire Deno project is pointless if it's just going to be a worse-performing copy of Node.js.
@alexhultman Removing Go is to avoid a double GC - which I think everyone can appreciate. And we're experimenting with interfaces - first-pass implementations like the HTTP patch let us test. No one is claiming any release or interface commitment.
In case anyone is interested in hacking around, I've pushed a few tweaks for rust-v8worker2 and a limited implementation of Deno code in Rust here (currently you can boot up the runtime and execute a program, only
How do the HTTP benchmarks look when using reno? @matiasinsaurralde |
@brandonros I haven't implemented the HTTP module yet, there are many options for this. I've been doing some quick benchmarks with this system call
The main advantage of Rust in this scenario is that we can interact directly with the C data structures, etc.
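To illustrate that point, here is a minimal sketch of what "interacting directly with C data structures" looks like in Rust: a `#[repr(C)]` struct C/C++ code could share with Rust by pointer, with no serialization in between. The struct and function here are hypothetical, not Deno's actual message layout.

```rust
// Illustrative only: a C-compatible struct that Rust can read in place.
// Field names and layout are hypothetical, not from Deno.

#[repr(C)]
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct Message {
    pub kind: u32,
    pub len: u32,
}

// A function with C linkage that C/C++ code could call directly,
// passing a pointer to an array of Messages it owns.
#[no_mangle]
pub extern "C" fn message_total_len(msgs: *const Message, count: usize) -> u64 {
    // Safety: the caller must pass a valid pointer to `count` Messages.
    let slice = unsafe { std::slice::from_raw_parts(msgs, count) };
    slice.iter().map(|m| m.len as u64).sum()
}

fn main() {
    // Exercised from Rust here for demonstration; the same call works from C.
    let msgs = [Message { kind: 1, len: 10 }, Message { kind: 2, len: 32 }];
    assert_eq!(message_total_len(msgs.as_ptr(), msgs.len()), 42);
    println!("total: 42");
}
```

No copy or decode step happens between the two languages, which is the advantage being described.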
@matiasinsaurralde If you had to estimate, do you have faith that switching to Cap'n Proto will bring the elapsed time down from 140, close to Node's 77? For perspective, it might make sense to show how long the pure-C or pure-Rust version of the same benchmark would take. Much like @alexhultman, I'm struggling to see the benefit of running a message-passing layer on top of V8, given the performance. How does Node.js currently solve this issue?
For HTTP Deno would currently be about 13x off. |
NPM encountered something similar during a reliability-oriented rewrite of one of their components, and wound up discovering that a surprising amount of the difference came down to how much optimization effort had been put into Node.js in areas whose impact you'd underestimate, like default buffer sizes.
@matiasinsaurralde Can you tag me on any capnp-ts issues you're running into? I can't make any promises, but I'm trying to make some time to play with it myself soon, and I hope to help fix issues while I'm at it. (I created Cap'n Proto and the C++ reference implementation, but not capnp-ts.)
I just wanted to insert a few points. It's easy to "dislike" comments that go against one's gut feeling and fanaticism. However, disliking someone's comment does not change reality; refusing to accept reality for what it is doesn't either. You can't pull the plug on reality, and reality - as in, what's measurable with scientific methods - shows a very clear picture. If a project stems from the acknowledgement of prior mistakes in Node.js but still clings to it as the only source of reference, then you're not going to get any further than a few baby steps past it. Crystal, Golang, Rust, Swift - they all have servers that outperform Node.js many times over. You really need to stop comparing with Node.js and raise the stakes if you want to get anywhere real. I have actual code running in reality at this very moment doing about 6x of Node.js, inside of Node.js via V8 native addons. It properly communicates with JavaScript callbacks and methods, and has proper URL routing and SSL. It does about 1.5x of Crystal and Golang's fasthttp. My point is that solely comparing with, and settling for, Node.js is only going to lead you to yet another Node.js. Clearly it is possible to raise performance up to Golang levels from inside V8, so just do it already. Comparing with Node.js and accepting even 4/5th of its perf. is simply unacceptable. Can we please start comparing with Golang's fasthttp or similar?
@kentonv Sure, I will continue my experiments this week and will be able to report the detailed issues on the repo. Thanks.
Link to said code? How does it compare to Rust?
You always pay a quite hefty (but still somewhat acceptable) fine for calling JS functions from C++. So of course it is impossible to measure up all the way to a native-only implementation. I lose about 30% req/sec when running inside of Node.js compared to a stand-alone C++-only build. Still, 70% of a good C++ implementation is enough to measure up to the fastest available for Golang.
@alexhultman I want to learn Rust. Let's work together on what you think this project should be. I've tried e-mailing you, but you never wrote back.
I don't know a single line of Rust though, so I can't help.
I've been watching this project since Ryan gave the talk, and have been wanting to contribute. I'm fairly familiar with Rust, but not with V8, Cap'n Proto, or protobufs (I get the concept though). Is someone taking the helm on the Rust experiment and needs help?
I'm currently researching what C++ code is required to hook a function into JavaScript. So far, I pieced this together from things I found online:
Once I figure out the second function... I'm guessing we could use https://github.com/alexcrichton/rust-ffi-examples/tree/master/cpp-to-rust I obviously don't know half as much as other people in this thread, but I was thinking about something simple. Send a message from Javascript to C++, get a response back. Kind of like... pubsub? For... network connections and filesystem operations? I'm sure that'll go great.......... |
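Following the cpp-to-rust pattern in that rust-ffi-examples repo, the Rust half of such a send-a-message-get-a-response bridge could be sketched as below. Everything here is hypothetical - the function names, the `ack:` echo behavior - it just shows the `extern "C"` shape a C++ caller would bind against.

```rust
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

// Hypothetical Rust side of a C++ -> Rust message round-trip. C++ passes a
// NUL-terminated message; we return a heap-allocated response that the
// caller must release by handing it back to `response_free`.

#[no_mangle]
pub extern "C" fn handle_message(msg: *const c_char) -> *mut c_char {
    // Safety: the caller guarantees `msg` is a valid NUL-terminated string.
    let input = unsafe { CStr::from_ptr(msg) }.to_string_lossy();
    let reply = format!("ack:{}", input);
    // `into_raw` transfers ownership of the buffer across the FFI boundary.
    CString::new(reply).unwrap().into_raw()
}

#[no_mangle]
pub extern "C" fn response_free(ptr: *mut c_char) {
    if !ptr.is_null() {
        // Retake ownership so the CString is dropped and its memory freed.
        unsafe { drop(CString::from_raw(ptr)) };
    }
}

fn main() {
    // Exercise the FFI surface from Rust itself for demonstration.
    let msg = CString::new("readFile").unwrap();
    let reply_ptr = handle_message(msg.as_ptr());
    let reply = unsafe { CStr::from_ptr(reply_ptr) }
        .to_string_lossy()
        .into_owned();
    assert_eq!(reply, "ack:readFile");
    response_free(reply_ptr);
    println!("{}", reply);
}
```

Compiled as a `staticlib`/`cdylib`, these two functions are exactly what a C++ header would declare and call.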
@brandonros Looking at the roadmap, it seems like the portion that will be interacting with the VM, called libdeno, will be written in C++, which will expose a C API so that Rust can bind to it. The beginnings of it are here: https://github.com/ry/deno/blob/master/src/main.rs As for the messaging protocol you mentioned, they already decided to use protobuf (Cap'n Proto in the future?) and pub/sub.
I would be careful about hello world benchmarks; they're notoriously... useless lol. Go seems to have a fair bit of room for improvement, at least as far as performance is concerned (there's always some new idea bouncing around their GitHub), but the GC is probably always going to be a problem lurking round the corner. There are a couple of proposals, including a per-CPU sharded value proposal, which might also help with performance in some areas, but I'm not sure how much momentum it has behind it. And there are ideas being bounced around to drastically reduce allocations in Go 2 (e.g. the conversions between []byte and string, which can sometimes be costly), although I can't really say much on that, as the details seem a little vague and I'm not sure when it may be released. All in all though, Rust was practically built for speed, while Go focuses more on productivity, so Go is probably always going to be slower; there is only so much people can do. It really depends on what you're going for.
What is a "hello world benchmark" to you? If we're talking about HTTP pipelining of the string "hello world", then I can easily outperform Node.js by 80x (thousands of times over when counting ExpressJS). That's unlikely to ever be a useful number though. The 6x comes from a far more useful case and is not a "hello world" benchmark. Or maybe you suggest we all go write our servers in Ruby on Rails? Then what was the point of Node.js in the first place? What is the point of Deno with that opinion? Just go use Apache/PHP then.
Edited it, as I didn't articulate my thoughts as well as I should have; there are pros and cons to everything. That said, if you're on this side of the VM, then you're probably going for speed more than productivity; otherwise just write the thing in TypeScript lol. As for the performance bit, I was going off: https://www.techempower.com/benchmarks/#section=data-r16&hw=ph&test=json
Yeah, those 2 mil are complete nonsense. They are achieved using HTTP pipelining, which basically only benchmarks the parser/formatter. I get 5 million req/sec in my implementation when I do that.
Yes we will likely use Tokio. It's about to land in #434 |
Alex Crichton, from the Rust core team, gave a great presentation on Concurrency in Rust with Async I/O using Tokio at code::dive 2017. As soon as I watched your Deno talk, I thought of his. It is long, but I highly recommend watching it.
From his talk: Using a simple hello world benchmark, they achieved almost 2mil req/sec - nearly 250k more than the next best implementation and 500k more than Go's fasthttp. Out of the box, Tokio supports 'TCP, UDP, Unix sockets, Named pipes, processes, signals, http, http2, web sockets, ...'
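For context on what such a hello world benchmark actually exercises, here is a minimal sketch of the workload using only the Rust standard library: blocking I/O, one thread per connection. Tokio's event-driven runtime is what makes the numbers above possible; this only shows the request/response shape being measured.

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Minimal blocking "hello world" HTTP server: the workload these
// benchmarks measure. A Tokio version would multiplex many connections
// on a few threads instead of blocking one thread per connection.

fn serve_one(mut stream: TcpStream) {
    let mut buf = [0u8; 1024];
    let _ = stream.read(&mut buf); // ignore the request contents entirely
    let body = "hello world";
    let resp = format!(
        "HTTP/1.1 200 OK\r\nContent-Length: {}\r\nConnection: close\r\n\r\n{}",
        body.len(),
        body
    );
    let _ = stream.write_all(resp.as_bytes());
}

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:0")?; // OS-assigned port
    let addr = listener.local_addr()?;

    // Accept and serve requests in the background.
    thread::spawn(move || {
        for stream in listener.incoming().flatten() {
            thread::spawn(move || serve_one(stream));
        }
    });

    // Make one request against it to show the round-trip.
    let mut client = TcpStream::connect(addr)?;
    client.write_all(b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n")?;
    let mut response = String::new();
    client.read_to_string(&mut response)?; // server closes, so this hits EOF
    assert!(response.ends_with("hello world"));
    println!("{}", response.lines().next().unwrap());
    Ok(())
}
```

Since the handler does no real work, a benchmark of this server measures almost nothing but parsing, formatting, and the runtime's connection handling - which is the point being debated above.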
Some observations (Some may be wrong and definitely missing some 😃):
Thanks for the cool, new project and your time
Edit: Forgot to include no GC