Performance studies #8
I ran Valgrind on bipf-native.
[Comparison table: bipf-native vs bipf, with notes]
What I think is going on is that …
@jerive I think we should try implementing encodingLength and encode using V8 APIs directly and check what the performance looks like. We have no use for bipf-native if it's slow.
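For context, a minimal sketch of what "encodingLength using V8 APIs directly" could look like as a node-gyp addon built against node.h and the raw V8 headers. Everything here is illustrative: the bipf_sketch namespace, the module wiring, and the simplified length rule (UTF-8 strings plus a fixed 8-byte slot for numbers, with the real BIPF tag/varint prefix omitted) are assumptions, not the actual bipf-native code.

```cpp
// encoding_length.cc -- sketch of a V8-API (non-N-API) binding for encodingLength.
// Assumptions: node-gyp addon; only strings and numbers are handled, and the
// real BIPF tag/varint length prefix is left out for brevity.
#include <cstdint>
#include <node.h>
#include <v8.h>

namespace bipf_sketch {

void EncodingLength(const v8::FunctionCallbackInfo<v8::Value>& args) {
  v8::Isolate* isolate = args.GetIsolate();
  int32_t length = 0;
  if (args.Length() >= 1) {
    v8::Local<v8::Value> value = args[0];
    if (value->IsString()) {
      // UTF-8 byte length of the string payload (prefix bytes omitted here).
      length = value.As<v8::String>()->Utf8Length(isolate);
    } else if (value->IsNumber()) {
      // Stand-in: treat every number as an 8-byte slot.
      length = 8;
    }
  }
  args.GetReturnValue().Set(length);
}

void Initialize(v8::Local<v8::Object> exports) {
  NODE_SET_METHOD(exports, "encodingLength", EncodingLength);
}

NODE_MODULE(NODE_GYP_MODULE_NAME, Initialize)

}  // namespace bipf_sketch
```

Benchmarked against the pure-JS encodingLength, this would give the apples-to-apples comparison the thread is after.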
Sure, that's what I was afraid of.
I will certainly need support here. v8.h pulls in the C++ standard library headers:

// v8.h
#include <atomic>
#include <memory>
#include <string>
#include <type_traits>
#include <utility>
#include <vector>

It should be supported (ziglang/zig#4786), but I have no clue how.
I will give it a try too, tomorrow.
On Ubuntu, the header files are in the package libc++-11-dev.
Can you try …
See this issue in Node.js: nodejs/node#14379 (comment). Seems like the performance drawbacks of N-API were acknowledged, but "compared the results to the original module that used V8 APIs ... overhead of N-API is fairly minimal already". I'm concerned about what that "minimal overhead" actually means. If converting bipf-native to V8 APIs would only speed it up by 2x compared to N-API bipf-native, that's not good enough. We would need a 20x speedup to make a V8-based bipf-native at least 2x faster than bipf (which implies the N-API version is currently roughly 10x slower than plain-JS bipf).
I'll bet you it's the overhead of the JS-native API, especially since you are doing many tiny calls. I would be interested to see how fast it went if you ran the benchmark inside Zig; you wouldn't be able to do encode, but it would do seek, etc. I'd also be curious how it went with wasm. However, you'd have to copy each record into wasm to be able to run bipf on it. What would be better is to copy the whole block into wasm memory and then scan many bipf records without leaving wasm. However, in practical terms that means reimplementing flume/jitdb in wasm.
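As an illustration of the "scan many records without leaving native code" idea above, here is a hedged sketch in C++ against the raw V8 buffer APIs (the same shape would apply inside wasm). The 4-byte little-endian length prefix, the countRecords name, and the module wiring are hypothetical; the real flumelog/BIPF framing and field decoding are not reproduced.

```cpp
// scan_block.cc -- sketch of "cross the boundary once, scan many records".
// Assumptions: node-gyp addon using raw V8 APIs; records in the block are
// framed by a hypothetical 4-byte little-endian length prefix.
#include <cstdint>
#include <cstring>
#include <memory>
#include <node.h>
#include <v8.h>

namespace bipf_sketch {

// Takes one Uint8Array (a whole log block) and counts the records in it,
// without calling back into JS per record.
void CountRecords(const v8::FunctionCallbackInfo<v8::Value>& args) {
  if (args.Length() < 1 || !args[0]->IsUint8Array()) {
    args.GetReturnValue().Set(-1);
    return;
  }
  v8::Local<v8::Uint8Array> view = args[0].As<v8::Uint8Array>();
  std::shared_ptr<v8::BackingStore> store = view->Buffer()->GetBackingStore();
  const uint8_t* data =
      static_cast<const uint8_t*>(store->Data()) + view->ByteOffset();
  size_t remaining = view->ByteLength();

  int32_t count = 0;
  while (remaining >= 4) {
    uint32_t record_len;
    std::memcpy(&record_len, data, 4);  // hypothetical length prefix
    if (record_len == 0 || remaining < 4 + record_len) break;
    // A real version would seek/decode BIPF fields here, still in native code.
    data += 4 + record_len;
    remaining -= 4 + record_len;
    ++count;
  }
  args.GetReturnValue().Set(count);
}

void Initialize(v8::Local<v8::Object> exports) {
  NODE_SET_METHOD(exports, "countRecords", CountRecords);
}

NODE_MODULE(NODE_GYP_MODULE_NAME, Initialize)

}  // namespace bipf_sketch
```

The point is the call pattern: one JS-to-native transition per block instead of one per seek or per record, which is exactly where the "many tiny calls" overhead disappears.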
Yes, @dominictarr, it smells like a bridging overhead indeed. What I'm wondering is why there is a copy in the first place; there's no FFI. How can JSON.stringify/parse operate without copying, and if we have access to V8 APIs, couldn't we get as close as JSON does?
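For reference, V8 does expose JSON.parse and JSON.stringify as first-class C++ entry points (v8::JSON::Parse and v8::JSON::Stringify) that operate directly on V8 heap objects, which is part of why JSON never pays a per-value addon-boundary cost. A minimal sketch, with hypothetical function and module names:

```cpp
// json_roundtrip.cc -- sketch showing V8's built-in JSON entry points.
// The point: JSON.parse/stringify live inside V8 itself and work directly
// on V8 heap objects, so there is no addon boundary to cross per value.
#include <node.h>
#include <v8.h>

namespace bipf_sketch {

// Round-trips a value through V8's own JSON implementation.
void JsonRoundtrip(const v8::FunctionCallbackInfo<v8::Value>& args) {
  v8::Isolate* isolate = args.GetIsolate();
  v8::Local<v8::Context> context = isolate->GetCurrentContext();

  v8::Local<v8::String> json;
  if (!v8::JSON::Stringify(context, args[0]).ToLocal(&json)) return;

  v8::Local<v8::Value> parsed;
  if (!v8::JSON::Parse(context, json).ToLocal(&parsed)) return;

  args.GetReturnValue().Set(parsed);
}

void Initialize(v8::Local<v8::Object> exports) {
  NODE_SET_METHOD(exports, "jsonRoundtrip", JsonRoundtrip);
}

NODE_MODULE(NODE_GYP_MODULE_NAME, Initialize)

}  // namespace bipf_sketch
```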
Yeah, so we tried to implement BIPF in C++, first using N-API and then V8 APIs, and it's a bit hopeless: jerive/bipf-napi#1. The TL;DR is that the problem isn't that the bridge between C++ and JS is slow; the problem is that pure JS code is usually optimized just-in-time to machine code, and that's highly efficient. In our benchmarks in the linked issue, the bottleneck was literally just …
As of commit bb5b98f built with ReleaseSmall: …
cc @jerive