fix: remove node buffers #125
Conversation
Replaces node buffers with browser-friendly `Uint8Array`s.

Previously we didn't want to do this because there's no browser equivalent to node's `Buffer.allocUnsafe`, which skips zero-filling newly allocated buffers. Since we only actually use that in one place, I've isolated the use of `Buffer.allocUnsafe` to the `alloc-unsafe.ts` file and used the `browser` field in `package.json` to override it for browser builds.

Running the benchmark suite in this module shows the performance is comparable to or even slightly better than master (I think due to not having to convert `Uint8Array`s to `Buffer`s any more):

Before:

```console
$ node benchmark.js
handshake x 59.95 ops/sec ±11.20% (75 runs sampled)
handshake x 54.68 ops/sec ±10.81% (68 runs sampled)
handshake x 50.42 ops/sec ±11.55% (65 runs sampled)
handshake x 53.41 ops/sec ±11.84% (68 runs sampled)
handshake x 50.25 ops/sec ±11.80% (66 runs sampled)
```

After:

```console
$ node ./benchmark.js
Initializing handshake benchmark
Init complete, running benchmark
handshake x 61.48 ops/sec ±11.71% (76 runs sampled)
handshake x 59.43 ops/sec ±11.13% (73 runs sampled)
handshake x 56.09 ops/sec ±12.02% (71 runs sampled)
handshake x 60.05 ops/sec ±11.69% (74 runs sampled)
handshake x 59.66 ops/sec ±10.59% (74 runs sampled)
```
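A minimal sketch of the approach described above: the node build keeps the fast unsafe allocation in one file, and the `browser` field swaps in a zero-filling version. The browser file name and the `package.json` mapping shown in the comment are assumptions; only `alloc-unsafe.ts` and the use of the `browser` field come from the PR description.

```typescript
// alloc-unsafe.ts — node build (sketch).
// Buffer.allocUnsafe skips zero-filling, which is faster but returns
// uninitialized memory, so callers must overwrite every byte before use.
export function allocUnsafe (len: number): Uint8Array {
  return Buffer.allocUnsafe(len)
}

// alloc-unsafe-browser.ts — browser build (hypothetical file name).
// Browsers have no allocUnsafe equivalent; new Uint8Array is always zero-filled.
export function allocUnsafeBrowser (len: number): Uint8Array {
  return new Uint8Array(len)
}

// package.json then maps one module to the other for bundlers
// (hypothetical shape):
// {
//   "browser": {
//     "./dist/src/alloc-unsafe.js": "./dist/src/alloc-unsafe-browser.js"
//   }
// }
```

Because `Buffer` is a subclass of `Uint8Array`, both functions satisfy the same `Uint8Array` return type, so callers don't need to care which build they got.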
Codecov Report
```diff
@@            Coverage Diff             @@
##           master     #125      +/-   ##
==========================================
- Coverage   89.13%   88.60%   -0.53%
==========================================
  Files          16       16
  Lines        1831     1826       -5
  Branches      243      246       +3
==========================================
- Hits         1632     1618      -14
- Misses        199      208       +9
```
Continue to review full report at Codecov.
@achingbrain I've been benchmarking the memory usage of `Uint8Array` vs `Buffer`, and `Uint8Array`s take double the memory to represent the same data. See the benchmark results here: https://github.com/ChainSafe/ssz/pull/219/files#diff-afd2490ed095916808da2e4d8c22e4c7ebc2062c020c722574214fdb908ef640. noise shouldn't persist too much data at once, but it's an important consideration to be aware of.
That's interesting. The percentage memory difference between
As long as these things are getting garbage collected it shouldn't be a showstopper though?
LGTM
👍 That's my thought on it. It would be reassuring to eventually find the root cause of the `Buffer`/`Uint8Array` discrepancy though 😅
@achingbrain Sorry, I forgot to come back to this.
I did a few heap snapshots and I can't see noise retaining any substantial amount of binary data, so we're good on this front 👍
The `value` and `offset` args to `Buffer.writeUInt32LE` vs `DataView.setUint32` are the other way round. Fixes a bug introduced in ChainSafe#125 where we were using the nonce value as an offset and writing `4` rather than writing the nonce value at offset `4`.
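A small sketch of the argument-order difference behind that bug (the 12-byte buffer and nonce value here are illustrative, not taken from the noise codebase). `Buffer.writeUInt32LE` takes `(value, offset)`, while `DataView.setUint32` takes `(byteOffset, value, littleEndian)`:

```javascript
const nonce = 7

// Node Buffer API: value first, then offset.
const buf = Buffer.alloc(12)
buf.writeUInt32LE(nonce, 4) // writes the nonce value at byte offset 4

// DataView API: byte offset first, then value, then endianness flag.
const bytes = new Uint8Array(12)
const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength)
// The bug: view.setUint32(nonce, 4, true) would write the value 4 at offset 7.
view.setUint32(4, nonce, true) // correct: offset 4, value 7, little-endian

console.log(buf.readUInt32LE(4))    // 7
console.log(view.getUint32(4, true)) // 7
```

Mechanically swapping `writeUInt32LE(value, offset)` for `setUint32(value, offset, true)` type-checks and runs, which is why the bug was easy to miss.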
Replaces node buffers with browser-friendly `Uint8Array`s.

Previously we didn't want to do this because there's no equivalent to node's `Buffer.allocUnsafe`, which skips zero-filling newly allocated buffers. Since we only actually use that in one place, I've detected the existence of `Buffer.allocUnsafe` before using it, otherwise falling back to `new Uint8Array`, which is what the node `Buffer` polyfill does anyway.

Running the benchmark suite in this module shows the performance is comparable to or even slightly better than master (I think due to not having to convert `Uint8Array`s to `Buffer`s any more, though please don't quote me on that 😆):

Before:

After:
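The runtime feature detection described above might look like the following sketch. The `globalThis.Buffer` check is an assumption about how the detection could be written, not the project's actual code:

```typescript
// Prefer Buffer.allocUnsafe when a Buffer implementation is present
// (node, or a polyfill that provides it), otherwise fall back to a
// zero-filled Uint8Array — the same thing the node Buffer polyfill does.
export function allocUnsafe (len: number): Uint8Array {
  const B = (globalThis as any).Buffer
  if (B != null && typeof B.allocUnsafe === 'function') {
    return B.allocUnsafe(len)
  }
  return new Uint8Array(len)
}
```

Checking once per call is cheap, but the lookup could equally be hoisted to module load time since the environment doesn't change at runtime.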