fix: remove node buffers #125

Merged

wemeetagain merged 5 commits into ChainSafe:master from fix/remove-node-buffers on Jan 6, 2022

Conversation

@achingbrain (Collaborator) commented Jan 3, 2022

Replaces node buffers with browser-friendly `Uint8Array`s.

Previously we didn't want to do this because there's no equivalent to node's `Buffer.allocUnsafe`, which doesn't set all entries of allocated buffers to 0. The thing is, we only actually use that in one place, so to address this I've detected the existence of `Buffer.allocUnsafe` before using it, otherwise falling back to `new Uint8Array`, which is what the node `Buffer` polyfill does anyway.
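A minimal sketch of that detection, assuming a small helper function (names are illustrative, not necessarily the repo's actual code):

```ts
// Prefer Buffer.allocUnsafe where it exists (node), otherwise fall back to a
// zero-filled Uint8Array, which is what the node Buffer polyfill does anyway.
export function allocUnsafe (len: number): Uint8Array {
  const B = (globalThis as any).Buffer

  if (B != null && typeof B.allocUnsafe === 'function') {
    return B.allocUnsafe(len) // uninitialised memory, cheaper to allocate
  }

  return new Uint8Array(len) // always zero-filled
}
```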

Running the benchmark suite in this module shows the performance is comparable to or even slightly better than master (I think due to not having to convert `Uint8Array`s to `Buffer`s any more, though please don't quote me on that 😆):

Before:

$ node benchmark.js
Initializing handshake benchmark
Init complete, running benchmark
handshake x 59.95 ops/sec ±11.20% (75 runs sampled)
handshake x 54.68 ops/sec ±10.81% (68 runs sampled)
handshake x 50.42 ops/sec ±11.55% (65 runs sampled)
handshake x 53.41 ops/sec ±11.84% (68 runs sampled)
handshake x 50.25 ops/sec ±11.80% (66 runs sampled)

After:

$ node ./benchmark.js
Initializing handshake benchmark
Init complete, running benchmark
handshake x 61.48 ops/sec ±11.71% (76 runs sampled)
handshake x 59.43 ops/sec ±11.13% (73 runs sampled)
handshake x 56.09 ops/sec ±12.02% (71 runs sampled)
handshake x 60.05 ops/sec ±11.69% (74 runs sampled)
handshake x 59.66 ops/sec ±10.59% (74 runs sampled)

Replaces node buffers with browser-friendly `Uint8Array`s.

Previously we didn't want to do this because there's no equivalent to node's `Buffer.allocUnsafe`, which doesn't set all entries of allocated buffers to 0. The thing is, we only actually use that in one place, so to address this I've isolated the use of `Buffer.allocUnsafe` to the `alloc-unsafe.ts` file and used the `browser` field in package.json to override it for browser use.
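A hedged sketch of what that layout can look like, two tiny modules swapped via the package.json `browser` field (paths and contents are illustrative, not the repo's exact code):

```ts
// alloc-unsafe.ts - used in node
export function allocUnsafe (len: number): Uint8Array {
  return Buffer.allocUnsafe(len)
}
```

```ts
// alloc-unsafe-browser.ts - substituted by bundlers via the package.json
// "browser" field, e.g.
//   "browser": { "./dist/src/alloc-unsafe.js": "./dist/src/alloc-unsafe-browser.js" }
export function allocUnsafe (len: number): Uint8Array {
  return new Uint8Array(len)
}
```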

Running the benchmark suite in this module shows the performance
is comparable to or even slightly better than master (I think due to not
having to convert `Uint8Array`s to `Buffer`s any more):

Before:

```console
$ node benchmark.js
handshake x 59.95 ops/sec ±11.20% (75 runs sampled)
handshake x 54.68 ops/sec ±10.81% (68 runs sampled)
handshake x 50.42 ops/sec ±11.55% (65 runs sampled)
handshake x 53.41 ops/sec ±11.84% (68 runs sampled)
handshake x 50.25 ops/sec ±11.80% (66 runs sampled)
```

After:

```console
$ node ./benchmark.js
Initializing handshake benchmark
Init complete, running benchmark
handshake x 61.48 ops/sec ±11.71% (76 runs sampled)
handshake x 59.43 ops/sec ±11.13% (73 runs sampled)
handshake x 56.09 ops/sec ±12.02% (71 runs sampled)
handshake x 60.05 ops/sec ±11.69% (74 runs sampled)
handshake x 59.66 ops/sec ±10.59% (74 runs sampled)
```
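For context, the `ops/sec` lines above are the output format of the `benchmark` library; a minimal sketch of how such a deferred suite is typically wired up (hypothetical, not the repo's actual benchmark.js):

```ts
import Benchmark from 'benchmark'

// Placeholder for one full noise handshake between an initiator and a responder.
async function runHandshake (): Promise<void> {}

new Benchmark.Suite()
  .add('handshake', {
    defer: true,
    fn: (deferred: any) => {
      runHandshake().then(() => deferred.resolve())
    }
  })
  .on('cycle', (event: any) => {
    // prints e.g. "handshake x 61.48 ops/sec ±11.71% (76 runs sampled)"
    console.log(String(event.target))
  })
  .run({ async: true })
```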
@codecov-commenter commented Jan 3, 2022

Codecov Report

Merging #125 (ee01e42) into master (61251ed) will decrease coverage by 0.52%.
The diff coverage is 85.71%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #125      +/-   ##
==========================================
- Coverage   89.13%   88.60%   -0.53%     
==========================================
  Files          16       16              
  Lines        1831     1826       -5     
  Branches      243      246       +3     
==========================================
- Hits         1632     1618      -14     
- Misses        199      208       +9     
| Impacted Files | Coverage Δ |
|---|---|
| src/noise.ts | 76.51% <50.00%> (-0.83%) ⬇️ |
| src/handshakes/abstract-handshake.ts | 92.51% <72.72%> (-1.64%) ⬇️ |
| src/handshake-xx-fallback.ts | 92.94% <75.00%> (-0.09%) ⬇️ |
| src/encoder.ts | 88.75% <76.00%> (-6.78%) ⬇️ |
| src/handshake-ik.ts | 92.30% <92.30%> (-0.05%) ⬇️ |
| src/crypto.ts | 97.67% <100.00%> (-0.29%) ⬇️ |
| src/errors.ts | 100.00% <100.00%> (ø) |
| src/handshake-xx.ts | 97.12% <100.00%> (-0.02%) ⬇️ |
| src/handshakes/ik.ts | 89.03% <100.00%> (-0.08%) ⬇️ |
| src/handshakes/xx.ts | 90.21% <100.00%> (-0.06%) ⬇️ |
| ... and 2 more | |

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 61251ed...ee01e42. Read the comment docs.

@dapplion (Contributor) commented Jan 3, 2022

@achingbrain I've been benchmarking the memory usage of `Uint8Array` vs `Buffer`, and `Uint8Array`s take double the memory to represent the same data. See the benchmark results here: https://github.com/ChainSafe/ssz/pull/219/files#diff-afd2490ed095916808da2e4d8c22e4c7ebc2062c020c722574214fdb908ef640

noise shouldn't persist too much data at once, but it's an important consideration to be aware of.

@achingbrain (Collaborator, Author) commented:

That's interesting. The percentage memory difference between `new Uint8Array(...)` and `Buffer.from(...)` seems to decrease as the size of the bytes increases, so it may be some sort of fixed overhead, which is weird as `Buffer` extends `Uint8Array`. Maybe worth opening an issue against node itself to get some clarification?

As long as these things are getting garbage collected it shouldn't be a showstopper though?
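A rough, hypothetical way to eyeball the per-instance overhead being discussed (RSS-based, so very coarse, and results depend on how V8/node pool and account for backing stores):

```ts
// Run with `node --expose-gc` so globalThis.gc is available.
const gc: (() => void) | undefined = (globalThis as any).gc

function bytesPerInstance (factory: () => Uint8Array, count: number): number {
  gc?.()
  const before = process.memoryUsage().rss
  const refs: Uint8Array[] = []

  for (let i = 0; i < count; i++) {
    refs.push(factory())
  }

  gc?.()
  const after = process.memoryUsage().rss

  return (after - before) / refs.length
}

console.log('Buffer    ', bytesPerInstance(() => Buffer.from(new Uint8Array(32)), 100_000), 'bytes/instance')
console.log('Uint8Array', bytesPerInstance(() => new Uint8Array(32), 100_000), 'bytes/instance')
```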

@wemeetagain (Member) left a review comment:

LGTM

@wemeetagain (Member) commented Jan 6, 2022

> noise shouldn't persist too much data at once but it's an important consideration to be aware of

> As long as these things are getting garbage collected it shouldn't be a showstopper though?

👍 That's my thought on it.

It would be reassuring to eventually find the root cause of the Buffer/Uint8Array discrepancy tho 😅

@dapplion (Contributor) commented Jan 6, 2022

@achingbrain Sorry, forgot to go back to this.

> As long as these things are getting garbage collected it shouldn't be a showstopper though?

I did a few heap snapshots and I can't see noise retaining any substantial amount of binary data. So good on this front 👍
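For anyone wanting to repeat that check, one way to capture a snapshot programmatically and inspect it in Chrome DevTools (a sketch; the inspection above may have been done differently):

```ts
import { writeHeapSnapshot } from 'v8'

// ...run some noise handshakes here, then dump the heap...
const file = writeHeapSnapshot() // defaults to a Heap-<timestamp>.heapsnapshot file in the cwd
console.log(`heap snapshot written to ${file}`)
```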

wemeetagain merged commit 01c490e into ChainSafe:master on Jan 6, 2022.
achingbrain deleted the fix/remove-node-buffers branch on January 8, 2022 at 08:03.
achingbrain added a commit to achingbrain/js-libp2p-noise that referenced this pull request on Jan 14, 2022:

The `value` and `offset` args to `Buffer.writeUInt32LE` vs
`DataView.setUint32` are the other way round.

Fixes a bug introduced in ChainSafe#125 where we were using the nonce value
as an offset and writing `4` rather than writing the nonce value
at offset `4`.
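To make the argument-order difference concrete (illustrative values): `buf.writeUInt32LE(value, offset)` vs `view.setUint32(byteOffset, value, littleEndian)`.

```ts
// Writing a nonce as a little-endian uint32 at byte offset 4 of a 12-byte buffer.
const nonce = 42

// Buffer: value first, then offset
const buf = Buffer.alloc(12)
buf.writeUInt32LE(nonce, 4)

// DataView: offset first, then value, then a little-endian flag
const bytes = new Uint8Array(12)
const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength)
view.setUint32(4, nonce, true)

// The bug described above: view.setUint32(nonce, 4, true) writes the constant
// 4 at offset `nonce`, instead of the nonce at offset 4.
```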