core/rawdb: avoid unnecessary receipt processing for log filtering #23147

Merged (4 commits, Sep 28, 2021)

Conversation

@s1na (Contributor) commented on Jul 1, 2021

This is a successor to #23058. In this PR, the new lightweight receiptLogs type has instead been moved to rawdb and made private.
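
As a rough illustration of the idea, the sketch below (assuming the simplest stored-receipt layout; the field names and exact decoding in the PR may differ) decodes only the logs of a stored receipt and skips rebuilding derived fields such as the bloom filter:

    package rawdb

    import (
        "github.com/ethereum/go-ethereum/core/types"
        "github.com/ethereum/go-ethereum/rlp"
    )

    // receiptLogs keeps only the logs of a stored receipt.
    type receiptLogs struct {
        Logs []*types.Log
    }

    // DecodeRLP decodes just the logs field of a stored receipt; the other
    // stored fields are read but discarded, and no bloom filter is computed.
    func (r *receiptLogs) DecodeRLP(s *rlp.Stream) error {
        var stored struct {
            PostStateOrStatus []byte
            CumulativeGasUsed uint64
            Logs              []*types.Log
        }
        if err := s.Decode(&stored); err != nil {
            return err
        }
        r.Logs = stored.Logs
        return nil
    }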

@s1na (Contributor, Author) commented on Jul 5, 2021

To see how these types are used, refer to s1na#5.

@holiman (Contributor) left a comment:

LGTM

@s1na s1na force-pushed the receipt-log-types branch from cb86b65 to 1c41fba on August 9, 2021 14:06
@s1na s1na changed the title from "core: lightweight types for block logs" to "core,eth: avoid unnecessary receipt processing for log filtering" on Sep 2, 2021
@s1na (Contributor, Author) commented on Sep 2, 2021

To estimate how effective the triage suggestion in #23058 (comment) would be, I added 2 metrics in #23520 which show how often we look up a block that turns out to have zero matching logs; this happens in roughly 1% of cases. That means filtering deeper in the stack would let us avoid reading the block body from the db in about 1% of cases.
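
For context, such counters can be registered with go-ethereum's metrics package. A minimal sketch (the meter names and call site are hypothetical, not necessarily what #23520 adds):

    import "github.com/ethereum/go-ethereum/metrics"

    // Hypothetical meter names, for illustration only.
    var (
        blockMatchedMeter = metrics.NewRegisteredMeter("eth/filters/block/matched", nil)
        blockEmptyMeter   = metrics.NewRegisteredMeter("eth/filters/block/empty", nil)
    )

    // recordBlockMatch is called once per bloom-matched block after its logs
    // have been filtered against the query.
    func recordBlockMatch(matchingLogs int) {
        if matchingLogs == 0 {
            blockEmptyMeter.Mark(1) // bloom hit, but no log actually matched
        } else {
            blockMatchedMeter.Mark(1)
        }
    }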

@s1na s1na force-pushed the receipt-log-types branch from 1c41fba to b82b409 on September 2, 2021 13:31
@s1na (Contributor, Author) commented on Sep 7, 2021

I did a hacky implementation of the optimization in s1na@7fd9ac9 and measured the time for a few contracts/events (see script). The script sends many consecutive getLogs requests, each covering a range of 2000 blocks.
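
The script itself is not shown here; a minimal Go equivalent (the endpoint, contract address and block range below are placeholders) issues one eth_getLogs request per 2000-block window and times each response:

    package main

    import (
        "context"
        "fmt"
        "log"
        "math/big"
        "time"

        "github.com/ethereum/go-ethereum"
        "github.com/ethereum/go-ethereum/common"
        "github.com/ethereum/go-ethereum/ethclient"
    )

    func main() {
        // Placeholder endpoint and contract; adjust to the node and events under test.
        client, err := ethclient.Dial("http://localhost:8545")
        if err != nil {
            log.Fatal(err)
        }
        contract := common.HexToAddress("0x0000000000000000000000000000000000000000")
        start, end, step := int64(13_000_000), int64(13_100_000), int64(2000)
        for from := start; from < end; from += step {
            began := time.Now()
            logs, err := client.FilterLogs(context.Background(), ethereum.FilterQuery{
                FromBlock: big.NewInt(from),
                ToBlock:   big.NewInt(from + step - 1),
                Addresses: []common.Address{contract},
            })
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("range %d-%d: %d logs in %s\n", from, from+step-1, len(logs), time.Since(began))
        }
    }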

Screenshot 2021-09-06 at 18 25 45 cropped

The right-side chart is the mean time for responding to a request and the left-side one is the mean time for processing one matching block within a request. I stopped the node twice and changed branches:

  • The last numbers are from the baseline (master).
  • The middle ones are from this PR.
  • In the first ones we additionally avoid reading the block body from the db when it has zero matching logs.

Something interesting I hadn't noticed before is that the response time keeps increasing over time (I used to test with a lower number of requests).

Code excerpt attached to the review comment (the receipt loop being replaced, followed by the new ReadLogs call):

    for i, receipt := range receipts {
        logs[i] = receipt.Logs

    logs := rawdb.ReadLogs(db, hash, *number)
    if logs == nil {
Review comment from a project member:

I would prefer ReadLogs to return an error instead of relying on logs being non-nil here.

Reply from @s1na (Contributor, Author):

I hear you. I just didn't return an error, to stay consistent with the rest of the accessors.
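
For illustration, this is the calling pattern the accessor style implies (the ReadLogs signature and return type are assumed here, mirroring other rawdb accessors such as ReadReceipts; imports from go-ethereum's common, ethdb, rawdb and core/types packages are assumed):

    // Wrapper showing the nil-as-failure convention; an error-returning
    // ReadLogs, as suggested in the review, would make the failure explicit.
    func blockLogs(db ethdb.Reader, hash common.Hash, number uint64) [][]*types.Log {
        logs := rawdb.ReadLogs(db, hash, number)
        if logs == nil {
            // Missing or undecodable data is signalled by nil; callers fall
            // back to decoding the full receipts.
            return nil
        }
        return logs
    }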

@s1na s1na removed the status:triage label Sep 7, 2021
@s1na (Contributor, Author) commented on Sep 7, 2021

Added a benchmark to compare decoding a stored receipt the normal way (via ReceiptForStorage) against the new rlpLogs type. Results from my own machine:

BenchmarkDecodeRLPLogs
BenchmarkDecodeRLPLogs/ReceiptForStorage
BenchmarkDecodeRLPLogs/ReceiptForStorage-8              1000000000               0.00242 ns/op
BenchmarkDecodeRLPLogs/ReceiptForStorage-8              1000000000               0.00257 ns/op
BenchmarkDecodeRLPLogs/ReceiptForStorage-8              1000000000               0.00273 ns/op
BenchmarkDecodeRLPLogs/ReceiptForStorage-8              1000000000               0.00268 ns/op
BenchmarkDecodeRLPLogs/ReceiptForStorage-8              1000000000               0.00263 ns/op
BenchmarkDecodeRLPLogs/rlpLogs
BenchmarkDecodeRLPLogs/rlpLogs-8                        1000000000               0.000877 ns/op
BenchmarkDecodeRLPLogs/rlpLogs-8                        1000000000               0.000989 ns/op
BenchmarkDecodeRLPLogs/rlpLogs-8                        1000000000               0.000871 ns/op
BenchmarkDecodeRLPLogs/rlpLogs-8                        1000000000               0.000871 ns/op
BenchmarkDecodeRLPLogs/rlpLogs-8                        1000000000               0.000926 ns/op

@holiman (Contributor) commented on Sep 7, 2021 via email

@s1na (Contributor, Author) commented on Sep 7, 2021

Thanks for the catch. Here are the updated results:

BenchmarkDecodeRLPLogs
BenchmarkDecodeRLPLogs/ReceiptForStorage
BenchmarkDecodeRLPLogs/ReceiptForStorage-8                   452           2589949 ns/op          660146 B/op       8021 allocs/op
BenchmarkDecodeRLPLogs/ReceiptForStorage-8                   452           2593112 ns/op          660132 B/op       8020 allocs/op
BenchmarkDecodeRLPLogs/ReceiptForStorage-8                   453           2568350 ns/op          660157 B/op       8021 allocs/op
BenchmarkDecodeRLPLogs/ReceiptForStorage-8                   459           2568151 ns/op          660126 B/op       8020 allocs/op
BenchmarkDecodeRLPLogs/ReceiptForStorage-8                   453           2599324 ns/op          660153 B/op       8021 allocs/op
BenchmarkDecodeRLPLogs/rlpLogs
BenchmarkDecodeRLPLogs/rlpLogs-8                            1237            970215 ns/op          497811 B/op       5870 allocs/op
BenchmarkDecodeRLPLogs/rlpLogs-8                            1215            966026 ns/op          497810 B/op       5870 allocs/op
BenchmarkDecodeRLPLogs/rlpLogs-8                            1234           1019230 ns/op          497813 B/op       5870 allocs/op
BenchmarkDecodeRLPLogs/rlpLogs-8                            1248            974802 ns/op          497814 B/op       5870 allocs/op
BenchmarkDecodeRLPLogs/rlpLogs-8                            1196            965926 ns/op          497813 B/op       5870 allocs/op
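
For reference, a decode benchmark of this shape (an illustrative sketch; makeEncodedReceipts is a hypothetical helper standing in for however the encoded test data is produced, and the PR's actual benchmark may differ) runs each decoder b.N times:

    func BenchmarkDecodeRLPLogs(b *testing.B) {
        // RLP encoding of a block's stored receipts; the helper is hypothetical.
        buf := makeEncodedReceipts(b)

        b.Run("ReceiptForStorage", func(b *testing.B) {
            b.ReportAllocs()
            for i := 0; i < b.N; i++ {
                var receipts []*types.ReceiptForStorage
                if err := rlp.DecodeBytes(buf, &receipts); err != nil {
                    b.Fatal(err)
                }
            }
        })
        b.Run("rlpLogs", func(b *testing.B) {
            b.ReportAllocs()
            for i := 0; i < b.N; i++ {
                // receiptLogs is the lightweight type sketched earlier
                // (labelled rlpLogs in the benchmark output).
                var logs []*receiptLogs
                if err := rlp.DecodeBytes(buf, &logs); err != nil {
                    b.Fatal(err)
                }
            }
        })
    }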

@s1na (Contributor, Author) commented on Sep 8, 2021

  • I re-ran the benchmarks with geth disconnected from the network. This clearly removes the creep-up effect.
  • I also did a few runs of each variant in semi-random order to make the distinction more visible.
  • Finally, the disk read meter shows that optim2 does read marginally less data: optim2 sits around 6.48 while optim1 is around 6.77.
    • The chaindata read meter has more variance; not sure why.

Screenshot 2021-09-08 at 10 47 21

@holiman (Contributor) left a comment:

LGTM
"faster is better"

@holiman (Contributor) commented on Sep 8, 2021

Oh no, needs a rebase.

@s1na s1na force-pushed the receipt-log-types branch from b16ef5c to 7edca90 on September 8, 2021 12:48
@s1na (Contributor, Author) commented on Sep 8, 2021

@holiman I rebased. Just a reminder that this PR only has Optim1; I didn't want to make it too big by including Optim2 as well.

@s1na s1na force-pushed the receipt-log-types branch from 7edca90 to 80cf4ee on September 28, 2021 09:51
JukLee0ira added a commit to JukLee0ira/XDPoSChain that referenced this pull request Jul 11, 2024
JukLee0ira added a commit to JukLee0ira/XDPoSChain that referenced this pull request Jul 23, 2024
JukLee0ira added a commit to JukLee0ira/XDPoSChain that referenced this pull request Aug 3, 2024
gzliudan pushed a commit to XinFinOrg/XDPoSChain that referenced this pull request Aug 3, 2024
wanwiset25 pushed a commit to XinFinOrg/XDPoSChain that referenced this pull request Aug 23, 2024