core/bloombits, eth/filter: transformed bloom bitmap based log search #14631
Conversation
I don't think I fully understand the operation of the various goroutines here. Could we schedule a call to discuss and review interactively?
core/bloombits/matcher.go
    reqLock sync.RWMutex
}

type req struct {
Can you decompress these identifiers a bit? bitIndex, requestMap, request, etc.
core/bloombits/matcher.go
func (f *fetcher) fetch(sectionCh chan uint64, distCh chan distReq, stop chan struct{}, wg *sync.WaitGroup) chan []byte {
    dataCh := make(chan []byte, channelCap)
    returnCh := make(chan uint64, channelCap)
    wg.Add(2)
Why 2?
core/bloombits/matcher.go
returnCh := make(chan uint64, channelCap)
wg.Add(2)

go func() {
Why two separate goroutines? Can't these be combined?
core/bloombits/matcher.go
for i, idx := range sectionIdxList {
    r := f.reqMap[idx]
    if r.data != nil {
        panic("BloomBits section data delivered twice")
Is this really a cause for panic?
core/bloombits/matcher.go
// set up fetchers
fetchIdx := make([][3]chan uint64, len(idxs))
fetchData := make([][3]chan []byte, len(idxs))
for i, idx := range idxs {
Can you give 'idxs' and 'ii' more descriptive names?
core/bloombits/matcher.go
    return
}

for _, ff := range fetchIdx {
These variable names are also pretty hard to follow.
core/bloombits/matcher.go
m.wg.Add(2)
// goroutine for starting retrievals
go func() {
It's unclear to me why there need to be two independent goroutines here, or how they work together. Can you elaborate?
core/bloombits/matcher.go
func (m *Matcher) distributeRequests(stop chan struct{}) {
    m.distWg.Add(1)
    stopDist := make(chan struct{})
    go func() {
Is this really necessary?
I am now satisfied with this PR, though given that I added heavy reworks to it, I cannot mark it as reviewed. @zsfelfoldi Please double check that the code still makes sense from your perspective. @fjl or @Arachnid Please do a final review.
Generally looks good, but I'm concerned the code to retrieve data from disk is massively overengineered.
core/bloombits/generator.go
// AddBloom takes a single bloom filter and sets the corresponding bit column
// in memory accordingly.
func (b *Generator) AddBloom(bloom types.Bloom) error {
I'm a little concerned that the implied state here (which bit is being set) could lead to something becoming invisibly out of sync. Could this take the bit being set as a parameter and error if it's not the one it expects?
you're right, done
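For reference, a minimal sketch of the reworked shape, with an illustrative stand-in for the generator; the real field names and error handling in the PR may differ:

```go
package sketch

import "fmt"

// Generator is an illustrative stand-in for the PR's generator; the field name is an assumption.
type Generator struct {
    next uint // position within the section of the next bloom filter to be added
}

// AddBloom takes the index the caller believes it is adding and errors out if that
// disagrees with the generator's own counter, instead of silently writing the wrong
// bit column.
func (g *Generator) AddBloom(index uint, bloom []byte) error {
    if index != g.next {
        return fmt.Errorf("bloom filter added out of order: have %d, want %d", index, g.next)
    }
    // ... set bit g.next in every bit column whose bloom bit is set, as before ...
    g.next++
    return nil
}
```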
core/bloombits/generator.go
bitMask := byte(1) << byte(7-b.nextBit%8)

for i := 0; i < types.BloomBitLength; i++ {
    bloomByteMask := types.BloomByteLength - 1 - i/8
This is an index, not a mask, isn't it?
It is, and so is byteMask. Fixed.
b = crypto.Keccak256(b)

var idxs bloomIndexes
for i := 0; i < len(idxs); i++ {
There's a lot of magic numbers here. Could they be made into constants?
These are more or less specified by the yellow paper's recipe for constructing the header bloom filters. I'm unsure if it makes sense to separate them out, since there's not much room to reuse them elsewhere. But I'm open to suggestions.
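For illustration, the derivation could lean on the constants that already exist in core/types; a rough sketch of the yellow-paper bit index computation (the exact expression in the PR may be written differently):

```go
package sketch

import (
    "github.com/ethereum/go-ethereum/core/types"
    "github.com/ethereum/go-ethereum/crypto"
)

// bloomIndexes mirrors the PR's type: the three bloom filter bit positions of a value.
type bloomIndexes [3]uint

// calcBloomIndexes derives the yellow-paper bloom bit positions of data: the low 11
// bits of each of the first three byte pairs of its Keccak-256 hash.
func calcBloomIndexes(data []byte) bloomIndexes {
    hash := crypto.Keccak256(data)

    var idxs bloomIndexes
    for i := 0; i < len(idxs); i++ {
        idxs[i] = (uint(hash[2*i])<<8 | uint(hash[2*i+1])) % types.BloomBitLength
    }
    return idxs
}
```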
core/bloombits/matcher.go
type Matcher struct {
    sectionSize uint64 // Size of the data batches to filter on

    addresses []bloomIndexes // Addresses the system is filtering for
Wouldn't it be cleaner to just accept a list of byte slices, and leave the address/topic distinction to the caller?
API-wise, within NewMatcher I think it might be nice to support the filter by explicitly catering for that use case. Internally, I'm unsure if we need to retain this distinction. @zsfelfoldi ?
Even in NewMatcher, this seems to me like it shouldn't care about what it's matching on - they should just be opaque hashes to it.
I changed it as @Arachnid suggested in the last commit; now we have to convert addresses and hashes to byte slices in filter.go, but maybe it's still nicer this way. If you don't like it, we can drop the last commit.
core/bloombits/matcher.go
// Iterate over all the blocks in the section and return the matching ones
for i := first; i <= last; i++ {
    // If the bitset is nil, we're a special match-all cornercase
    if res.bitset == nil {
Why the corner case? Why not just return a fully set bitset?
Fair enough.
Because that's what comes from the source: https://github.com/zsfelfoldi/go-ethereum/blob/bloombits2/core/bloombits/matcher.go#L229
I don't want to do unnecessary binary ANDs every time in the first subMatcher, so I'm sending nils at the source. But in the corner case there are no subMatchers at all (we are receiving the source at the end), so we have to do some special-case handling anyway. We can do it some other way, but I'm not sure it's going to be any nicer. I'm open to suggestions, though.
I've added a commit to always send 0xff-s in the first instance. I don't think doing a binary AND on a few bytes will matter given that the data comes from disk or the network.
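For the record, the match-all source then boils down to something like this trivial sketch (names are illustrative):

```go
// fullBitset marks every block of a section as a potential match - the "0xff-s in
// the first instance" mentioned above.
func fullBitset(sectionSize uint64) []byte {
    bitset := make([]byte, sectionSize/8)
    for i := range bitset {
        bitset[i] = 0xff
    }
    return bitset
}
```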
// distributor receives requests from the schedulers and queues them into a set
// of pending requests, which are assigned to retrievers wanting to fulfil them.
func (m *Matcher) distributor(dist chan *request, session *MatcherSession) {
Does having multiple parallel retrievers help performance? I wonder if it wouldn't be far simpler to just have each filter fetch data as it needs it, with a cache to prevent duplicate reads?
I think the complexity is there to aid the light client, which needs to batch multiple requests together and send them out as multiple batches to different light servers. Hence the whole distribution "mess".
Ah, good point. Objection withdrawn, then.
@Arachnid Issues should be addressed now, PTAL
This PR is based on #14522 and supersedes #3749 which was the same thing but based on the old version of the chain processor. Both versions are kept until one of them is merged.
Further parts of the code may be moved into separate PRs to make the review process easier (suggestions are welcome).
This PR optimizes log searching by creating a data structure (BloomBits) that makes it cheaper to retrieve the bloom filter data relevant to a specific filter. When searching a long section of the block history, we check three specific bits of each bloom filter per address/topic. To do that, we currently read/retrieve an approximately 500-byte block header for each block. The implemented structure optimizes this with a "bitwise 90 degree rotation" of the bloom filters. Blocks are grouped into sections (SectionSize is 4096 blocks at the moment); BloomBits[bitIdx][sectionIdx] is a 4096-bit (512-byte) bit vector that contains a single bit of each bloom filter from the block range [sectionIdx*SectionSize ... (sectionIdx+1)*SectionSize-1]. (Since bloom filters are usually sparse, a simple data compression makes this structure even more efficient, especially for ODR retrieval.) By reading and binary AND-ing three BloomBits sections, we can filter for an address/topic in 4096 blocks at once ("1" bits in the binary AND result mean bloom matches).
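To make the rotation concrete, here is a minimal sketch with illustrative names, assuming the bloom filters of a section have already been collected as raw byte slices (the actual generator in the PR builds the vectors incrementally instead):

```go
const (
    sectionSize = 4096 // blocks per section
    bloomBits   = 2048 // bits per header bloom filter (types.BloomBitLength)
)

// rotateSection performs the "bitwise 90 degree rotation": blooms holds one 256-byte
// bloom filter per block of the section (MSB-first bit order), and the result holds
// one 512-byte bit vector per bloom bit index.
func rotateSection(blooms [][]byte) [][]byte {
    out := make([][]byte, bloomBits)
    for bit := range out {
        out[bit] = make([]byte, sectionSize/8)
    }
    for block, bloom := range blooms {
        for bit := 0; bit < bloomBits; bit++ {
            if bloom[bit/8]&(1<<uint(7-bit%8)) != 0 {
                // bit `bit` of this block's bloom becomes bit `block` of vector `bit`
                out[bit][block/8] |= 1 << uint(7-block%8)
            }
        }
    }
    return out
}
```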
Implementation and design rationale of the matcher logic
The matcher was designed with the needs of both full and light nodes in mind. A simpler architecture would probably be satisfactory for full nodes (where the bit vectors are available in the local database), but the network retrieval bottleneck of light clients justifies a more sophisticated algorithm that tries to minimize the amount of retrieved data and return results as soon as possible. The current implementation is a pipelined structure based on input and output channels (receiving section indexes and sending potential matches). The matcher is built from sub-matchers, one for the addresses and one for each topic group. Since we are only interested in matches that every sub-matcher signals as positive, they are daisy-chained so that subsequent sub-matchers only retrieve and match the bit vectors of sections where the previous matchers have found a potential match. The "1" bits of the output of the last sub-matcher are returned as bloom filter matches.
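Schematically, the daisy-chaining amounts to something like the following (types and names are purely illustrative, not the PR's API):

```go
// partialMatch carries one section's worth of potential matches down the pipeline.
type partialMatch struct {
    section uint64
    bitset  []byte // one potential-match bit per block of the section
}

// A stage wraps one sub-matcher: it consumes upstream matches and emits only the
// sections (and bits) it still considers matching.
type stage func(in chan partialMatch) chan partialMatch

// chain wires the sub-matchers together; "1" bits coming out of the last stage are
// the final bloom matches.
func chain(source chan partialMatch, stages []stage) chan partialMatch {
    out := source
    for _, s := range stages {
        out = s(out)
    }
    return out
}
```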
Sub-matchers use a set of fetchers to retrieve a stream of bit vectors belonging to certain bloom filter bit indexes. Fetchers are also pipelined, receiving a stream of section indexes and sending a stream of bit vectors. As soon as every fetcher has returned the bit vector belonging to a certain section index, the sub-matcher performs the necessary binary AND and OR operations and outputs the resulting vector, along with its section index, if the vector is not all zeroes.
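The per-section work of a sub-matcher then reduces to a few byte-level AND/OR passes, roughly as below; the layout of the fetched data is an assumption for the sketch (one group of three bit vectors per address/topic alternative):

```go
// matchSection ANDs the three bit vectors of each alternative, ORs the alternatives
// together and finally ANDs the result with the upstream bitset. It reports whether
// any potential match remains.
func matchSection(prev []byte, fetched [][3][]byte) ([]byte, bool) {
    result := make([]byte, len(prev))
    for _, vectors := range fetched {
        // all three bloom bits of this alternative must be set
        and := make([]byte, len(prev))
        copy(and, vectors[0])
        for _, v := range vectors[1:] {
            for i := range and {
                and[i] &= v[i]
            }
        }
        // any alternative matching is enough
        for i := range result {
            result[i] |= and[i]
        }
    }
    // only blocks that already matched upstream count
    nonZero := false
    for i := range result {
        result[i] &= prev[i]
        if result[i] != 0 {
            nonZero = true
        }
    }
    return result, nonZero
}
```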
Light clients retrieve the bit vectors with Merkle proofs, which makes it much more efficient to retrieve batches of vectors (whose Merkle proofs share most of their trie nodes) in a single request. It is also preferable to prioritize requests based on their section index (regardless of bit index) in order to ensure that matches are found and returned as soon as possible (and in sequential order). Prioritizing and batching are realized by a common request distributor that receives individual bit index/section index requests from the fetchers and keeps an ordered list of section indexes to be requested, grouped by bit index. It does not call any retrieval backend function itself; instead it is driven by a "server" process (see serveMatcher in filter.go). NextRequest returns the next batch to be requested, and retrieved vectors are passed back through the Deliver function. This way the bloombits package only has to care about implementing the matching logic, while the caller retains full control over the resources (CPU/disk/network bandwidth) assigned to the task.
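To illustrate the division of labour, the caller-side serving loop could look roughly like this; the NextRequest/Deliver signatures and the retrieval callback are assumptions made for the sketch, not the PR's exact API:

```go
// matcherAPI captures only the calls the serving loop needs; the real signatures may differ.
type matcherAPI interface {
    // NextRequest returns the next batch: one bloom bit index plus an ordered list of
    // section indexes, or an empty list when there is nothing left to retrieve.
    NextRequest() (bit uint, sections []uint64)
    // Deliver hands the retrieved bit vectors back to the matcher.
    Deliver(bit uint, sections []uint64, vectors [][]byte)
}

// serve drives the matcher: the fetch callback is where the caller spends its own
// resources (a database read for full nodes, a batched network request for light clients).
func serve(m matcherAPI, fetch func(bit uint, section uint64) []byte, stop chan struct{}) {
    for {
        select {
        case <-stop:
            return
        default:
        }
        bit, sections := m.NextRequest()
        if len(sections) == 0 {
            return
        }
        vectors := make([][]byte, len(sections))
        for i, s := range sections {
            vectors[i] = fetch(bit, s)
        }
        m.Deliver(bit, sections, vectors)
    }
}
```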