eth/downloader: separate state sync from queue #14460
Conversation
- queued := q.blockTaskQueue.Size() + q.receiptTaskQueue.Size() + q.stateTaskQueue.Size()
- pending := len(q.blockPendPool) + len(q.receiptPendPool) + len(q.statePendPool)
+ queued := q.blockTaskQueue.Size() + q.receiptTaskQueue.Size()
+ pending := len(q.blockPendPool) + len(q.receiptPendPool)
  cached := len(q.blockDonePool) + len(q.receiptDonePool)
The invariant of this function changed a bit. Idle will now return true even if state sync is still in progress. Not sure yet how this affects sync overall, but you should be aware. If it's fine, the docs need to be updated to state so.
eth/downloader/queue.go
Outdated
// Stop before processing the pivot block to ensure that
// resultCache has space for fsHeaderForceVerify items. Not
// doing this could leave us unable to download the required
// amount of headers.
if i > 0 || len(q.stateTaskPool) > 0 || q.pendingNodeDataLocked() > 0 {
	return i
}
Again a tiny invariant change. I don't yet see how this could have an adverse effect, just thought I'd mention it. Previously the code handled the pivot block fully independently from any other blocks. The new code may place the pivot block at the end of a fast sync import batch. It should work, just mentioning the behavioral change.
eth/downloader/statesync.go
Outdated
for ; n >= 0 && len(s.retry) > 0; n-- {
	items = append(items, s.retry[len(s.retry)-1])
	s.retry = s.retry[:len(s.retry)-1]
}
You are not maintaining a set of items that a node does not have. If you have for example a single peer, which does not have any data (e.g. it's also fast syncing), this loop will request the same data over and over again indefinitely from the peer. You need to mark any data unavailable from a peer and not request it ever again from there.
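A minimal sketch of the per-peer bookkeeping the reviewer is asking for. The hash type is simplified to a string and the `peer` struct is reduced to what the example needs; the real code lives in `eth/downloader` and uses `common.Hash`:

```go
package main

import "fmt"

// peer records which node hashes it has previously failed to deliver,
// so the scheduler never re-requests those hashes from the same peer.
// (Sketch: hashes are strings here, not common.Hash.)
type peer struct {
	id      string
	lacking map[string]struct{}
}

// MarkLacking notes that this peer cannot provide the given hash.
func (p *peer) MarkLacking(hash string) {
	if p.lacking == nil {
		p.lacking = make(map[string]struct{})
	}
	p.lacking[hash] = struct{}{}
}

// Lacks reports whether the hash was previously marked unavailable.
func (p *peer) Lacks(hash string) bool {
	_, ok := p.lacking[hash]
	return ok
}

// assign filters a task list down to hashes this peer may still serve,
// breaking the infinite re-request loop described above.
func assign(p *peer, tasks []string) []string {
	var out []string
	for _, h := range tasks {
		if !p.Lacks(h) {
			out = append(out, h)
		}
	}
	return out
}

func main() {
	p := &peer{id: "p1"}
	p.MarkLacking("0xaa")
	fmt.Println(assign(p, []string{"0xaa", "0xbb"})) // [0xbb]
}
```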
Note however that maintaining such a list and avoiding reassigning the same hash over and over again will cause the next line to potentially OOM the node, because there would be no memory cap on the total size of the pending state retrievals.
The original code moved trie requests from the scheduler.Missing to an intermediate pool (similar to your retry slice, but one that contained all pending ones), and only requested new missing ones if they fit into the pool. You could do a similar thing by maintaining a counter with the total pending requests, and do:
return append(items, s.sched.Missing(min(n, maxPend - curPending))...)
That would ensure that we don't grow the number of outstanding requests unboundedly.
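A sketch of that capped fill, combining the retry drain with the bounded `Missing` call. The cap value and the stubbed `missing` callback are assumptions; the real code would call `s.sched.Missing`:

```go
package main

import "fmt"

const maxPending = 4096 // assumed cap on total in-flight state requests

// fillTasks tops up a request with retries first, then asks the
// scheduler for at most as many new hashes as the pending cap allows,
// so the number of outstanding requests can't grow unboundedly.
func fillTasks(n, curPending int, retry []string, missing func(int) []string) ([]string, []string) {
	var items []string
	for ; n > 0 && len(retry) > 0; n-- {
		items = append(items, retry[len(retry)-1])
		retry = retry[:len(retry)-1]
	}
	if want := min(n, maxPending-curPending-len(items)); want > 0 {
		items = append(items, missing(want)...)
	}
	return items, retry
}

func min(a, b int) int {
	if a < b {
		return a
	}
	return b
}

func main() {
	missing := func(k int) []string {
		out := make([]string, k)
		for i := range out {
			out[i] = fmt.Sprintf("h%d", i)
		}
		return out
	}
	items, _ := fillTasks(3, maxPending-1, []string{"r0"}, missing)
	fmt.Println(items) // [r0]: the cap leaves no room for new requests
}
```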
eth/downloader/statesync.go
Outdated
	s.fetching[peer.id] = s.spawnFetch(peer, items, rtt)
	}
	peers = peers[1:]
}
You need to handle the case where you have literally no peers that can service the requests defined by the trie root. This is an attack vector: someone can feed you an invalid head block, and you can't verify its validity when starting the sync. When no peers can actually provide data for the head root hash, the head must be considered invalid (an attack) and the peer you're syncing with (the master peer) must be dropped.
eth/downloader/statesync.go
Outdated
func (s *stateSync) spawnFetch(p *peer, nodes []common.Hash, timeout time.Duration) *stateReq {
	log.Trace("Requesting node data", "peer", p.id, "items", len(nodes), "timeout", timeout)
	req := &stateReq{p, nodes, make(chan struct{})}
Please use named fields.
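A sketch of the named-fields style being requested. The `stateReq` fields are inferred from the surrounding diff, so the exact names are assumptions:

```go
package main

import "fmt"

type peer struct{ id string }

// stateReq fields as suggested by the surrounding diff (names assumed).
type stateReq struct {
	peer          *peer
	items         []string // common.Hash in the real code
	cancelTimeout chan struct{}
}

// newStateReq uses named fields instead of the positional literal
// &stateReq{p, nodes, make(chan struct{})}: reordering the struct no
// longer silently breaks the literal, and each field is self-documenting.
func newStateReq(p *peer, nodes []string) *stateReq {
	return &stateReq{
		peer:          p,
		items:         nodes,
		cancelTimeout: make(chan struct{}),
	}
}

func main() {
	req := newStateReq(&peer{id: "p1"}, []string{"0xaa"})
	fmt.Println(req.peer.id, len(req.items)) // p1 1
}
```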
eth/downloader/statesync.go
Outdated
s.requestDone(req)
s.retry = append(s.retry, req.items...)
for _, hash := range req.items {
	req.peer.MarkLacking(hash)
A timeout should not mark the items as lacking. On a timeout the node will retry with smaller batches, and the same items can potentially be retrieved from that peer later, so there's no need to blacklist them forever.
eth/downloader/statesync.go
Outdated
func (s *stateSync) requestDone(req *stateReq) {
	delete(s.fetching, req.peer.id)
	close(req.cancelTimeout)
Convert req.cancelTimeout into req.timeoutTimer and do a req.timeoutTimer.Stop() here instead.
eth/downloader/statesync.go
Outdated
	batch = s.db.NewBatch()
)
for _, blob := range pack.states {
	hash := common.BytesToHash(crypto.Keccak256(blob))
You probably want to reuse a single hasher throughout the for loop. There are almost 400 states per request, so there's no reason to do that many hasher allocations and deallocations.
eth/downloader/statesync.go
Outdated
	}
	s.updateStats(n, time.Since(procStart))

case req := <-s.timeout:
This select construct has the same issue as the current code:
select {
case packI := <-deliver:
// Process
case req := <-s.timeout:
}
Process can take a long time. By that time all timeouts will fire (even if all the nodes actually delivered too). The end result is that this select statement picks deliver/timeouts randomly if both are available, so it will consider requests timed out that are actually queued up on the deliver channel.
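One way to address this, sketched below with simplified channel types: when the timeout case fires, first drain anything already queued on the deliver channel, so a request whose response arrived during a long Process isn't counted as timed out. This drain would run inside the `case req := <-s.timeout:` branch:

```go
package main

import "fmt"

// drainDeliveries empties anything already queued on the deliver channel
// before a timeout is acted upon, so responses that raced the timeout
// are still processed. (Sketch: the real channels carry request packs.)
func drainDeliveries(deliver chan int, process func(int)) int {
	drained := 0
	for {
		select {
		case pack := <-deliver:
			process(pack)
			drained++
		default:
			return drained // nothing queued: the timeout is genuine
		}
	}
}

func main() {
	deliver := make(chan int, 4)
	deliver <- 1
	deliver <- 2
	n := drainDeliveries(deliver, func(int) {})
	fmt.Println(n) // 2
}
```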
eth/downloader/statesync.go
Outdated
for _, hash := range req.items {
	req.peer.MarkLacking(hash)
}
req.peer.SetNodeDataIdle(0)
Previously, if a peer timed out while requesting even a single entry, it was dropped. This is required, otherwise a single really bad peer has the capacity to stall the entire sync. It's fine to keep slow peers around after sync is done, but sync itself requires heavy downloads, so we can't afford slow peers. That's where the hard timeouts came into play in the original code.
@Arachnid PTAL
eth/downloader/downloader.go
Outdated
@@ -212,7 +216,7 @@ func New(mode SyncMode, stateDb ethdb.Database, mux *event.TypeMux, hasHeader he
 // these are zero.
 func (d *Downloader) Progress() ethereum.SyncProgress {
 	// Fetch the pending state count outside of the lock to prevent unforeseen deadlocks
-	pendingStates := uint64(d.queue.PendingNodeData())
+	// pendingStates := uint64(d.queue.PendingNodeData())
Please delete the commented-out line.
eth/downloader/statesync.go
Outdated
case req := <-d.trackStateReq:
	activeReqs[req.peer.id] = req
	req.timer = time.AfterFunc(req.timeout, func() {
		select {
Can you leave a comment about the reason for this?
eth/downloader/statesync.go
Outdated
case <-s.cancel:
	return errCancelStateFetch
case req := <-s.deliver:
	// response or timeout
This path doesn't get called on timeout, so any node which timed out will forever get stuck.
eth/downloader/downloader.go
Outdated
}

pivot := d.queue.FastSyncPivot()
// processBlocks takes fetch results from the queue and tries to import them
name mismatch
LGTM, though since I've added some code, it would be nice for someone else to review.
Scheduling of state node downloads hogged the downloader queue lock when new requests were scheduled. This caused timeouts for other requests. With this change, state sync is fully independent of all other downloads and doesn't involve the queue at all.

State sync is started and checked on in processContent. This is slightly awkward because processContent doesn't have a select loop. Instead, the queue is closed by an auxiliary goroutine when state sync fails. We tried several alternatives to this but settled on the current approach because it's the least amount of change overall.

Handling of the pivot block has changed slightly: the queue previously prevented import of pivot block receipts before the state of the pivot block was available. In this commit, the receipt will be imported before the state. This causes an annoyance where the pivot block is committed as fast block head even when state downloads fail. Stay tuned for more updates in this area ;)
This change also ensures that pivot block receipts aren't imported before the pivot block itself.
It fails the sync too much.
Fixes TestDeliverHeadersHang*Fast and (hopefully) the weird cancellation behaviour at the end of fast sync.
This commit explicitly tracks duplicate and unexpected state deliveries done against a trie Sync structure, also adding them to the import info logs.

The commit moves the db batch used to commit trie changes one level deeper so it's flushed after every node insertion. This is needed to avoid a lot of duplicate retrievals caused by inconsistencies between Sync internals and the database. A better approach is to track not-yet-written states in trie.Sync and flush on commit, but I'm focusing on correctness first for now.

The commit fixes a regression around the pivot block fail count. The counter previously was reset to 1 if and only if a sync cycle progressed (inserted at least 1 entry into the database). The current code reset it already if a node was delivered, which is not strong enough, because unless it ends up written to disk, an attacker can just loop and attack ad infinitum.

The commit also fixes a regression around state deliveries and timeouts. The old downloader tracked whether a delivery was stale (none of the deliveries were requested), in which case it didn't mark the node idle and did not send further requests, since that signals a past timeout. The current code did mark it idle even on stale deliveries, which eventually caused two requests to be in flight at the same time, making the deliveries always stale and massively duplicating retrievals between multiple peers.
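The stale-delivery rule described above can be sketched as follows, with the types simplified (the real code keys outstanding requests by peer inside the sync loop):

```go
package main

import "fmt"

type stateReq struct{ peerID string }

// onDelivery only marks a peer idle again if the delivery matches an
// outstanding request; a stale delivery signals a past timeout and must
// not trigger a new request, or two requests end up in flight at once.
func onDelivery(outstanding map[string]*stateReq, peerID string, setIdle func()) (stale bool) {
	req := outstanding[peerID]
	if req == nil {
		return true // stale: the request already timed out, don't re-idle
	}
	delete(outstanding, peerID)
	setIdle()
	return false
}

func main() {
	out := map[string]*stateReq{"p1": {peerID: "p1"}}
	idled := 0
	fmt.Println(onDelivery(out, "p1", func() { idled++ })) // false: fresh delivery
	fmt.Println(onDelivery(out, "p1", func() { idled++ })) // true: now stale
	fmt.Println(idled)                                     // 1
}
```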
This commit fixes the hang seen sometimes while doing the state sync. The cause of the hang was a rare combination of events: request state data from a peer, the peer drops and reconnects almost immediately. This caused a new download task to be assigned to the peer, overwriting the old one still waiting for a timeout, which in turn leaked the requests out, never to be retried. The fix is to ensure that a task assignment moves any pending one back into the retry queue.

The commit also fixes a regression with peer dropping due to stalls. The current code considered a peer to be stalling if it timed out delivering 1 item. However, the downloader never requests only one, the minimum is 2 (an attempt to fine-tune estimated latency/bandwidth). The fix is simply to drop the peer if a timeout is detected at 2 items.

Apart from the above bugfixes, the commit contains some code polishes I made while debugging the hang.
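The reconnect fix above can be sketched like this, with the hash type simplified to a string: assigning a new request to a peer first moves any still-pending request's items back into the retry queue, so a quick drop-and-reconnect can't leak them:

```go
package main

import "fmt"

type stateReq struct {
	peerID string
	items  []string
}

// assignRequest records a new in-flight request for a peer. If an old
// request for the same peer is still pending (e.g. the peer dropped and
// reconnected before the timeout), its items are recycled into the
// retry queue instead of being silently overwritten and lost.
func assignRequest(fetching map[string]*stateReq, retry []string, req *stateReq) []string {
	if old := fetching[req.peerID]; old != nil {
		retry = append(retry, old.items...)
	}
	fetching[req.peerID] = req
	return retry
}

func main() {
	fetching := map[string]*stateReq{}
	var retry []string
	retry = assignRequest(fetching, retry, &stateReq{peerID: "p1", items: []string{"0xaa"}})
	// Peer reconnects and gets a new task before the old one timed out:
	retry = assignRequest(fetching, retry, &stateReq{peerID: "p1", items: []string{"0xbb"}})
	fmt.Println(retry) // [0xaa]
}
```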
I don't really know enough about the whole process to give an overall 👍 or 👎. But for what it's worth I've had a good read through the last few commits from @karalabe and they look good to me 👍
trie/sync.go
Outdated
@@ -48,6 +52,21 @@ type SyncResult struct {
 	Data []byte // Data content of the retrieved node
 }

+// SyncMemCache is an in-memory cache of successfully downloaded but not yet
+// persisted data items.
+type SyncMemCache struct {
Please call it syncBatch