core: move TxPool reorg and events to background goroutine #19705
Conversation
This isn't quite done yet. I still need to figure out how to remove the |
The 'aggregation' of promoteExecutables calls probably won't work in practice with this change alone. Consider AddRemote: we take the lock, validate the transaction, move the TX into pending or queue, and drop the lock. Then we kick off a background reorg, which takes the lock again, moves TXs around, drops the lock and finally sends events. Since the next call to AddRemote has to wait for the lock held by the reorg before adding its TX, concurrency doesn't improve much.

Things are slightly better than before this change when it comes to events, though: if a downstream consumer of txFeed is blocked for a while, reorg calls will aggregate instead of launching many goroutines to deliver events.

To really improve high-load behavior, we'd need a way to quickly verify the TX without the big lock, submit the request to add it, and move on. The reorg code could then take batches of transactions periodically and integrate them all at once, keeping the central lock free for readers most of the time.
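The batching idea above could be sketched roughly as follows. This is a minimal toy, not the geth API: `pool`, `add`, `integrate` and the `tx` struct are all hypothetical names, and validation is reduced to a single stateless gas check.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

type tx struct {
	nonce uint64
	gas   uint64
}

type pool struct {
	mu      sync.Mutex
	pending map[uint64]tx
	addCh   chan tx
	done    chan struct{} // closed when the integrator exits
}

func newPool() *pool {
	p := &pool{
		pending: make(map[uint64]tx),
		addCh:   make(chan tx, 128),
		done:    make(chan struct{}),
	}
	go p.integrate()
	return p
}

// validate runs without any lock: purely stateless checks.
func validate(t tx) error {
	if t.gas < 21000 {
		return errors.New("intrinsic gas too low")
	}
	return nil
}

// add never takes the pool lock; it validates and hands off.
func (p *pool) add(t tx) error {
	if err := validate(t); err != nil {
		return err
	}
	p.addCh <- t
	return nil
}

// integrate drains whatever has accumulated and takes the lock
// once per batch, keeping it free for readers most of the time.
func (p *pool) integrate() {
	defer close(p.done)
	for t := range p.addCh {
		batch := []tx{t}
	drain:
		for {
			select {
			case t2, ok := <-p.addCh:
				if !ok {
					break drain
				}
				batch = append(batch, t2)
			default:
				break drain
			}
		}
		p.mu.Lock()
		for _, b := range batch {
			p.pending[b.nonce] = b
		}
		p.mu.Unlock()
	}
}

func main() {
	p := newPool()
	for i := 0; i < 10; i++ {
		if err := p.add(tx{nonce: uint64(i), gas: 21000}); err != nil {
			panic(err)
		}
	}
	fmt.Println(p.add(tx{nonce: 99, gas: 100})) // rejected without touching the lock
	close(p.addCh)
	<-p.done
	fmt.Println(len(p.pending))
}
```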
An idea: if we split up the lock so there is one for the queue and one for pending, we could always insert TXs into the queue and then promote them in the background.
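A minimal sketch of that split-lock idea, with hypothetical names (`splitPool`, `enqueue`, `promote`) and ints standing in for transactions. Inserts only ever touch the queue lock, and promotion holds each lock briefly, never both at once:

```go
package main

import (
	"fmt"
	"sync"
)

// splitPool guards queue and pending with separate mutexes, so
// inserting never contends with readers of pending.
type splitPool struct {
	queueMu   sync.Mutex
	queue     []int
	pendingMu sync.Mutex
	pending   []int
}

// enqueue always inserts into the queue, taking only the queue lock.
func (p *splitPool) enqueue(n int) {
	p.queueMu.Lock()
	p.queue = append(p.queue, n)
	p.queueMu.Unlock()
}

// promote moves queued txs to pending in the background, holding
// each lock only briefly and never both at the same time.
func (p *splitPool) promote() {
	p.queueMu.Lock()
	batch := p.queue
	p.queue = nil
	p.queueMu.Unlock()

	p.pendingMu.Lock()
	p.pending = append(p.pending, batch...)
	p.pendingMu.Unlock()
}

func main() {
	p := &splitPool{}
	for i := 0; i < 3; i++ {
		p.enqueue(i)
	}
	p.promote()
	fmt.Println(len(p.pending), len(p.queue)) // 3 0
}
```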
return pool.addTxsLocked(txs, local)
done := pool.requestPromoteExecutables(dirtyAddrs)
The previous code only called promoteExecutables if len(dirty) > 0
(in addTxsLocked). Was there a reason not to carry that over?
The request is definitely needed to trigger sending of events even if
there isn't anything to promote. Furthermore, promoteExecutables is pretty
smart about avoiding work.
Force-pushed from 72e57d4 to 096e413.
@holiman PTAL
Force-pushed from 096e413 to 1f90cc7.
This PR (specifically)
Seeing quite a lot of these in the logs;
If it is truly |
The error messages come from
The reset takes transactions that were in a block that was reorged out and did not make it into the new chain, and shoves them back into the pool. Afterwards, it tries to demote unexecutables and finds that the subsequent transactions are still there; it is surprised because the reinjected ones are now missing. The effect is that the subsequent ones get shoved out of pending.
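The demotion behavior described above can be illustrated with a small sketch. This is a toy, not geth's demoteUnexecutables: `demote` is a hypothetical helper that keeps only the pending nonces reachable without a gap from the account's next expected nonce and shifts the rest out.

```go
package main

import (
	"fmt"
	"sort"
)

// demote keeps pending txs whose nonces form an unbroken run starting
// at nextNonce; any tx behind a gap is shifted out (back to the queue
// in a real pool).
func demote(pending map[uint64]bool, nextNonce uint64) (kept, dropped []uint64) {
	var nonces []uint64
	for n := range pending {
		nonces = append(nonces, n)
	}
	sort.Slice(nonces, func(i, j int) bool { return nonces[i] < nonces[j] })
	expected := nextNonce
	for _, n := range nonces {
		if n == expected {
			kept = append(kept, n)
			expected++
		} else {
			dropped = append(dropped, n)
		}
	}
	return
}

func main() {
	// Nonces 5 and 6 were in the reorged block and did not land back in
	// pending, so the subsequent txs 7 and 8 are gapped and get shoved out.
	kept, dropped := demote(map[uint64]bool{7: true, 8: true}, 5)
	fmt.Println(kept, dropped)
}
```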
This change moves internal queue re-shuffling work in TxPool to a background goroutine, TxPool.runReorg. Requests to execute runReorg are accumulated by the new scheduleReorgLoop. The new loop also accumulates transaction events. The motivation for this change is making sends to txFeed synchronous instead of sending them in one-off goroutines launched by 'add' and 'promoteExecutables'. If a downstream consumer of txFeed is blocked for a while, reorg requests and events will queue up.
This change removes tracking of the homestead block number from TxPool. The homestead field was used to enforce a minimum gas of 53000 for contract creations after the homestead fork, but not before it. Since nobody would want to configure a non-homestead chain nowadays, and contract creations usually take more than 53000 gas, the extra correctness is redundant and can be removed.
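The removed check amounts to something like the following sketch (`minimumGas` is a hypothetical name, not geth's; the 21000/53000 floors are the standard Ethereum intrinsic gas constants):

```go
package main

import "fmt"

const (
	txGas                 = 21000 // minimum gas for any transaction
	txGasContractCreation = 53000 // minimum for creations after Homestead
)

// minimumGas mirrors the check the homestead field supported: before the
// Homestead fork, contract creations were only held to the 21000 floor.
func minimumGas(isCreation, homestead bool) uint64 {
	if isCreation && homestead {
		return txGasContractCreation
	}
	return txGas
}

func main() {
	fmt.Println(minimumGas(true, true))  // 53000
	fmt.Println(minimumGas(true, false)) // 21000
	fmt.Println(minimumGas(false, true)) // 21000
}
```

Dropping the `homestead` flag collapses this to the post-fork behavior unconditionally.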
This is useless now because there is no separate code path for individual transactions anymore.
Force-pushed from e4bf834 to 0dbe8bd.
Future bootnodes now updated to run this PR
Fixes #19192