[Sequencer] performance degraded by sender recovery #1605
Geth caches the sender address in order to avoid calling ECRecover twice for the same transaction.
Erigon has this same concept of caching, but the cached sender lives with the transaction only whilst it is in memory. Because the pool is a separate entity from the sequencer in Erigon, it passes the raw RLP over the interface, so the sequencer needs to decode it once more and won't have the sender cached.
Geth calls ecdsa_recover in parallel. https://github.com/ethereum/go-ethereum/blob/master/crypto/secp256k1/secp256.go#L48
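The parallel-recovery idea can be sketched with a plain worker pool. This is a minimal, self-contained sketch, not go-ethereum's implementation: `recoverSender` is a stand-in for the real ECRecover call, and all names are illustrative.

```go
package main

import (
	"fmt"
	"sync"
)

// recoverSender is a stand-in for the real ECRecover call
// (crypto.Ecrecover in go-ethereum). It derives a fake address
// from the payload so the sketch runs without the crypto library.
func recoverSender(rlp []byte) [20]byte {
	var addr [20]byte
	for i, b := range rlp {
		addr[i%20] ^= b
	}
	return addr
}

// recoverAll recovers senders for a batch of raw transactions
// using a fixed pool of workers, mirroring the parallelism above.
func recoverAll(txs [][]byte, workers int) [][20]byte {
	out := make([][20]byte, len(txs))
	jobs := make(chan int)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := range jobs {
				out[i] = recoverSender(txs[i])
			}
		}()
	}
	for i := range txs {
		jobs <- i
	}
	close(jobs)
	wg.Wait()
	return out
}

func main() {
	txs := [][]byte{[]byte("tx-a"), []byte("tx-b"), []byte("tx-c")}
	senders := recoverAll(txs, 2)
	fmt.Println("recovered senders:", len(senders))
}
```

Since recovery of one transaction is independent of the others, results can be written to pre-allocated slots without extra locking.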
Caching the sender address works perfectly in our scenario. We are focusing on ERC4337 transaction performance.
Summary Report: Performance Degradation with Many Overflow Transactions

Overview
When producing a block, the sequencer yields 1000 transactions from the txpool, but only 2 are mined; the remaining 998 transactions are retained and need to be re-processed in subsequent blocks.

Test method
Polycli loadtest with 5000 ERC4337 transactions; each transaction contains 10 dummy UserOperations, and each UserOperation verifies 2 ECDSA signatures (P-256 + Secp256r1).

Key Observations
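The scale of the repeated work follows from the numbers in the report (5000 txs, ~2 mined per block, a yield of 1000). This is a back-of-the-envelope sketch, not measured data; the helper function is purely illustrative.

```go
package main

import "fmt"

// worstCaseRecoveries estimates how many sender recoveries the
// sequencer performs over a whole loadtest if nothing is cached:
// every block attempt re-recovers up to yieldSize senders.
func worstCaseRecoveries(totalTxs, minedPerBlock, yieldSize int) int {
	blockAttempts := totalTxs / minedPerBlock
	return blockAttempts * yieldSize
}

func main() {
	// Numbers from the report: 5000 ERC4337 txs, ~2 mined per block,
	// yield size 1000. This is an upper bound, since the pool drains
	// toward the end of the run.
	fmt.Println("block attempts:", 5000/2)
	fmt.Println("worst-case sender recoveries:", worstCaseRecoveries(5000, 2, 1000))
	// Signature checks inside the txs themselves, for comparison:
	fmt.Println("UserOperation signature verifications:", 5000*10*2)
}
```

The point of the estimate: without caching, uncached sender recovery dwarfs the in-transaction signature work by an order of magnitude.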
Potential Solutions
Estimate and check zkCounter overflow
cdk-erigon/zk/txpool/pool_zk.go Lines 185 to 243 in ba9c7a4
Dynamic Yield Size: Adjust the yieldSize dynamically based on the observed block capacity and overflow conditions to reduce unnecessary processing.
Efficient Sender Caching: Improve the caching mechanism for sender addresses to reduce the computational cost of repeated sender recovery.
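The sender-caching suggestion above could look roughly like this. The type and method names are hypothetical, not cdk-erigon code, and the recovery function is a stand-in for the real ECRecover call: the idea is simply that a transaction retained across block attempts pays for recovery once.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sync"
)

// senderCache memoises recovered sender addresses by transaction
// hash, so re-yielded transactions skip ECRecover entirely.
type senderCache struct {
	mu   sync.RWMutex
	byTx map[[32]byte][20]byte
}

func newSenderCache() *senderCache {
	return &senderCache{byTx: make(map[[32]byte][20]byte)}
}

// sender returns the cached address, invoking recoverFn only on a miss.
func (c *senderCache) sender(rlp []byte, recoverFn func([]byte) [20]byte) [20]byte {
	key := sha256.Sum256(rlp)
	c.mu.RLock()
	addr, ok := c.byTx[key]
	c.mu.RUnlock()
	if ok {
		return addr
	}
	addr = recoverFn(rlp)
	c.mu.Lock()
	c.byTx[key] = addr
	c.mu.Unlock()
	return addr
}

func main() {
	calls := 0
	slowRecover := func(rlp []byte) [20]byte { // stand-in for ECRecover
		calls++
		var a [20]byte
		copy(a[:], rlp)
		return a
	}
	c := newSenderCache()
	tx := []byte("raw-rlp-bytes")
	// Two block attempts yield the same retained transaction:
	c.sender(tx, slowRecover)
	c.sender(tx, slowRecover)
	fmt.Println("recover calls:", calls) // 1
}
```

A production version would bound the map (e.g. evict on pool drop) so retained-but-never-mined transactions cannot grow it without limit.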
Thanks @doutv for the detailed feedback there. A couple of things from my side:
Before adding a TX to a block, computing hashes for 1000 transactions is also time-consuming.
Another simple method: change the yieldSize config from 1000 to 10.
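A dynamic variant of tuning the yield size, as suggested in the potential solutions, might be sketched as follows. The policy, constants, and names here are assumptions for illustration, not cdk-erigon behaviour: shrink after an overflow toward what actually fit, grow back gradually when blocks stop overflowing.

```go
package main

import "fmt"

// nextYieldSize adjusts the yield based on the last block's outcome:
// if most yielded txs were retained (overflow), collapse toward what
// was actually mined; if everything fit, probe a larger batch.
func nextYieldSize(current, mined, minSize, maxSize int) int {
	var next int
	if mined < current {
		next = mined * 2 // overflowed: target roughly what fit
	} else {
		next = current + current/4 // all mined: grow by 25%
	}
	if next < minSize {
		next = minSize
	}
	if next > maxSize {
		next = maxSize
	}
	return next
}

func main() {
	// 1000 yielded but only 2 mined -> collapse toward the floor.
	fmt.Println(nextYieldSize(1000, 2, 10, 1000)) // 10
	// Everything fit -> grow 25% to probe capacity.
	fmt.Println(nextYieldSize(100, 100, 10, 1000)) // 125
}
```

This avoids hard-coding a small yield size that would throttle throughput once the overflow condition clears.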
Should I create a PR to fix it? Or do you want to work on this?
I would like to work on that. Thanks a lot for all the details, appreciate it!
A while back we needed to perform sender recovery whilst pulling transactions from the pool, to ensure recovery worked correctly and didn't error before adding the TX to a block. This looks to have introduced a performance hit to the sequencer which we need to investigate.
Related PR introducing these changes here #1480
Things to explore: