
timing preparePayload #3488

Closed
g11tech opened this issue Dec 7, 2021 · 4 comments
Assignees
Labels
spec-bellatrix 🐼 Issues targeting the merge spec version.

Comments

@g11tech
Contributor

g11tech commented Dec 7, 2021

At the time of block proposal, the CL needs to signal the EL to prepare the payload. It needs to be investigated whether preparePayload can be called as early as possible, giving the EL maximum time to build the payload.
context: sigp/lighthouse#2715
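
For context, a minimal TypeScript sketch of what "prepare payload" means in Engine API terms. The wrapper interface and type names below are hypothetical (not Lodestar's actual API); the two JSON-RPC methods and the attribute fields come from the Engine API spec.

```ts
// Hypothetical wrapper around the two Engine API calls involved in payload building.
// engine_forkchoiceUpdatedV1 with payloadAttributes asks the EL to start building a
// payload on top of headBlockHash and returns a payloadId; engine_getPayloadV1 then
// fetches whatever the EL has managed to build for that payloadId.

interface PayloadAttributes {
  timestamp: number;             // slot start time of the block being proposed
  random: string;                // randao mix for the proposal slot
  suggestedFeeRecipient: string; // address receiving priority fees
}

// Placeholder for the full ExecutionPayload structure
type ExecutionPayload = Record<string, unknown>;

interface ExecutionEngine {
  notifyForkchoiceUpdate(
    headBlockHash: string,
    safeBlockHash: string,
    finalizedBlockHash: string,
    payloadAttributes?: PayloadAttributes
  ): Promise<{payloadId: string | null}>;

  getPayload(payloadId: string): Promise<ExecutionPayload>;
}
```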

@g11tech g11tech mentioned this issue Dec 7, 2021
@g11tech g11tech self-assigned this Dec 7, 2021
@philknows philknows added the spec-bellatrix 🐼 Issues targeting the merge spec version. label Jan 22, 2022
@g11tech
Contributor Author

g11tech commented Feb 3, 2022

UPDATE: @MarekM25 (Nethermind) pointed out that ELs hardly get any time to prepare payloads, from essentially nothing up to about 40-50 ms:

2022-02-02 07:23:36.5174|INFO|188|Executing JSON RPC call engine_forkchoiceUpdatedV1 with params [{
  "headBlockHash": "0x2c984ebdde041c106b18b43858f72bbf25984ddf5e273e816a919c2a3784ad80",
  "safeBlockHash": "0x2c984ebdde041c106b18b43858f72bbf25984ddf5e273e816a919c2a3784ad80",
  "finalizedBlockHash": "0x9b033fb932f4d35f77763240e4321e2c9ee12405e11b5e8292fbdc3c9ece499a"
}] 
2022-02-02 07:23:48.0749|INFO|148|Executing JSON RPC call engine_forkchoiceUpdatedV1 with params [{
  "headBlockHash": "0x2c984ebdde041c106b18b43858f72bbf25984ddf5e273e816a919c2a3784ad80",
  "safeBlockHash": "0x2c984ebdde041c106b18b43858f72bbf25984ddf5e273e816a919c2a3784ad80",
  "finalizedBlockHash": "0x9b033fb932f4d35f77763240e4321e2c9ee12405e11b5e8292fbdc3c9ece499a"
},{
  "timestamp": "0x61fa3184",
  "random": "0x4006d2fdf04d1896879f15012be4ef51c4383c92a8e3c46a7b9e8a8de0f588d9",
  "suggestedFeeRecipient": "0x0000000000000000000000000000000000000000"
}] 
2022-02-02 07:23:48.0749|INFO|149|Sealed block 290373 (0xce2cdc...f4fd12), diff: 0, tx count: 0 
2022-02-02 07:23:48.0763|INFO|149|Sealed eth2 block 290373 (0xce2cdc...f4fd12), diff: 0, tx count: 0 
2022-02-02 07:23:48.0763|INFO|20|Processed     290373 |   11,667ms, mgasps    0.00 total    0.02, tps    0.00 total    0.45, bps    0.09 total    0.08, recv queue 0, proc queue 0 
2022-02-02 07:23:48.0763|INFO|149|Executing JSON RPC call engine_getPayloadV1 with params [0x87ca98484a6bb833] 
2022-02-02 07:23:48.0763|INFO|149|Hash: 0xce2cdcf439c8e5913127165923361fe00b965c2e922ff5daad453bd2f2f4fd12

This behavior is similar across all clients: every CL client currently does the same call flow, a prepare payload call followed immediately by a get payload call, one after the other within the same produceBlock flow.
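
For illustration, a minimal sketch of that sequential flow, reusing the hypothetical ExecutionEngine interface from the opening comment. Because getPayload is awaited immediately after the forkchoiceUpdated call, the EL only gets the gap between the two calls to build the payload.

```ts
// Hypothetical summary of the fork-choice head as the CL sees it
interface ForkChoiceHead {
  blockHash: string;
  safeBlockHash: string;
  finalizedBlockHash: string;
}

// Current (simplified) flow: prepare and fetch back-to-back inside produceBlock,
// leaving the EL only the round-trip gap (tens of ms at best) to assemble a payload.
async function produceExecutionPayload(
  engine: ExecutionEngine,
  head: ForkChoiceHead,
  attrs: PayloadAttributes
): Promise<ExecutionPayload> {
  const {payloadId} = await engine.notifyForkchoiceUpdate(
    head.blockHash,
    head.safeBlockHash,
    head.finalizedBlockHash,
    attrs
  );
  if (payloadId === null) throw Error("EL did not return a payloadId");
  // Fetched immediately, so the EL had almost no time to include transactions
  return engine.getPayload(payloadId);
}
```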

@MarekM25 further shared this data on mainnet block execution/production times (production times should be roughly similar to execution times):

[chart: distribution of mainnet block execution times]
Here, 2k (i.e. ~2 s) is the typical time needed to prepare a mainnet block.

This means that timing preparePayload is the next important step; there has been some discussion on the Eth R&D Discord interop channel about this.
tl;dr: the CL needs to somehow call prepare payload before the actual produce block, and it was suggested that consecutive prepare payload calls be an idempotent operation (i.e. consecutive calls with the exact same params are accepted without issue and the EL keeps building the payload). This way the CL would have a lot of room to devise strategies for triggering prepare payload without worrying about caching the payloadId, which is essentially the handle used to fetch the payload.

cc @dapplion @wemeetagain
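
A minimal sketch (reusing the hypothetical types above) of how the CL could exploit that idempotency: fire the same fcU-with-attributes call as early and as often as it wants, then re-issue it once more at proposal time to recover the payloadId rather than having to cache it.

```ts
// Early trigger: fire-and-forget fcU with attributes so the EL starts building.
// With idempotent behaviour, repeating this call with identical params is harmless.
async function prepareEarly(
  engine: ExecutionEngine,
  head: ForkChoiceHead,
  attrs: PayloadAttributes
): Promise<void> {
  await engine.notifyForkchoiceUpdate(head.blockHash, head.safeBlockHash, head.finalizedBlockHash, attrs);
}

// At proposal time: re-issue the exact same call to get the payloadId back, then
// fetch the payload the EL has been building since the earliest trigger.
async function fetchPreparedPayload(
  engine: ExecutionEngine,
  head: ForkChoiceHead,
  attrs: PayloadAttributes
): Promise<ExecutionPayload> {
  const {payloadId} = await engine.notifyForkchoiceUpdate(
    head.blockHash,
    head.safeBlockHash,
    head.finalizedBlockHash,
    attrs
  );
  if (payloadId === null) throw Error("EL did not return a payloadId");
  return engine.getPayload(payloadId);
}
```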

@dapplion
Contributor

dapplion commented Feb 4, 2022

That's a good point; let's wait for consensus on the best course of action and then implement.

@g11tech
Contributor Author

g11tech commented Feb 4, 2022

UPDATE: It has been suggested that, if the CL has to propose in the next slot, it makes fcU calls (with payload attributes):

  • once the block for the current slot has arrived and been processed
  • 4 seconds into the slot if the block hasn't arrived by then, assuming the slot will be skipped
  • on any subsequent head changes

However, another question needs clarity:
before the fcU call, should the queued attestations (attestations of the current slot, which become applicable in the next slot where the CL is proposing) be processed in order to compute the correct head?
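
For illustration, a sketch of that triggering logic built on the hypothetical prepareEarly helper from the previous comment; the trigger names and driver function are assumptions, not Lodestar's actual event model.

```ts
// Hypothetical triggers for (re-)issuing fcU with payload attributes ahead of a proposal
type PrepareTrigger =
  | "blockForSlotProcessed" // the block for the current slot arrived and was processed
  | "slotCutoffReached"     // 4s into the slot with no block, so the slot is assumed skipped
  | "headChanged";          // any subsequent fork-choice head change

// On any trigger, if this node proposes in the next slot, (re-)prepare the payload.
// Idempotent prepare means firing on every trigger with the same params is safe.
async function onPrepareTrigger(
  trigger: PrepareTrigger,
  proposingNextSlot: boolean,
  engine: ExecutionEngine,
  head: ForkChoiceHead,
  attrs: PayloadAttributes
): Promise<void> {
  if (!proposingNextSlot) return;
  // attrs.timestamp should target the next slot, the one this node proposes in
  await prepareEarly(engine, head, attrs);
}
```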

@g11tech
Contributor Author

g11tech commented May 27, 2022

Closing as #3965 is merged; the optimization is being tackled in separate issue #4054.

@g11tech g11tech closed this as completed May 27, 2022