
Payload ID caching #10481

Merged: terencechain merged 12 commits into develop from payloadid-cache on Apr 6, 2022
Conversation

terencechain (Member)

This PR implements the payload ID cache, which ensures the beacon node (BN) gives the execution engine (EE) enough time to prepare the payload. The accompanying doc summarizes the design and tradeoffs: https://www.notion.so/Payload-ID-cache-582725c0205c45fbb853d3218e062658
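
For orientation, the sketch below shows the general shape of such a cache: a slot-indexed map from slot to the expected proposer index and the payload ID returned by the execution engine. The types and names are hypothetical and are not Prysm's actual implementation.

package cache

import "sync"

// payloadEntry pairs the validator expected to propose at a slot with the
// payload ID the execution engine returned when asked to start building
// that slot's payload.
type payloadEntry struct {
    proposerIndex uint64
    payloadID     [8]byte
}

// payloadIDCache is a minimal slot-indexed payload ID cache. Caching the ID
// lets the beacon node ask the engine to begin building a payload ahead of
// time and retrieve it later when the block is actually proposed.
type payloadIDCache struct {
    mu      sync.RWMutex
    entries map[uint64]payloadEntry // keyed by slot
}

func newPayloadIDCache() *payloadIDCache {
    return &payloadIDCache{entries: make(map[uint64]payloadEntry)}
}

// set records the proposer index and payload ID for a slot.
func (c *payloadIDCache) set(slot, proposerIndex uint64, id [8]byte) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.entries[slot] = payloadEntry{proposerIndex: proposerIndex, payloadID: id}
}

// get returns the cached proposer index and payload ID for a slot, if any.
func (c *payloadIDCache) get(slot uint64) (uint64, [8]byte, bool) {
    c.mu.RLock()
    defer c.mu.RUnlock()
    e, ok := c.entries[slot]
    return e.proposerIndex, e.payloadID, ok
}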

@terencechain added the Ready For Review and Merge labels Apr 4, 2022
@terencechain self-assigned this Apr 4, 2022
@terencechain requested a review from a team as a code owner April 4, 2022 17:18
// Get fee recipient.
feeRecipient := params.BeaconConfig().DefaultFeeRecipient
recipient, err := s.cfg.BeaconDB.FeeRecipientByValidatorID(ctx, proposerID)
switch err == nil {
Contributor

Cleaner to just do a switch {} and in each case be more explicit, such as err != nil.
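
One plausible shape of that refactor is sketched below; errNotFound, the function signature, and the error message are placeholders for illustration rather than Prysm's actual sentinel or API.

package blockchain

import (
    "errors"
    "fmt"
)

// errNotFound stands in for whatever sentinel the DB layer returns when no
// fee recipient is stored for a validator.
var errNotFound = errors.New("no fee recipient found")

// resolveFeeRecipient shows the suggested switch{} shape: each case states
// its error condition explicitly instead of switching on err == nil.
func resolveFeeRecipient(recipient, defaultRecipient [20]byte, err error) ([20]byte, error) {
    switch {
    case errors.Is(err, errNotFound):
        // No recipient stored for this validator; fall back to the default.
        return defaultRecipient, nil
    case err != nil:
        return [20]byte{}, fmt.Errorf("could not get fee recipient from db: %w", err)
    default:
        return recipient, nil
    }
}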

// the pending queue as it has been marked as a 'bad' block.
span.End()
continue
if !errors.Is(ErrOptimisticParent, err) {
Contributor

Needs rebase right?

f.Lock()
defer f.Unlock()
var vIdBytes [vIdLength]byte
copy(vIdBytes[:], bytesutil.Uint64ToBytesBigEndian(uint64(vId)))
Contributor

Why big endian for these? We should probably stick to little endian across the repo except for engine API calls.

Contributor

payloadID in the specs is [8]bytes by definition: https://github.com/ethereum/execution-apis/blob/main/src/engine/specification.md. I would think that we are better off without any conversion to an integer type.

terencechain (Member, Author)

Big-endian works because it's not consensus-critical; we use Uint64ToBytesBigEndian throughout Prysm, and we don't have a BytesToUint64LittleEndian, hah.
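
For context, here is a standalone illustration of both points using only the standard library (not Prysm's bytesutil helpers): the byte order of a purely local cache key only matters if something else reads the bytes back as a number, and the engine API's payload ID can be carried around as an opaque [8]byte with no integer conversion at all.

package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    // A validator index turned into a fixed-size key. Nothing consensus-
    // critical reads these bytes back as a number, so either byte order
    // works for a local cache key.
    var vIdx uint64 = 123456
    var keyBE, keyLE [8]byte
    binary.BigEndian.PutUint64(keyBE[:], vIdx)
    binary.LittleEndian.PutUint64(keyLE[:], vIdx)
    fmt.Printf("big-endian key:    %x\n", keyBE)
    fmt.Printf("little-endian key: %x\n", keyLE)

    // The engine API defines the payload ID as 8 opaque bytes, so it can be
    // stored and compared as [8]byte without converting to an integer.
    payloadID := [8]byte{0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08}
    fmt.Printf("payload ID:        %x\n", payloadID)
}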

@potuz (Contributor) left a comment

Can we cache the proposals for the next slot only instead of the next epoch? The gain is exactly the same and the chance of error is much smaller.

// getPayloadAttribute returns the payload attributes for the given state and slot.
// The attribute is required to initiate a payload build process in the context of an `engine_forkchoiceUpdated` call.
func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState, slot types.Slot) (bool, *enginev1.PayloadAttributes, types.ValidatorIndex, error) {
proposerID, _, ok := s.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(slot)
Contributor

In the event of a cache miss, shouldn't we be calling the engine API instead of just failing?

terencechain (Member, Author)

That's right: in the event of a cache miss, we return false and call the engine API. It doesn't fail.
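
In sketch form, the fallback described in that reply could look like the following; the function and dependency names are hypothetical rather than Prysm's API, and the point is only that a miss falls through to asking the engine instead of returning an error.

package cache

import "context"

// lookupFn returns the cached (proposerIndex, payloadID, ok) for a slot, and
// prepareFn asks the execution engine to start building a payload. Both are
// placeholders for the real dependencies.
type lookupFn func(slot uint64) (proposerIndex uint64, id [8]byte, ok bool)
type prepareFn func(ctx context.Context, slot uint64) ([8]byte, error)

// payloadIDForSlot prefers the cache and falls back to the engine on a miss;
// the miss itself is not treated as a failure.
func payloadIDForSlot(ctx context.Context, slot, proposerIdx uint64, lookup lookupFn, prepare prepareFn) ([8]byte, error) {
    if cachedProposer, id, ok := lookup(slot); ok && cachedProposer == proposerIdx && id != ([8]byte{}) {
        return id, nil // cache hit: the engine has already been preparing this payload
    }
    return prepare(ctx, slot) // cache miss: request payload preparation now
}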

// This returns the execution payload of a given slot. The function has full awareness of pre- and post-merge.
// The payload is computed with respect to the time of the merge.
func (vs *Server) getExecutionPayload(ctx context.Context, slot types.Slot, vIdx types.ValidatorIndex) (*enginev1.ExecutionPayload, error) {
proposerID, payloadId, ok := vs.ProposerSlotIndexCache.GetProposerPayloadIDs(slot)
if ok && proposerID == vIdx && payloadId != [8]byte{} { // Payload ID is cache hit. Return the cached payload ID.
Contributor

This conditional is repeated in the cache itself. Maybe you could expose a function called IsCacheHit.

terencechain (Member, Author)

The condition is different from the one in the cache, so it would be hard to combine them.

Here, to use the payload ID, all three of these have to be satisfied:

  • there exists a slot entry
  • the pre-cached proposer ID matches the proposer ID requesting this block
  • the payload ID is non-empty

In the cache, to override the payload, one of these two has to be satisfied:

  • there doesn't exist a slot entry
  • there exists a slot entry and the payload ID is non-empty (the reorg case)
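
Written out as two standalone predicates, the distinction reads roughly as follows (hypothetical helper names, purely to illustrate why the two checks don't collapse into a single IsCacheHit):

package cache

// canUseCachedID is the proposer-side check: the slot entry must exist, the
// cached proposer must match the proposer requesting the block, and the
// payload ID must be non-empty.
func canUseCachedID(exists bool, cachedProposer, requestingProposer uint64, id [8]byte) bool {
    return exists && cachedProposer == requestingProposer && id != [8]byte{}
}

// shouldOverrideEntry is the cache-side check: override when no entry exists
// for the slot, or when one exists with a non-empty payload ID (the reorg case).
func shouldOverrideEntry(exists bool, existingID [8]byte) bool {
    return !exists || existingID != [8]byte{}
}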

@rauljordan (Contributor) left a comment

[Screenshot: test coverage report, Apr 5, 2022]

I just checked and you have 100% coverage, great job!

@rauljordan previously approved these changes Apr 5, 2022
@rauljordan (Contributor) left a comment

I would like to see one more approval on this before we merge, perhaps from @potuz

@terencechain merged commit 0411b7e into develop Apr 6, 2022
The delete-merged-branch bot deleted the payloadid-cache branch April 6, 2022 23:36
@terencechain mentioned this pull request Apr 11, 2022