
Adds mev commit support to flashbots relay #1

Merged
merged 47 commits into main on Oct 11, 2024

Conversation

@ckartik (Collaborator) commented on Sep 30, 2024:

📝 Summary

This PR integrates MEV-Commit functionality into the relay, enabling support for MEV-Commit registered validators and builders. Key changes include:

Add new MevCommitClient to interact with MEV-Commit contracts
Update Housekeeper to periodically sync MEV-Commit validator and builder registrations
Modify block submission handling to check MEV-Commit registration status
Add Redis caching for MEV-Commit registration data

Key implementation details:

New mevcommitclient package to interface with MEV-Commit contracts (rough interface sketch after this list)
Updated Housekeeper to sync MEV-Commit registrations every epoch
Modified handleSubmitNewBlock to reject submissions from unregistered builders
Added Redis caching for quick lookups of MEV-Commit registration status
New database migrations for MEV-Commit related tables
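
For orientation, here is a rough sketch of what the mevcommitclient surface could look like. Only MevCommitClient, MevCommitProvider, and ListenForActiveBuildersEvents appear in the review threads below; the other names and signatures are assumptions for illustration, not this PR's exact code.

```go
// Illustrative interface sketch for the new mevcommitclient package.
package mevcommitclient

import "github.com/ethereum/go-ethereum/common"

// MevCommitProvider describes a builder (provider) registered on the
// MEV-Commit chain. Field names are assumed.
type MevCommitProvider struct {
	Pubkey     []byte
	EOAAddress common.Address
}

// IMevCommitClient sketches the calls the Housekeeper and relay API need:
// validator opt-in lookups, a snapshot of active builders, and a stream of
// newly registered builders.
type IMevCommitClient interface {
	// IsValidatorOptedIn reports whether a validator BLS pubkey is
	// MEV-Commit registered (assumed method name).
	IsValidatorOptedIn(pubkey []byte) (bool, error)
	// GetActiveBuilders returns the currently registered, unslashed builders
	// (assumed method name).
	GetActiveBuilders() ([]MevCommitProvider, error)
	// ListenForActiveBuildersEvents streams newly registered builders
	// (this method appears in the PR's diff).
	ListenForActiveBuildersEvents() (<-chan MevCommitProvider, error)
}
```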

Testing:

Unit tests added for MevCommitClient
Existing tests updated to account for new MEV-Commit checks

This change allows the relay to support MEV-Commit while maintaining backwards compatibility with non-MEV-Commit validators and builders. Further testing and monitoring will be needed when deploying to production.
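
To make the backwards-compatibility behavior concrete, here is a minimal sketch of the decision added to the submission path; the helper and its inputs are hypothetical, not the exact code in handleSubmitNewBlock:

```go
// Hypothetical helper illustrating the new submission check: if the slot's
// proposer has opted into MEV-Commit, only MEV-Commit registered builders may
// submit; for all other proposers nothing changes.
package api

import "errors"

var ErrBuilderNotMevCommitRegistered = errors.New("builder is not MEV-Commit registered")

func checkMevCommitEligibility(proposerRegistered, builderRegistered bool) error {
	if !proposerRegistered {
		// Non-MEV-Commit proposer: accept submissions exactly as before.
		return nil
	}
	if !builderRegistered {
		return ErrBuilderNotMevCommitRegistered
	}
	return nil
}
```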

The diagram below illustrates how information is pulled from the new data sources and cached.

```mermaid
sequenceDiagram
    participant API as Relay API
    participant HK as Housekeeper
    participant MC as MEV-Commit Clients
    participant MCC as MEV-Commit Chain
    participant BN as Beacon Nodes
    participant Redis

    Note over HK,BN: Startup
    HK->>BN: Get head slot (only on startup)
    BN-->>HK: Return head slot
    HK->>BN: Subscribe to head events

    Note over HK,MCC: Every 32 slots
    loop Every 32 slots
        HK->>MCC: Get active MEV-Commit validators for current and next epoch
        MCC-->>HK: Return active MEV-Commit validators
        HK->>Redis: Update active MEV-Commit validators
    end

    HK->>BN: Get proposer duties
    BN-->>HK: Return proposer duties
    HK->>Redis: Update proposer duties
    
    API->>Redis: Check for initialized data
    Redis-->>API: Data ready
    API->>MC: Initialize connection
    
    Note over API,MC: API ready to serve requests

    Note over HK,BN: Continuous operation
    BN->>HK: Stream head events
    HK->>Redis: Update head slot

    Note over HK,MCC: Every slot
    loop Every slot
        HK->>MCC: Get active builders
        MCC-->>HK: Return active builders
        HK->>Redis: Update active builders
    end
```
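
A sketch of the per-epoch validator sync shown in the diagram; GetProposerDuties and common.SlotsPerEpoch appear in the diff below, while the MEV-Commit client and Redis method names are assumptions:

```go
// Sketch of the Housekeeper's every-32-slots MEV-Commit validator sync.
// mevCommitClient and redis method names are assumed, not this PR's exact code.
func (hk *Housekeeper) updateMevCommitValidatorRegistrations(headSlot uint64) {
	epoch := headSlot / common.SlotsPerEpoch

	// Proposer duties tell us which validator pubkeys to check for opt-in.
	duties, err := hk.beaconClient.GetProposerDuties(epoch)
	if err != nil {
		hk.log.WithError(err).Error("failed to get proposer duties")
		return
	}

	for _, duty := range duties.Data {
		optedIn, err := hk.mevCommitClient.IsValidatorOptedIn([]byte(duty.Pubkey))
		if err != nil {
			hk.log.WithError(err).Error("failed to check MEV-Commit opt-in status")
			continue
		}
		if optedIn {
			if err := hk.redis.SetMevCommitValidatorRegistered(duty.Pubkey); err != nil {
				hk.log.WithError(err).Error("failed to cache MEV-Commit validator registration")
			}
		}
	}
}
```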

⛱ Motivation and Context

📚 References


✅ I have run these commands

  • make lint
  • make test-race
  • go mod tidy
  • I have seen and agree to CONTRIBUTING.md

@ckartik requested a review from aloknerurkar on October 4, 2024.
@@ -248,6 +273,48 @@ func (hk *Housekeeper) UpdateProposerDutiesWithoutChecks(headSlot uint64) {
log.WithField("numDuties", len(_duties)).Infof("proposer duties updated: %s", strings.Join(_duties, ", "))
}

// When the update is triggered, we store registered validators in Redis for the current and next epoch.

Per conversation today this should just be for the current epoch, correct?

@ckartik (Collaborator, Author) replied:

The relay already employs this as its strategy; I'm guessing it's so that at the start of an epoch it still has some sense of which validator goes next. This is definitely a confusing point for me, though, because the relay auction needs to run before the slot starts. That means the auction for a validator with a proposer duty at slot 0 of the next epoch must run in the previous epoch. But how is that possible, given that we've shown proposer duties are only stable in the current epoch?
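
For concreteness, a tiny worked example of that timing (the numbers are illustrative only):

```go
// The auction for the first slot of the next epoch runs one slot earlier,
// which still belongs to the current epoch.
package main

import "fmt"

func main() {
	const slotsPerEpoch = 32
	epoch := uint64(100)

	nextEpochFirstSlot := (epoch + 1) * slotsPerEpoch // slot 3232, first slot of epoch 101
	auctionSlot := nextEpochFirstSlot - 1             // slot 3231, when that auction must run

	fmt.Println(auctionSlot / slotsPerEpoch) // prints 100: the auction still happens in epoch 100
}
```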


epoch := headSlot / common.SlotsPerEpoch

duties, err := hk.beaconClient.GetProposerDuties(epoch)

Isn't this already obtained in hk.updateProposerDuties?

@ckartik (Collaborator, Author) replied on Oct 9, 2024:

That stores a different representation of the data.

hk.log.WithError(err).Error("failed to get proposer duties for next epoch")
return
}
// Combine duties from current and next epoch into a single array

We don't need this IMO, and should purely use the current epoch (where duties are finalized).

@ckartik (Collaborator, Author) replied:

I don't think it hurts: in the case of a reorg in proposer duties, we'll just have slightly more data in the cache. I think that's worth it for the latency benefits we're getting, especially for the first validator in the new epoch.
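
A minimal sketch of the combine-and-cache step under discussion; ProposerDuty here is a stand-in type, not the relay's actual one:

```go
// ProposerDuty is an illustrative stand-in for the relay's duty type.
type ProposerDuty struct {
	Slot   uint64
	Pubkey string
}

// mergeDuties (hypothetical helper): current-epoch duties come first and
// next-epoch duties are appended, so a reorg in next-epoch assignments only
// means a few extra, possibly stale cache entries rather than a missed lookup.
func mergeDuties(current, next []ProposerDuty) []ProposerDuty {
	merged := make([]ProposerDuty, 0, len(current)+len(next))
	merged = append(merged, current...)
	merged = append(merged, next...)
	return merged
}
```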

@ckartik marked this pull request as ready for review on October 9, 2024.
@ckartik (Collaborator, Author) commented on Oct 9, 2024:

I'm continuing to add more unit tests, but I would appreciate a review of the new approach for builder monitoring. I think the more stable solution is to adapt the contracts themselves, but that would require a new release, which I want to avoid for this iteration of the relay. @shaspitz @aloknerurkar

}
}

func (hk *Housekeeper) cleanupMevCommitBuilderRegistrations() {


It's fine to do this once at the start, but we need to monitor the FundsSlashed event and delete the registration when it fires. Currently we only check when we initially set it; providers might get their stakes slashed while this is running, so we need to handle that.
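
A minimal sketch of the suggested handling; the slashed-builder stream and the Redis delete call are assumed names, not this PR's code:

```go
// Sketch: evict a builder's cached registration when its stake is slashed.
func (hk *Housekeeper) watchBuilderSlashes(slashed <-chan MevCommitProvider) {
	for builder := range slashed {
		if err := hk.redis.DeleteMevCommitBuilderRegistration(builder.EOAAddress); err != nil {
			hk.log.WithError(err).Error("failed to remove slashed builder from cache")
		}
	}
}
```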

}

for {
builder := <-newBuilderRegistered


This is not really good design: if the application is closed, this goroutine stays blocked. Ideally there should be some mechanism to close it, like passing a context to the start, or having a channel that is closed on stop and checking that channel here.

Or you can close the newBuilderRegistered channel on shutdown and then check for that here:

builder, more := <-newBuilderRegistered
if !more {
    return
}

or

for builder := range newBuilderRegistered {}

@ckartik (Collaborator, Author) replied:

Thanks for the detailed advice. There's no context available to pass at the top level, and I don't think the housekeeper service itself handles shutdowns gracefully.

@ckartik (Collaborator, Author) replied:

I've opted for the second option you described. I could close the channel, but there's really no signal on application shutdown.
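
For reference, a minimal sketch of the range form the reviewer suggested, which exits cleanly once the producer closes newBuilderRegistered on shutdown (the Redis method name is assumed):

```go
go func() {
	// Terminates when the event listener closes newBuilderRegistered.
	for builder := range newBuilderRegistered {
		if err := hk.redis.SetMevCommitBuilderRegistered(builder.EOAAddress); err != nil {
			hk.log.WithError(err).Error("failed to cache new MEV-Commit builder")
		}
	}
}()
```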


return activeBuilders, nil
}
func (m *MevCommitClient) ListenForActiveBuildersEvents() (<-chan MevCommitProvider, error) {


It might be better to have two goroutines here, one to listen for registrations and the other to listen for funds slashed, and to update Redis here itself instead of returning a channel. Then the housekeeper can just call a function here to check.
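
A rough sketch of that structure, with the client writing to Redis itself; apart from MevCommitClient, the method and field names are assumptions:

```go
// Sketch: the client owns two goroutines, one per event stream, and updates
// Redis directly, so callers only need a read-side lookup.
func (m *MevCommitClient) Start() error {
	registered, err := m.listenForBuilderRegistrations() // assumed
	if err != nil {
		return err
	}
	slashed, err := m.listenForFundsSlashed() // assumed
	if err != nil {
		return err
	}

	go func() { // registrations -> add to cache
		for b := range registered {
			if err := m.redis.SetMevCommitBuilderRegistered(b.EOAAddress); err != nil {
				m.log.WithError(err).Error("failed to cache builder registration")
			}
		}
	}()
	go func() { // slashes -> evict from cache
		for b := range slashed {
			if err := m.redis.DeleteMevCommitBuilderRegistration(b.EOAAddress); err != nil {
				m.log.WithError(err).Error("failed to evict slashed builder")
			}
		}
	}()
	return nil
}

// IsBuilderRegistered is the read-side call the Housekeeper or API would use.
func (m *MevCommitClient) IsBuilderRegistered(addr common.Address) (bool, error) {
	return m.redis.IsMevCommitBuilderRegistered(addr)
}
```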

@ckartik (Collaborator, Author) replied:

Sure, for now. Ideally we should emit a provider de-registered event as well.

@ckartik (Collaborator, Author) replied:

I'll update the contracts and do simplifications after, but I'll go with the approach you're suggesting for now.

@ckartik merged commit 426c229 into main on Oct 11, 2024.