Adds mev commit support to flashbots relay #1
Conversation
@@ -248,6 +273,48 @@ func (hk *Housekeeper) UpdateProposerDutiesWithoutChecks(headSlot uint64) {
log.WithField("numDuties", len(_duties)).Infof("proposer duties updated: %s", strings.Join(_duties, ", "))
}

// When the update is triggered, we store registered validators in Redis for the current and next epoch.
Per conversation today this should just be for the current epoch, correct?
The relay already employs this as its strategy; I'm guessing it's so that at the start of an epoch they still have some idea of which validator goes next. Though this is definitely confusing to me: the relay auction needs to run before the slot starts, which means a validator with a proposer duty at slot 0 of the next epoch must have its auction run in the previous epoch. But how would that be possible, given that we've shown proposer duties are only stable in the current epoch?
epoch := headSlot / common.SlotsPerEpoch

duties, err := hk.beaconClient.GetProposerDuties(epoch)
Isn't this already obtained in hk.updateProposerDuties?
That stores a different representation of the data.
hk.log.WithError(err).Error("failed to get proposer duties for next epoch")
return
}
// Combine duties from current and next epoch into a single array
Don't need this imo and should purely use current epoch (where duties are finalized)
I don't think it hurts; in the case of a reorg in proposer duties we'll just have slightly more data in the cache. I think that's worth it for the latency benefit we're getting, especially for the first validator in the new epoch.
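For reference, a minimal sketch of the combined current-plus-next-epoch caching path being discussed here; `duties.Data`, `ProposerDutiesResponseData`, and `SetMevCommitProposerDuties` are assumed names, not necessarily the PR's actual ones:

```go
// Sketch only: fetch proposer duties for the current and next epoch and
// cache the combined set. GetProposerDuties appears in the diff above;
// the duties.Data field and SetMevCommitProposerDuties are assumed names.
func (hk *Housekeeper) cacheMevCommitProposerDuties(headSlot uint64) {
	epoch := headSlot / common.SlotsPerEpoch

	var combined []beaconclient.ProposerDutiesResponseData
	for _, e := range []uint64{epoch, epoch + 1} {
		duties, err := hk.beaconClient.GetProposerDuties(e)
		if err != nil {
			// Next-epoch duties are best-effort; the current epoch is what matters.
			hk.log.WithError(err).Errorf("failed to get proposer duties for epoch %d", e)
			continue
		}
		combined = append(combined, duties.Data...)
	}

	// Caching the next epoch too means the auction for the first slot of the
	// new epoch does not have to wait for the epoch boundary.
	if err := hk.redis.SetMevCommitProposerDuties(combined); err != nil {
		hk.log.WithError(err).Error("failed to cache proposer duties in Redis")
	}
}
```

Dropping the `epoch + 1` entry would be the conservative alternative if next-epoch duties turn out to be too unstable in practice.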
Continuing to add more unit testing, but would appreciate a review on the new approach for builder monitoring. I think the more stable solution is to adapt the contracts themselves, but that would require a new release, which I want to avoid for this iteration of the relay. @shaspitz @aloknerurkar
}
}

func (hk *Housekeeper) cleanupMevCommitBuilderRegistrations() { |
It's fine to do this once at the start, but we need to monitor the FundsSlashed event and delete the registration when it fires. Currently we only check when we initially set it, but providers might get their stake slashed while the relay is running, so we need to handle that.
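A sketch of that ongoing handling at the level of the PR's own client abstraction; `ListenForBuilderSlashedEvents`, `DelMevCommitBuilderRegistration`, and the `Pubkey` field are hypothetical names for illustration only:

```go
// Sketch only: keep reacting to FundsSlashed events for the lifetime of the
// housekeeper instead of checking registrations once at startup.
// ListenForBuilderSlashedEvents, DelMevCommitBuilderRegistration, and the
// Pubkey field are hypothetical names, not the PR's actual API.
func (hk *Housekeeper) monitorSlashedBuilders() {
	slashed, err := hk.mevCommitClient.ListenForBuilderSlashedEvents()
	if err != nil {
		hk.log.WithError(err).Error("failed to subscribe to FundsSlashed events")
		return
	}

	// The loop exits when the client closes the channel on shutdown.
	for builder := range slashed {
		// Drop the cached registration so later submissions from this builder are rejected.
		if err := hk.redis.DelMevCommitBuilderRegistration(builder.Pubkey); err != nil {
			hk.log.WithError(err).Error("failed to delete slashed builder registration")
		}
	}
}
```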
services/housekeeper/housekeeper.go
Outdated
}

for {
builder := <-newBuilderRegistered
This is not really a good design: if the application is closed, this goroutine stays blocked. Ideally there should be some mechanism to shut it down, like passing a context to the start, or having a channel that is closed on stop and checking it here.
Or you can close the newBuilderRegistered channel on shutdown and then check for it here:
builder, more := <-newBuilderRegistered
if !more {
	return
}
or
for builder := range newBuilderRegistered {}
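And, for comparison, a minimal sketch of the context-based variant mentioned above (the function and handler are illustrative, not existing relay code):

```go
// Sketch of the context-based variant: exit when either the context is
// cancelled or the registration channel is closed.
func consumeBuilderRegistrations(ctx context.Context, newBuilderRegistered <-chan MevCommitProvider, handle func(MevCommitProvider)) {
	for {
		select {
		case <-ctx.Done():
			return
		case builder, ok := <-newBuilderRegistered:
			if !ok {
				return
			}
			handle(builder)
		}
	}
}
```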
Thanks for the detailed advice. There's no context available to pass at the top level; I don't think the housekeeper service itself handles shutdowns gracefully.
I've opted for the second option you described. I could close the channel, but there's really no signal on application shutdown.
mevcommitclient/mev_commit_client.go
Outdated
return activeBuilders, nil
}
func (m *MevCommitClient) ListenForActiveBuildersEvents() (<-chan MevCommitProvider, error) { |
It might be better to have two goroutines here: one to listen for registrations and another to listen for FundsSlashed, updating Redis here itself instead of returning a channel. Then the housekeeper can just call a function here to check.
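A rough sketch of that shape, where the client owns both watchers and the Redis updates and exposes a simple lookup; every name here (`Start`, `watchRegistrations`, `watchFundsSlashed`, `IsBuilderRegistered`, `ExistsMevCommitBuilder`) is an illustrative assumption:

```go
// Sketch only: MevCommitClient runs two internal goroutines and applies the
// results to Redis itself; callers query instead of consuming a channel.
// Every method name here is an assumption for illustration.
func (m *MevCommitClient) Start(ctx context.Context) {
	go m.watchRegistrations(ctx) // writes new builder registrations to Redis
	go m.watchFundsSlashed(ctx)  // deletes slashed builders from Redis
}

// IsBuilderRegistered is what the housekeeper (or the submission handler)
// would call to check the current state.
func (m *MevCommitClient) IsBuilderRegistered(ctx context.Context, pubkey string) (bool, error) {
	return m.redis.ExistsMevCommitBuilder(ctx, pubkey)
}
```

Having the client close its own goroutines on context cancellation would also address the goroutine-lifetime concern raised earlier.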
Sure, for now. Ideally we should emit a provider de-registered event as well.
I'll update the contracts and simplify afterwards, but I'll go with the approach you're suggesting for now.
📝 Summary
This PR integrates MEV-Commit functionality into the relay, enabling support for MEV-Commit registered validators and builders. Key changes include:
- Add new MevCommitClient to interact with MEV-Commit contracts
- Update Housekeeper to periodically sync MEV-Commit validator and builder registrations
- Modify block submission handling to check MEV-Commit registration status
- Add Redis caching for MEV-Commit registration data
Key implementation details:
- New mevcommitclient package to interface with MEV-Commit contracts
- Updated Housekeeper to sync MEV-Commit registrations every epoch
- Modified handleSubmitNewBlock to reject submissions from unregistered builders (see the sketch below)
- Added Redis caching for quick lookups of MEV-Commit registration status
- New database migrations for MEV-Commit related tables
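A hedged sketch of the registration gate referenced above; the Redis accessor and the `RespondError` helper are assumed names, not necessarily the PR's actual code:

```go
// Sketch only: reject block submissions from builders that have no
// MEV-Commit registration cached in Redis. IsMevCommitBuilderRegistered
// and RespondError are assumed/illustrative helpers.
func (api *RelayAPI) checkMevCommitBuilder(w http.ResponseWriter, builderPubkey string) bool {
	registered, err := api.redis.IsMevCommitBuilderRegistered(builderPubkey)
	if err != nil {
		api.log.WithError(err).Error("failed to look up MEV-Commit builder registration")
		api.RespondError(w, http.StatusInternalServerError, "registration lookup failed")
		return false
	}
	if !registered {
		api.RespondError(w, http.StatusForbidden, "builder is not registered with MEV-Commit")
		return false
	}
	return true
}
```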
Testing:
- Unit tests added for MevCommitClient
- Existing tests updated to account for new MEV-Commit checks
This change allows the relay to support MEV-Commit while maintaining backwards compatibility with non-MEV-Commit validators and builders. Further testing and monitoring will be needed when deploying to production.
Diagram to assist understanding of changes related to pulling information from new data sources and caching.
⛱ Motivation and Context
📚 References
✅ I have run these commands
- make lint
- make test-race
- go mod tidy
CONTRIBUTING.md