
Deprecate prepareBeaconProposer, consolidate on registerValidator. #435

Open · jshufro opened this issue Mar 21, 2024 · 11 comments

@jshufro (Contributor) commented Mar 21, 2024

I truly believe prepareBeaconProposer is strictly more cumbersome than registerValidator.

  1. It's vulnerable to attack
  2. Validator Clients have inherent knowledge of the private keys of their attached Validators, but not the indices, so the first thing many of them do is request, by public key, the index of each associated validator. At worst this can lead to problems, and even once the kinks are ironed out, it represents unnecessary overhead during validator client startup.

Now that builder_boost_factor is present, control over whether to use a builder block or a paired node block is satisfied at the time produceBlockV3 is called, and consolidating prepareBeaconProposer and registerValidator will not leave the BN wondering which block to return to each attached validator client.
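For illustration, the selection described above can be sketched as follows. This is a minimal sketch, not any client's actual code; the function and type names are hypothetical, and the comparison rule is an assumption based on how builder_boost_factor is described for produceBlockV3 (a percentage weighting applied to the builder block's value):

```python
# Hypothetical sketch: with builder_boost_factor, the builder-vs-local
# decision happens when the block is produced, not when the validator
# is registered. Names here are illustrative, not real client code.

from dataclasses import dataclass


@dataclass
class BlockResponse:
    execution_payload_blinded: bool  # True => builder block, False => local block
    value: int                       # block value (wei)


def produce_block_v3(local_value: int, builder_value: int,
                     builder_boost_factor: int) -> BlockResponse:
    """Return the builder block only if its boosted value beats the local one.

    builder_boost_factor is a percentage: 100 means no preference,
    0 forces the local block, a very large value effectively forces
    the builder block.
    """
    boosted = builder_value * builder_boost_factor // 100
    if boosted > local_value:
        return BlockResponse(execution_payload_blinded=True, value=builder_value)
    return BlockResponse(execution_payload_blinded=False, value=local_value)
```

The point being that the beacon node can prepare both blocks and let this single comparison, driven by the VC's request parameters, decide which one is returned.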

@ajsutton (Contributor):
As the author of that blog post about the attack, it's not something I would be worried about and if anything I'd consider it a feature. A validator should not be using a public beacon node to perform its duties as that means it isn't actually validating the chain (it's just trusting the public node provider).

Otherwise I haven't thought enough about the proposed change to say if it's a good idea or not - just wanted to clarify that this isn't required to fix an actual security issue.

@mcdee (Contributor) commented Mar 22, 2024

Registering validators registers all validators known by the validator client and is a pass-through to relays, whereas prepare proposer is meant to be limited to just those validators that are scheduled to propose within the next epoch and more focused on local block production.

There is an argument for using public key rather than index in the registration, although index is used in a number of other places in the API so it isn't a unique requirement for this endpoint. But I think some sort of fast-path notification to the beacon node that the validator client has a validator that is expecting to request a block in the near future is still useful, as it avoids the beacon node having to wade through potentially tens of thousands of registrations to see if one of them is scheduled to propose (although client teams can weigh in here if they think that the overhead isn't relevant).
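The fast-path concern above can be sketched as a simple filter. This is an illustrative sketch only (all names are hypothetical): the beacon node restricts payload preparation to the subset of registrations whose validators have a proposal duty in the next epoch, rather than treating every registration as a prepare signal:

```python
# Illustrative sketch: out of potentially tens of thousands of
# registrations, the beacon node only needs payload preparation for
# validators actually scheduled to propose soon. Names are hypothetical.

def proposers_to_prepare(registrations: dict[int, str],
                         duties_next_epoch: set[int]) -> dict[int, str]:
    """Map validator index -> fee recipient, restricted to upcoming proposers."""
    return {index: fee_recipient
            for index, fee_recipient in registrations.items()
            if index in duties_next_epoch}
```

A dedicated fast-path endpoint avoids even this scan; whether the scan's overhead matters in practice is the question posed to the client teams.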

> Now that #386 is present, control over whether to use a builder block or a paired node block is satisfied at the time produceBlockV3 is called, and consolidating prepareBeaconProposer and registerValidator will not leave the BN wondering which block to return to each attached validator client.

Note that local payload generation is usually started ahead of the proposing slot, to allow the execution node a chance to build a good payload and provide same when requested with minimal delay. And the underlying code will usually (outside of specific values of the boost factor) talk to both relays and execution node for payloads simultaneously, so I don't think that this point is relevant for what is being discussed.

@jshufro (Contributor, Author) commented Mar 22, 2024

> As the author of that blog post about the attack, it's not something I would be worried about and if anything I'd consider it a feature. A validator should not be using a public beacon node to perform its duties as that means it isn't actually validating the chain (it's just trusting the public node provider).

Hi Adrian, I have been wondering when we'd run into each other. I reference your blog post often.

Evaluating it as a feature, it also fails the sniff test in my opinion. The trust assumption with prepareBeaconProposer on a public beacon node is that not only are the maintainers benevolent but so is every single other participant, yet the maintainers can easily reduce this to eliminate the need to trust the other participants. I helped build such a public beacon node. Further, most "public beacon nodes" are public by accident: node operators spin up a VPS or bare metal and use docker to manage their clients and ufw to secure the node, but the iptables rules each creates bypass one another. A quick search on Shodan shows thousands of nodes with unsecured API ports.

So, it doesn't accomplish the prevention of public beacon nodes, and it also hurts less-than-professional node operators. Requiring a signature from the validator would at least protect the latter group.

Proof of custody, if well implemented, will resolve the public beacon node problem. prepareBeaconProposer is largely irrelevant at this point.


> Registering validators registers all validators known by the validator client and is a pass-through to relays, whereas prepare proposer is meant to be limited to just those validators that are scheduled to propose within the next epoch and more focused on local block production.

Interesting, if this is the original intent. I have observed every single client send prepareBeaconProposer and registerValidator with the same vector sizes at the same cadence. I have graphs, if needed, to back this up.

Since both are called with every attached validator, every epoch (generally), it doesn't make sense to me that they can't serve the same purpose.

> I don't think that this point is relevant for what is being discussed.

It is perhaps unrelated. I only mean that in the past, eligibility to request a blinded block was predicated on a validator's presence in registerValidators, and eligibility to request a block built by the paired execution client was predicated on a validator's presence in prepareBeaconProposer, but that need not be true. Both can be prepared by the beacon node based on registerValidator alone, called unconditionally by the validator client, and the validator client can control which one is signed and published.


Ultimately this suggestion serves two purposes:

  1. Improve the security of node operators who fail to secure their beacon node API port
  2. Simplify the clients

I'd like to hear from the client teams as to whether they think it accomplishes 2.
I can tell you that 1 is a given, though.

@nflaig (Collaborator) commented Mar 28, 2024

> send prepareBeaconProposer and registerValidator with the same vector sizes at the same cadence. I have graphs, if needed, to back this up.

This is correct; the validator client doesn't know the proposers beforehand, so it has to send them all, and the beacon node will prepare a payload if there is a scheduled proposal.

> I'd like to hear from the client teams

Although we still need to get the validator status at startup for various other reasons, I think consolidating those two APIs is a strict improvement in terms of security and also avoids the overhead of calling two different APIs. The only overhead is that we would have to verify signatures, but since this API is not that time sensitive I don't think it's an issue.

Just a note that we can't just deprecate prepareBeaconProposer and use the current registerValidator API. This requires implementing a registerValidatorV2 API which consolidates the two APIs (similar to block v3), otherwise pairing two different clients might result in proposers not being prepared at all.

Maybe something to enable at Electra fork epoch?

@jshufro (Contributor, Author) commented Mar 30, 2024

I spoke out-of-band with @tersec on the nimbus discord to get his perspective, and hopefully this summary is representative:

  • He is unconcerned with the security implications of prepareBeaconProposer being unauthed, and indifferent to changes to add signatures provided they do not create excess complexity.
  • Nimbus and at least LH send prepareBeaconProposer every slot, and registerValidator every epoch.
    • They do this to prevent, say, proxy changes which switch BNs in a way opaque to the validator client from creating up-to-one-epoch delays in signaling the EC to build blocks for pending duties.
    • If we were to merge endpoints, they would either have to revert to accepting delays in prepareBeaconProposer updates, or send registerValidator once per slot and rate limit the pass-through to the relays (who do not want registerValidator calls to cause undue load).
  • He is somewhat compelled by the notion of using pubkeys in place of indices to simplify the VC lifecycle.
  • He is wary of merging what he sees as loosely related codepaths (i.e., they are similar, but not so similar that they should be combined, lest we need to split them apart again in the future)

I'm of a mind to say that proxied beacon nodes bear the responsibility to broadcast prepareBeaconProposer requests to all upstreams, and that VCs should not be sending prepareBeaconProposer once per slot. On the other hand, I can see why they do: a node operator who is not savvy enough to secure their beacon API port is certainly not savvy enough to handle a complicated proxying setup.
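The merged-endpoint option described in the summary above could be sketched roughly as follows. This is only an illustrative sketch (the class and method names are hypothetical): the VC sends registerValidator every slot, and the beacon node caches every call but rate-limits the pass-through to relays to once per epoch, so relays see the cadence the builder spec expects:

```python
# Hypothetical sketch: per-slot registerValidator calls on the VC side,
# with the relay pass-through rate-limited to once per epoch on the BN
# side. SLOTS_PER_EPOCH matches mainnet; the logic is illustrative.

SLOTS_PER_EPOCH = 32


class RegistrationForwarder:
    def __init__(self) -> None:
        self.last_forwarded_epoch: int | None = None

    def on_register_validator(self, slot: int) -> bool:
        """Cache every call locally; return True when this batch should
        also be forwarded to the relays (at most once per epoch)."""
        epoch = slot // SLOTS_PER_EPOCH
        if epoch != self.last_forwarded_epoch:
            self.last_forwarded_epoch = epoch
            return True
        return False  # only refresh the local proposer-preparation cache
```

This would keep the EC-signaling path fresh every slot without subjecting relays to per-slot registration load.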

@tersec commented Mar 30, 2024

Thank you for summarizing our discussion here; yes, that's accurate from my perspective.

@michaelsproul (Collaborator):
Speaking as a Lighthouse dev, I am in favour of combining them. I think it's a good simplification. I suspect that the current per-slot schedule for prepareBeaconProposer is excessive. It serves to inform the BN of the fee recipient, which would only be unknown if the BN had just restarted and lost all its state.

We could start experimenting prior to a coordinated release (e.g. Electra) by updating BNs to always draw fee recipients from registerValidator calls, and adding a flag to VCs to stop sending prepareBeaconProposer. Then the fork would just be a matter of 1) Knowing that the BN is consuming the fee recipient data and 2) Enabling the flag to turn off prepareBeaconProposer. We could also bump the version number on registerValidator for good measure, but we would not need to.

@jshufro (Contributor, Author) commented Apr 8, 2024

> It serves to inform the BN of the fee recipient, which would only be unknown if the BN had just restarted and lost all its state.

This can also be mitigated by holding a TCP connection open (or similar) and sending an ad hoc request each time it's reestablished. Nimbus does something along these lines, from what I can tell through observation.

@nflaig (Collaborator) commented Dec 10, 2024

There is ongoing discussion to pass the target gas limit configured on the VC side as part of prepareBeaconProposer, which would be required for ethereum/execution-apis#608. This would be a good opportunity to consolidate these APIs, as we would have to update prepareBeaconProposer either way.

The new consolidated API could be called each slot, as prepareBeaconProposer is currently, to address #435 (comment), or validator clients could adopt different approaches; either way, on the beacon node side we could limit this to only forward registrations to the builder once per epoch (to still follow the builder spec).

Ideally, we want to roll this out now and migrate to the consolidated API post-Electra.

@mcdee (Contributor) commented Dec 10, 2024

If we forget prepare_beacon_proposer and focus on register_validator then we could say that the validator client would send the validator registrations:

  • on startup
  • once per epoch
  • when it detects a beacon node restart

which should cover all of the eventualities. However, the last situation here isn't something that can be recognized with the existing API. It's possible that the clients will pick up the dropped connection if using long-lived connections, but it's not guaranteed.

An alternative would be to require on-disk caching of the validator registrations on the beacon node so that they are not lost on restart, but that seems like dumping a lot of work on the client teams.

We could add an endpoint in /eth/v1/node/ to provide the startup time of the node, but then this would need to be polled regularly and that's not a great solution either.

Or we can send the data more frequently, e.g. every slot, as you say. But that's a significant amount of additional work for a corner case.

Perhaps a hybrid approach: if a validator client knows that it has a proposal coming up in the next epoch it sends the registrations each slot, otherwise once per epoch? That may fit best in terms of retaining the statelessness of the API (and the underlying beacon node), but cover the corner case. Thoughts?
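As an illustration of the hybrid approach proposed above (a sketch only; the function name and parameters are hypothetical), the cadence decision is a one-liner on the VC side: send every slot while a proposal is pending in the next epoch, otherwise only at epoch boundaries:

```python
# Illustrative sketch of the hybrid cadence: per-slot registrations only
# while a proposal is upcoming, otherwise once per epoch. Names are
# hypothetical; SLOTS_PER_EPOCH matches mainnet.

SLOTS_PER_EPOCH = 32


def should_send_registrations(slot: int, has_upcoming_proposal: bool) -> bool:
    if has_upcoming_proposal:
        return True                       # fast path: keep the BN primed
    return slot % SLOTS_PER_EPOCH == 0    # otherwise once per epoch
```

This keeps the API and beacon node stateless while still covering the restart corner case for the validators that actually matter in the short term.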

@nflaig (Collaborator) commented Dec 10, 2024

> We could add an endpoint in /eth/v1/node/ to provide the startup time of the node, but then this would need to be polled regularly and that's not a great solution either.

The Lodestar VC already polls /eth/v1/node/syncing each slot to refetch duties if the node goes from syncing to synced. If this API returns an HTTP error, we can infer that the node is offline and resubmit proposer data when it comes online again. The edge case is if the node restarts within the same slot, so that it appears online the whole time from the VC's point of view while the proposer cache has been cleared. This could be resolved by adding a start_time (or similar) to the response, as you proposed.
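The restart-detection idea described above amounts to watching for an offline-to-online transition. A minimal sketch (the class is hypothetical; a real VC would derive node_online from the /eth/v1/node/syncing poll, and the same-slot-restart edge case would additionally need the start_time field):

```python
# Illustrative sketch: resubmit proposer data when the node reappears
# after an observed outage. The polling itself is stubbed out; names
# are hypothetical.

class NodeWatcher:
    def __init__(self) -> None:
        self.was_online = True

    def on_poll(self, node_online: bool) -> bool:
        """Return True when proposer data should be resubmitted."""
        resubmit = node_online and not self.was_online
        self.was_online = node_online
        return resubmit
```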

> That may fit best in terms of retaining the statelessness of the API (and the underlying beacon node), but cover the corner case. Thoughts?

I also think it would be best not to make any assumptions about how the beacon node manages the cache (e.g. persisting it to disk between restarts) and to leave it up to the validator client to make sure it's updated before a proposal, with whatever strategy fits the current design. This is pretty implementation specific, especially when it comes to how the validator client interacts with multiple connected BNs.
