
Not allow slashed validator to become a proposer #3175

Closed
wants to merge 9 commits into from

Conversation

mkalinin
Collaborator

@mkalinin mkalinin commented Dec 23, 2022

Checks if validator is slashed in the beacon block proposer selection.

There is a slight contradiction in the spec: a slashed validator is allowed to become a proposer, while a block from a slashed proposer is rejected. Given how unlikely it is for this contradiction to take effect, this change is more of a cosmetic fix. Opening a PR to see whether it gets any traction.

If we decide to merge, we may also want to cover this case with a test (and fix broken tests).

UPD: We would need a HF to roll out this change; otherwise, a validator that is eligible to propose could slash itself and induce a network split.

This change doesn't affect processing of historical blocks, as blocks proposed by slashed validators could never have been included in the canonical chain.
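The fix amounts to rejecting slashed candidates inside the proposer sampling loop. Below is a minimal self-contained sketch of that idea, not the actual spec code: it uses toy types, elides the swap-or-not shuffling (candidates are drawn in index order), and assumes at least one unslashed candidate exists.

```python
import hashlib
from dataclasses import dataclass


@dataclass
class Validator:
    effective_balance: int  # Gwei
    slashed: bool


MAX_EFFECTIVE_BALANCE = 32 * 10**9  # Gwei
MAX_RANDOM_BYTE = 2**8 - 1


def compute_proposer_index(validators, indices, seed: bytes) -> int:
    # Guard against looping forever in this toy version.
    assert any(not validators[i].slashed for i in indices)
    i = 0
    total = len(indices)
    while True:
        candidate = indices[i % total]
        random_byte = hashlib.sha256(seed + (i // 32).to_bytes(8, "little")).digest()[i % 32]
        v = validators[candidate]
        # Proposed change: a slashed candidate is rejected outright,
        # in addition to the existing effective-balance filter.
        if not v.slashed and v.effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE * random_byte:
            return candidate
        i += 1
```

With the extra `not v.slashed` condition, a slashed validator can never be returned, which is exactly what the block-processing `assert not proposer.slashed` already demands.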

Contributor

@djrtwo djrtwo left a comment


Agree this is a nice-to-have, and agree that we'd need to put this into a scheduled hard fork rather than slipping it in post facto.

We should start a D-star-name feature list for consideration.

@mkalinin
Collaborator Author

mkalinin commented Apr 7, 2023

The change has been moved to Deneb. Unfortunately, we can't produce test vectors covering this change, as blocks proposed by slashed validators are invalidated by the assert not proposer.slashed line regardless of the proposed change.

@mkalinin mkalinin marked this pull request as ready for review April 7, 2023 14:32
@mkalinin mkalinin requested a review from hwwhww April 7, 2023 14:32
@mkalinin
Collaborator Author

A side effect of the proposed fix is that get_beacon_proposer_index can no longer be used in the process_block function during and after the process_operations call, due to the potential slashing of the validator that proposed the block.

The suggested workaround is to read proposer_index from state.latest_block_header.proposer_index (get_latest_block_proposer_index). This modification isn't ideal, as it makes all block header routines dependent on the process_block_header call, which affects at least some tests; the additional changes in block processing make the original proposal more complicated. Note that process_randao may be left unmodified (i.e. keep using get_beacon_proposer_index) as long as it is called prior to process_operations.
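A minimal sketch of the described workaround, using toy stand-ins for the spec's container types (the helper name get_latest_block_proposer_index comes from the comment above; the assert encodes the assumption that the helper is only valid after process_block_header has run for the current slot):

```python
from dataclasses import dataclass


@dataclass
class BeaconBlockHeader:
    slot: int
    proposer_index: int


@dataclass
class BeaconState:
    slot: int
    latest_block_header: BeaconBlockHeader


def get_latest_block_proposer_index(state: BeaconState) -> int:
    # Only meaningful once process_block_header has written the
    # current block's header into the state.
    assert state.latest_block_header.slot == state.slot
    return state.latest_block_header.proposer_index
```

Because the index is read from the header recorded at the start of block processing, it stays stable even if the proposer is slashed mid-block by process_operations.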

@djrtwo
Contributor

djrtwo commented Apr 11, 2023

A side effect of the proposed fix is that get_beacon_proposer_index can no longer be used in the process_block function during and after the process_operations call, due to the potential slashing of the validator that proposed the block.

Is there an explicit test for this added?

@djrtwo
Contributor

djrtwo commented Apr 11, 2023

This modification isn't ideal, as it makes all block header routines dependent on the process_block_header call, which affects at least some tests.

yeah, it's annoying that you have to do more pre-processing to make a test "correct" but clients complain that our tests aren't holistically correct enough anyway... so it's probably okay with sufficient helpers

@mkalinin
Collaborator Author

A side effect of the proposed fix is that get_beacon_proposer_index can no longer be used in the process_block function during and after the process_operations call, due to the potential slashing of the validator that proposed the block.

Is there an explicit test for this added?

Not yet; I thought I'd first outline all the required changes and check whether we want to proceed with them, considering the increased complexity. We will have to add tests for every affected function, namely slash_validator, process_attestation, process_randao, and process_sync_aggregate.

@potuz
Contributor

potuz commented Apr 11, 2023

Is there any safety issue that this PR addresses? I feel like this may become relevant if there are tons of slashed validators, but while this isn't the case, it adds extra complexity to the epoch transition and a vector for consensus failure without any gains in security or UX on the network. While I don't disagree with the proposed change, it does seem like trying to fix what isn't broken, or simply a cosmetic change that looks nicer on paper but is uglier in code.

@mkalinin
Collaborator Author

Is there any safety issue that this PR addresses? I feel like this may become relevant if there are tons of slashed validators, but while this isn't the case, it adds extra complexity to the epoch transition and a vector for consensus failure without any gains in security or UX on the network. While I don't disagree with the proposed change, it does seem like trying to fix what isn't broken, or simply a cosmetic change that looks nicer on paper but is uglier in code.

This fix avoids a missed slot in the case when a slashed validator is due to propose a block in the next few epochs (the exact number of epochs depends on the churn). The probability of this happening for a single slashed validator can be roughly estimated as n / len(active_validators) * (effective_balance * (1 - 1/MIN_SLASHING_PENALTY_QUOTIENT_BELLATRIX) / MAX_EFFECTIVE_BALANCE), where n is the number of slots. This formula doesn't factor in attestation penalties applied during epoch processing, so the real probability will be lower. For len(active_validators) := 2**19, effective_balance := 32 ETH, n := 6*32 (at least 6 epochs to exit) we get ~0.00035, which would trigger this fix roughly once per ~2818 validator slashings.

If, say, 1/10th of validators are slashed, then missed slots will start to appear once per epoch on average for the next 4,096 epochs, considering the churn. After the first 4,096 epochs, additional slashing penalties will gradually be applied to slashed validators, reducing the probability of their being selected as proposers. But the effect of the fix in this case still looks significant.

Could you please explain why epoch processing is affected? Is this because of a potential increase in the computational complexity of compute_proposer_index due to skipping slashed validators?
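The estimate above can be sanity-checked by plugging in the stated numbers; this is a direct transcription of the formula (the function name missed_slot_probability is for illustration only):

```python
MIN_SLASHING_PENALTY_QUOTIENT_BELLATRIX = 32
MAX_EFFECTIVE_BALANCE = 32  # ETH, same unit as effective_balance below


def missed_slot_probability(n_slots: int, active_validators: int, effective_balance: int) -> float:
    # Rough chance that a freshly slashed validator is selected as proposer
    # within the next n_slots slots (ignoring attestation penalties).
    return (n_slots / active_validators
            * effective_balance
            * (1 - 1 / MIN_SLASHING_PENALTY_QUOTIENT_BELLATRIX)
            / MAX_EFFECTIVE_BALANCE)


# Stated inputs: 2**19 active validators, 32 ETH effective balance,
# n = 6 * 32 slots (at least 6 epochs to exit).
p = missed_slot_probability(6 * 32, 2**19, 32)
# p ≈ 0.00035, i.e. roughly one missed slot per ~2818 single-validator slashings
```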

@mkalinin
Collaborator Author

Is there an explicit test for this added?

Update on tests:

  • the slash_validator change is already covered by existing attester and proposer slashing tests
  • a couple of state tests covering attestation and sync aggregate processing have been added

I am not 100% sure that we want to modify process_randao; considering that the order of block processing routines is unlikely to change in the future, this modification looks redundant.

@djrtwo
Contributor

djrtwo commented Apr 20, 2023

Thoughts on reducing the diff footprint by doing the following? The idea is to overload the core get_beacon_proposer_index function with a read from the state (if the header has already been written for the current slot), so we don't have to add the helper and figure out when to correctly use it in every function:

```python
def get_beacon_proposer_index(state: BeaconState) -> ValidatorIndex:
    """
    Return the beacon proposer index at the current slot.
    """
    if state.latest_block_header.slot == state.slot:
        return state.latest_block_header.proposer_index

    epoch = get_current_epoch(state)
    seed = hash(get_seed(state, epoch, DOMAIN_BEACON_PROPOSER) + uint_to_bytes(state.slot))
    indices = get_active_validator_indices(state, epoch)
    return compute_proposer_index(state, indices, seed)
```

@mkalinin
Collaborator Author

Thoughts on reducing the diff footprint by doing the following? The idea is to overload the core get_beacon_proposer_index function with a read from the state (if the header has already been written for the current slot), so we don't have to add the helper and figure out when to correctly use it in every function:

```python
def get_beacon_proposer_index(state: BeaconState) -> ValidatorIndex:
    """
    Return the beacon proposer index at the current slot.
    """
    if state.latest_block_header.slot == state.slot:
        return state.latest_block_header.proposer_index

    epoch = get_current_epoch(state)
    seed = hash(get_seed(state, epoch, DOMAIN_BEACON_PROPOSER) + uint_to_bytes(state.slot))
    indices = get_active_validator_indices(state, epoch)
    return compute_proposer_index(state, indices, seed)
```

Great simplification 👍

Assuming the block header is always inserted into the state by the process_block_header call, there should be no issue with this modification. We could even do it from Phase0 if there is any reason for that; currently it is added as a modification in Deneb.

@mkalinin
Collaborator Author

Closing in favour of #3371

@mkalinin mkalinin closed this May 18, 2023
Labels
general:enhancement New feature or request
5 participants