Gossip conditions deneb #4164
Conversation
Co-authored-by: Divma <[email protected]>
//
// However, we check that the proposer_index matches against the shuffling first to avoid
// signature verification against an invalid proposer_index.
// TODO: check if getting the shuffling is more expensive than signature verification in any scenario?
let proposer_shuffling_root = signed_blob_sidecar.message.block_parent_root;

let (proposer_index, fork) = match chain
state.get_beacon_proposer_index(blob_slot, &chain.spec)?
If we get the blob before the block, the head state might be at slot - 1, so at an epoch boundary we might hit this: https://github.com/sigp/lighthouse/blob/stable/consensus/types/src/beacon_state.rs#L767
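A minimal sketch of the failure mode, assuming the usual guard that a state can only answer proposer-index queries for its own current epoch (this is a simplified toy, not the actual beacon_state.rs code):

```rust
// Hypothetical, simplified illustration of the guard in question. The real
// `BeaconState::get_beacon_proposer_index` uses the state's RANDAO mix and
// effective balances; here we only model the epoch check.
const SLOTS_PER_EPOCH: u64 = 32;

#[derive(Debug)]
enum Error {
    SlotOutOfBounds { requested_epoch: u64, current_epoch: u64 },
}

struct BeaconStateToy {
    slot: u64, // the slot the state has been advanced to
}

impl BeaconStateToy {
    fn current_epoch(&self) -> u64 {
        self.slot / SLOTS_PER_EPOCH
    }

    /// The proposer shuffling is only defined for the state's current epoch, so a
    /// request for a slot in the next epoch errors (unless the state is advanced
    /// first, as block verification does).
    fn get_beacon_proposer_index(&self, slot: u64) -> Result<usize, Error> {
        let requested_epoch = slot / SLOTS_PER_EPOCH;
        let current_epoch = self.current_epoch();
        if requested_epoch != current_epoch {
            return Err(Error::SlotOutOfBounds { requested_epoch, current_epoch });
        }
        // ...compute the proposer from the seed and effective balances...
        Ok(0)
    }
}

fn main() {
    // Head state is at the last slot of epoch 0, blob arrives for the first slot of epoch 1.
    let head_state = BeaconStateToy { slot: 31 };
    let blob_slot = 32;
    // Prints Err(SlotOutOfBounds { .. }): the epoch-boundary case linked above.
    println!("{:?}", head_state.get_beacon_proposer_index(blob_slot));
}
```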
Nice catch. I did something more elaborate now to get the proposer index, by referring to block verification.
Need to make sure I'm not doing anything stupid playing around with beacon snapshots.
beacon_node/network/src/beacon_processor/worker/gossip_methods.rs
if block_opt.is_none() {
return Err(BlobError::UnknownHeadBlock {
beacon_block_root: block_root,

// Verify that this is the first blob sidecar received for the (sidecar.block_root, sidecar.index) tuple
We should probably make this check prior to signature verification, right?
I'm wondering if we need a separate "seen cache" for blobs, more like our other seen caches. This would let us add to the cache as soon as the signature is verified (no KZG verification). The DA cache right now is just holding fully validated blobs.
Also, keep in mind that the seen cache for blobs is kind of a tricky thing: ethereum/consensus-specs#3261
We need this check after sig verification to ensure that it's a valid blob signed by the proposer of that slot.
Agree that we need a separate seen cache. Working on adding one now.
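For reference, a minimal sketch of the kind of seen cache being discussed, keyed on the (block_root, index) tuple quoted above; the type and method names here are illustrative, not the actual implementation:

```rust
use std::collections::{HashMap, HashSet};

/// Hypothetical gossip-only cache tracking which (block_root, blob_index) pairs
/// have already been seen, keyed by slot so old entries can be pruned.
#[derive(Default)]
struct ObservedBlobSidecars {
    // slot -> set of (block_root, blob_index) observed at that slot
    items: HashMap<u64, HashSet<([u8; 32], u64)>>,
}

impl ObservedBlobSidecars {
    /// Record a sidecar after its proposer signature has verified (but before KZG
    /// verification). Returns `true` if this (block_root, index) was already seen,
    /// in which case the gossip rules say the duplicate should be ignored.
    fn observe(&mut self, slot: u64, block_root: [u8; 32], blob_index: u64) -> bool {
        !self
            .items
            .entry(slot)
            .or_default()
            .insert((block_root, blob_index))
    }

    /// Drop everything at or before the finalized slot.
    fn prune(&mut self, finalized_slot: u64) {
        self.items.retain(|slot, _| *slot > finalized_slot);
    }
}
```

The idea is that `observe` is only called once the proposer signature has verified, so writing to the cache stays DoS-resistant, and pruning by slot keeps it bounded.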
Just adding it for my notes here.
I think this seen_cache should be strictly gossip-only and not contain any blobs that we receive over rpc, mainly because we do not receive proposer-signed blobs over rpc. We shouldn't add anything in there which does not have any DoS protection imo. There could be an attack where:
- an attacker sends us an attestation to an unknown block hash
- we start a block-and-blobs-by-root request for the unknown hash
- the attacker sends the valid block but sends us bogus blobs over rpc

If the bogus blobs get into the seen cache before the valid blobs that are sent over gossip, we could end up ignoring the valid blobs.
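To make the concern above concrete, a hedged sketch of the wiring, reusing the hypothetical ObservedBlobSidecars type from the previous snippet: only the gossip path (after signature verification) writes to the cache, the by-root RPC path never does.

```rust
// Hypothetical wiring, reusing the ObservedBlobSidecars sketch above.

// Gossip path: signature already verified against the proposer shuffling,
// so writing into the seen cache is DoS-resistant.
fn on_gossip_blob(
    cache: &mut ObservedBlobSidecars,
    slot: u64,
    block_root: [u8; 32],
    index: u64,
) -> Result<(), &'static str> {
    if cache.observe(slot, block_root, index) {
        return Err("ignore: duplicate (block_root, index) on gossip");
    }
    // ...continue with KZG verification, import, re-publish...
    Ok(())
}

// RPC path: blobs requested by root are not proposer-signed, so they must NOT
// touch the seen cache, otherwise bogus RPC blobs could shadow the real gossip ones.
fn on_rpc_blob(_slot: u64, _block_root: [u8; 32], _index: u64) {
    // ...validate against the requested block, no seen-cache write...
}
```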
});
}
} else {
return Err(BlobError::BlobParentUnknown {
Do we want to send the blobs along with the parent-unknown error so that sync has them for reprocessing, similar to how we handle blocks?
Yeah I think so. I have a todo for that in gossip_methods for this error type.
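A hedged sketch of what that todo could look like; the SyncMessage variant and surrounding types are made-up illustrative names, not existing Lighthouse APIs:

```rust
// Hypothetical, self-contained sketch of handling the parent-unknown case in the
// gossip worker. `SyncMessage::UnknownParentBlob` is an illustrative name.
enum BlobError {
    BlobParentUnknown { parent_root: [u8; 32] },
}

enum SyncMessage {
    UnknownParentBlob { parent_root: [u8; 32], blob: Vec<u8> },
}

fn handle_gossip_blob(
    result: Result<(), BlobError>,
    blob: Vec<u8>,
    sync_tx: &std::sync::mpsc::Sender<SyncMessage>,
) {
    match result {
        Ok(()) => {
            // ...propagate on gossip and continue processing...
        }
        Err(BlobError::BlobParentUnknown { parent_root }) => {
            // Don't drop the blob: hand it to sync so it can be reprocessed once
            // the parent chain is downloaded, mirroring the unknown-parent block path.
            let _ = sync_tx.send(SyncMessage::UnknownParentBlob { parent_root, blob });
            // Gossip verdict: IGNORE (no peer penalty, no propagation).
        }
    }
}
```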
}

/// Prune all values earlier than the given slot.
pub fn prune(&mut self, finalized_slot: Slot) {
Should we also allow pruning past the data availability period in case of long periods of non-finality?
Interesting point! I think it's fine as is: gossiping blobs older than the DA boundary doesn't seem to be explicitly disallowed, and we should just drop those blobs during processing anyway.
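For reference, a hedged sketch of how the DA boundary could be derived and used as an extra pruning cutoff during long non-finality; the constant value and the cache shape are assumptions, not the actual code:

```rust
use std::collections::BTreeMap;

const SLOTS_PER_EPOCH: u64 = 32;
/// Deneb p2p spec constant (assumed mainnet value).
const MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS: u64 = 4096;

/// Epoch before which blobs no longer need to be retained or served:
/// roughly max(deneb_fork_epoch, current_epoch - MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS).
fn data_availability_boundary(current_epoch: u64, deneb_fork_epoch: u64) -> u64 {
    deneb_fork_epoch.max(current_epoch.saturating_sub(MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS))
}

struct BlobCache {
    // Keyed by slot so range-based pruning is cheap.
    items: BTreeMap<u64, Vec<u8>>,
}

impl BlobCache {
    /// Prune by whichever cutoff is higher: the finalized slot (the current
    /// behaviour) or the DA boundary (only relevant during long non-finality).
    fn prune(&mut self, finalized_slot: u64, current_epoch: u64, deneb_fork_epoch: u64) {
        let da_boundary_slot =
            data_availability_boundary(current_epoch, deneb_fork_epoch) * SLOTS_PER_EPOCH;
        let cutoff = finalized_slot.max(da_boundary_slot);
        // Keep only entries strictly newer than the cutoff.
        let retained = self.items.split_off(&(cutoff + 1));
        self.items = retained;
    }
}
```

Under normal finality the finalized slot is far ahead of the DA boundary, so this only changes behaviour once finality lags behind the DA window.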
)
}
// Need to advance the state to get the proposer index
At this point, maybe it's just better to IGNORE for now and eventually add smarter reprocessing logic, e.g. adding a QueuedGossipBlob to the reprocessing queue? It's possible we end up doing this in parallel for all blobs for a slot as we receive them, right?
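For context, a hedged sketch of the state-advance path being weighed against IGNORE here, assuming state_processing's per_slot_processing(state, state_root, spec) helper; the surrounding function and error handling are illustrative:

```rust
// Illustrative only; error handling and types simplified.
use state_processing::per_slot_processing;
use types::{BeaconState, ChainSpec, EthSpec, Hash256, Slot};

/// Advance a *clone* of the head state, slot by slot, until it is in the blob's
/// epoch, then read the proposer index. This is the expensive path that the
/// IGNORE / reprocess-queue alternative would avoid.
fn proposer_index_for_blob<E: EthSpec>(
    head_state: &BeaconState<E>,
    blob_slot: Slot,
    spec: &ChainSpec,
) -> Result<usize, String> {
    let mut state = head_state.clone();
    while state.current_epoch() < blob_slot.epoch(E::slots_per_epoch()) {
        // Pass a dummy state root to skip tree hashing; the roots don't affect
        // the proposer shuffling.
        per_slot_processing(&mut state, Some(Hash256::zero()), spec)
            .map_err(|e| format!("slot processing failed: {e:?}"))?;
    }
    state
        .get_beacon_proposer_index(blob_slot, spec)
        .map_err(|e| format!("proposer lookup failed: {e:?}"))
}
```

Cloning and advancing a full state per blob is why IGNORE plus a reprocessing queue is attractive if this case turns out to be hit often.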
Talked offline; Pawan is going to leave this as is for now because it should be rarely hit.
LGTM
Issue Addressed
N/A
Proposed Changes
Complete the gossip conditions for Deneb and add partial handling for gossip errors.