Force-pushed af1061e → 7e39a23.
```python
    )
    message_hash = get_hash_tree_root(slot, sedes=uint64)

    if not bls.verify(
```
Note that we have a `bls.validate` API so that we don't have to write `ValidationError` every time.
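For illustration, a minimal sketch of the pattern being suggested (the stand-in signatures here are assumptions, not Trinity's exact `bls` API):

```python
from eth_utils import ValidationError


def verify(message_hash: bytes, pubkey: bytes, signature: bytes) -> bool:
    """Stand-in for a boolean bls.verify (real crypto omitted)."""
    return True


def validate(message_hash: bytes, pubkey: bytes, signature: bytes) -> None:
    """Stand-in for bls.validate: raises ValidationError itself, so call
    sites don't repeat the `if not verify(...): raise ...` boilerplate."""
    if not verify(message_hash, pubkey, signature):
        raise ValidationError(
            f"Signature {signature.hex()} does not verify for {message_hash.hex()}"
        )


validate(b"\x00" * 32, b"\x01" * 48, b"\x02" * 96)
```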
Force-pushed dd1fee2 → a598c23.
Relevant: #1321 (comment)
Force-pushed 23524d5 → 8143ff7.
The linter error will go away when #1357 is fixed.
Left some comments on the validator and aggregation part. Will review the tests shortly.
```python
        state_machine=state_machine,
    )

    await self.aggregate(slot)
```
Maybe we can pass in the `state`?
Yeah, it was mostly for symmetry with the `self.attest(slot)` function. I was thinking about passing both `state_machine` and `state`.
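For illustration only, a sketch of the signature being discussed (the class shape and `object` types are placeholders, not Trinity's actual API):

```python
class Validator:
    async def attest(self, slot: int) -> None:
        ...  # existing entry point that aggregate() mirrors

    # Variant discussed above: accept state_machine and state explicitly
    # instead of re-deriving them inside aggregate().
    async def aggregate(
        self, slot: int, state_machine: object, state: object
    ) -> None:
        ...
```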
```python
    The given attestations SHOULD have the same `data: AttestationData` and are valid.
    """
    signatures = [attestation.signature for attestation in attestations]
```
Here we also assume that there's no overlap between attesters in each attestation?
It's okay to have overlapping `aggregation_bits`.
IIRC we currently don't support validating an aggregate signature in which a duplicated signature has been aggregated. I would suggest adding a note in the docstring, or a check with logging when a duplicated signature is being aggregated, so that it's easier to debug and less likely to make a mistake.
You're right. I will split it into two `AttestationPool`s: `aggregated_attestation_pool` and `unaggregated_attestation_pool`. Thanks! 👍
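As an illustration of the check-and-log suggestion above, a minimal sketch (the helper name and the `Attestation` stand-in are assumptions, not the PR's actual code):

```python
import logging
from typing import Iterable, Sequence

logger = logging.getLogger("trinity.attestation_pool")


class Attestation:  # minimal stand-in; the real type lives in eth2.beacon.types
    def __init__(self, aggregation_bits: Sequence[bool]) -> None:
        self.aggregation_bits = aggregation_bits


def warn_on_duplicated_attesters(attestations: Iterable[Attestation]) -> None:
    """Log when the same attester index appears in more than one attestation,
    since aggregating a duplicated signature is not supported."""
    seen: set = set()
    for attestation in attestations:
        indices = {i for i, bit in enumerate(attestation.aggregation_bits) if bit}
        duplicated = seen & indices
        if duplicated:
            logger.warning(
                "Duplicated attester indices being aggregated: %s", duplicated
            )
        seen |= indices
```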
```python
topic = PUBSUB_TOPIC_COMMITTEE_BEACON_ATTESTATION.substitute(
    subnet_id=str(subnet_id)
)
await self._handle_message(
```
IIUC it will block on the first call to `self._handle_message`, since there's an infinite loop in `_handle_message`? We can either (1) set up a daemon task for each subnet topic, or (2) loop through each subnet in one daemon task, but use `queue.get_nowait()` instead of `await queue.get()`.
Good point. Using a single daemon task to handle all subnets would be more flexible for the node to handle subnet subscription changes, so I'll try the `queue.get_nowait()` solution you pointed out. Thanks!
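A minimal sketch of option (2), one daemon task polling every subnet queue (the queue layout and handler name are assumptions, not Trinity's actual API):

```python
import asyncio
from typing import Any, Dict


async def process_attestation_message(subnet_id: int, message: Any) -> None:
    ...  # hand off to the attestation validation/import pipeline


async def handle_all_subnets(queues: Dict[int, "asyncio.Queue[Any]"]) -> None:
    # One daemon task loops over every subnet queue; get_nowait() returns
    # immediately, so a quiet subnet cannot block the others the way
    # `await queue.get()` on the first subnet would.
    while True:
        for subnet_id, queue in queues.items():
            try:
                message = queue.get_nowait()
            except asyncio.QueueEmpty:
                continue
            await process_attestation_message(subnet_id, message)
        await asyncio.sleep(0)  # yield to the event loop between polling rounds
```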
Force-pushed 8143ff7 → ac36e73.
Looks good! Just some nits.
```python
    attestation,
    config,
)
# break
```
Leftover comment?
```diff
@@ -369,19 +382,21 @@ def _try_import_orphan_blocks(parent_root):
         return requested_blocks

     with monkeypatch.context() as m:
         for orphan_block in (blocks[4],) + fork_blocks:
             receive_server.orphan_block_pool.add(orphan_block)
         await wait_until_true(
```
nit: I think we can just use `assert len(receive_server.orphan_block_pool) != 0`?
Force-pushed d3e330c → 4119473.
- Update network configs
- Introduce naive `subnets`: just subscribe to all subnets. 1. The BN subscribes to all subnets. 2. At 1/3 slot-time, the validator broadcasts the attestation to the subnet (`broadcast_attestation_to_subnet` with the `PUBSUB_TOPIC_COMMITTEE_BEACON_ATTESTATION` topic). 3. Handle the `PUBSUB_TOPIC_COMMITTEE_BEACON_ATTESTATION` attestation as before (will change it later).
- Add `get_aggregate_from_valid_committee_attestations` helper
- Implement `_create_and_broadcast_aggregate_and_proof`
- Add `validate_aggregate_and_proof`
- Make aggregation work
- Clean up
- Rework `get_aggregatable_attestations`
- PR feedback: use `bls.validate`
- Remove `_make_proposing_block`
- Refactor `validator.py`
- Refactor and fix
- Use one `run_daemon_task` for all subnets
- Refactor attestation pool query
- Fix orphan block pool tests
- Clean up
- Clean up validation
- …aggregated_attestation_pool
- Fix `BeaconChainFactory` after rebasing
- Fix after new SSZ Hashable structure
Force-pushed 4119473 → 01b78f6.
Getting a log message like this:

```
DEBUG 2019-12-18 15:44:42,202 TopicValidator ('The attestation is aggregated. Attestation: %s', <eth2.beacon.types.attestations.Attestation object at 0x7efed669eb10>)
```
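The tuple shows up because a format string plus arguments passed to an exception are never interpolated; `str()` of the exception just prints its `args` tuple. A minimal repro (using a stand-in exception type):

```python
class DemoGossipError(Exception):
    """Stand-in for InvalidGossipMessage."""


err = DemoGossipError("The attestation is aggregated. Attestation: %s", "<Attestation>")
print(err)
# ('The attestation is aggregated. Attestation: %s', '<Attestation>')
```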
```python
    )
except ValidationError as error:
    raise InvalidGossipMessage(
        "Failed to fast forward to state at slot=%d, error=%s",
```
Should use an f-string here.
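A sketch of the f-string form (placeholder definitions included so the snippet stands alone; the real `slot` and `error` come from the surrounding `except ValidationError` block):

```python
class InvalidGossipMessage(Exception):
    """Placeholder; the real exception lives in Trinity."""


slot, error = 42, "example validation error"  # placeholder values

raise InvalidGossipMessage(
    f"Failed to fast forward to state at slot={slot}, error={error}"
)
```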
```python
def validate_subnet_id(attestation: Attestation, subnet_id: SubnetId) -> None:
    if attestation.data.index % ATTESTATION_SUBNET_COUNT != subnet_id:
        raise InvalidGossipMessage(
            "Wrong attestation subnet_id=%d, topic subnet_id=%d. Attestation: %s",
```
ditto
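For example, with `ATTESTATION_SUBNET_COUNT = 64` (as stated in the PR description), a committee index of 70 maps to subnet `70 % 64 = 6`, so an attestation for that committee published under any other subnet topic fails this check.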
```python
# Check if the attestation is unaggregated
if len([bit for bit in attestation.aggregation_bits if bit is True]) != 1:
    raise InvalidGossipMessage(
        "The attestation is aggregated. Attestation: %s",
```
ditto
```python
    chain.get_block_by_root(attestation.data.beacon_block_root)
except BlockNotFound:
    raise InvalidGossipMessage(
        "Failed to validate attestation=%s, attested block=%s is not validated yet",
```
ditto
```python
    )
except ValidationError as error:
    InvalidGossipMessage(
        "Failed to validate aggregate_and_proof=%s, error=%s",
```
ditto
And it's missing a `raise` here.
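Combining both review points, the corrected form might look like this (placeholders included so the snippet stands alone; the real names come from the surrounding `except` block):

```python
class InvalidGossipMessage(Exception):
    """Placeholder; the real exception lives in Trinity."""


aggregate_and_proof, error = "<AggregateAndProof>", "example error"  # placeholders

# Both fixes: the missing `raise`, and f-string interpolation.
raise InvalidGossipMessage(
    f"Failed to validate aggregate_and_proof={aggregate_and_proof}, error={error}"
)
```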
What was wrong?
Fix #1241

How was it fixed?
- Add `import_attestation_fn` to `Validator` for importing the self-attested attestation.
- The BN subscribes to all subnets (`ATTESTATION_SUBNET_COUNT = 64`) when bootstrapping the node. The real subnet logic will not be added in this PR.
- … `beacon_attestation` topic messages, but they can still process them. (`beacon_attestation` is still in the spec for interop.)

To-Do
- … `validator.py`

Cute Animal Picture