Get/Set aggregate public key from Clarity #4165
Conversation
…p functions
…d of push
call set-aggregate-public-key during append_block; pass reward cycle and agg pubkey as symbolic expressions to private contract call; read aggregate public key from parent reward cycle, then set it in the following; remove TODO comment referencing the issue being fixed; use from instead of as for explicit cast; fmt fixes
…see if the key is set automatically for a new reward cycle
…k/execute_in_env context chain
* Refactor some of the reused structs from `neon_node`
* Fix a logic bug in `nakamoto::coordinator`: the first prepare-phase information will be an Epoch2x block, so the reward set calculation has to handle that.
* Add `nakamoto_node` module based on `neon_node`
* Add simple integration test for `nakamoto_node`
…ommits at tenure_id changes, cargo fmt-stacks
…set-aggregate-public-key
Codecov Report

Attention: additional details and impacted files:

@@            Coverage Diff             @@
##             next    #4165      +/-   ##
==========================================
+ Coverage   82.70%   82.89%   +0.19%
==========================================
  Files         429      429
  Lines      301662   302009     +347
==========================================
+ Hits       249478   250362     +884
+ Misses      52184    51647     -537

☔ View full report in Codecov by Sentry.
There's a stale comment in observe_set_aggregate_key, but otherwise LGTM. Thanks so much for untangling this @jcnelson
@@ -2484,6 +2493,16 @@ impl NakamotoChainState {
            );
        }

        if !clarity_tx.config.mainnet {
            Self::set_aggregate_public_key(
Yeah, doing this here instead of append_block seems much more correct.
        })
        .expect("FATAL: failed to start mockamoto main thread");

    // complete within 5 seconds or abort (we are only observing one block)
This comment was in the original code, but we're now observing up to 100 blocks in 120 secs. Assuming that this is what we really want to be doing, we should fix this comment.
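For what it's worth, a minimal sketch of the observe-with-deadline pattern the test wants, with the comment matching the actual numbers. The helper name and polling interval here are illustrative, not the test's real code:

    use std::time::{Duration, Instant};

    // Poll until `observed()` reports at least `target` blocks, or give up
    // once `deadline` elapses. For this test: target = 100 blocks,
    // deadline = 120 seconds.
    fn wait_for_blocks(mut observed: impl FnMut() -> u64, target: u64, deadline: Duration) -> bool {
        let start = Instant::now();
        while start.elapsed() < deadline {
            if observed() >= target {
                return true;
            }
            std::thread::sleep(Duration::from_millis(100));
        }
        false
    }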
@@ -218,13 +225,49 @@ impl RunLoop {
            .map(|e| (e.address.clone(), e.amount))
            .collect();

        // TODO (nakamoto-neon): check if we're trying to setup a self-signing network
        // and set the right genesis data
        let agg_pubkey_boot_callback = if let Some(self_signer) = self.config.self_signing() {
Ah, this was the key I was needing. I was wondering how to set boot contract state, but couldn't seem to figure it out!
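For anyone else looking for this: a self-contained sketch of the boot-callback pattern at work here. All names below (BootContext, BootCallback, make_agg_pubkey_callback) are illustrative stand-ins, not the PR's actual types:

    // Hypothetical stand-in for the state available during boot.
    struct BootContext {
        reward_cycle: u64,
    }

    type BootCallback = Box<dyn FnOnce(&mut BootContext)>;

    // If the node is configured to self-sign, build a callback that the boot
    // code invokes while the boot contracts are being instantiated;
    // otherwise return None and boot proceeds unchanged.
    fn make_agg_pubkey_callback(self_signer_key: Option<[u8; 33]>) -> Option<BootCallback> {
        self_signer_key.map(|key| {
            Box::new(move |ctx: &mut BootContext| {
                // In the real code, this is where the private pox-4
                // set-aggregate-public-key function is invoked for the
                // initial reward cycle with `key` as its argument.
                println!("seed aggregate key for cycle {}: {:02x?}", ctx.reward_cycle, key);
            }) as BootCallback
        })
    }

    fn main() {
        let mut ctx = BootContext { reward_cycle: 0 };
        if let Some(cb) = make_agg_pubkey_callback(Some([0u8; 33])) {
            cb(&mut ctx);
        }
    }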
…eward cycles because integration tests can run in epoch 2.x for many reward cycles before the epoch 3 transition (and the neon boot code sets the initial aggregate public key)
for parent_reward_cycle in (0..=my_reward_cycle).rev() {
    // carry forward the aggregate public key in the past reward cycle to the current
    // reward cycle. It may be several cycles back, such as in integration tests where
    // nakamoto boots up several reward cycles after the initial aggregate public key was set.
    // TODO: replace with signer voting
    debug!(
        "Try setting aggregate public key in reward cycle {}, parent {}",
        my_reward_cycle, parent_reward_cycle
    );
    // execute `set-aggregate-public-key` using `clarity-tx`
    let Some(aggregate_public_key) = clarity_tx
        .connection()
        .with_readonly_clarity_env(
            mainnet,
            chain_id,
            ClarityVersion::Clarity2,
            StacksAddress::burn_address(mainnet).into(),
            None,
            LimitedCostTracker::Free,
            |vm_env| {
                vm_env.execute_contract_allow_private(
                    &boot_code_id(POX_4_NAME, mainnet),
                    "get-aggregate-public-key",
                    &vec![SymbolicExpression::atom_value(Value::UInt(u128::from(
                        parent_reward_cycle,
                    )))],
                    true,
                )
            },
        )
        .ok()
        .map(|agg_key_value| {
            let agg_key_opt = agg_key_value.expect_optional().map(|agg_key_buff| {
                Value::buff_from(agg_key_buff.expect_buff(33))
                    .expect("failed to reconstruct buffer")
            });
            agg_key_opt
        })
        .flatten()
    else {
        debug!(
            "No aggregate public key in parent cycle {}",
            parent_reward_cycle
        );
        continue;
    };
Looping over reward cycles should be unnecessary. If an integration test requires it, the integration test itself should be altered, rather than this code.
The problem that this fixes is that integration tests do the following:

- Install the aggregate public key in pox-4 for reward cycle 0
- Mine 200-ish Bitcoin blocks (which is many reward cycles)
- Mine some epoch 2.x blocks and instantiate pox-4 (more reward cycles)
- Start epoch 3, and try to build a block

Without searching back to reward cycle 0, this last step would fail with a panic because (1) there's no way to "pull forward" the aggregate public key from reward cycle 0 when the first Stacks block is mined without this search code, and (2) epoch 2.x block-production doesn't use this code.
Remember, this whole code path goes away once aggregate public key voting comes online. Then, we'd have stackers vote for (and thus set) the aggregate public key in pox-4 in the penultimate reward cycle prior to epoch 3.0. This removes the need for this entire method.
What you described sounds incorrect to me: the epoch 3.0 instantiation code sets all pre-pox-4 cycles' agg key. Then the set-agg-key method is invoked in every block after that. This loop is masking an issue.
Actually, it's the epoch 2.5 instantiation logic that populates the pre-pox-4 reward cycles' aggregate public key. And in the integration tests, epoch 2.5 starts at Bitcoin block height 6, which is well before the first Stacks block is mined.
            .ok_or(ChainstateError::DBError(DBError::NotFoundError))?;
        let aggregate_key_block_header =
            Self::get_canonical_block_header(chainstate.db(), sortdb)?.unwrap();
Can you add a todo comment here with a reference to #4109? As currently implemented here (which I think is fine for now), the at_block is variable: when a new Stacks block is received, get_canonical_block_header() returns a different Stacks block, meaning this get_aggregate_public_key() will return different values for the same block at different times.
I'm actually no longer sure that #4109 is desirable, if we're going to allow stackers to re-try DKG in the middle of the cycle. If that were the case, then we'd want the current behavior.
Maybe so, but that merits discussion and that discussion shouldn't stall this PR. Leaving a pointer to that issue makes it so that it's easy to find the code that assumes one approach or the other.
As written, this function may be somewhat dangerous -- I don't think it's strange for a developer to believe that the expected aggregate key for a given block is always the same, but this function will return different results at different times.
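To make the hazard concrete, here is a toy sketch (illustrative types, not the chainstate API) contrasting a tip-anchored read with a block-anchored one:

    use std::collections::HashMap;

    struct ToyChainstate {
        canonical_tip: u64,
        // hypothetical per-block view of the aggregate key
        key_as_of: HashMap<u64, [u8; 33]>,
    }

    impl ToyChainstate {
        // Non-deterministic across time: the answer changes whenever the
        // canonical tip advances, which is the behavior warned about above.
        fn key_at_tip(&self) -> Option<&[u8; 33]> {
            self.key_as_of.get(&self.canonical_tip)
        }

        // Deterministic: the answer depends only on the block id passed in,
        // which is what pinning `at_block` (per #4109) would provide.
        fn key_at_block(&self, block_id: u64) -> Option<&[u8; 33]> {
            self.key_as_of.get(&block_id)
        }
    }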
@@ -1803,6 +1800,29 @@ impl NakamotoChainState {
        ))
    }

    /// Get the aggregate public key for a block
Maybe just expand this comment to note that this is fetching the expected stacker aggregate key for a given block.
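One possible expansion along those lines (the wording is a suggestion, not the merged text):

    /// Get the expected stacker aggregate public key for a given block.
    /// NOTE: the lookup is currently anchored at the canonical tip, so
    /// repeated calls for the same block may return different values as
    /// new Stacks blocks arrive (see #4109).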
…egration test framework has mined some blocks
This pull request has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
This merges #4113 and #4126