feat(StateVariables): SharedMutable storage #4761
IIUC a user would not be able to send a private tx right before a

Another concern on this approach is that we don't have a global epoch, but rather each slot has its own clock for effecting changes. I'm worried that if a tx touches multiple SharedMutable slots, the updates may not be "in sync" and this would always lead to a very short
I don't think we can do this. As it is only a pending value, it could change before becoming "active". In that case I'm not sure how we would catch the issue.
On this, yes, it can be an issue that they are staggered and short. What we were discussing was likely having a minimum

I don't recall if @nventuro had more comments on it.
How does this approach compare against the other various suggestions of "Slow" state that were devised at the offsite? Are any of the other approaches viable and worth consideration? This approach looks the same as the one we discussed on the whiteboards downstairs?
What's a "forced transaction"?
I didn't quite follow this. Is it saying that for a particular function, the

It might need an oracle call, such as:
The code snippets (in the original post) don't mention the archive tree. To read in private, the tuple (pre, post, time_of_change) of the

Further to @spalladino's points, we will have to be mindful of the impact this could have on the size of privacy sets of the network. There's a lot of leakage happening in places, without proper standardisation and developer education: # notes, # nullifiers, max_block_num, # logs, log lengths, # enqueued calls, the public function calls that have been made, etc. For this particular topic, it might help standardisation if aztec.nr ensured the

Edit: and yes, I'm aware I was against enshrining the slow updates tree because I didn't want to enshrine an epoch length, but as I think more about the potential for network-wide leakiness if users are left to standardise things for themselves, I'm becoming more comfortable with such enshrinement. :insert_face_of_someone_awkwardly_changing_their_mind:

Further to @spalladino's points (again), is there a risk of spikes/troughs in network activity around
Yes, they're all quite similar. I think Lasse was simply pointing out at the beginning that this can also be seen as a kind of slow updates tree where one of the constraints is slightly different.
Maybe I'm missing something, but this doesn't seem like a difficult problem to solve. We'd add a new max block number field to the context, which gets populated and possibly updated with lower values whenever application code reads this kind of state. At the end we'll have the highest block number that fulfills all assertions.
Yes,
My worry is that if a function wants to read N different slow states, there might be N different time horizons. Those time horizons will only be discovered as the function is simulated. But the
I remember at the offsite Joe was proposing a Slow state approach which didn't need the

The task list might need to include lines in the kernel circuit to compute the minimum max_block_num from the previous kernel and the latest-popped function.

Are we sure we want to implement this approach, or do we feel we need to explore and compare more approaches before deciding? I'm conscious the design has changed several times recently, so might still have room for change? I'm also conscious the slow updates tree implementation had some drawbacks and might have benefited from more rounds of criticism before being implemented.
I'm thinking of a simpler model in which we find the correct value as the simulation goes, but perhaps I'm missing some critical context. Similarly to how we have

```rust
impl SharedMutableState {
    fn read_shared_mutable_state(&self, context: &mut PrivateContext) -> T {
        // Constrain the tx to be included before the scheduled change takes effect.
        context.request_max_block_number(self.time_of_change);
        if context.now() < self.time_of_change {
            self.pre
        } else {
            self.post
        }
    }
}

impl PrivateContext {
    fn new() -> Self {
        ...
        max_block_number = MAX_UINT64;
    }

    fn request_max_block_number(&mut self, max_block_number: u64) {
        // max_block_number may only decrease, so prior assertions about it being
        // smaller than some value still hold.
        self.max_block_number = min(self.max_block_number, max_block_number);
    }
}
```

And then the kernel would also keep the minimum of its current
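The min-aggregation described here can be sketched in Python (the function and variable names below are illustrative, not the actual kernel circuit code):

```python
# Hypothetical sketch: the private kernel keeps the minimum of its current
# max_block_number and the one reported by the latest-popped private call,
# so the tightest constraint across all calls wins.

MAX_UINT64 = 2**64 - 1

def aggregate_max_block_number(previous_kernel_max: int, call_max: int) -> int:
    # The value may only decrease, so earlier assertions that the tx is
    # included before some block still hold after aggregation.
    return min(previous_kernel_max, call_max)

# Values requested by each private call during simulation (example data).
acc = MAX_UINT64
for call_max in [100, 250, 80]:
    acc = aggregate_max_block_number(acc, call_max)

print(acc)  # 80: the smallest requested max block number
```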
Part of #4761.

This adds a new validity condition to transactions called `max_block_number`, causing them to fail if the current block number is larger than the requested max block. This can be used to construct proofs that are only valid if included before a certain block (which is exactly how SharedMutableStorage/SlowJoe/SlowUpdatesTree2.0 works).

---

I made `max_block_number` an `Option<u32>` both to not have to include an initial value equal to the largest block, and also to avoid issues that arise from abuse of `std::unsafe::zeroed`. Many parts of the stack assume a (mostly) zeroed transaction is a valid one, but a `max_block_number` value of 0 is not useful. With `Option`, a zeroed value means no max block number was requested (`is_none()` returns true), and this entire issue is avoided. This property is initially set to `is_none()`, meaning there's no max block number constraint.

The `PrivateContext` now has a `request_max_block_number` function that can be used to add constraints. Each time a lower max block number is seen, it replaces the current one. The private kernel aggregates these across private calls and ends up with the smallest one.

This value is stored in a new struct called `RollupValidationRequests`, an extension of @LeilaWang's work in #5236. These are validation requests accumulated during private and public execution that are forwarded to the rollup for it to check. Currently we only have `max_block_number`, but there may be more. Note that we currently have a slight duplication in the public kernel tail public inputs, but this is expected to be sorted out very soon as this struct is refactored.

---

Note that in the end-to-end tests we're only testing that the sequencer drops the transaction, but not that the base rollup rejects it (this is only tested in the rollup circuit unit tests).
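The `Option<u32>` semantics can be sketched in Python (a minimal illustration of the validity condition, assuming the hypothetical helper name `is_valid_in_block`; this is not the actual circuit code):

```python
from typing import Optional

def is_valid_in_block(max_block_number: Optional[int], current_block: int) -> bool:
    """A tx with a requested max_block_number is only valid in blocks up to
    and including that number; None means no constraint was requested,
    which is what a zeroed transaction decodes to."""
    if max_block_number is None:
        return True
    return current_block <= max_block_number

print(is_valid_in_block(None, 10**9))  # True: no constraint requested
print(is_valid_in_block(500, 500))     # True: bound is inclusive
print(is_valid_in_block(500, 501))     # False: tx must be dropped
```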
Testing this would require bypassing the sequencer tx validation logic and manually building a block, but this is a fairly involved endeavor and one that our infrastructure does not currently easily support. I'm still looking into a way to add this test.
(Large) part of #4761.

This is an initial implementation of `SharedMutableStorage`, with some limitations. I think those are best worked on in follow-up PRs, once we have the bones working.

The bulk of the SharedMutable pattern is in `ScheduledValueChange`, a pure Noir struct that has all of the block-number-related logic. `SharedMutable` then makes a state variable out of that struct, adding public storage access both in public and private (via historical reads - see #5379), and using the new `request_max_block_number` function (from #5251).

I made an effort to test as much as I could of these in Noir, with partial success in the case of `SharedMutable` due to lack of certain features, notably noir-lang/noir#4652. There is also an end-to-end test that goes through two scheduled value changes, showing that scheduled values do not affect the current one.

I added some inline docs but didn't include proper docsite pages yet, so that we can discuss the implementation, API, etc., and make e.g. renamings less troublesome.

### Notable implementation details

I chose to make the delay a type parameter instead of a value mostly for two reasons:

- it lets us nicely serialize and deserialize `ScheduledValueChange` without including this field (which we are not currently interested in storing)
- it lets us declare a state variable of type `SharedMutable<T, DELAY>` without having to change the signature of the `new` function, which is automatically injected by the macro.

Overall I think this is fine, especially since we may later make the delay mutable (see below), but still worth noting.

Additionally, I created a simple `public_storage` module to get a slightly nicer API and encapsulation. This highlighted a Noir issue (noir-lang/noir#4633), which currently only affects public historical reads but will also affect current reads once we migrate to using the AVM opcodes.
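The serialization shape this enables (pre and post each take `len(T)` fields, plus one for `time_of_change`, i.e. the `len(T)*2+1` values mentioned in the issue, with the delay never hitting storage) can be sketched in Python; the helper names here are illustrative, not the actual macro code:

```python
# Hypothetical sketch: a ScheduledValueChange over a T of len(T) fields
# serializes to len(T)*2 + 1 fields. The delay is a type parameter, so it
# is not part of the serialized layout.
def serialize(pre: list, post: list, time_of_change: int) -> list:
    return [*pre, *post, time_of_change]

def deserialize(fields: list, t_len: int):
    return (fields[:t_len], fields[t_len:2 * t_len], fields[2 * t_len])

fields = serialize([1, 2], [3, 4], 100)
print(fields)                  # [1, 2, 3, 4, 100]
print(deserialize(fields, 2))  # ([1, 2], [3, 4], 100)
```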
### Future work

- #5491
- #5492 (this takes care of padding during storage slot allocation)
- #5501
- #5493

---------

Co-authored-by: Jan Beneš <[email protected]>
Also see #5078 for the changes to macros that are required with this type of storage.
The Slow updates tree (https://github.com/LHerskind/RandomRamblings/blob/main/SST.pdf) is to be replaced by a new type of shared mutable state that should be easier to use.
@nventuro figured that by altering the constraints that we are checking in the slow updates tree, we can "discard" most of the tree that we previously stored in public state, and instead just have the leaves directly.
### The idea

As for the slow updates tree, we have leaves defined as:

```
leaf: {pre, post, time_of_change}
```

But for a storage declaration, we also have a delay `D`. Whenever an `update` is initiated, we update the `post` to be the new value, and require a provided `time_of_change` to be at least `now + D`. This ensures that an `update` will always be delayed by at least `D`.

With this constraint, we can compute a `time_horizon` which tells us when the next change "could" happen. If there is a pending update, then the `time_horizon` is simply the "shortest" time that we could have until the next update, e.g., `min(leaf.time_of_change, now + D)`. If there are no pending updates, then the `time_horizon` is `now + D`, e.g., the earliest time an update could happen if someone later in the same block as your tx enqueues an update. As long as the transaction is included BEFORE this potential change, we know the exact value and that it matches.

We therefore need a way to ensure that this is the case. We could do so by making public calls, which we would very much prefer not to, or we can introduce a new value `max_block_number` to the `tx_context`. This `max_block_number` defines the last block where this transaction can be included, and the `base_rollup` circuit must therefore check that `global_variables.block_number <= max_block_number`.

To consider:

- `max_block_number` could make it possible for sequencers to censor forced transactions that rely on it.
- `max_block_number` will likely require a new oracle that throughout a simulation is outputting constraints of the values read.
- A leaf takes up `len(T)*2+1` values, so macro serialization needs to consider it.

### Python pseudo implementation
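A minimal sketch of the scheduled-value logic described above (illustrative names and a simplified `schedule` that sets `time_of_change` to exactly `now + delay`; not the actual aztec-nr implementation):

```python
from dataclasses import dataclass

@dataclass
class ScheduledValueChange:
    pre: int
    post: int
    time_of_change: int

    def current(self, now: int) -> int:
        # Before the change takes effect we read pre, afterwards post.
        return self.pre if now < self.time_of_change else self.post

    def schedule(self, new_value: int, now: int, delay: int) -> None:
        # An update replaces post and must wait at least `delay` blocks;
        # here we use the minimum allowed time_of_change, now + delay.
        self.pre = self.current(now)
        self.post = new_value
        self.time_of_change = now + delay

    def time_horizon(self, now: int, delay: int) -> int:
        # Latest block by which the value is guaranteed unchanged: a pending
        # change may land at time_of_change; otherwise a change scheduled in
        # this very block could land at now + delay.
        if now < self.time_of_change:  # pending update
            return min(self.time_of_change, now + delay)
        return now + delay

v = ScheduledValueChange(pre=1, post=1, time_of_change=0)
v.schedule(42, now=10, delay=5)
print(v.current(14))          # 1: before time_of_change
print(v.current(15))          # 42: the change took effect
print(v.time_horizon(12, 5))  # 15: the pending change bounds the horizon
```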
### Tasks