Passing by value into unconstrained function generates constraints #4244

Open
zac-williamson opened this issue Feb 2, 2024 · 6 comments
Labels: bug (Something isn't working), compiler

@zac-williamson

Aim

I have a struct that contains arrays. I iterate over the arrays in an unconstrained function, and many constraints are generated. Sad face.

use dep::std::field::bn254::assert_gt;

struct ListItem {
    key: Field,
    previous: Field,
    next: Field,
}

impl ListItem {
    fn default() -> ListItem {
        ListItem {
            key: 0,
            previous: 0,
            next: 0,
        }
    }
}

struct Map<Size> {
    entries: [ListItem; Size],
    size: Field,
    is_empty: bool,
    first_index: Field,
    last_index: Field,
}

unconstrained fn find_previous_key_location<Size>(map: Map<Size>, key: Field) -> (Field) {
    let mut found_index: Field = 0;
    found_index
}

unconstrained fn find_previous_key_location_nocopy<Size>(key: Field) -> (Field) {
    let mut found_index: Field = 0;
    found_index
}

unconstrained fn find_previous_key_location_two<Size>(map: &mut Map<Size>, key: Field) -> (Field) {
    let mut found_index: Field = 0;
    found_index
}

impl<Size> Map<Size> {
    fn default() -> Map<Size> {
        Map{
            entries: [ListItem::default(); Size], // todo fix
            size: 0,
            is_empty: true,
            first_index: 0,
            last_index: 0,
        }
    }


    fn insert_expensive(&mut self, key: Field) {
        let previous_indexx = find_previous_key_location(*self, key);
        self.entries[previous_indexx].key = key;
    }

    fn insert_cheap(&mut self, key: Field) {
        let previous_indexx = find_previous_key_location_nocopy(key);
        self.entries[previous_indexx].key = key;
    }
}

fn main(x: Field, y: pub Field) {
    let mut test_list: Map<50> = Map::default();
    test_list.insert_expensive(x);
    test_list.insert_expensive(x + y);
    test_list.insert_expensive(x + y * y);
}

Expected Behavior

6 constraints. Not 2000+

Bug

Passing by value into an unconstrained fn should not generate constraints.

To Reproduce

  1. nargo info
  2. observe high gate count
  3. replace the call to find_previous_key_location with find_previous_key_location_nocopy (sketched below)
  4. observe low gate count
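
Concretely, step 3 amounts to a one-line change inside insert_expensive (a sketch; everything else in the reproduction above stays the same):

    fn insert_expensive(&mut self, key: Field) {
        // before: the whole map is copied across the ACIR -> brillig boundary
        // let previous_indexx = find_previous_key_location(*self, key);

        // after: only the key crosses the boundary, matching insert_cheap
        let previous_indexx = find_previous_key_location_nocopy(key);
        self.entries[previous_indexx].key = key;
    }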

Installation Method

Binary

Nargo Version

0.23.0

Additional Context

No response

Would you like to submit a PR for this Issue?

No

Support Needs

No response

@TomAFrench
Member

Interesting, can you post the output if you compile with the --show-ssa flag?

@TomAFrench
Member

Just showing the difference between the final SSA for insert_cheap and insert_expensive. I've reduced the map's size to 5 for brevity.

insert_cheap:

After Dead Instruction Elimination:
acir fn main f0 {
  b0(v0: Field, v1: Field):
    v329 = call f3(v0)
    v341 = add v0, v1
    v342 = call f3(v341)
    v354 = mul v1, v1
    v355 = add v0, v354
    v356 = call f3(v355)
    return 
}
brillig fn find_previous_key_location_nocopy f3 {
  b0(v0: Field):
    return Field 0
}

insert_expensive:

After Dead Instruction Elimination:
acir fn main f0 {
  b0(v0: Field, v1: Field):
    v320 = call f3([Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0], Field 0, u1 1, Field 0, Field 0, v0)
    v321 = mul v320, Field 3
    v323 = add v321, Field 1
    v324 = array_get [Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0], index v323
    v325 = add v321, Field 2
    v326 = array_get [Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0], index v325
    v327 = array_set [Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0], index v321, value v0
    v328 = array_set v327, index v323, value v324
    v329 = add v323, Field 1
    v330 = array_set v328, index v329, value v326
    v332 = add v0, v1
    inc_rc v330
    inc_rc v330
    v333 = call f3(v330, Field 0, u1 1, Field 0, Field 0, v332)
    v334 = mul v333, Field 3
    v336 = add v334, Field 1
    v337 = array_get v330, index v336
    v338 = add v334, Field 2
    v339 = array_get v330, index v338
    v340 = array_set v330, index v334, value v332
    v341 = array_set v340, index v336, value v337
    v342 = add v336, Field 1
    v343 = array_set v341, index v342, value v339
    v345 = mul v1, v1
    v346 = add v0, v345
    inc_rc v343
    inc_rc v343
    v347 = call f3(v343, Field 0, u1 1, Field 0, Field 0, v346)
    return 
}
brillig fn find_previous_key_location f3 {
  b0(v0: [Field, Field, Field; 5], v1: Field, v2: u1, v3: Field, v4: Field, v5: Field):
    return Field 0
}

@TomAFrench
Member

TomAFrench commented Feb 3, 2024

The reason for the additional constraints is that in insert_cheap we never actually read any values from the array, so we're able to optimize the array out entirely. We then just need to calculate x, x + y, and x + y * y to pass into the brillig functions.

Meanwhile, for insert_expensive we need to pass the actual array into the various brillig functions, so we can't optimise it out.

There are a couple of issues in how we're compiling this program, however.

  • We read the entirety of the ListItem out of the array and then rewrite the same values back in again just to update the key field (sketched in Noir just after this list).
    • Ideally we'd not perform any reads here and just write directly to the key field.
  • Another issue manifests because of the above: when we're doing these loads from a literal array on the first call to insert_expensive, we initialise the same array multiple times in ACIR gen even though we're only performing reads on it.
    • Can we add an array equivalent to DataFlowGraph.constants to reuse the value ids?
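
To make the first bullet concrete, here is roughly what self.entries[i].key = key lowers to today, written back as Noir source (a sketch derived from the SSA above, not compiler output; set_key is a hypothetical standalone helper):

fn set_key<Size>(map: &mut Map<Size>, i: Field, key: Field) {
    // what effectively happens today: read the whole ListItem out...
    let mut item = map.entries[i];
    // ...overwrite one field...
    item.key = key;
    // ...then write all three fields back, which is where the extra
    // array_get/array_set pairs in the SSA above come from
    map.entries[i] = item;
    // ideally only the key slot of the flattened array would be written,
    // i.e. a single array_set at flat index i * 3
}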

If we fix the first bullet point then we'll end up with the program below, which should be optimal.

After Dead Instruction Elimination:
acir fn main f0 {
  b0(v0: Field, v1: Field):
    v320 = call f3([Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0], Field 0, u1 1, Field 0, Field 0, v0)
    v321 = mul v320, Field 3
    v327 = array_set [Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0, Field 0], index v321, value v0
    v332 = add v0, v1
    inc_rc v327
    inc_rc v327
    v333 = call f3(v327, Field 0, u1 1, Field 0, Field 0, v332)
    v334 = mul v333, Field 3
    v340 = array_set v327, index v334, value v332
    v345 = mul v1, v1
    v346 = add v0, v345
    inc_rc v340
    inc_rc v340
    v347 = call f3(v340, Field 0, u1 1, Field 0, Field 0, v346)
    return 
}
brillig fn find_previous_key_location f3 {
  b0(v0: [Field, Field, Field; 5], v1: Field, v2: u1, v3: Field, v4: Field, v5: Field):
    return Field 0
}

@zac-williamson
Author

@TomAFrench I would like to validate whether the items you listed address the core of the problem.

I have a more developed program where the constraint cost of performing reads/writes into the array scales with the size of the array. This should not happen, and from what I can understand, the two points you listed would not cause that behaviour?

@zac-williamson
Author

@TomAFrench to explain in more detail, here is the full program I was writing when I encountered this bug: https://gist.github.com/zac-williamson/4bee81b912471395b4e3c9b6029bad81

The cost of the insert method scales with the maximum size of the Map. This should not be the case, as the insert method only iterates over the map in an unconstrained function.

I would expect increasing the size of the map to increase the number of constraints by a constant, and every insert call to have the same cost.

@TomAFrench
Member

Copying across from DMs:

The core of the problem is that we've got an array in memory which we want to push across to brillig; however, brillig only accepts inputs from the witness map. This means that whenever we want to send this array across the brillig boundary we need to read the entire array from memory (as we could have written to arbitrary indices since the last time we read from memory).

This issue stems from the fact that we want to pass values held in memory to the brillig VM, but the only method we have to do this is to read every single value out of memory and into the witness map, which ends up creating a boatload of constraints.
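
To put a rough number on that: with Size = 50 and three Fields per ListItem, each call to the by-value helper has to read 150 array elements (plus the map's scalar fields) out of memory into the witness map, which is why the cost of insert scales with the map's maximum size. Below is a minimal sketch of the two call shapes, assuming the Map definition from the reproduction above (the names by_value and scalars_only are hypothetical):

// Passing the map by value: every ACIR-side call site has to read the whole
// entries array out of memory into the witness map before the call, so each
// call costs a number of constraints that grows with Size.
unconstrained fn by_value<Size>(map: Map<Size>, key: Field) -> Field {
    // the actual search over map.entries is elided; the ACIR-side cost comes
    // purely from marshalling `map` at the call site
    let mut found_index: Field = 0;
    found_index
}

// Keeping the array out of the signature (the find_previous_key_location_nocopy
// pattern above): nothing has to be read out of memory, so each call costs a
// constant number of constraints regardless of Size.
unconstrained fn scalars_only(key: Field) -> Field {
    let mut found_index: Field = 0;
    found_index
}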
