
RAM and compilation time blowup when using to_be_bytes + sha #6267

Closed
sirasistant opened this issue Oct 10, 2024 · 0 comments · Fixed by #6307
Labels: bug Something isn't working

@sirasistant (Contributor) commented:

Aim

Compile programs that use this pattern:

global TX_EFFECTS_HASH_INPUT_FIELDS = 256;

// Convert a 32 byte array to a field element by truncating the final byte
pub fn field_from_bytes_32_trunc(bytes32: [u8; 32]) -> Field {
    // Convert it to a field element
    let mut v = 1;
    let mut high = 0 as Field;
    let mut low = 0 as Field;

    for i in 0..15 {
        // covers bytes 16..30 (31 is truncated and ignored)
        low = low + (bytes32[15 + 15 - i] as Field) * v;
        v = v * 256;
        // covers bytes 0..14
        high = high + (bytes32[14 - i] as Field) * v;
    }
    // covers byte 15
    low = low + (bytes32[15] as Field) * v;

    low + high * v
}

pub fn sha256_to_field<let N: u32>(bytes_to_hash: [u8; N]) -> Field {
    let sha256_hashed = std::hash::sha256(bytes_to_hash);
    let hash_in_a_field = field_from_bytes_32_trunc(sha256_hashed);

    hash_in_a_field
}

fn main(tx_effects_hash_input: [Field; TX_EFFECTS_HASH_INPUT_FIELDS]) -> pub Field {
    let mut hash_input_flattened = [0; TX_EFFECTS_HASH_INPUT_FIELDS * 32];
    for offset in 0..TX_EFFECTS_HASH_INPUT_FIELDS {
        let input_as_bytes: [u8; 32] = tx_effects_hash_input[offset].to_be_bytes();
        for byte_index in 0..32 {
            hash_input_flattened[offset * 32 + byte_index] = input_as_bytes[byte_index];
        }
    }

    let sha_digest = sha256_to_field(hash_input_flattened);
    sha_digest
}
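For reference (my reading of the loop bounds, not stated in the issue): `field_from_bytes_32_trunc` splits the retained bytes into a 15-byte high limb (bytes 0..14) and a 16-byte low limb (bytes 15..30), then recombines them as `low + high * v` with `v = 256^15`, so the result is the big-endian integer formed by the first 31 bytes, with byte 31 dropped:

$$
\text{result} = \sum_{j=0}^{30} \text{bytes32}[j] \cdot 256^{30-j}
$$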

Expected Behavior

The program should not take ~20 GB of RAM to compile.

Bug

RAM usage spikes during the mem2reg pass that runs after loop unrolling, and compilation is slow.
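
As a rough sense of scale (my own arithmetic, not stated in the report): after full unrolling, the byte-flattening loop in `main` alone turns into

$$
256 \times 32 = 8192
$$

individual array writes into `hash_input_flattened`, on top of 256 `to_be_bytes` decompositions, which is presumably the state that mem2reg then has to track.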

To Reproduce

Compile the program above (e.g. with `nargo compile`).

Workaround

Yes

Workaround Description

Use the unsafe equivalent `to_be_radix(256)` instead of `to_be_bytes()`.
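
A minimal sketch of the workaround applied to `main`, assuming a `to_be_radix` method on `Field` that takes the radix and returns the byte array (the exact name and signature may differ between Noir versions, so check the stdlib you are on):

```rust
    // Workaround sketch: swap the to_be_bytes() call in main for the
    // radix-based decomposition mentioned above.
    // ASSUMPTION: `to_be_radix(256)` exists on Field in this Noir version
    // and returns [u8; 32]; adjust if the stdlib signature differs.
    let input_as_bytes: [u8; 32] = tx_effects_hash_input[offset].to_be_radix(256);
```

Everything else in the flattening loop stays the same; only the decomposition call changes.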

Additional Context

No response

Project Impact

None

Blocker Context

No response

Nargo Version

No response

NoirJS Version

No response

Proving Backend Tooling & Version

No response

Would you like to submit a PR for this Issue?

None

Support Needs

No response

@sirasistant sirasistant added the bug Something isn't working label Oct 10, 2024
@github-project-automation github-project-automation bot moved this to 📋 Backlog in Noir Oct 10, 2024
@vezenovm vezenovm self-assigned this Oct 21, 2024
github-merge-queue bot pushed a commit that referenced this issue Oct 23, 2024
…ime (#6307)

# Description

## Problem\*

Resolves #6267

Pushing as a draft to see what bytecode size regressions we get.

## Summary\*

With this version of mem2reg, the code in #6267 took less than 2 GB of RAM and 4.66 s to compile; on master it takes ~20 GB and 94.3 s. The noir-contracts workspace with this mem2reg takes ~20 GB of RAM and 94.3 s to compile, vs. >70 GB of RAM and 4.5 min with the mem2reg on master.

## Additional Context



## Documentation\*

Check one:
- [X] No documentation needed.
- [ ] Documentation included in this PR.
- [ ] **[For Experimental Features]** Documentation to be submitted in a
separate PR.

# PR Checklist\*

- [X] I have tested the changes locally.
- [X] I have formatted the changes with [Prettier](https://prettier.io/)
and/or `cargo fmt` on default settings.

---------

Co-authored-by: Tom French <[email protected]>
@github-project-automation github-project-automation bot moved this from 📋 Backlog to ✅ Done in Noir Oct 23, 2024