refactor: using Tuples in TxEffects and renaming note commitments (#4717)

Co-authored-by: esau <[email protected]>
benesjan and sklppy88 authored Feb 22, 2024
1 parent 8c697ce commit 3dd3c46
Showing 105 changed files with 1,186 additions and 1,139 deletions.
2 changes: 1 addition & 1 deletion boxes/blank/src/contracts/target/blank-Blank.json

Large diffs are not rendered by default.

@@ -15,7 +15,7 @@ On this page, you'll learn
- The details and functionalities of the private context in Aztec.nr
- Difference between the private and public contexts and their unified APIs
- Components of the private context, such as inputs, block header, and contract deployment data
-- Elements like return values, read requests, new commitments, and nullifiers in transaction processing
+- Elements like return values, read requests, new note hashes, and nullifiers in transaction processing
- Differences between the private and public contexts, especially the unique features and variables in the public context

## Two contexts, one API
@@ -105,18 +105,18 @@ The return values are a set of values that are returned from an applications execution

<!-- TODO(maddiaa): leaving as todo until there is further clarification around their implementation in the protocol -->

-### New Commitments
+### New Note Hashes

-New commitments contains an array of all of the commitments created in the current execution context.
+New note hashes contains an array of all of the note hashes created in the current execution context.

### New Nullifiers

New nullifiers contains an array of the new nullifiers emitted from the current execution context.

-### Nullified Commitments
+### Nullified Note Hashes

-Nullified commitments is an optimization for introduced to help reduce state growth. There are often cases where commitments are created and nullified within the same transaction.
-In these cases there is no reason that these commitments should take up space on the node's commitment/nullifier trees. Keeping track of nullified commitments allows us to "cancel out" and prove these cases.
+Nullified note hashes is an optimization introduced to help reduce state growth. There are often cases where note hashes are created and nullified within the same transaction.
+In these cases there is no reason that these note hashes should take up space on the node's commitment/nullifier trees. Keeping track of nullified note hashes allows us to "cancel out" and prove these cases.
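
As a purely illustrative sketch of this squashing (the `Field` alias, the `SideEffects` shape and the function names below are assumptions made for the example, not the actual kernel types or code):

```rust
use std::collections::HashSet;

// Stand-in for the circuit's field element type (assumption for the sketch).
type Field = u128;

struct SideEffects {
    new_note_hashes: Vec<Field>,
    /// (nullifier, note hash it nullifies; 0 when it nullifies nothing from this tx)
    new_nullifiers: Vec<(Field, Field)>,
}

/// Drop note hashes that are created and nullified within the same transaction,
/// together with their matching nullifiers, so neither has to reach the trees.
fn squash_transient(mut fx: SideEffects) -> SideEffects {
    let created: HashSet<Field> = fx.new_note_hashes.iter().copied().collect();
    // A pair is "transient" when the nullified note hash was created in this very tx.
    let transient: HashSet<Field> = fx
        .new_nullifiers
        .iter()
        .filter(|n| n.1 != 0 && created.contains(&n.1))
        .map(|n| n.1)
        .collect();
    fx.new_note_hashes.retain(|h| !transient.contains(h));
    fx.new_nullifiers.retain(|n| !transient.contains(&n.1));
    fx
}
```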

### Private Call Stack

22 changes: 11 additions & 11 deletions docs/docs/developers/debugging/sandbox-errors.md
@@ -22,9 +22,9 @@ Remember that for each function call (i.e. each item in the call stack), there i
Cannot call contract at address(0x0) privately.
This error may also happen when you deploy a new contract and the contract data hash is inconsistent with the expected contract address.

-#### 2005 - PRIVATE_KERNEL\_\_NEW_COMMITMENTS_PROHIBITED_IN_STATIC_CALL
+#### 2005 - PRIVATE_KERNEL\_\_NEW_NOTE_HASHES_PROHIBITED_IN_STATIC_CALL

-For static calls, new commitments aren't allowed
+For static calls, new note hashes aren't allowed

#### 2006 - PRIVATE_KERNEL\_\_NEW_NULLIFIERS_PROHIBITED_IN_STATIC_CALL

@@ -67,13 +67,13 @@ For a non transient read, we fetch the merkle root from the membership witnesses

#### 2019 - PRIVATE_KERNEL\_\_TRANSIENT_READ_REQUEST_NO_MATCH

-A pending commitment is the one that is not yet added to note hash tree.
-A transient read is when we try to "read" a pending commitment.
-This error happens when you try to read a pending commitment that doesn't exist.
+A pending note hash is one that has not yet been added to the note hash tree.
+A transient read is when we try to "read" a pending note hash.
+This error happens when you try to read a pending note hash that doesn't exist.

#### 2021 - PRIVATE_KERNEL\_\_UNRESOLVED_NON_TRANSIENT_READ_REQUEST

-For a transient read request we skip merkle membership checks since pending commitments aren't inserted into the note hash tree yet.
+For a transient read request we skip merkle membership checks since pending note hashes aren't inserted into the note hash tree yet.
But for non-transient reads, we do a merkle membership check. Reads are resolved in the kernel circuit, so this checks that there are no unresolved reads remaining from a previous kernel iteration (other than non-transient ones).
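
To make the transient/non-transient distinction concrete, here is a minimal sketch of how a read request can be thought of as being resolved (the names, types and the `membership_check` closure are assumptions for the example, not the actual kernel circuit code):

```rust
// Stand-in for the circuit's field element type (assumption for the sketch).
type Field = u128;

enum Resolution {
    /// Matched a pending note hash created earlier in the same transaction.
    Transient,
    /// Proven to exist in the note hash tree via a merkle membership witness.
    NonTransient,
}

fn resolve_read_request(
    read_request: Field,
    pending_note_hashes: &[Field],
    membership_check: impl Fn(Field) -> bool, // stand-in for the merkle proof check
) -> Result<Resolution, &'static str> {
    if pending_note_hashes.contains(&read_request) {
        // A supposedly-transient read that matches no pending note hash would hit 2019.
        Ok(Resolution::Transient)
    } else if membership_check(read_request) {
        Ok(Resolution::NonTransient)
    } else {
        // Reads still unresolved after all kernel iterations surface as 2021.
        Err("read request matched no pending or settled note hash")
    }
}
```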

#### 3001 - PUBLIC_KERNEL\_\_UNSUPPORTED_OP
@@ -120,13 +120,13 @@ For static calls, no contract storage change requests are allowed.

Same as [3022](#3022---public_kernel__public_call_stack_contract_storage_updates_prohibited_for_static_call), no contract changes are allowed for static calls.

-#### 3026 - PUBLIC_KERNEL\_\_NEW_COMMITMENTS_PROHIBITED_IN_STATIC_CALL
+#### 3026 - PUBLIC_KERNEL\_\_NOTE_HASHES_PROHIBITED_IN_STATIC_CALL

-For static calls, no new commitments or nullifiers can be added to the state.
+For static calls, no new note hashes or nullifiers can be added to the state.

#### 3027 - PUBLIC_KERNEL\_\_NEW_NULLIFIERS_PROHIBITED_IN_STATIC_CALL

-For static calls, no new commitments or nullifiers can be added to the state.
+For static calls, no new note hashes or nullifiers can be added to the state.
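
Both checks amount to asserting that a static call produced no state-changing side effects; a minimal sketch (names and types are assumptions for the example, not the real public kernel code):

```rust
// Stand-in for the circuit's field element type (assumption for the sketch).
type Field = u128;

fn check_static_call_side_effects(
    is_static_call: bool,
    new_note_hashes: &[Field],
    new_nullifiers: &[Field],
) -> Result<(), &'static str> {
    if is_static_call {
        if !new_note_hashes.is_empty() {
            return Err("NOTE_HASHES_PROHIBITED_IN_STATIC_CALL"); // 3026
        }
        if !new_nullifiers.is_empty() {
            return Err("NEW_NULLIFIERS_PROHIBITED_IN_STATIC_CALL"); // 3027
        }
    }
    Ok(())
}
```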

### Rollup circuit errors

@@ -148,7 +148,7 @@ Some scary bugs like `4003 - BASE__INVALID_NULLIFIER_SUBTREE` and `4004 - BASE__

Circuits work with fixed-size arrays. As such, we have limits on how many UTXOs can be created (aka "commitments") or destroyed/nullified (aka "nullifiers") in a transaction. Similarly, we have limits on how many reads or writes you can do and how many contracts you can create in a transaction. This error typically says that you have reached the current limits of what you can do in a transaction. Some examples of when you may hit this error are:

-- too many new commitments in one tx
+- too many new note hashes in one tx
- too many new nullifiers in one tx
- Note: Nullifiers may be created even outside the context of your Aztec.nr code. E.g., when creating a contract, we add a nullifier for its address to prevent the same address from ever occurring again. Similarly, we add a nullifier for your transaction hash too.
- too many private function calls in one tx (i.e. call stack size exceeded)
@@ -170,7 +170,7 @@ Users may create a proof against a historical state in Aztec. The rollup circuit
- using invalid historical contracts data tree state
- using invalid historical L1 to L2 message data tree state
- inserting a subtree into the greater tree
-- we make a smaller merkle tree of all the new commitments/nullifiers etc that were created in a transaction or in a rollup and add it to the bigger state tree. Before inserting, we do a merkle membership check to ensure that the index to insert at is indeed an empty subtree (otherwise we would be overwriting state). This can happen when `next_available_leaf_index` in the state tree's snapshot is wrong (it is fetched by the sequencer from the archiver). The error message should reveal which tree is causing this issue
+- we make a smaller merkle tree of all the new note hashes/nullifiers etc that were created in a transaction or in a rollup and add it to the bigger state tree. Before inserting, we do a merkle membership check to ensure that the index to insert at is indeed an empty subtree (otherwise we would be overwriting state). This can happen when `next_available_leaf_index` in the state tree's snapshot is wrong (it is fetched by the sequencer from the archiver). The error message should reveal which tree is causing this issue
- nullifier tree related errors - The nullifier tree uses an [Indexed Merkle Tree](../../learn/concepts/storage/trees/indexed_merkle_tree.md). It requires additional data from the archiver to know which nullifier in the tree sits just below the new nullifier (the "low nullifier") before it can perform batch insertion. If the low nullifier is wrong, or the new nullifier falls outside its range, you may receive this error; a sketch of this range check follows the list.
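
For the nullifier-tree case, a minimal sketch of that low-nullifier range check (the struct and field names are assumptions made for illustration, not the actual tree types):

```rust
// Stand-in for the circuit's field element type (assumption for the sketch).
type Field = u128;

/// A leaf of the indexed (nullifier) tree: a value plus a pointer to the next-highest value.
struct IndexedLeaf {
    value: Field,
    next_value: Field, // 0 means "no higher value exists yet"
}

/// The claimed low leaf must be the closest existing value below `new_nullifier`,
/// i.e. the new nullifier must fall strictly inside the gap the low leaf describes.
fn is_valid_low_leaf(low: &IndexedLeaf, new_nullifier: Field) -> bool {
    low.value < new_nullifier && (low.next_value > new_nullifier || low.next_value == 0)
}
```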

---
2 changes: 1 addition & 1 deletion docs/docs/developers/limitations/main.md
@@ -38,7 +38,7 @@ Help shape and define:
- The initial `msg_sender` is 0, which can be problematic for some contracts, see [function visibility](../contracts/writing_contracts/functions/visibility.md).
- Unencrypted logs don't link to the contract that emitted them, so they are essentially just a `debug_log` that you can match values against.
- A note that is created and nullified in the same transaction will still emit an encrypted log.
-- A limited amount of new commitments, nullifiers and calls that are supported by a transaction, see [circuit limitations](#circuit-limitations).
+- Only a limited number of new note hashes, nullifiers and calls are supported per transaction, see [circuit limitations](#circuit-limitations).

## Limitations

4 changes: 2 additions & 2 deletions docs/docs/developers/privacy/main.md
@@ -78,7 +78,7 @@ A 'Function Fingerprint' is any data which is exposed by a function to the outsi
- All unencrypted logs (topics and arguments).
- The roots of all trees which have been read from.
- The _number_ of ['side effects'](<https://en.wikipedia.org/wiki/Side_effect_(computer_science)>):
-- \# new commitments
+- \# new note hashes
- \# new nullifiers
- \# bytes of encrypted logs
- \# public function calls
@@ -91,7 +91,7 @@ A 'Function Fingerprint' is any data which is exposed by a function to the outsi
#### Standardizing Fingerprints

-If each private function were to have a unique Fingerprint, then all private functions would be distinguishable from each-other, and all of the efforts of the Aztec protocol to enable 'private function execution' would have been pointless. Standards need to be developed, to encourage smart contract developers to adhere to a restricted set of Tx Fingerprints. For example, a standard might propose that the number of new commitments, nullifiers, logs, etc. must always be equal, and must always equal a power of two. Such a standard would effectively group private functions/txs into 'privacy sets', where all functions/txs in a particular 'privacy set' would look indistinguishable from each-other, when executed.
+If each private function were to have a unique Fingerprint, then all private functions would be distinguishable from each other, and all of the efforts of the Aztec protocol to enable 'private function execution' would have been pointless. Standards need to be developed to encourage smart contract developers to adhere to a restricted set of Tx Fingerprints. For example, a standard might propose that the number of new note hashes, nullifiers, logs, etc. must always be equal, and must always equal a power of two. Such a standard would effectively group private functions/txs into 'privacy sets', where all functions/txs in a particular 'privacy set' would look indistinguishable from each other when executed.
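
As a toy illustration of what such a standard could ask for (this is not an existing Aztec standard; the names are invented for the example):

```rust
// Stand-in for the circuit's field element type (assumption for the sketch).
type Field = u128;

/// Pad a side-effect array (e.g. new note hashes) up to the next power of two with a
/// dummy value, so that emitting 3 or 4 real items looks identical from the outside.
fn pad_to_power_of_two(mut side_effects: Vec<Field>, dummy: Field) -> Vec<Field> {
    let target = side_effects.len().max(1).next_power_of_two();
    side_effects.resize(target, dummy);
    side_effects
}
```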

### Data queries

2 changes: 1 addition & 1 deletion docs/docs/learn/concepts/circuits/rollup_circuits/main.md
@@ -25,7 +25,7 @@ For both transactions, it:
- Updates the public data tree in line with the requested state transitions.
- Checks that the nullifiers haven't previously been inserted into the [indexed nullifier tree](../../storage/trees/indexed_merkle_tree.md#primer-on-nullifier-trees).
- Batch-inserts new nullifiers into the nullifier tree.
-- Batch-inserts new commitments into the note hash tree
+- Batch-inserts new note hashes into the note hash tree
- Batch-inserts any new contract deployments into the contract tree.
- Hashes all the new nullifiers, commitments, public state transitions, and new contract deployments, to prevent exponential growth in public inputs with each later layer of recursion.
- Verifies the input kernel proof.
@@ -32,7 +32,7 @@ This works perfectly well when everything is public and a single builder is awar

To avoid this issue, we permit the use of historical data as long as the data has not been nullified previously. Note, that because this must include nullifiers that were inserted after the proof generation, but before execution we need to nullify (and insert the data again) to prove that it was not nullified. Without emitting the nullifier we would need our proof to point to the current head of the nullifier tree to have the same effect, e.g., back to the race conditions we were trying to avoid.

-In this model, instead of informing the builder of our intentions, we construct the proof $\pi$ and then provide them with the transaction results (new commitments and nullifiers, contract deployments and cross-chain messages) in addition to $\pi$. The builder will then be responsible for inserting these new commitments and nullifiers into the state. They will be aware of the intermediates and can discard transactions that try to produce existing nullifiers (double spend), as doing so would invalidate the rollup proof.
+In this model, instead of informing the builder of our intentions, we construct the proof $\pi$ and then provide them with the transaction results (new note hashes and nullifiers, contract deployments and cross-chain messages) in addition to $\pi$. The builder will then be responsible for inserting these new note hashes and nullifiers into the state. They will be aware of the intermediates and can discard transactions that try to produce existing nullifiers (double spend), as doing so would invalidate the rollup proof.

On the left-hand side of the diagram below, we see the fully public world where storage is shared, while on the right-hand side, we see the private world where all reads are historical.

@@ -56,11 +56,11 @@ Be mindful that if part of a transaction is reverting, say the public part of a

To summarize:

-- _Private_ function calls are fully "prepared" and proven by the user, which provides the kernel proof along with new commitments and nullifiers to the sequencer.
+- _Private_ function calls are fully "prepared" and proven by the user, who provides the kernel proof along with new note hashes and nullifiers to the sequencer.
- _Public_ functions altering public state (updatable storage) must be executed at the current "head" of the chain, which only the sequencer can ensure, so these must be executed separately to the _private_ functions.
- _Private_ and _public_ functions within an Aztec transaction are therefore ordered such that first _private_ functions are executed, and then _public_.

-A more comprehensive overview of the interplay between private and public functions and their ability to manipulate data is presented below. It is worth noting that all data reads performed by private functions are historical in nature, and that private functions are not capable of modifying public storage. Conversely, public functions have the capacity to manipulate private storage (e.g., inserting new commitments, potentially as part of transferring funds from the public domain to the secret domain).
+A more comprehensive overview of the interplay between private and public functions and their ability to manipulate data is presented below. It is worth noting that all data reads performed by private functions are historical in nature, and that private functions are not capable of modifying public storage. Conversely, public functions have the capacity to manipulate private storage (e.g., inserting new note hashes, potentially as part of transferring funds from the public domain to the secret domain).

<Image img={require("/img/com-abs-4.png")} />

@@ -41,7 +41,7 @@ graph TD;
CurrentM --> Value1[Current Value 1]
CurrentM --> Value2[Current Value 2]
CurrentM --> ValueN[Current Value n]
-Pending --> PendingM[Pending Commitment 1]
+Pending --> PendingM[Pending Note Hash 1]
PendingM --> PValue1[Pending Value 1]
PendingM --> PValue2[Pending Value 2]
PendingM --> PValueN[Pending Value n]
5 changes: 2 additions & 3 deletions docs/docs/learn/concepts/storage/storage_slots.md
@@ -39,8 +39,7 @@ If we include the storage slot, as part of the note whose commitment is stored i
Similarly to how we siloed the public storage slots, we can silo our private storage by hashing the logical storage slot together with the note content.

```rust
-note_hash = H(...note_content);
-commitment = H(logical_storage_slot, note_hash);
+note_hash = H(logical_storage_slot, note_content_hash);
```

This siloing (there will be more) is done in the application circuit, since it is not necessary for security of the network (but only the application).
@@ -53,7 +52,7 @@ When reading the values for these notes, the application circuit can then constr
To ensure that one contract cannot insert storage that other contracts would believe is theirs, we do a second siloing by hashing the `commitment` with the contract address.

```rust
-siloed_commitment = H(contract_address, commitment);
+siloed_note_hash = H(contract_address, note_hash);
```

By doing this address-siloing at the kernel circuit we *force* the inserted commitments to include and not lie about the `contract_address`.
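
Putting the two siloing steps together, a purely illustrative sketch (the `hash` helper is a toy placeholder, not the circuit-friendly hash actually used, and the function names are assumptions):

```rust
// Stand-in for the circuit's field element type (assumption for the sketch).
type Field = u128;

fn hash(inputs: &[Field]) -> Field {
    // Toy stand-in so the sketch runs; the real system uses a circuit-friendly hash.
    inputs.iter().fold(0, |acc, x| acc.wrapping_mul(31).wrapping_add(*x))
}

/// Application circuit: bind the note content to its logical storage slot.
fn compute_note_hash(logical_storage_slot: Field, note_content_hash: Field) -> Field {
    hash(&[logical_storage_slot, note_content_hash])
}

/// Kernel circuit: bind the note hash to the contract that emitted it, so a contract
/// cannot pretend a note belongs to another contract's storage.
fn silo_note_hash(contract_address: Field, note_hash: Field) -> Field {
    hash(&[contract_address, note_hash])
}
```
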
6 changes: 3 additions & 3 deletions docs/docs/misc/roadmap/engineering_roadmap.md
@@ -52,7 +52,7 @@ The engineering roadmap is long. There are no timings assigned here. In a loose
- Just emit the initially-enqueued public function request data? (The 'inputs' of the tx);
- I.e. contract address, function selector, args, call_context.
- OR, Just emit the final state transitions? (The 'outputs' of the tx)
-- I.e. the leaf indices and new values of the public data tree; and the new commitments/nullifiers of the note hash tree; and logs; and l2->L1 messages.
+- I.e. the leaf indices and new values of the public data tree; and the new note hashes/nullifiers of the note hash tree; and logs; and l2->L1 messages.

## Proper specs

@@ -177,7 +177,7 @@ Some example features:
- This would give much more flexibility over the sizes of various arrays that a circuit can output. Without it, if one array of an app circuit needs to be size 2000, but other arrays aren't used, we'd use a kernel where every array is size 2048, meaning a huge amount of unnecessary loops of computation for those empty arrays.
- Improvements
- We can definitely change how call stacks are processed within a kernel, to reduce hashing.
-- Squash pending commitments/nullifiers in every kernel iteration, to enable a deeper nested call depth.
+- Squash pending note hashes/nullifiers in every kernel iteration, to enable a deeper nested call depth.
- Topology of a rollup
- Revisit the current topology:
- We can make the rollup trees 'wonky' (rather than balanced), meaning a sequencer doesn't need to prove a load of pointless 'padding' proofs?
@@ -195,7 +195,7 @@ We often pack data in circuit A, and then unpack it again in circuit B.

Also, for logs in particular, we allow arbitrary-sized logs. But this requires sha256 packing inside an app circuit (which is slow) (and sha256 unpacking in Solidity (which is relatively cheap)). Perhaps we also use the bus ideas for logs, to give _some_ variability in log length, but up to an upper bound.

-Also, we do a lot of sha256-compressing in our kernel and rollup circuits for data which must be checked on-chain, but grows exponentially with every round of iteration. E.g.: new contract deployment data, new nullifiers, new commitments, public state transition data, etc. This might be unavoidable. Maybe all we can do is use polynomial commitments when the EIP-4844 work is done. But maybe we can use the bus for this stuff too.
+Also, we do a lot of sha256-compressing in our kernel and rollup circuits for data which must be checked on-chain, but grows exponentially with every round of iteration. E.g.: new contract deployment data, new nullifiers, new note hashes, public state transition data, etc. This might be unavoidable. Maybe all we can do is use polynomial commitments when the EIP-4844 work is done. But maybe we can use the bus for this stuff too.

### Write proper circuits
