chore: trivial change roundup #3556

Merged · 2 commits · Dec 5, 2023

2 changes: 1 addition & 1 deletion barretenberg/cpp/src/barretenberg/flavor/CMakeLists.txt
@@ -1 +1 @@
barretenberg_module(flavor commitment_schemes ecc polynomials proof_system)
barretenberg_module(flavor commitment_schemes ecc polynomials proof_system)
2 changes: 1 addition & 1 deletion docs/docs/about_aztec/overview.mdx
@@ -33,7 +33,7 @@ Watch Zac, CEO of Aztec, describe our approach to building a privacy preserving

### Private-public Composability

You can watch Mike, Aztec PM, talk about public-private composablity in Aztec at Devcon here.
You can watch Mike, Aztec PM, talk about public-private composability in Aztec at Devcon here.

<ReactPlayer
controls
2 changes: 1 addition & 1 deletion docs/docs/about_aztec/roadmap/cryptography_roadmap.md
@@ -21,4 +21,4 @@ List of Honk projects:

## Goblin projects

Goblin is a deferred verification framework thats allow for an order-of-magnitude increase in the complexity of computations that Aztec users can execute with full privacy. This corresponds to a 10x increase in the expressivity of Noir programs that can be run in practice without melting anybody's favorite phone or laptop. Read more [here](https://hackmd.io/@aztec-network/B19AA8812).
Goblin is a deferred verification framework that allows for an order-of-magnitude increase in the complexity of computations that Aztec users can execute with full privacy. This corresponds to a 10x increase in the expressivity of Noir programs that can be run in practice without melting anybody's favorite phone or laptop. Read more [here](https://hackmd.io/@aztec-network/B19AA8812).
4 changes: 2 additions & 2 deletions docs/docs/concepts/advanced/contract_creation.md
@@ -155,14 +155,14 @@ The contract address is calculated by the contract deployer, deterministically,

- `deployerAddress` is included to prevent frontrunning of deployment requests, but it does reveal who is deploying the contract. To remain private a user would have to use a burner address, or deploy a contract _through_ a private contract which can deploy contracts.
- :question: Why does CREATE2 include a deployerAddress?
- :exclamation: So that contracts can deploy contracts to deterministic addresses. Original goal was to enable pre-funding contracts before they were deployed. Not v. relevant for us though
- :exclamation: So that contracts can deploy contracts to deterministic addresses. The original goal was to enable pre-funding contracts before they were deployed. Not v. relevant for us though
- `salt` gives the deployer some 'choice' over the eventual contract address; they can loop through salts until they find an address they like.
- `functionTreeRoot` is like the bytecode without constructors or constructor arguments. This allows people to validate the functions of the contract.
- `constructorHash = hash(privateConstructorPublicInputsHash, publicConstructorPublicInputsHash, privateConstructorVKHash, publicConstructorVKHash)` - this allows people to validate the initial states of the contract. (Note: this is similar to how the `bytecode` in create2 includes an encoding of the constructor arguments).

To prevent duplicate contract addresses existing, a 'nullifier' is submitted when each new contract is deployed. `newContractAddressNullifier = hash(newContractAddress)`.
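
A minimal sketch of the derivation just described, assuming a toy SHA-256-based hash stand-in and an illustrative preimage layout (the real protocol uses a circuit-friendly hash and a precisely specified encoding):

```python
import hashlib

def h(*fields: int) -> int:
    # Toy stand-in for the protocol hash; the real derivation uses a circuit-friendly hash.
    data = b"".join(f.to_bytes(32, "big") for f in fields)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") >> 2  # fit a ~254-bit field

def contract_address(deployer_address: int, salt: int,
                     function_tree_root: int, constructor_hash: int) -> int:
    # Deterministic: anyone can recompute the address from these public inputs.
    return h(deployer_address, salt, function_tree_root, constructor_hash)

def contract_address_nullifier(new_contract_address: int) -> int:
    # Emitted at deployment so a second deployment to the same address is rejected.
    return h(new_contract_address)
```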

In order to link a `contractAddress`, with with a leafIndex in the `contractTree`, we reserve the storageSlot `0` of each contract's public data tree storage to store that leafIndex.
In order to link a `contractAddress`, with a leafIndex in the `contractTree`, we reserve the storageSlot `0` of each contract's public data tree storage to store that leafIndex.

The `contractAddress` is stored within the contract's leaf of the `contractTree`, so that the private kernel circuit may validate contract address <--> vk relationships.

@@ -14,7 +14,7 @@ This page will answer:
- How indexed merkle trees work
- How they can be used for membership exclusion proofs
- How they can leverage batch insertions
- Tradoffs of using indexed merkle trees
- Tradeoffs of using indexed merkle trees

The content was also covered in a presentation for the [Privacy + Scaling Explorations team at the Ethereum Foundation](https://pse.dev/).

@@ -34,7 +34,7 @@ A sparse merkle tree (not every leaf stores a value):

<Image img={require("/img/indexed-merkle-tree/sparse-merkle-tree.png")} />

In order to spend / modify a note in the private state tree, one must create a nullifier for it, and prove that the nullifier does not already exist in the nullifier tree. As nullifier trees are modelled as sparse merkle trees, non membership checks are (conceptually) trivial.
In order to spend / modify a note in the private state tree, one must create a nullifier for it, and prove that the nullifier does not already exist in the nullifier tree. As nullifier trees are modeled as sparse merkle trees, non membership checks are (conceptually) trivial.

Data is stored at the leaf index corresponding to its value. E.g. if I have a sparse tree that can contain $2^{256}$ values and want to prove non membership of the value $2^{128}$. I can prove via a merkle membership proof that $tree\_values[2^{128}] = 0$, conversely if I can prove that $tree\_values[2^{128}] == 1$ I can prove that the item exists.
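
As a conceptual sketch (using a plain dictionary in place of a real sparse tree, where the lookup would instead be a Merkle membership proof of an empty leaf):

```python
EMPTY = 0

def non_membership_sparse(tree_values: dict[int, int], value: int) -> bool:
    # The leaf index *is* the value, so non-membership is just
    # "the leaf at index `value` is still empty".
    return tree_values.get(value, EMPTY) == EMPTY

tree_values = {2**128: 1}
assert non_membership_sparse(tree_values, 20)           # 20 was never inserted
assert not non_membership_sparse(tree_values, 2**128)   # 2^128 exists
```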

@@ -155,8 +155,8 @@ Suppose we want to show that the value `20` doesn't exist in the tree. We just r
- Special case, the low leaf is at the very end, so the new_value must be higher than all values in the tree:
- $assert(low\_nullifier_{\textsf{value}} < new\_value_{\textsf{value}})$
- Else:
- $assert(low\_nullifier_{\textsf{value}} < low\_nullifier_{\textsf{value}})$
- $assert(low\_nullifier_{\textsf{next\_value}} > low\_nullifier_{\textsf{value}})$
- $assert(low\_nullifier_{\textsf{value}} < new\_value_{\textsf{value}})$
- $assert(low\_nullifier_{\textsf{next\_value}} > new\_value_{\textsf{value}})$
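
The same checks as a minimal sketch (leaf field names assumed for illustration; the Merkle membership proof of the low nullifier leaf itself is elided):

```python
from dataclasses import dataclass

@dataclass
class Leaf:
    value: int
    next_value: int  # 0 when this leaf currently holds the highest value
    next_index: int

def assert_non_membership(low_nullifier: Leaf, new_value: int) -> None:
    if low_nullifier.next_value == 0 and low_nullifier.next_index == 0:
        # Special case: the low leaf is at the very end of the linked list,
        # so the new value must exceed everything already in the tree.
        assert low_nullifier.value < new_value
    else:
        assert low_nullifier.value < new_value
        assert low_nullifier.next_value > new_value
```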

This is already a massive performance improvement, however we can go further, as this tree is not sparse. We can perform batch insertions.

@@ -282,7 +282,7 @@ From looking at the code above we can probably deduce why we need pending insert

To perform batched insertions, our circuit must keep track of all values that are pending insertion.

- If the `low_nullifier_membership_witness` is identified to be nonsense ( all zeros, or has a leaf index of -1 ) we will know that this is an pending low nullifier read request and we will have to look within our pending subtree for the nearest low nullifier.
- If the `low_nullifier_membership_witness` is identified to be nonsense ( all zeros, or has a leaf index of -1 ) we will know that this is a pending low nullifier read request and we will have to look within our pending subtree for the nearest low nullifier.
- Loop back through all "pending_insertions"
- If the pending insertion value is lower than the nullifier we are trying to insert
- If the pending insertion value is NOT found, then our circuit is invalid and should self abort.
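
A rough sketch of that pending-subtree lookup (names assumed for illustration):

```python
def find_pending_low_nullifier(pending_insertions: list[int], new_value: int) -> int | None:
    # When the on-tree low-nullifier witness is "nonsense", the nearest lower
    # value must come from the nullifiers pending insertion in this same batch.
    lower = [v for v in pending_insertions if v < new_value]
    return max(lower) if lower else None  # None => the batch is invalid and should abort
```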
2 changes: 1 addition & 1 deletion docs/docs/concepts/advanced/public_vm.md
@@ -69,7 +69,7 @@ It verifies a _verifier circuit_ that verifies a public function proof!

Why? Modularity, ease of development, backwards compatibility support.

Proceed with following development phases:
Proceed with the following development phases:

#### Phase 0: Full Proverless

8 changes: 4 additions & 4 deletions docs/docs/concepts/advanced/sequencer_selection.md
@@ -40,7 +40,7 @@ During the initial proposal phase, proposers submit to L1 a **block commitment**
- Identifier of the previous block in the chain.
- The output of the VRF for this sequencer.

At the end of the proposal phase, the sequencer with the highest score submitted becomes the leader for this cycle, and has exclusive rights to deciding the contents of the block. Note that this plays nicely with private mempools, since having exclusive rights allows the leader to disclose private transaction data in the reveal phase.
At the end of the proposal phase, the sequencer with the highest score submitted becomes the leader for this cycle, and has exclusive rights to decide the contents of the block. Note that this plays nicely with private mempools, since having exclusive rights allows the leader to disclose private transaction data in the reveal phase.
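
A toy sketch of the leader election at the end of the proposal phase (field names assumed for illustration):

```python
def elect_leader(proposals: list[dict]) -> dict:
    # The proposer with the highest VRF output (score) wins exclusive rights
    # to decide the block's contents for this cycle.
    return max(proposals, key=lambda p: p["vrf_output"])

proposals = [
    {"proposer": "0xA11CE", "vrf_output": 7,  "block_hash": "0x...", "prev_block": "0x..."},
    {"proposer": "0xB0B",   "vrf_output": 42, "block_hash": "0x...", "prev_block": "0x..."},
]
leader = elect_leader(proposals)  # 0xB0B leads this cycle
```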

> _In the original version of Fernet, multiple competing proposals could enter the proving phase. Read more about the rationale for this change [here](https://hackmd.io/0cI_xVsaSVi7PToCJ9A2Ew?both#Mitigation-Elect-single-leader-after-proposal-phase)._

@@ -79,15 +79,15 @@ The only way to trigger an L2 reorg (without an L1 one) is if block N is reveale

![](https://hackmd.io/_uploads/HkMDHxxC2.png)

To mitigate the effect of wasted effort by all sequencers from block N+1 until the reorg, we could implement uncle rewards for these sequencers. And if we are comfortable with slashing, take those rewards out of the pocket of the sequencer that failed to finalise their block.
To mitigate the effect of wasted effort by all sequencers from block N+1 until the reorg, we could implement uncle rewards for these sequencers. And if we are comfortable with slashing, take those rewards out of the pocket of the sequencer that failed to finalize their block.

## Batching

> _Read more approaches to batching [here](https://hackmd.io/0cI_xVsaSVi7PToCJ9A2Ew?both#Batching)._

As an extension to the protocol, we can bake in batching of multiple blocks. Rather than creating one proof per block, we can aggregate multiple blocks into a single proof, in order to amortise the cost of verifying the root rollup ZKP on L1, thus reducing fees.
As an extension to the protocol, we can bake in batching of multiple blocks. Rather than creating one proof per block, we can aggregate multiple blocks into a single proof, in order to amortize the cost of verifying the root rollup ZKP on L1, thus reducing fees.

The tradeoff in batching is delayed finalisation: if we are not posting proofs to L1 for every block, then the network needs to wait until the batch proof is submitted for finalisation. This can also lead to deeper L2 reorgs.
The tradeoff in batching is delayed finalization: if we are not posting proofs to L1 for every block, then the network needs to wait until the batch proof is submitted for finalization. This can also lead to deeper L2 reorgs.

In a batching model, proving for each block happens immediately as the block is revealed, same as usual. But the resulting proof is not submitted to L1: instead, it is aggregated into the proof of the next block.
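
A schematic sketch of that folding flow; `aggregate_recursive` and `submit_to_l1` are hypothetical placeholders for the recursive verifier and the single L1 submission:

```python
def aggregate_recursive(acc, proof):
    # Placeholder: a real implementation verifies `proof` inside a new proof of `acc`.
    return {"covers": acc["covers"] + proof["covers"]}

def submit_to_l1(aggregate):
    # Placeholder for the one L1 transaction that verifies the whole batch proof.
    print("finalizing blocks", aggregate["covers"])

def finalize_batch(block_proofs):
    # Each block is proven as it is revealed, but only the running aggregate
    # is posted to L1, amortizing one L1 verification over the whole batch.
    aggregate = block_proofs[0]
    for proof in block_proofs[1:]:
        aggregate = aggregate_recursive(aggregate, proof)
    submit_to_l1(aggregate)

finalize_batch([{"covers": [n]} for n in range(100, 104)])  # finalizes blocks 100..103 at once
```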

4 changes: 2 additions & 2 deletions docs/docs/concepts/foundation/accounts/keys.md
@@ -65,7 +65,7 @@ A side effect of enshrining and encoding privacy keys into the account address i

### Encryption keys

The privacy master key is used to derive encryption keys. Encryption keys, as their name imply, are used for encrypting private notes for a recipient, where the public key is used for encryption and the corresponding private key used for decryption.
The privacy master key is used to derive encryption keys. Encryption keys, as their name implies, are used for encrypting private notes for a recipient, where the public key is used for encryption and the corresponding private key used for decryption.

In a future version, encryption keys will be differentiated between incoming and outgoing. When sending a note to another user, the sender will use the recipient's incoming encryption key for encrypting the data for them, and will optionally use their own outgoing encryption key for encrypting any data about the destination of that note. This is useful for reconstructing transaction history from on-chain data. For example, during a token transfer, the token contract may dictate that the sender encrypts the note with value with the recipient's incoming key, but also records the transfer with its own outgoing key for bookkeeping purposes.
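
A schematic sketch of how the incoming/outgoing split could look from a contract's point of view; `encrypt` is a hypothetical placeholder for the real note-encryption primitive:

```python
def encrypt(public_key: str, plaintext: bytes) -> bytes:
    # Placeholder: stands in for the actual public-key note encryption.
    return f"enc[{public_key}]".encode() + plaintext

def emit_transfer_logs(note: bytes, recipient_incoming_pk: str,
                       sender_outgoing_pk: str | None = None) -> list[bytes]:
    logs = [encrypt(recipient_incoming_pk, note)]        # recipient can find and spend the note
    if sender_outgoing_pk is not None:
        logs.append(encrypt(sender_outgoing_pk, note))   # sender's own bookkeeping record
    return logs
```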

@@ -106,4 +106,4 @@ Nevertheless, the attacker cannot steal the affected user's funds, since authent

:::info
Note that, in the current architecture, the user's wallet needs direct access to the privacy private key, since the wallet needs to use this key for attempting decryption of all notes potentially sent to the user. This means that the privacy private key cannot be stored in a hardware wallet or hardware security module, since the wallet software uses the private key material directly. This may change in future versions in order to enhance security.
:::
:::
@@ -73,7 +73,7 @@ In a logical sense, a Message Box functions as a one-way message passing mechani
- At some point, a rollup will be executed, in this step messages are "moved" from pending on Domain A, to ready on Domain B. Note that consuming the message is "pulling & deleting" (or nullifying). The action is atomic, so a message that is consumed from the pending set MUST be added to the ready set, or the state transition should fail. A further constraint on moving messages along the way, is that only messages where the `sender` and `recipient` pair exists in a leaf in the contracts tree are allowed!
- When the message has been added to the ready set, the `recipient` can consume the message as part of a function call.
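
A minimal sketch of that pending → ready flow (Python sets stand in for the message boxes):

```python
def rollup_move(pending_a: set, ready_b: set, msg: bytes) -> None:
    # Atomic move: a message consumed from Domain A's pending set MUST be
    # added to Domain B's ready set, or the state transition fails.
    pending_a.remove(msg)  # raises if the message was never pending
    ready_b.add(msg)

def consume(ready_b: set, msg: bytes) -> None:
    # Consuming is "pull & delete" (nullify) by the recipient during its call.
    ready_b.remove(msg)
```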

Something that might seem weird when comparing to other cross-chain setups, is that we are "pulling" messages, and that the message don't need to be calldata for a function call. For _Arbitrum_ and the like, execution is happening FROM the "message bridge", which then calls the L1 contract. For us, you call the L1 contract, and it should then consume messages from the message box.
Something that might seem weird when compared to other cross-chain setups, is that we are "pulling" messages, and that the message don't need to be calldata for a function call. For _Arbitrum_ and the like, execution is happening FROM the "message bridge", which then calls the L1 contract. For us, you call the L1 contract, and it should then consume messages from the message box.
Why? _Privacy_! When pushing, we would be needing full `calldata`. Which for functions with private inputs is not really something we want as that calldata for L1 -> L2 transactions are committed to on L1, e.g., publicly sharing the inputs to a private function.

By instead pulling, we can have the "message" be something that is derived from the arguments instead. This way, a private function to perform second half of a deposit, could leak the "value" deposited and "who" made the deposit (as this is done on L1), but the new owner can be hidden on L2.
@@ -107,7 +107,7 @@ For the sake of cross-chain messages, this means inserting and nullifying L1 $\r

### Messages

While a message could theoretically be arbitrary long, we want to limit the cost of the insertion on L1 as much as possible. Therefore, we allow the users to send 32 bytes of "content" between L1 and L2. If 32 suffices, no packing required. If the 32 is too "small" for the message directly, the sender should simply pass along a `sha256(content)` instead of the content directly (note that this hash should fit in a field element which is ~254 bits. More info on this below). The content can then either be emitted as an event on L2 or kept by the sender, who should then be the only entity that can "unpack" the message.
While a message could theoretically be arbitrarily long, we want to limit the cost of the insertion on L1 as much as possible. Therefore, we allow the users to send 32 bytes of "content" between L1 and L2. If 32 suffices, no packing required. If the 32 is too "small" for the message directly, the sender should simply pass along a `sha256(content)` instead of the content directly (note that this hash should fit in a field element which is ~254 bits. More info on this below). The content can then either be emitted as an event on L2 or kept by the sender, who should then be the only entity that can "unpack" the message.
In this manner, there is some way to "unpack" the content on the receiving domain.
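
A minimal sketch of that packing rule, assuming the sha256 output is truncated to fit the ~254-bit field (the exact reduction used by the protocol is not specified here):

```python
import hashlib

FIELD_BITS = 254  # approximate capacity of a field element

def to_content(message: bytes) -> bytes:
    # <= 32 bytes: send directly. Longer: send sha256(content) and keep the
    # preimage, so only the sender can "unpack" it on the receiving domain.
    if len(message) <= 32:
        return message.rjust(32, b"\x00")
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big")
    return (digest >> (256 - FIELD_BITS)).to_bytes(32, "big")
```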

The message that is passed along, require the `sender/recipient` pair to be communicated as well (we need to know who should receive the message and be able to check). By having the pending messages be a contract on L1, we can ensure that the `sender = msg.sender` and let only `content` and `recipient` be provided by the caller. Summing up, we can use the struct's seen below, and only store the commitment (`sha256(LxToLyMsg)`) on chain or in the trees, this way, we need only update a single storage slot per message.
@@ -159,7 +159,7 @@ The following diagram shows the overall architecture, combining the earlier sect

As mentioned earlier, there will be a link between L1 and L2 contracts (with the L1 part of the link being the portal contract), this link is created at "birth" when the contract leaf is inserted. However, the specific requirements of the link is not yet fully decided. And we will outline a few options below.

The reasoning behind having a link, comes from the difficulty of L2 access control (see "A note on L2 access control"). By having a link that only allows 1 contract (specified at deployment) to send messages to the L2 contract makes this issue "go away" from the application developers point of view as the message could only come from the specified contract. The complexity is moved to the protocol layer, that must now ensure that messages to the L2 contract are only sent from the specified L1 contract.
The reasoning behind having a link, comes from the difficulty of L2 access control (see "A note on L2 access control"). By having a link that only allows 1 contract (specified at deployment) to send messages to the L2 contract makes this issue "go away" from the application developers point of view as the message could only come from the specified contract. The complexity is moved to the protocol layer, which must now ensure that messages to the L2 contract are only sent from the specified L1 contract.

:::info
The design space for linking L1 and L2 contracts is still open, and we are looking into making access control more efficient to use in the models.
@@ -179,7 +179,7 @@ From the L2 contract receiving messages, this model is very similar to the 1:1,

When the L1 contract can itself handle where messages are coming from (it could before as well but useless as only 1 address could send), we don't need to worry about it being in only a single pair. The circuits can therefore simply insert the contract leafs without requiring it to ensure that neither have been used before.

With many L2's reading from the same L1, we can also more easily setup generic bridges (with many assets) living in a single L1 contract but minting multiple L2 assets, as the L1 contract can handle the access control and the L2's simply point to it as the portal. This reduces complexity of the L2 contracts as all access control is handled by the L1 contract.
With many L2's reading from the same L1, we can also more easily setup generic bridges (with many assets) living in a single L1 contract but minting multiple L2 assets, as the L1 contract can handle the access control and the L2's simply point to it as the portal. This reduces the complexity of the L2 contracts as all access control is handled by the L1 contract.

## Open Questions

4 changes: 2 additions & 2 deletions docs/docs/concepts/foundation/communication/main.md
@@ -2,8 +2,8 @@
title: Contract Communication
---

This section will walk over communication types that behaves differently than normal function calls from.
This section will walk over communication types that behaves differently than normal function calls.

Namely, if functions are in different domains, private vs. public, their execution behaves a little differently to what you might expect! See [Private <--> Public execution](./public_private_calls/main.md).

Likewise, executing a function on a different domain than its origin needs a bit extra thought. See [L1 <--> L2 communication](./cross_chain_calls.md).
Likewise, executing a function on a different domain than its origin needs a bit extra thought. See [L1 <--> L2 communication](./cross_chain_calls.md).