docs: hoist ADR status #1407

Merged
merged 6 commits into from Feb 22, 2023

Changes from 1 commit
8 changes: 4 additions & 4 deletions docs/architecture/adr-001-abci++-adoption.md
@@ -1,5 +1,9 @@
# ADR 001: ABCI++ Adoption

## Status

The proposed and initial implementation is complete.

## Changelog

- 2022-03-03: Initial Commit
@@ -370,10 +374,6 @@ func (app *App) ProcessProposal(req abci.RequestProcessProposal) abci.ResponsePr
}
```

## Status

The proposed and initial implementation is complete.

## Consequences

### Positive
12 changes: 4 additions & 8 deletions docs/architecture/adr-002-qgb-valset.md
@@ -1,5 +1,9 @@
# ADR 002: QGB ValSet

## Status

Accepted

## Context

To accommodate the requirements of the [Quantum Gravity Bridge](https://github.com/celestiaorg/quantum-gravity-bridge/blob/76efeca0be1a17d32ef633c0fdbd3c8f5e4cc53f/src/QuantumGravityBridge.sol), we will need to add support for `ValSet`s, i.e. Validator Sets, which reflect the current state of the bridge validators.
@@ -328,11 +332,3 @@ ctx.EventManager().EmitEvent(
),
)
```

## Status

Accepted

## References

- {reference link}
12 changes: 4 additions & 8 deletions docs/architecture/adr-003-qgb-data-commitments.md
@@ -1,5 +1,9 @@
# ADR 003: QGB Data Commitments

## Status

Accepted

## Context

To accommodate the requirements of the [Quantum Gravity Bridge](https://github.com/celestiaorg/quantum-gravity-bridge/blob/76efeca0be1a17d32ef633c0fdbd3c8f5e4cc53f/src/QuantumGravityBridge.sol), we will need to add support for `DataCommitment` messages, i.e. commitments generated over a set of blocks to attest to their existence.
@@ -129,11 +133,3 @@ ctx.EventManager().EmitEvent(
),
)
```

## Status

Accepted

## References

- {reference link}
8 changes: 4 additions & 4 deletions docs/architecture/adr-004-qgb-relayer-security.md
@@ -1,5 +1,9 @@
# ADR 004: QGB Relayer Security

## Status

Accepted

## Changelog

- 2022-06-05: Synchronous QGB implementation
@@ -397,10 +401,6 @@ In a nutshell, a new valset will be emitted if any of the following is true:
significantPowerDiff = intCurrMembers.PowerDiff(*intLatestMembers) > 0.05
```
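
To get a concrete feel for the 5% threshold above, here is a minimal, self-contained Go sketch. `BridgeValidator` and `powerDiff` are illustrative stand-ins, not the actual x/qgb types, and the real `PowerDiff` may compute the difference differently.

```go
package main

import (
	"fmt"
	"math"
)

// BridgeValidator is an illustrative stand-in for the x/qgb validator type.
type BridgeValidator struct {
	EVMAddress string
	Power      uint64
}

// powerDiff sums the absolute per-validator change in relative voting power
// between two bridge validator sets. It is a rough sketch of the PowerDiff
// call in the snippet above, not the actual implementation.
func powerDiff(a, b []BridgeValidator) float64 {
	normalize := func(set []BridgeValidator) map[string]float64 {
		total := 0.0
		for _, v := range set {
			total += float64(v.Power)
		}
		shares := make(map[string]float64, len(set))
		for _, v := range set {
			shares[v.EVMAddress] = float64(v.Power) / total
		}
		return shares
	}
	aShares, bShares := normalize(a), normalize(b)

	diff := 0.0
	for addr, share := range aShares {
		diff += math.Abs(share - bShares[addr]) // validators missing from b count in full
	}
	for addr, share := range bShares {
		if _, ok := aShares[addr]; !ok {
			diff += share // validators new in b also count in full
		}
	}
	return diff
}

func main() {
	curr := []BridgeValidator{{"0xA", 60}, {"0xB", 40}}
	latest := []BridgeValidator{{"0xA", 50}, {"0xB", 50}}
	// Mirrors the significantPowerDiff check above: 0.2 > 0.05, so a new
	// valset would be emitted.
	fmt.Println(powerDiff(curr, latest) > 0.05)
}
```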

## Status

Accepted

## References

- Tracker issue for the tasks [here](https://github.com/celestiaorg/celestia-app/issues/467).
12 changes: 6 additions & 6 deletions docs/architecture/adr-005-qgb-reduce-state-usage.md
@@ -1,5 +1,9 @@
# ADR 005: QGB Reduce State Usage

## Status

Proposed

## Context

The first design for the QGB was to use the state extensively to store all the QGB-related data: Attestations, `Valset Confirms` and `DataCommitment Confirms`.
@@ -40,7 +44,7 @@ However, slashing will be very difficult, especially for liveness, i.e. an orche

Remove the `MsgValsetConfirm` defined in [here](https://github.com/celestiaorg/celestia-app/blob/a965914b8a467f0384b17d9a8a0bb1ac62f384db/proto/qgb/msgs.proto#L24-L49)
And also, the `MsgDataCommitmentConfirm` defined in [here](
https://github.com/celestiaorg/celestia-app/blob/a965914b8a467f0384b17d9a8a0bb1ac62f384db/proto/qgb/msgs.proto#L55-L76).
<https://github.com/celestiaorg/celestia-app/blob/a965914b8a467f0384b17d9a8a0bb1ac62f384db/proto/qgb/msgs.proto#L55-L76>).
These messages were how orchestrators posted confirms to the QGB module.
Then, keep only the state that is created in [EndBlocker](https://github.com/celestiaorg/celestia-app/blob/a965914b8a467f0384b17d9a8a0bb1ac62f384db/x/qgb/abci.go#L12-L16), i.e. the `Attestations`: `Valset`s and `DataCommitmentRequest`s.
@@ -63,7 +67,7 @@ We will need to decide on two things:
## Detailed Design

The proposed design consists of keeping the same transaction types we currently have: the `MsgValsetConfirm` defined in [here](https://github.com/celestiaorg/celestia-app/blob/a965914b8a467f0384b17d9a8a0bb1ac62f384db/proto/qgb/msgs.proto#L24-L49), and the `MsgDataCommitmentConfirm` defined in [here](
https://github.com/celestiaorg/celestia-app/blob/a965914b8a467f0384b17d9a8a0bb1ac62f384db/proto/qgb/msgs.proto#L55-L76). However, remove all the message server checks defined in the [msg_server.go](https://github.com/celestiaorg/celestia-app/blob/9867b653b2a253ba01cb7889e2dbfa6c9ff67909/x/qgb/keeper/msg_server.go) :
<https://github.com/celestiaorg/celestia-app/blob/a965914b8a467f0384b17d9a8a0bb1ac62f384db/proto/qgb/msgs.proto#L55-L76>). However, remove all the message server checks defined in the [msg_server.go](https://github.com/celestiaorg/celestia-app/blob/9867b653b2a253ba01cb7889e2dbfa6c9ff67909/x/qgb/keeper/msg_server.go) :

```go
// ValsetConfirm handles MsgValsetConfirm.
@@ -95,10 +99,6 @@ For posting transactions, we will rely on gas fees as a mechanism to limit malic

When it comes to slashing, we can add the `dataRoot` of the blocks to the state during `ProcessProposal`, `FinalizeCommit`, or in some other way to be defined. Then, we will have a way to slash orchestrators after a certain period of time if they didn't post any confirms. The exact details of this will be left for another ADR.
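
As a rough sketch of this idea (the key layout, `dataRootKey` helper, and trimmed-down `Keeper` below are hypothetical, not actual x/qgb code), the module could persist each block's data root keyed by height:

```go
package keeper

import (
	storetypes "github.com/cosmos/cosmos-sdk/store/types"
	sdk "github.com/cosmos/cosmos-sdk/types"
)

// Keeper is trimmed down to the single field this sketch needs.
type Keeper struct {
	storeKey storetypes.StoreKey
}

// Hypothetical key prefix for stored data roots.
var dataRootPrefix = []byte{0x10}

func dataRootKey(height int64) []byte {
	return append(dataRootPrefix, sdk.Uint64ToBigEndian(uint64(height))...)
}

// SetDataRoot records a block's data root so that, after some window,
// validators whose orchestrators never posted a confirm for it could be slashed.
func (k Keeper) SetDataRoot(ctx sdk.Context, height int64, dataRoot []byte) {
	ctx.KVStore(k.storeKey).Set(dataRootKey(height), dataRoot)
}
```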

## Status

Proposed

## Consequences

### Positive
8 changes: 4 additions & 4 deletions docs/architecture/adr-006-non-interactive-defaults.md
@@ -1,5 +1,9 @@
# ADR 006: Non-interactive Defaults, Wrapped Transactions, and Subtree Root Message Inclusion Checks

## Status

Accepted

> **Note**
> Unlike normal tendermint/cosmos ADRs, this ADR isn't for deciding whether or not we will implement non-interactive defaults. The goal of this document is to help reviewers and future readers understand what non-interactive defaults are, the considerations that went into the initial implementation, and how it differs from the original specs.

@@ -678,10 +682,6 @@ func (app *App) ProcessProposal(req abci.RequestProcessProposal) abci.ResponsePr

The current implementation performs many different estimation and calculation steps. It might be possible to amortize these calculations to each transaction, which would make it a lot easier to confidently arrange an optimal block.

## Status

Accepted

## Consequences

### Positive
8 changes: 4 additions & 4 deletions docs/architecture/adr-007-universal-share-prefix.md
@@ -1,5 +1,9 @@
# ADR 007: Universal Share Prefix

## Status

Implemented

## Terminology

- **compact share**: a type of share that can accommodate multiple units. Currently, compact shares are used for transactions and evidence to efficiently pack this information into as few shares as possible.
@@ -120,10 +124,6 @@ Logic
1. All shares contain a share version that belongs to a list of supported versions (initially this list contains version `0`)
1. All shares in a reserved namespace belong to one share sequence
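
As a minimal sketch of the first rule above, a share's version could be validated like this. The info-byte layout assumed here (version in the upper 7 bits, sequence-start flag in the lowest bit) and the function name are illustrative and may not match the actual celestia-app share-parsing code.

```go
package main

import "fmt"

// Supported share versions; the list initially contains only version 0.
var supportedShareVersions = map[uint8]bool{0: true}

// validateInfoByte assumes the universal prefix's info byte packs the share
// version into the upper 7 bits and the sequence-start flag into the lowest bit.
func validateInfoByte(infoByte byte) error {
	version := infoByte >> 1
	if !supportedShareVersions[version] {
		return fmt.Errorf("unsupported share version %d", version)
	}
	return nil
}

func main() {
	fmt.Println(validateInfoByte(0b00000001)) // version 0, sequence start set: <nil>
	fmt.Println(validateInfoByte(0b00000101)) // version 2: unsupported share version 2
}
```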

## Status

Implemented

## Consequences

### Positive
@@ -1,5 +1,9 @@
# ADR 008: square size independent message commitments

## Status

Implemented

## Changelog

- 03.08.2022: Initial Draft
@@ -63,16 +67,12 @@ func MinSquareSize(shareCount uint64) uint64 {
}
```
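
The diff only shows the tail of `MinSquareSize`, so as a hedged illustration, a function with this signature could compute the smallest power-of-two square width whose area fits the given share count; the actual celestia-app implementation may differ.

```go
package main

import "fmt"

// MinSquareSize (sketch): the smallest power-of-two width k such that a
// k*k square can hold shareCount shares.
func MinSquareSize(shareCount uint64) uint64 {
	k := uint64(1)
	for k*k < shareCount {
		k *= 2
	}
	return k
}

func main() {
	fmt.Println(MinSquareSize(1))  // 1
	fmt.Println(MinSquareSize(5))  // 4: a 4x4 square (16 shares) is the smallest that fits 5
	fmt.Println(MinSquareSize(17)) // 8
}
```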

## Status

Implemented

## Consequences

### Negative

1. The amount of subtree roots per commitment is O(sqrt(n)), where n is the number of message shares. The worst case for the number of subtree roots is depicted in the diagram below: an entire block missing one share.
![Interactive Commitment 2](./assets/complexity.png)
The worst case for the current implementation depends on the square size. If it is the worst square size, as in `msgMinSquareSize`, it is O(sqrt(n)) as well. On the other hand, if the message is only in one row, then it is O(log(n)).
Therefore the height of the tree over the subtree roots in this implementation is O(log(sqrt(n))), where n is the number of message shares. In the current implementation, it varies from O(log(sqrt(n))) to O(log(log(n))) depending on the square size.

@@ -1,5 +1,9 @@
# ADR 009: Non-Interactive Default Rules for Reduced Padding

## Status

Accepted

## Changelog

- 14.11.2022: Initial Draft
@@ -18,7 +22,7 @@ Proposed

The upside of this proposal is that it reduces the inter-message padding. The downside is that a message inclusion proof will not be as efficient for large square sizes so the proof will be larger.

> **Note**
> This analysis assumes the implementation of [celestia-app#1004](https://github.com/celestiaorg/celestia-app/issues/1004). If the tree over the subtree roots is not a Namespace Merkle Tree then both methods have the same proof size.

As an example, take the diagram below. Message 1 is 3 shares long and message 2 is 11 shares long.
@@ -119,17 +123,17 @@ Each row consists of one subtree root, which means if you have log(n) rows you w

![Current ni rules proof size](./assets/current-ni-rules-proof-size.png)

NMT-Node size := 32 bytes + 2\*8 bytes = 48 bytes
MT-Node size := 32 bytes

Proof size = subtree roots (rows) + subtree roots (last row) + blue nodes (parity shares) + 2 \* blue nodes (`DataRoot`)
Proof size = (log(n) + log(k) + log(n)) \* NMT-Node size + 2\*log(k) \* MT-Node size
Proof size = 48 \* (2\*log(n) + log(k)) + 64 \*log(k)

### Current Non-Interactive Default Rules for k/4

Proof size = subtree roots (rows) + subtree roots (last row) + blue nodes (parity shares) + 2 \* blue nodes (`DataRoot`)
Proof size = (k/4 + log(k) + k/4) \* NMT-Node size + 2\*log(k) \* MT-Node size
Proof size = 48 \* (k/2 + log(k)) + 64 \*log(k)

### Proposed Non-Interactive Default Rules
@@ -138,14 +142,14 @@ Each row consists of sqrt(n)/log(n) subtree roots. Which makes in total sqrt(n)

![Proposed ni rules proof size](./assets/proposed-ni-rules-proof-size.png)

Proof size = subtree roots (all rows) + subtree roots (last row) + blue nodes (parity shares) + 2 \* blue nodes (`DataRoot`)
Proof size = (sqrt(n) + log(k) + log(n)) \* NMT-Node size + 2\*log(k) \* MT-Node size
Proof size = 48 \* (sqrt(n) + log(k) + log(n)) + 64 \*log(k)

### Proposed Non-Interactive Default Rules for k/4

Proof size = subtree roots (rows) + subtree roots (last row) + blue nodes (parity shares) + 2 \* blue nodes (`DataRoot`)
Proof size = (**k/2** + log(k) + k/4) \* NMT-Node size + 2\*log(k) \* MT-Node size
Proof size = 48 \* (3k/4 + log(k)) + 64 \*log(k)
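
To get a feel for how the four formulas above compare, here is a small Go helper that simply evaluates them; the chosen n and k are illustrative, logs are base 2, and the node sizes are the ones stated earlier (NMT node = 48 bytes, MT node = 32 bytes).

```go
package main

import (
	"fmt"
	"math"
)

// Back-of-the-envelope evaluation of the four proof-size formulas above.
func main() {
	n, k := 1024.0, 128.0 // e.g. a 1024-share message in a 128x128 square
	logN, logK, sqrtN := math.Log2(n), math.Log2(k), math.Sqrt(n)

	fmt.Printf("current rules:        %.0f bytes\n", 48*(2*logN+logK)+64*logK)
	fmt.Printf("current rules (k/4):  %.0f bytes\n", 48*(k/2+logK)+64*logK)
	fmt.Printf("proposed rules:       %.0f bytes\n", 48*(sqrtN+logK+logN)+64*logK)
	fmt.Printf("proposed rules (k/4): %.0f bytes\n", 48*(3*k/4+logK)+64*logK)
}
```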

## 5. What is the worst constructible block with the most amount of padding with old and new non-interactive default rules?
@@ -192,10 +196,6 @@ The worst-case padding decreases from 1.1 GB to 0.8 GB in 2 GB Blocks. In the cu

You can further optimize the proof size by using the fact that the namespace is known and the same for all the subtree roots. You can do the same trick for parity shares, as the namespace is fixed for them too. Both of these optimizations are not included in the analysis and would save the bytes that are used to store the namespace.

## Status

Accepted

## Consequences

### Positive
@@ -1,5 +1,11 @@
# ADR 011: Optimistic Blob Size Independent Inclusion Proofs and PFB Fraud Proofs

## Status

Accepted -> Does not affect the Celestia Core Specification

Optimization 1 & 2 **Declined** as it is currently not worth it to introduce extra complexity for reducing the PFB proof size by 512-1024 bytes.
Review comment (Member): Bit messy imo to have half-accepted ADRs..


## Changelog

- 18.11.2022: Initial Draft
@@ -64,17 +70,17 @@ Share size := 512 bytes
NMT-Node size := 32 bytes + 2\*8 bytes = 48 bytes
MT-Node size := 32 bytes

Worst Case Normal PFB proof size in bytes
= 2 PFB Shares + blue nodes to row roots + blue nodes to (`DataRoot`)
= 2 \* Share size + 2 \* log(k) \* NMT-Node size + log(k) \* MT-Node size
= 2 \* 512 + 2 \* log(k) \* 48 + log(k) \* 32
= 1024 + 128 \* log(k)

As the size of a PFB transaction is unbounded, it can span even more shares. To put a bound on this, we assume that most PFB transactions can be captured by 4 shares.

Huge PFB proof size in bytes
= 4 PFB Shares + blue nodes to row roots + blue nodes to (`DataRoot`)
= 4 \* Share size + 2 \* log(k) \* NMT-Node size + log(k) \* MT-Node size
= 2048 + 128 \* log(k)

### Blob Inclusion Proof
@@ -85,54 +91,54 @@ The worst-case blob inclusion proof size will result from the biggest possible b

With a blob of size n and a square size of k, this means that we have O(sqrt(n)) subtree row roots and O(log(k)) subtree row roots in the last row. As the whole block is filled up, sqrt(n) tends towards k. We will also require an additional k blue parity nodes to prove the row roots.

Worst case blob inclusion proof size
= subtree roots (all rows) + subtree roots (last row) + blue nodes (parity shares)
= (sqrt(n) + log(k) + k) \* NMT-Node size | sqrt(n) => k
= ( 2 \* k + log(k) ) \* 48
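
For a sense of scale, the following Go snippet evaluates the PFB and blob inclusion proof-size formulas above for an illustrative square width; it is not code from celestia-app, just the arithmetic from the text (NMT node = 48 bytes, MT node = 32 bytes, share = 512 bytes).

```go
package main

import (
	"fmt"
	"math"
)

// Quick evaluation of the proof-size formulas above.
func main() {
	k := 128.0
	logK := math.Log2(k)

	fmt.Printf("worst-case 2-share PFB proof: %.0f bytes\n", 1024+128*logK)
	fmt.Printf("huge 4-share PFB proof:       %.0f bytes\n", 2048+128*logK)
	fmt.Printf("worst-case blob inclusion:    %.0f bytes\n", (2*k+logK)*48)
}
```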

## Optimizations

### Optimization 1

If a PFB were guaranteed to be in one share, we could decrease the PFB proof size significantly. You would not only get rid of one share, saving 512 bytes, but also need log(k) fewer blue nodes in the worst case. The maximum size PFB that fits into one share is 501 bytes long:
= Share size - nid - special byte - reserved bytes - transaction length
= 512 - 8 - 1 - 1 - 1 = 501
Therefore a normal-sized PFB of 330 bytes fits easily into a share with additional spare bytes to be used for more complex PFBs.
A requirement for this is an option for the transaction to be the only one in a share or at the start of a share if it fits in the share. This would extend to bigger PFB shares as well so you can potentially reduce the size of overlapping PFB proofs by 1 share (512 bytes).

One share PFB proof size
= Share + blue nodes to row root + blue nodes (`DataRoot`)
= Share size + log(k) \* NMT-Node size + log(k) \* MT-Node size
= 512 + log(k) \* 48 + log(k) \* 32
= 512 + 80 \* log(k)

### Optimization 2

The second optimization that could be possible is to only prove the commitment over the PFB transaction and not the PFB transaction itself. This requires the next block header in the rollup chain to include the commitment of the PFB transaction shares as well.
This would require the PFB transaction to be deterministic and therefore predictable, so you can calculate the commitment over the PFB transaction beforehand. How the PFB transaction is created (gas used, signatures, and so on) is something the rollup has to agree upon beforehand. It only needs to be predictable if we want to keep the option of asynchronous block times and multiple Rollmint blocks per Celestia block.
Another requirement is that the content of the PFB shares is predictable. You could enforce this by only having the PFB transaction in the share.

The commitment over the PFB transaction is created using the same principle as creating the commitment over a blob. Therefore, the size of the inclusion proof is the same as that of a blob inclusion proof for a blob spanning as many shares as the PFB transaction.

Commitment inclusion proof size over one share PFB
= subtree root + blue nodes to share + blue nodes to (`DataRoot`)
= NMT-Node size + log(k) \* NMT-Node size + log(k) \* MT-Node size
= 48 + log(k) \* 48 + log(k) \* 32
= 48 + 80 \* log(k)

For two shares, the best case is the same as for one share, but the worst case includes twice the blue nodes to the row root. For a PFB transaction of 3 or 4 shares, the worst-case proof size does not change significantly.

Worst case commitment inclusion proof size over 2-4 share PFB
= subtree roots + blue nodes to share + blue nodes to (`DataRoot`)
= 2 \* NMT-Node size + 2 \* log(k) \* NMT-Node size + log(k) \* MT-Node size
= 96 + log(k) \* 96 + log(k) \* 32
= 96 + 128 \* log(k)

![Optimized Pfb Proofs](./assets/optimized-pfb-proofs.png)

<!--- This does not need a fraud proof as it could be a validation rule that even light clients can check. This would require the light clients to know the sequencer set and whose turn it was. (not sure about this)
--->
The fraud proof for this would be to prove that the commitment of the PFB transaction does not equal the predicted commitment in the header. Therefore this is equivalent to a PFB transaction inclusion proof. This fraud proof would be optimistic, as we would assume that the PFB commitment is correct. But realistically, if the commitment over the PFB transaction is wrong, then the PFB commitment is most likely wrong as well. Therefore the fraud proof would be a PFB Fraud Proof as described at the top.
If we do not have a PFB transaction that can be predicted, we also need to slash double signing of 2 valid PFB transactions in Celestia. This is required so we don't create a valid fraud proof over a valid commitment over the PFB transaction.

The third optimization could be to SNARK the PFB Inclusion Proof to reduce the size even more.
Expand All @@ -154,12 +160,6 @@ The other way to prove blob inclusion is dependent on the blob size. A blob incl

TODO

## Status

Accepted -> Does not affect the Celestia Core Specification

Optimization 1 & 2 **Declined** as it is currently not worth it to introduce extra complexity for reducing the PFB proof size by 512-1024 bytes.

## Consequences

### Positive