diff --git a/docs/architecture/adr-065-store-v2.md b/docs/architecture/adr-065-store-v2.md
index b1377555e020..a6b054d68ed0 100644
--- a/docs/architecture/adr-065-store-v2.md
+++ b/docs/architecture/adr-065-store-v2.md
@@ -79,13 +79,11 @@ We propose to build upon some of the great ideas introduced in [ADR-040](./adr-0
while being a bit more flexible with the underlying implementations and overall
less intrusive. Specifically, we propose to:
-* Separate the concerns of state commitment (**SC**), needed for consensus, and
- state storage (**SS**), needed for state machine and clients.
* Reduce layers of abstractions necessary between the RMS and underlying stores.
* Remove unnecessary store types and implementations such as `CacheKVStore`.
-* Simplify the branching logic.
+* Remove the branching logic from the store package.
* Ensure the `RootStore` interface remains as lightweight as possible.
-* Allow application developers to easily swap out SS and SC backends.
+* Allow application developers to easily swap out SC backends.
Furthermore, we will keep IAVL as the default [SC](https://cryptography.fandom.com/wiki/Commitment_scheme)
backend for the time being. While we might not fully settle on the use of IAVL in
@@ -95,18 +93,12 @@ to change the backing commitment store in the future should evidence arise to
warrant a better alternative. However there is promising work being done to IAVL
that should result in significant performance improvement [1,2].
-Note, we will provide applications with the ability to use IAVL v1 and IAVL v2 as
+Note, we will provide applications with the ability to use IAVL v1, IAVL v2, or MemIAVL as
-either SC backend, with the latter showing extremely promising performance improvements
+the SC backend, with MemIAVL showing extremely promising performance improvements
over IAVL v0 and v1, at the cost of a state migration.
-### Separating SS and SC
-By separating SS and SC, it will allow for us to optimize against primary use cases
-and access patterns to state. Specifically, The SS layer will be responsible for
-direct access to data in the form of (key, value) pairs, whereas the SC layer (e.g. IAVL)
-will be responsible for committing to data and providing Merkle proofs.
-
-#### State Commitment (SC)
+### State Commitment (SC)
A foremost design goal is that SC backends should be easily swappable, i.e. not
necessarily IAVL. To this end, the scope of SC has been reduced, it must only:
@@ -121,45 +113,6 @@ due to the time and space constraints, but since store v2 defines an API for his
proofs there should be at least one configuration of a given SC backend which
supports this.
-#### State Storage (SS)
-
-The goal of SS is to provide a modular storage backend, i.e. multiple implementations,
-to facilitate storing versioned raw key/value pairs in a fast embedded database.
-The responsibility and functions of SS include the following:
-
-* Provided fast and efficient queries for versioned raw key/value pairs
-* Provide versioned CRUD operations
-* Provide versioned batching functionality
-* Provide versioned iteration (forward and reverse) functionality
-* Provide pruning functionality
-
-All of the functionality provided by an SS backend should work under a versioned
-scheme, i.e. a user should be able to get, store, and iterate over keys for the latest
-and historical versions efficiently and a store key, which is used for name-spacing
-purposes.
-
-We propose to have three defaulting SS backends for applications to choose from:
-
-* RocksDB
- * CGO based
- * Usage of User-Defined Timestamps as a built-in versioning mechanism
-* PebbleDB
- * Native
- * Manual implementation of MVCC keys for versioning
-* SQLite
- * CGO based
- * Single table for all state
-
-Since operators might want pruning strategies to differ in SS compared to SC,
-e.g. having a very tight pruning strategy in SC while having a looser pruning
-strategy for SS, we propose to introduce an additional pruning configuration,
-with parameters that are identical to what exists in the SDK today, and allow
-operators to control the pruning strategy of the SS layer independently of the
-SC layer.
-
-Note, the SC pruning strategy must be congruent with the operator's state sync
-configuration. This is so as to allow state sync snapshots to execute successfully,
-otherwise, a snapshot could be triggered on a height that is not available in SC.
#### State Sync
@@ -179,7 +132,7 @@ the primary interface for the application to interact with. The `RootStore` will
be responsible for housing SS and SC backends. Specifically, a `RootStore` will
provide the following functionality:
-* Manage commitment of state (both SS and SC)
+* Manage commitment of state
* Provide modules access to state
* Query delegation (i.e. get a value for a tuple)
* Providing commitment proofs
@@ -197,12 +150,7 @@ solely provide key prefixing/namespacing functionality for modules.
#### Proofs
-Since the SS layer is naturally a storage layer only, without any commitments
-to (key, value) pairs, it cannot provide Merkle proofs to clients during queries.
-
-So providing inclusion and exclusion proofs, via a `CommitmentOp` type, will be
-the responsibility of the SC backend. Retrieving proofs will be done through the
-a `RootStore`, which will internally route the request to the SC backend.
+Providing inclusion and exclusion proofs, via a `CommitmentOp` type, will be the responsibility of the SC backend. Retrieving proofs will be done through a `RootStore`, which will internally route the request to the SC backend.
#### Commitment
@@ -231,9 +179,6 @@ and storage backends for further performance, in addition to a reduced amount of
abstraction around KVStores making operations such as caching and state branching
more intuitive.
-However, due to the proposed design, there are drawbacks around providing state
-proofs for historical queries.
-
### Backwards Compatibility
This ADR proposes changes to the storage implementation in the Cosmos SDK through
@@ -243,17 +188,14 @@ be broken or modified.
### Positive
-* Improved performance of independent SS and SC layers
+* Improved performance of the SC layer
* Reduced layers of abstraction making storage primitives easier to understand
-* Atomic commitments for SC
* Redesign of storage types and interfaces will allow for greater experimentation
such as different physical storage backends and different commitment schemes
for different application modules
### Negative
-* Providing proofs for historical state is challenging
-
### Neutral
* Removal of OCAP-based store keys in favor of simple strings for state retrieval
diff --git a/store/v2/README.md b/store/v2/README.md
index eb8495c9adc4..1d5dd811f2fc 100644
--- a/store/v2/README.md
+++ b/store/v2/README.md
@@ -7,8 +7,7 @@ and [Store v2 Design](https://docs.google.com/document/d/1l6uXIjTPHOOWM5N4sUUmUf
## Usage
The `store` package contains a `root.Store` type which is intended to act as an
-abstraction layer around it's two primary constituent components - state storage (SS)
-and state commitment (SC). It acts as the main entry point into storage for an
+abstraction layer around its primary constituent component, state commitment (SC). It acts as the main entry point into storage for an
application to use in server/v2. Through `root.Store`, an application can query
and iterate over both current and historical data, commit new state, perform state
sync, and fetch commitment proofs.
@@ -16,8 +15,7 @@ sync, and fetch commitment proofs.
-A `root.Store` is intended to be initialized with already constructed SS and SC
-backends (see relevant package documentation for instantiation details). Note,
+A `root.Store` is intended to be initialized with an already constructed SC
+backend (see relevant package documentation for instantiation details). Note,
from the perspective of `root.Store`, there is no notion of multi or single tree/store,
-rather these are implementation details of SS and SC. For SS, we utilize store keys
-to namespace raw key/value pairs. For SC, we utilize an abstraction, `commitment.CommitStore`,
+rather these are implementation details of SC. For SC, we utilize an abstraction, `commitment.CommitStore`,
-to map store keys to a commitment trees.
+to map store keys to commitment trees.
## Upgrades
@@ -29,7 +27,6 @@ old ones. The `Rename` feature is not supported in store/v2.
```mermaid
sequenceDiagram
participant S as Store
- participant SS as StateStorage
participant SC as StateCommitment
alt SC is a UpgradeableStore
S->>SC: LoadVersionAndUpgrade
@@ -37,14 +34,11 @@ sequenceDiagram
SC->>SC: Prune removed store keys
end
SC->>S: LoadVersion Result
- alt SS is a UpgradableDatabase
- S->>SS: PruneStoreKeys
- end
```
-`PruneStoreKeys` does not remove the data from the SC and SS instantly. It only
+`PruneStoreKeys` does not remove the data from the SC instantly. It only
marks the store keys as pruned. The actual data removal is done by the pruning
-process of the underlying SS and SC.
+process of the underlying SC.
## Migration
@@ -54,7 +48,7 @@ the `migration` package. See [Migration Manager](./migration/README.md) for more
## Pruning
The `root.Store` is NOT responsible for pruning. Rather, pruning is the responsibility
-of the underlying SS and SC layers. This means pruning can be implementation specific,
+of the underlying commitment layer. This means pruning can be implementation specific,
such as being synchronous or asynchronous. See [Pruning Manager](./pruning/README.md) for more details.
@@ -65,4 +59,4 @@ for more details.
## Test Coverage
-The test coverage of the following logical components should be over 60%:
\ No newline at end of file
+The test coverage of the following logical components should be over 60%:
diff --git a/store/v2/pruning/README.md b/store/v2/pruning/README.md
index fbf24130c6be..9ab41af80cd1 100644
--- a/store/v2/pruning/README.md
+++ b/store/v2/pruning/README.md
@@ -1,9 +1,7 @@
# Pruning Manager
The `pruning` package defines the `PruningManager` struct which is responsible for
-pruning the state storage (SS) and the state commitment (SC) based on the current
-height of the chain. The `PruningOption` struct defines the configuration for pruning
-and is passed to the `PruningManager` during initialization.
+pruning the state commitment (SC) based on the current height of the chain. The `PruningOption` struct defines the configuration for pruning and is passed to the `PruningManager` during initialization.
## Prune Options
@@ -29,24 +27,17 @@ sequenceDiagram
participant A as RootStore
participant B as PruningManager
participant C as CommitmentStore
- participant D as StorageStore
loop Commit
A->>B: SignalCommit(true, height)
alt SC is PausablePruner
B->>C: PausePruning(true)
- else SS is PausablePruner
- B->>D: PausePruing(true)
end
A->>C: Commit Changeset
- A->>D: Write Changeset
A->>B: SignalCommit(false, height)
alt SC is PausablePruner
B->>C: PausePruning(false)
- else SS is PausablePruner
- B->>D: PausePruing(false)
end
B->>C: Prune(height)
- B->>D: Prune(height)
end
```