Store only essential Merkle tree snapshots #2043
Comments
Thanks! We should definitely do this. Can you also estimate the storage size as an equation, e.g. dependent on what numbers of particular objects (accounts, proof-of-stake bonds, etc.) we're storing at any particular time? I would like to understand the asymptotic bounds.
The size of a Merkle tree (our sparse Merkle tree) store is … According to the investigation of …:
The size of each store: …
The size of snapshots for 60 epochs: …
The total size of the Merkle tree snapshots would be 2.1GB (in the testnet with …).
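A rough way to express the asymptotic bound asked for above, as a sketch only (the symbols below are mine, not figures from the investigation):

```latex
% S_total : approximate size of all retained Merkle tree data
% H       : number of block heights for which the Base tree snapshot is kept
% s_Base  : size of one Base tree snapshot (the roots of the subtrees)
% E       : number of retained epochs (60 in the testnet)
% n_t     : number of key-value pairs in subtree t
% c       : average per-entry cost (hashed key, value, internal SMT nodes)
S_{\text{total}} \;\approx\; H \cdot s_{\text{Base}}
  \;+\; E \sum_{t \in \{\text{Account},\,\text{PoS},\,\text{Ibc},\,\text{BridgePool}\}} n_t \cdot c
```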
Thanks, this is helpful, but I'd like to see a more specific breakdown in terms of what data structures are actually stored by PoS, e.g. …
Could you put that together?
Regarding the PoS data, it would be better to go with v0.24.0, as before that we weren't trimming old data properly, so many fields were growing unbounded. This was fixed in #1944. Each epoched data type is now trimmed to a configured number of past epochs (the last type parameter in …). Bonds and unbonds are not being trimmed, as we're applying slashes lazily (on withdrawal), so we need to preserve their start epochs, but there can be only one record per unique (delegator, validator) pair per epoch. We can provide a more detailed breakdown of the number of stored fields in terms of these parameters if needed. This might be a good addition for updated specs.
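A minimal sketch of the per-epoch trimming described above (the `EpochedData` type and its fields are hypothetical, not namada's actual epoched types):

```rust
use std::collections::BTreeMap;

/// Hypothetical epoched container trimmed to a configured number of past
/// epochs; a sketch of the idea only, not namada's actual `Epoched` types.
struct EpochedData<T> {
    /// How many past epochs to keep (analogous to the last type parameter
    /// mentioned above).
    num_past_epochs: u64,
    /// One value recorded per epoch.
    data: BTreeMap<u64, T>,
}

impl<T> EpochedData<T> {
    /// Record a value for `epoch` and drop anything older than the window.
    fn update(&mut self, epoch: u64, value: T) {
        self.data.insert(epoch, value);
        let oldest_kept = epoch.saturating_sub(self.num_past_epochs);
        // Trim entries below the retention boundary so storage stays bounded.
        self.data.retain(|&e, _| e >= oldest_kept);
    }
}

fn main() {
    let mut voting_power = EpochedData { num_past_epochs: 2, data: BTreeMap::new() };
    for epoch in 0..5u64 {
        voting_power.update(epoch, 100 + epoch);
    }
    // Only the current epoch and the two before it remain: 2, 3, 4.
    assert_eq!(voting_power.data.keys().copied().collect::<Vec<_>>(), vec![2, 3, 4]);
}
```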
we might be able to forgo storing multiple bridge pool merkle trees, as well. we only need to keep trees from the latest root that has been signed with a quorum signature onward. so, suppose the latest merkle tree nonce is …
@sug0 Thanks for your suggestion. In this context, we are dealing with snapshots of each subtree. I think we should delete entries in each subtree and the storage subspace in a separate process.
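For reference, a minimal sketch of the pruning idea from the previous comment (the function and its inputs are hypothetical, not the actual bridge pool storage API):

```rust
/// Prune bridge pool Merkle tree snapshots older than the latest root signed
/// with a quorum signature. Illustrative only: the storage layout and names
/// here are assumptions, not the actual bridge crate API.
fn prune_bridge_pool_trees(stored_tree_nonces: &mut Vec<u64>, latest_signed_nonce: u64) {
    // Trees from the latest signed root onward are kept; anything older is no
    // longer needed for proofs against a quorum-signed root.
    stored_tree_nonces.retain(|&nonce| nonce >= latest_signed_nonce);
}

fn main() {
    let mut nonces = vec![7, 8, 9, 10, 11];
    prune_bridge_pool_trees(&mut nonces, 10);
    assert_eq!(nonces, vec![10, 11]);
}
```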
Currently, we store a lot of Merkle tree snapshots. In the current testnet, a single snapshot was over 2GB, and the 60 snapshots took over 100GB.

The Merkle tree has 4 subtrees under the `Base` tree: `Account`, `PoS`, `Ibc`, and `BridgePool`. The `Base` tree has the roots of the subtrees. Each subtree has hashed key-value pairs. We store a `Base` tree snapshot at every height (block) for the root hash. The 4 subtree snapshots are stored every epoch for the readable period (60 epochs in the testnet).

See `namada/apps/src/lib/node/ledger/storage/rocksdb.rs`, lines 912 to 940 in 2138c96.
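As a rough illustration of the layout and current snapshot schedule described above (a sketch with assumed names, not the actual code in `rocksdb.rs`):

```rust
/// Sketch of the tree layout described above (illustrative only; the actual
/// definitions live in the storage code referenced above).
#[allow(dead_code)]
enum StoreType {
    /// Holds the root hashes of the four subtrees; a snapshot is written at
    /// every block height so the root hash is always available.
    Base,
    // The four subtrees of hashed key-value pairs; snapshots are written every
    // epoch and kept for the readable period (60 epochs in the testnet).
    Account,
    PoS,
    Ibc,
    BridgePool,
}

/// How often a snapshot of each store is written today.
fn snapshot_interval(store: &StoreType) -> &'static str {
    match store {
        StoreType::Base => "every block height",
        _ => "every epoch",
    }
}

fn main() {
    assert_eq!(snapshot_interval(&StoreType::Base), "every block height");
    assert_eq!(snapshot_interval(&StoreType::PoS), "every epoch");
}
```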
However, while the Merkle tree snapshots are used for proof generation, only IBC and EthBridge actually require Merkle proofs. The `Account` and `PoS` snapshots are never used (except for node restart). So, we need to store only the `Ibc` and `BridgePool` tree snapshots for the readable period, and store the `Account` and `PoS` tree snapshots only for the last epoch for restarting.

The changes would be about: …
This would drastically reduce the storage size, from about 100GB to about 4GB.
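A minimal sketch of how the proposed retention rule could look (names and structure are hypothetical, not the actual `rocksdb.rs` changes):

```rust
/// Sketch of the proposed retention policy (hypothetical names, not the actual
/// rocksdb.rs code). Subtree kinds as in the description above.
#[allow(dead_code)]
enum SubTree {
    Account,
    PoS,
    Ibc,
    BridgePool,
}

/// Decide whether an epoch-level subtree snapshot may be pruned.
/// `Ibc` and `BridgePool` snapshots stay for the whole readable period because
/// they are needed for Merkle proofs; `Account` and `PoS` snapshots are kept
/// only for the last epoch, which is enough to restart a node.
fn can_prune(subtree: &SubTree, snapshot_epoch: u64, current_epoch: u64, readable_epochs: u64) -> bool {
    match subtree {
        SubTree::Ibc | SubTree::BridgePool => snapshot_epoch + readable_epochs < current_epoch,
        SubTree::Account | SubTree::PoS => snapshot_epoch < current_epoch,
    }
}

fn main() {
    // With a 60-epoch readable period at epoch 100, an Account snapshot from
    // epoch 90 can be dropped, while the Ibc snapshot from the same epoch stays.
    assert!(can_prune(&SubTree::Account, 90, 100, 60));
    assert!(!can_prune(&SubTree::Ibc, 90, 100, 60));
}
```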