
Improve proof size #23

Open
burdges opened this issue Jul 16, 2019 · 7 comments

Comments

burdges commented Jul 16, 2019

At some point, I think the trie produced inclusion proofs which spelled out all 15 other nodes of each radix-16 tree layer, which wastes something like half the space in PoV blocks. We instead want Merkle roots to be computed as a binary tree, so that proofs can be sent in the binary-tree form, while the tree can still be stored on disk as a radix-16 tree.
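A back-of-envelope sketch of the savings being described here (the function names and figures are illustrative, not from the codebase): one radix-16 level spells out up to 15 sibling hashes, while the four binary levels covering the same nibble need only 4 siblings.

```rust
// Bytes of sibling hashes revealed per radix-16 trie level in a proof:
// up to 15 siblings, each a full hash.
fn hex_level_proof_bytes(hash_len: usize) -> usize {
    15 * hash_len
}

// The same nibble of key path as 4 binary levels: one sibling per level.
fn binary_levels_proof_bytes(hash_len: usize) -> usize {
    4 * hash_len
}

fn main() {
    let h = 32; // 32-byte hashes
    println!("hexary:  {} bytes per level", hex_level_proof_bytes(h));
    println!("binary:  {} bytes per 4 levels", binary_levels_proof_bytes(h));
    // The binary form reveals far fewer sibling bytes for the same path.
    assert!(binary_levels_proof_bytes(h) < hex_level_proof_bytes(h));
}
```

This worst-case arithmetic is what the later comments qualify: when the trie is sparse, most of those 15 siblings are empty and get omitted anyway, so the real saving is smaller.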


burdges commented Dec 22, 2019

I spoke with @drskalman about this: I incorrectly estimated the savings here, because the trie frequently gets used sparsely, so many descendants get omitted entirely.

We should consider whether more modern sparse Merkle tree designs would provide better performance. In particular, there are designs with clever compromises on collision resistance, like h(x,0) = x || "L" and h(0,y) = y || "R", but we are not entirely sure about their security properties.
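A minimal sketch of the trick being referred to: when one child of a node is empty, the parent's value is formed by tagging the non-empty child's digest with a direction marker instead of running another compression round. Everything here is illustrative (`Node`, `combine`, the tag bytes), and `DefaultHasher` merely stands in for a real cryptographic hash.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

#[derive(Clone, Debug, PartialEq)]
enum Node {
    Empty,
    Digest(Vec<u8>),
}

// Stand-in two-to-one hash; a real design would use a cryptographic hash.
fn hash_pair(l: &[u8], r: &[u8]) -> Vec<u8> {
    let mut h = DefaultHasher::new();
    l.hash(&mut h);
    r.hash(&mut h);
    h.finish().to_le_bytes().to_vec()
}

fn combine(left: &Node, right: &Node) -> Node {
    match (left, right) {
        (Node::Empty, Node::Empty) => Node::Empty,
        // h(x, 0) = x || "L": append a tag, skip the hashing round.
        (Node::Digest(l), Node::Empty) => {
            let mut d = l.clone();
            d.push(b'L');
            Node::Digest(d)
        }
        // h(0, y) = y || "R": symmetric case.
        (Node::Empty, Node::Digest(r)) => {
            let mut d = r.clone();
            d.push(b'R');
            Node::Digest(d)
        }
        // Both children present: a normal compression round.
        (Node::Digest(l), Node::Digest(r)) => Node::Digest(hash_pair(l, r)),
    }
}

fn main() {
    let leaf = Node::Digest(vec![0xAA; 32]);
    let one_sided = combine(&leaf, &Node::Empty);
    if let Node::Digest(d) = &one_sided {
        // The digest grew by exactly one tag byte; no hash was computed.
        assert_eq!(d.len(), 33);
        assert_eq!(*d.last().unwrap(), b'L');
    }
}
```

The security concern raised in the thread is precisely about this shortcut: the tagged values are length-extended rather than compressed, so the usual collision-resistance arguments need re-checking.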

We should also be careful about the actual sparsity assumptions, since maybe some hybrid design exists that auto-magically gives us an almost-optimal approach under almost all conditions.

NikVolf commented Dec 24, 2019

It is probably out of scope for this library, because the library is about the base-16 trie.

Totally agree about SMTs in general, though. We should first calculate all possible scenarios and what theoretical benefits we might get; then I can implement it in a separate library and experiment in Substrate.

cheme commented Dec 24, 2019

I was thinking that for a sparse trie (branches where the number of children is less than or equal to 4) we could use the existing proof, and only apply the binary form when the number of children makes it more efficient; without giving it much thought, it does not look insecure. I guess a threshold of 4 is probably not the best (depending on how we value computational cost), but it is just a fixed threshold to define.
I am not sure about h(x,0) = x || "L": "L" is some additional structural info that will imply one more round of hashing in the parent node (so without giving it too much thought, it feels similar to storing a key prefix).
On the other hand, having a compact encoding of this kind for proofs seems good (just a one-byte bitmask for the positions of empty nodes could be simpler).
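The compact encoding hinted at above might look like the following sketch (names are illustrative, and a hexary branch needs a 16-bit bitmap rather than one byte): a presence bitmap records which children exist, so empty children cost no hash bytes in the proof.

```rust
// Encode a hexary branch as: 2-byte child-presence bitmap, followed by
// the hashes of only the present children. Empty slots cost nothing.
fn encode_branch(children: &[Option<[u8; 32]>; 16]) -> Vec<u8> {
    let mut bitmap: u16 = 0;
    for (i, c) in children.iter().enumerate() {
        if c.is_some() {
            bitmap |= 1 << i;
        }
    }
    let mut out = Vec::new();
    out.extend_from_slice(&bitmap.to_le_bytes());
    for c in children.iter().flatten() {
        out.extend_from_slice(c);
    }
    out
}

fn main() {
    let mut children: [Option<[u8; 32]>; 16] = [None; 16];
    children[0] = Some([1u8; 32]);
    children[7] = Some([2u8; 32]);
    let enc = encode_branch(&children);
    // 2 bitmap bytes plus the 2 present hashes; the 14 empty
    // children contribute no bytes at all.
    assert_eq!(enc.len(), 2 + 2 * 32);
}
```

This is essentially what makes sparse hexary proofs cheaper than the worst-case 15-sibling figure suggests.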

cheme commented Dec 24, 2019

Oh, h(x,0) = x || "L" may be about replacing part of the hash to encode L, R, or 0, which would not involve an additional round of hashing; that would make it interesting indeed.
On the idea of using an SMT instead of a hex trie for storage: it can make things way simpler, and using some optimisation for single-child tree paths seems relatively similar to prefix storage.
It seems to me that storing 16 children per storage node can still be interesting to test and compare against a pure SMT implementation.
Edit: but this can also be an optimization over the storage of a pure SMT implementation.
Edit2: the idea of putting 'L', 'R' or non-LR into the hash would probably not play well when done over multiple levels of hashing (it would end up like XORing the prefix into the hash, which seems bad).
Also, in the case of storing into a hexary trie, the children bitmap of the branch replaces the need for this 'L'/'R' info.

burdges commented Dec 24, 2019

If your hash inputs enough bytes before running another compression round, then yes, the h(x,0) = x || "L" and h(0,y) = y || "R" approach can be "free".

If your hash prefers [u8; 64] inputs and you expect "L" and "R" chains over 128 in length, then you'd want roughly:

#[derive(Clone)]
pub struct SMTHash([u8; 64]);

impl SMTHash {
    pub fn new(h: &[u8; 32], right: bool) -> SMTHash { .. }
    fn bump(&mut self) {
        // Hash self.0 into self.0[0..32] and zero self.0[32..64]
    }
    fn advance(&mut self, right: bool) {
        // self.0[32] counts the "L"/"R" chain bits recorded so far.
        if self.0[32] >= 248 { self.bump(); }
        let i = self.0[32] as usize;
        if right { self.0[i / 8] |= 1 << (i % 8); }
        self.0[32] += 1;
    }
}

pub fn smt_binary_hash(left: &Option<SMTHash>, right: &Option<SMTHash>) -> Option<SMTHash> {
    match (left, right) {
        (None, None) => None,
        (Some(left), Some(right)) => {
            // Hash left and right together
        },
        (None, Some(right)) => {
            let mut r = right.clone();
            r.advance(true);
            Some(r)
        },
        (Some(left), None) => {
            let mut r = left.clone();
            r.advance(false);
            Some(r)
        },
    }
}

We occasionally pay for hashing 128 bytes when 64 might have sufficed, but that's rare, and more often we hash only 64.

In fact, we expect "L" and "R" chains to be much shorter, so we should optimize this somewhat differently. We could even alter the behavior at different depths if that improved performance, presumably meaning that close to the root it assumes a full binary tree.

We have not yet analyzed whether this is secure, but if your hash has strong enough collision resistance then sure, it looks fine.

I think Vitalik has some blog post about wanting separation between storage and hashing, but regardless we still need storage of intermediate nodes for performance reasons, meaning the radix-16 storage. In fact, if you were using a snark-friendly hash for a chain like Zcash, then you might want more frequent storage, because your snark-friendly hash is extremely slow.

burdges commented Dec 24, 2019

Also @cheme, you make a good point about changing the threshold dynamically. We might have some situations where the trie starts dense, then goes sparse, and then goes denser again after entering some particular area?

cheme commented Dec 27, 2019

It can happen if/when concatenating two hashed keys without hashing afterward (the latest Substrate storage change allows it in double-key storage, iirc); this introduces some unbalance in the trie, and there might be a plan to switch to a child trie in such cases.
So in tries where things are deliberately unbalanced (through key prefixing, like Substrate modules (for Substrate modules, since there are not many modules, it will be not dense, then dense, then not dense again), or key concatenation) there can be such a case.
