GIPA, TIPP, MIPP, and example of Groth16 proof aggregation #3
Conversation
Hi Nirvan, sorry for the delay on this, I'll try to leave a detailed review today!

No worries. I have some updated benchmarks for the Groth16 proof aggregation (I realized I wasn't building the optimized binary), so now the numbers are more or less what we would expect from counting the group operations.
dh_commitments/src/identity/mod.rs
Outdated
```rust
let mut out = Vec::new();
out.extend_from_slice(m);
Ok(IdentityOutput(out))
```
Suggested change:

```rust
Ok(IdentityOutput(m.to_vec()))
```
👍
dh_commitments/src/pedersen/mod.rs
Outdated
```rust
use inner_products::{InnerProduct, MultiexponentiationInnerProduct};

#[derive(Clone)]
pub struct PedersenCommitment<G: Group> {
```
You can change from `G: Group` to `C: ProjectiveCurve` if that helps you here (for efficiency or otherwise).
👍 I changed it to `ProjectiveGroup` in order to use the improved `VariableMSM` calculation of `MultiexponentiationInnerProduct`.
inner_products/src/lib.rs
Outdated
```rust
left.iter()
    .zip(right)
    .map(|(v, a)| P::pairing(v.clone().into(), a.clone().into()))
    .product(),
```
This does n final exponentiations; you probably want to use something like this: https://github.com/scipr-lab/ripp/blob/e18c63cdda5710c9c65a7ec43532ac8281af3f35/src/lib.rs#L150
👍 Done. I used: https://github.com/scipr-lab/zexe/blob/master/algebra-core/src/curves/mod.rs#L73

I think adding parallelization support can be done in a future pass?
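The saving here comes from the fact that the final exponentiation is a group homomorphism of the target group, so the product of pairings can share a single final exponentiation. A minimal, self-contained sketch of that identity (using integers mod a prime as a stand-in for the target group, and an arbitrary exponent `D` as a stand-in for the final-exponentiation exponent; none of this is the actual pairing code):

```rust
// Toy illustration: since FE(x) = x^D is multiplicative,
//   prod_i FE(miller_i) == FE(prod_i miller_i),
// so n pairings need only one "final exponentiation".

const P: u128 = 1_000_000_007; // toy modulus standing in for the target group
const D: u128 = 12_345;        // toy final-exponentiation exponent

fn pow_mod(mut base: u128, mut exp: u128) -> u128 {
    let mut acc = 1u128;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % P;
        }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}

fn main() {
    // Pretend these are the n Miller loop outputs.
    let miller_outputs: Vec<u128> = vec![17, 923, 4_801, 999_999];

    // Naive: one expensive exponentiation per pairing, then multiply.
    let naive: u128 = miller_outputs
        .iter()
        .map(|&m| pow_mod(m, D))
        .fold(1, |acc, x| acc * x % P);

    // Batched: multiply the Miller loop outputs first, then exponentiate once.
    let product: u128 = miller_outputs.iter().fold(1, |acc, &x| acc * x % P);
    let batched = pow_mod(product, D);

    assert_eq!(naive, batched);
    println!("naive == batched: {}", naive == batched);
}
```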
inner_products/src/lib.rs
Outdated
```rust
        right.len(),
    )));
};
Ok(left.iter().zip(right).map(|(g, x)| g.mul(x)).sum())
```
For larger inputs (len > 100), you might want to use something like https://github.com/scipr-lab/zexe/blob/38b6f6f2c6b9a6a1b5e8eb90b1287c98b06520d1/algebra-core/src/msm/variable_base.rs#L100
It should speed up this inner product massively.
👍 Had to change the generic parameter away from `Group` to a curve in order to use `VariableMSM::multi_scalar_mul`.
```rust
D: Digest,
IP: InnerProduct<
    LeftMessage = LMC::Message,
    RightMessage = RMC::Message,
    Output = IPC::Message,
>,
LMC: DoublyHomomorphicCommitment,
RMC: DoublyHomomorphicCommitment<Scalar = LMC::Scalar>,
IPC: DoublyHomomorphicCommitment<Scalar = LMC::Scalar>,
RMC::Message: MulAssign<LMC::Scalar>,
IPC::Message: MulAssign<LMC::Scalar>,
RMC::Key: MulAssign<LMC::Scalar>,
IPC::Key: MulAssign<LMC::Scalar>,
RMC::Output: MulAssign<LMC::Scalar>,
IPC::Output: MulAssign<LMC::Scalar>,
```
It might be easier to wrap things up into a `GIPAConfig` trait, maybe something like the following:
```rust
pub trait GIPAConfig {
    type Scalar: ...;
    type LMCKey: ...;
    type RMCKey: ...;
    type IPCKey: ...;
    type LMCMsg: ...;
    type RMCMsg: ...;
    type IPCMsg: ...;
    type LMCOutput: ...;
    type RMCOutput: ...;
    type IPCOutput: ...;
    type IP: InnerProduct<LeftMessage = Self::LMCMsg, RightMessage = Self::RMCMsg, Output = Self::IPCMsg>;
    type LMC: ...;
    type RMC: ...;
    type IPC: ...;
}
```
Agreed with your other comment that having a `GIPAConfig` that implements `Clone` would allow other structs that parameterize with `GIPAConfig` to simply derive `Clone`, which would be easier. Initializing a `GIPAConfig` would be slightly more annoying due to the 9 extra types (Key, Msg, Output) that need to be specified. I'm going to keep it as is for now, just because of the inertia of refactoring, but if you have a strong preference, I'm happy to make the change.

Separately: in `zexe`, is there a reason `Group` doesn't implement `Mul<Self>`, or even better `Mul<&Self>`? Currently I am using `MulAssign` as a workaround.
Do you mean `Mul<Scalar>`? It's because we didn't like the idea of right multiplication by the scalar =P
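The `Clone`-derivation point above comes from how `#[derive(Clone)]` generates bounds: it adds `C: Clone` on the type parameter itself, not on each associated type. A minimal sketch (all names here are illustrative, not the crate's actual traits) of why a config trait that is itself `Clone`, with `Clone` bounds on its associated types, lets dependent structs just derive:

```rust
// With the bounds carried by the config trait, #[derive(Clone)] on any
// struct parameterized by C: Config compiles with no extra where-clauses.

trait Config: Clone {
    type Msg: Clone;
    type Key: Clone;
}

#[derive(Clone)]
struct Proof<C: Config> {
    msgs: Vec<C::Msg>,
    key: C::Key,
}

#[derive(Clone)]
struct MyConfig;

impl Config for MyConfig {
    type Msg = u64;
    type Key = String;
}

fn main() {
    let p = Proof::<MyConfig> {
        msgs: vec![1, 2, 3],
        key: "ck".to_string(),
    };
    // Works because MyConfig: Clone and the associated types are Clone.
    let q = p.clone();
    assert_eq!(q.msgs, vec![1, 2, 3]);
    assert_eq!(q.key, "ck");
    println!("ok");
}
```

Without the config wrapper, each struct would need the nine `MulAssign`/`Clone` style bounds repeated in a manual `impl Clone`, which is the boilerplate seen in this file.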
```rust
if !(LMC::verify(ck.0, values.0, com.0)?
    && RMC::verify(ck.1, values.1, com.1)?
    && IPC::verify(&vec![ck.2.clone()], &vec![values.2.clone()], com.2)?)
```
should we parallelize these? Or do you think it would be unnecessary (verification is already fast enough)?
Yes, verification will be a significant cost. I think instead of parallelizing the calls to each of these three verifies, we will get what we want when we implement parallelism for the inner products in `inner_products`, e.g. `pedersen` uses `MultiexponentiationInnerProduct` and `afgho16` uses `PairingInnerProduct`.
ip_proofs/src/gipa.rs
Outdated
```rust
> {
    let (m_a, m_b) = values;
    let (ck_a, ck_b, ck_t) = ck;
    match m_a.len() {
```
Maybe this can be replaced with:

```rust
if !m_a.len().is_power_of_two() {
    return Err(...);
}
if m_a.len() == 1 {
    // something
} else {
    // something else
}
```
👍
Fixed when unrolling recursion.
ip_proofs/src/gipa.rs
Outdated
```rust
let (mut r_steps, r_base, mut r_transcript, ck_base) = Self::recursive_prove(
    (&m_a_recurse, &m_b_recurse),
    (&ck_a_recurse, &ck_b_recurse, ck_t),
    &c,
)?;
```
Is there a way to do this in place/using iteration? Currently the memory required increases with the depth of recursion
👍 Yes, I've made this change now for all of the recursive implementations in `gipa`.
ip_proofs/src/gipa.rs
Outdated
```rust
    }
}

fn recursive_verify(
```
Is there a way to reduce this to a single MSM, like what Bulletproofs does? Instead of essentially repeating the prover? This should provide great speed-ups for the verifier.
Yes, we can do something like that (looking at Section 3.1 of BBBPWM18). I see that BBBPWM18 describes the same bit representation trick that I had emailed you all about in the TIPP context to calculate these scalars in linear time (Section 6.2, "Computing scalars").

I changed the verification to perform the multiexponentiation all at once at the end. However, it does not currently get the gains of using `VariableMSM::multi_scalar_mul` since we do not bind the commitment keys (`LMC::Key`, `RMC::Key`) to `AffineCurve` -- a downside of the generalization. Perhaps we can add two inner products to the generic parameterization to do this? Not a super clean solution... Thoughts?
One way around this would be to add a "multi-scalar-mul" trait, and invoke that? That trait would have different implementations for fields vs for curves. (Maybe that belongs in zexe, and not here?)
We can leave that to a future PR; I filed an issue for it.
```rust
    }
}

impl<IP, LMC, RMC, IPC, D> Clone for GIPAProof<IP, LMC, RMC, IPC, D>
```
If moving to the `GIPAConfig` idea, you can just derive this.
See other response.
```rust
    }
}

impl<IP, LMC, RMC, IPC, P, D> TIPA<IP, LMC, RMC, IPC, P, D>
```
Would it make the code more modular/abstracted if you used https://github.com/scipr-lab/poly-commit/tree/master/src/kzg10 ?
(I don't know the answer, because some computations might be different)
There are some extra computations we need. For example, we need to support shifting by a scalar `r`. That can still be made to work on top of a KZG commitment abstraction. However, the problem is that the library is fixed to commit to G1, whereas we need KZG commitments on both G1 and G2.
ip_proofs/src/tipa/mod.rs
Outdated
```rust
    }
}

pub fn structured_generators_scalar_power<G: Group>(
```
If not using the existing KZG10 impl, then you should use https://github.com/scipr-lab/zexe/blob/38b6f6f2c6b9a6a1b5e8eb90b1287c98b06520d1/algebra-core/src/msm/fixed_base.rs
👍 Changed parameterization to `ProjectiveCurve`.
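For context, `structured_generators_scalar_power` produces the structured key (g, g^s, g^(s^2), ...), and fixed-base MSM helps because every exponentiation shares the same base g, so a precomputed table of g^(2^j) is built once and reused. A self-contained sketch with integers mod a prime standing in for the group (toy values throughout; not the zexe `fixed_base` code, which uses a windowed table rather than this bitwise one):

```rust
const P: u64 = 2_147_483_647; // toy group modulus (prime)
const ORDER: u64 = P - 1;     // exponents live mod the group order

fn mul(a: u64, b: u64) -> u64 {
    (a as u128 * b as u128 % P as u128) as u64
}

fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1u64;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = mul(acc, base);
        }
        base = mul(base, base);
        exp >>= 1;
    }
    acc
}

// Shared-base exponentiation using a precomputed table of g^(2^j).
fn pow_with_table(table: &[u64], mut exp: u64) -> u64 {
    let mut acc = 1u64;
    let mut j = 0;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = mul(acc, table[j]);
        }
        exp >>= 1;
        j += 1;
    }
    acc
}

// Returns [g, g^s, g^(s^2), ..., g^(s^(n-1))].
fn structured_powers(g: u64, s: u64, n: usize) -> Vec<u64> {
    let mut table = vec![g % P; 64]; // table[j] = g^(2^j), built once
    for j in 1..64 {
        table[j] = mul(table[j - 1], table[j - 1]);
    }
    let mut exps = Vec::with_capacity(n);
    let mut e = 1u64; // running power s^i
    for _ in 0..n {
        exps.push(e);
        e = (e as u128 * s as u128 % ORDER as u128) as u64;
    }
    exps.into_iter().map(|e| pow_with_table(&table, e)).collect()
}

fn main() {
    let keys = structured_powers(5, 1234, 4);
    assert_eq!(keys[0], 5);
    assert_eq!(keys[1], pow_mod(5, 1234));
    assert_eq!(keys[2], pow_mod(pow_mod(5, 1234), 1234)); // g^(s^2) == (g^s)^s
    println!("ok");
}
```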
```rust
//TODO: Structured message is a special case of the non-committed message and does not rely on TIPA
//TODO: Can support structured group element messages as well as structured scalar messages

// TODO: Currently implemented by completely reusing TIPA, which means wasted prover and verifier effort on B
```
Maybe you can extract out the common code into a method on TIPA, and then use that method in both TIPA and here.
Yeah, I think something like that could work. The problem with using TIPA black-box as I am right now is that it attempts to calculate the structured KZG polynomial commitment proof for both `left` and `right`. We would want to extract the polynomial commitment proving and verification code from the main prove and verify methods, so that they can be called separately for `left` and `right`.

Then, to do TIPA for `left` and a structured scalar message for `right`, we can first run GIPA using a `PlaceholderCommitment` that has null commitment keys and null commitments, and then run the TIPA polynomial commitment code on just `left`.

I haven't made this change yet, but commenting here to make note of this as a possible plan forward.
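A hypothetical sketch of what such a placeholder could look like (the trait below is a heavily simplified stand-in for the crate's `DoublyHomomorphicCommitment`, and `PlaceholderCommitment` is an invented name from the plan above, not existing code): keys and outputs are the unit type, so committing is free and verification trivially succeeds, leaving that side of GIPA effectively uncommitted.

```rust
// Simplified stand-in trait: commit + verify-by-recomputation.
trait DoublyHomomorphicCommitment {
    type Message;
    type Key: Clone;
    type Output: PartialEq;
    fn commit(key: &[Self::Key], msg: &[Self::Message]) -> Self::Output;
    fn verify(key: &[Self::Key], msg: &[Self::Message], com: &Self::Output) -> bool {
        &Self::commit(key, msg) == com
    }
}

struct PlaceholderCommitment;

impl DoublyHomomorphicCommitment for PlaceholderCommitment {
    type Message = u64; // e.g. a structured scalar message
    type Key = ();      // null commitment key
    type Output = ();   // null commitment

    fn commit(_key: &[()], _msg: &[u64]) {
        // Nothing to do: the "commitment" carries no information.
    }
}

fn main() {
    let msgs = vec![1u64, 5, 25, 125]; // (1, r, r^2, r^3) with r = 5
    let keys = vec![(); msgs.len()];
    let com = PlaceholderCommitment::commit(&keys, &msgs);
    assert!(PlaceholderCommitment::verify(&keys, &msgs, &com));
    println!("ok");
}
```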
Thanks for the comments! Sorry I wasn't able to get to it this week. I'll respond soon!
This pull request provides the following additions:

- `inner_products`: Generic trait for inner products, including implementations for pairing, multiexponentiation, and scalar inner products
- `dh_commitments`: Generic trait for doubly-homomorphic commitments, including implementations for Pedersen and AFGHO16
- `ip_proofs/gipa`: Generic implementation of the generalized inner product argument with respect to a set of doubly-homomorphic commitments and an inner product.
- `ip_proofs/tipa`: Generic implementation of the trusted inner product argument for DH commitments that have commitment keys compatible with the TIPP KZG polynomial commitment trick, e.g. Pedersen and AFGHO16.
- `ip_proofs/tipa/structured_scalar_message`: Extends TIPP with alternate proving and verification algorithms when one of the messages in the inner product is a structured scalar of the form (1, r, r^2, ...). (Trusted MIPP is captured by this.)
- `ip_proofs/examples/groth16_aggregation`: Example using TIPP and MIPP for Groth16 proof aggregation.

I moved the previous code for SIPP into `sipp`. It might be nice to generalize the "plaintext" message option that is used in SIPP and `structured_scalar_message` in a future refactor. Currently they fall outside the abstraction of GIPA and DH commitments.

I tried my best to keep things modular and reuse code as much as possible. In some cases, this led to some repeated computation, so there is plenty of room for optimization. I have also not yet implemented any support for parallel computation.