
GIPA, TIPP, MIPP, and example of Groth16 proof aggregation #3

Merged: 28 commits, Sep 15, 2020

Conversation

@nirvantyagi (Collaborator):

This pull request provides the following additions:

  • inner_products: Generic trait for inner products, including implementations for pairing, multiexponentiation, and scalar inner products.
  • dh_commitments: Generic trait for doubly-homomorphic commitments, including implementations for Pedersen and AFGHO16.
  • ip_proofs/gipa: Generic implementation of the generalized inner product argument with respect to a set of doubly-homomorphic commitments and an inner product.
  • ip_proofs/tipa: Generic implementation of the trusted inner product argument for DH commitments whose commitment keys are compatible with the TIPP KZG polynomial commitment trick, e.g. Pedersen and AFGHO16.
  • ip_proofs/tipa/structured_scalar_message: Extends TIPP with alternate proving and verification algorithms for when one of the messages in the inner product is a structured scalar of the form (1, r, r^2, ...); trusted MIPP is captured by this.
  • ip_proofs/examples/groth16_aggregation: Example using TIPP and MIPP for Groth16 proof aggregation.

I moved the previous code for SIPP into sipp. It might be nice to generalize the "plaintext" message option that is used in SIPP and structured_scalar_message in a future refactor. Currently they fall outside the abstraction of GIPA and DH commitments.

I tried my best to keep things modular and reuse code as much as possible. In some cases, this led to some repeated computation, so there is plenty of room for optimization. I have also not yet implemented any support for parallel computation.
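
For orientation, here is a rough sketch of the shape of the two core traits; the names follow the modules above, but the exact bounds and method signatures in the crate may differ:

```rust
use std::error::Error;
use std::ops::{Add, MulAssign};

// Sketch only: an inner product maps a slice of left messages and a slice of
// right messages to a single output (pairing, multiexponentiation, or scalar).
pub trait InnerProduct {
    type LeftMessage;
    type RightMessage;
    type Output;

    fn inner_product(
        left: &[Self::LeftMessage],
        right: &[Self::RightMessage],
    ) -> Result<Self::Output, Box<dyn Error>>;
}

// Sketch only: a doubly-homomorphic commitment is homomorphic in both the
// message and the commitment key, which is what GIPA's recursive folding uses.
pub trait DoublyHomomorphicCommitment {
    type Scalar;
    type Message: Clone + Add<Output = Self::Message> + MulAssign<Self::Scalar>;
    type Key: Clone + Add<Output = Self::Key> + MulAssign<Self::Scalar>;
    type Output: Clone + Add<Output = Self::Output> + MulAssign<Self::Scalar>;

    fn commit(key: &[Self::Key], msg: &[Self::Message]) -> Result<Self::Output, Box<dyn Error>>;
}
```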

@Pratyush (Member):

Hi Nirvan, sorry for the delay on this; I'll try to leave a detailed review today!

@nirvantyagi (Collaborator, author):

No worries. I have some updated benchmarks for the Groth16 proof aggregation (I realized I wasn't building the optimized binary), so now the numbers are more or less what we would expect from counting the group operations.

| # of proofs | generation (ms) | aggregation (ms) | verification (ms) |
|------------:|----------------:|-----------------:|------------------:|
| 16          | 130             | 690              | 138                |
| 64          | 572             | 2890             | 187                |
| 256         | 2095            | 11650            | 232                |
| 1024        | 9238            | 48976            | 280                |

Comment on lines 82 to 84:

```rust
let mut out = Vec::new();
out.extend_from_slice(m);
Ok(IdentityOutput(out))
```

@Pratyush (Member):

Suggested change:

```suggestion
Ok(IdentityOutput(m.to_vec()))
```

@nirvantyagi (Collaborator, author):

👍

```rust
use inner_products::{InnerProduct, MultiexponentiationInnerProduct};

#[derive(Clone)]
pub struct PedersenCommitment<G: Group> {
```
@Pratyush (Member):

You can change from G: Group to C: ProjectiveCurve if that helps you here (for efficiency or otherwise)

@nirvantyagi (Collaborator, author):

👍

I changed it to ProjectiveCurve in order to use the improved VariableMSM calculation of MultiexponentiationInnerProduct.

Comment on lines 71 to 74:

```rust
left.iter()
    .zip(right)
    .map(|(v, a)| P::pairing(v.clone().into(), a.clone().into()))
    .product(),
```

@Pratyush (Member):

This does n final exponentiations; you probably want to use something like this: https://github.com/scipr-lab/ripp/blob/e18c63cdda5710c9c65a7ec43532ac8281af3f35/src/lib.rs#L150

@nirvantyagi (Collaborator, author):

👍

Done. I used:

https://github.com/scipr-lab/zexe/blob/master/algebra-core/src/curves/mod.rs#L73

I think adding in parallelization support can be done in a future pass?
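
For reference, the shape of the change is roughly the following (illustrative sketch; the exact zexe crate paths and trait names may differ from what the code uses):

```rust
use algebra_core::{PairingEngine, ProjectiveCurve};

// Sketch: batch all Miller loops and apply the final exponentiation once,
// rather than computing n full pairings and multiplying the results.
fn pairing_inner_product<P: PairingEngine>(
    left: &[P::G1Projective],
    right: &[P::G2Projective],
) -> P::Fqk {
    let pairs: Vec<(P::G1Prepared, P::G2Prepared)> = left
        .iter()
        .zip(right)
        .map(|(a, b)| (a.into_affine().into(), b.into_affine().into()))
        .collect();
    // product_of_pairings runs the Miller loops together and performs a
    // single final exponentiation (per the linked algebra-core code).
    P::product_of_pairings(pairs.iter())
}
```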

```rust
right.len(),
)));
};
Ok(left.iter().zip(right).map(|(g, x)| g.mul(x)).sum())
```

@Pratyush (Member):

For larger inputs (len > 100), you might want to use something like https://github.com/scipr-lab/zexe/blob/38b6f6f2c6b9a6a1b5e8eb90b1287c98b06520d1/algebra-core/src/msm/variable_base.rs#L100

It should speed up this inner product massively.

@nirvantyagi (Collaborator, author):

👍

Had to change the generic parameter away from Group to a curve in order to use VariableMSM::multi_scalar_mul.
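
Roughly, the inner product then becomes a single variable-base MSM (sketch only; exact zexe paths and names may differ):

```rust
use algebra_core::{msm::VariableBaseMSM, PrimeField, ProjectiveCurve};

// Sketch: with the generic parameter bound to a curve, <g, x> = sum_i x_i * g_i
// can go through the bucketed variable-base MSM instead of a naive sum of
// per-element scalar multiplications.
fn multiexp_inner_product<C: ProjectiveCurve>(left: &[C], right: &[C::ScalarField]) -> C {
    let bases: Vec<C::Affine> = left.iter().map(|g| g.into_affine()).collect();
    let scalars: Vec<_> = right.iter().map(|s| s.into_repr()).collect();
    VariableBaseMSM::multi_scalar_mul(&bases, &scalars)
}
```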

Comment on lines +20 to +34:

```rust
D: Digest,
IP: InnerProduct<
    LeftMessage = LMC::Message,
    RightMessage = RMC::Message,
    Output = IPC::Message,
>,
LMC: DoublyHomomorphicCommitment,
RMC: DoublyHomomorphicCommitment<Scalar = LMC::Scalar>,
IPC: DoublyHomomorphicCommitment<Scalar = LMC::Scalar>,
RMC::Message: MulAssign<LMC::Scalar>,
IPC::Message: MulAssign<LMC::Scalar>,
RMC::Key: MulAssign<LMC::Scalar>,
IPC::Key: MulAssign<LMC::Scalar>,
RMC::Output: MulAssign<LMC::Scalar>,
IPC::Output: MulAssign<LMC::Scalar>,
```

@Pratyush (Member):

It might be easier to wrap things up into a GIPAConfig trait, maybe something like the following:

```rust
pub trait GIPAConfig {
    type Scalar: ...;

    type LMCKey: ...;
    type RMCKey: ...;
    type IPCKey: ...;

    type LMCMsg: ...;
    type RMCMsg: ...;
    type IPCMsg: ...;

    type LMCOutput: ...;
    type RMCOutput: ...;
    type IPCOutput: ...;

    type IP: InnerProduct<LeftMessage = Self::LMCMsg, RightMessage = Self::RMCMsg, Output = Self::IPCMsg>;

    type LMC: ...;
    type RMC: ...;
    type IPC: ...;
}
```

@nirvantyagi (Collaborator, author):

Agreed with your other comment that having a GIPAConfig that implements Clone would allow other structs parameterized by GIPAConfig to simply derive Clone, which would be easier.

Initializing a GIPAConfig would be slightly more annoying due to the 9 extra types (Key, Msg, Output) that need to be specified.

I'm going to keep it as is for now, mostly out of inertia about refactoring, but if you have a strong preference, I'm happy to make the change.

Separately: in zexe, is there a reason Group doesn't implement Mul<Self> or, even better, Mul<&Self>? I'm currently using MulAssign as a workaround.

@Pratyush (Member):

Do you mean Mul<Scalar>? It's because we didn't like the idea of right multiplication by the scalar =P

Comment on lines +114 to +116:

```rust
if !(LMC::verify(ck.0, values.0, com.0)?
    && RMC::verify(ck.1, values.1, com.1)?
    && IPC::verify(&vec![ck.2.clone()], &vec![values.2.clone()], com.2)?)
```

@Pratyush (Member):

Should we parallelize these? Or do you think it would be unnecessary (verification is already fast enough)?

@nirvantyagi (Collaborator, author):

Yes, verification will be a significant cost. I think instead of parallelizing the calls to each of these three verifies, we will get what we want when we implement parallelism for the inner products in inner_products, e.g. pedersen uses MultiexponentiationInnerProduct and afgho16 uses PairingInnerProduct.
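
When we do add it, something along these lines should suffice for the scalar case, and the pairing and multiexponentiation inner products can be chunked similarly (sketch only, not part of this PR):

```rust
use rayon::prelude::*;
use std::ops::Mul;

// Sketch: a rayon-parallel scalar inner product. The pairing and MSM inner
// products would instead split their inputs into chunks, compute each chunk's
// partial result on a worker thread, and combine the partial results.
fn scalar_inner_product<F>(left: &[F], right: &[F]) -> F
where
    F: Copy + Send + Sync + Mul<Output = F> + std::iter::Sum<F>,
{
    left.par_iter()
        .zip(right.par_iter())
        .map(|(a, b)| *a * *b)
        .sum()
}
```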

```rust
> {
let (m_a, m_b) = values;
let (ck_a, ck_b, ck_t) = ck;
match m_a.len() {
```

@Pratyush (Member, Sep 1, 2020):

Maybe this can be replaced with:

```rust
if !m_a.len().is_power_of_two() {
    return Err(...);
}
if m_a.len() == 1 {
    something
} else {
    something else
}
```

@nirvantyagi (Collaborator, author):

👍

Fixed when unrolling recursion.

Comment on lines 272 to 276:

```rust
let (mut r_steps, r_base, mut r_transcript, ck_base) = Self::recursive_prove(
    (&m_a_recurse, &m_b_recurse),
    (&ck_a_recurse, &ck_b_recurse, ck_t),
    &c,
)?;
```

@Pratyush (Member):

Is there a way to do this in place/using iteration? Currently the memory required increases with the depth of recursion

@nirvantyagi (Collaborator, author):

👍

Yes, I've made this change now for all of the recursive implementations in gipa.
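
The shape of the change is roughly this (sketch only; the real code also folds the commitment keys and appends to the transcript each round):

```rust
use std::ops::{Add, Mul};

// Sketch: one halving step done in place; the upper half is folded into the
// lower half, scaled by the round challenge, and the vector is truncated, so
// the working vectors are reused across rounds instead of reallocated at each
// level of recursion.
fn fold_in_place<T, S>(v: &mut Vec<T>, c: S)
where
    T: Clone + Add<Output = T> + Mul<S, Output = T>,
    S: Copy,
{
    let split = v.len() / 2;
    for i in 0..split {
        v[i] = v[i].clone() + v[i + split].clone() * c;
    }
    v.truncate(split);
}

// The recursion then becomes a loop that halves until one element remains:
//     while m_a.len() > 1 {
//         let c = /* round challenge from the transcript */;
//         fold_in_place(&mut m_a, c);
//         fold_in_place(&mut m_b, c_inv);
//     }
```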

```rust
}
}

fn recursive_verify(
```

@Pratyush (Member):

Is there a way to reduce this to a single MSM, like what Bulletproofs does, instead of essentially repeating the prover? This should provide great speed-ups for the verifier.

@nirvantyagi (Collaborator, author):

Yes, we can do something like that (looking at Section 3.1 of BBBPWM18).

I see that BBBPWM18 describes the same bit representation trick that I had emailed you all about in the TIPP context to calculate these scalars in linear time (Section 6.2 "Computing scalars").

I changed the verification to perform the multiexponentiation all at once at the end. However, it does not currently get the gains of using VariableMSM::multi_scalar_mul, since we do not bind the commitment keys (LMC::Key, RMC::Key) to AffineCurve, which is a downside of the generalization. Perhaps we can add two inner products to the generic parameterization to do this? Not a super clean solution... Thoughts?
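
For reference, the linear-time scalar computation looks roughly like this (sketch; the exact ordering and whether inverse challenges appear depend on gipa's folding convention):

```rust
use std::ops::Mul;

// Sketch: build the n scalars of the verifier's single multiexponentiation
// from the log(n) round challenges, using about n multiplications. Assumes
// each round folds the upper half into the lower half scaled by its challenge,
// so index i's scalar is the product of the challenges of the rounds in which
// i sat in the upper half.
fn verification_scalars<S: Copy + Mul<Output = S>>(challenges: &[S], one: S) -> Vec<S> {
    // challenges[0] is the first (outermost) round's challenge
    let mut scalars = vec![one];
    for &c in challenges.iter().rev() {
        let mut next = Vec::with_capacity(2 * scalars.len());
        next.extend_from_slice(&scalars);
        next.extend(scalars.iter().map(|&s| s * c));
        scalars = next;
    }
    scalars
}
```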

@Pratyush (Member):

One way around this would be to add a "multi-scalar-mul" trait, and invoke that? That trait would have different implementations for fields vs for curves. (Maybe that belongs in zexe, and not here?)

@Pratyush (Member):

We can leave that to a future PR; I filed an issue for it.

```rust
}
}

impl<IP, LMC, RMC, IPC, D> Clone for GIPAProof<IP, LMC, RMC, IPC, D>
```

@Pratyush (Member):

If moving to the GIPAConfig idea, you can just derive this.

@nirvantyagi (Collaborator, author):

See other response.

```rust
}
}

impl<IP, LMC, RMC, IPC, P, D> TIPA<IP, LMC, RMC, IPC, P, D>
```

@Pratyush (Member):

Would it make the code more modular/abstracted if you used https://github.com/scipr-lab/poly-commit/tree/master/src/kzg10 ?

(I don't know the answer, because some computations might be different)

@nirvantyagi (Collaborator, author):

There are some extra computations we need. For example, we need to support shifting by a scalar r. That can still be made to work on top of a KZG commitment abstraction. However, the problem is that the library is fixed to commit to G1, whereas we need KZG commitments on both G1 and G2.
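
Sketched abstractly, a group-generic commit is straightforward (the bounds here are illustrative placeholders for whatever scalar-multiplication interface the commitment keys expose):

```rust
use std::iter::Sum;
use std::ops::Mul;

// Sketch: committing to polynomial coefficients against a structured key
// (powers of tau in either G1 or G2). Writing it generically over the group
// element type G and scalar type S is what lets the same code serve both
// source groups; the bounds below stand in for the real curve traits.
fn kzg_commit<G, S>(powers_of_tau: &[G], coeffs: &[S]) -> G
where
    G: Copy + Mul<S, Output = G> + Sum<G>,
    S: Copy,
{
    powers_of_tau
        .iter()
        .zip(coeffs)
        .map(|(&g, &c)| g * c)
        .sum()
}
```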

```rust
}
}

pub fn structured_generators_scalar_power<G: Group>(
```

@Pratyush (Member):

@nirvantyagi (Collaborator, author, Sep 9, 2020):

👍 Changed parameterization to ProjectiveCurve

```rust
//TODO: Structured message is a special case of the non-committed message and does not rely on TIPA
//TODO: Can support structured group element messages as well as structured scalar messages

// TODO: Currently implemented by completely reusing TIPA, which means wasted prover and verifier effort on B
```

@Pratyush (Member):

Maybe you can extract out the common code into a method on TIPA, and then use that method in both TIPA and here.

@nirvantyagi (Collaborator, author):

Yeah, I think something like that could work. The problem with using TIPA as a black box, as I am right now, is that it attempts to calculate the structured KZG polynomial commitment proof for both left and right. We would want to extract the polynomial commitment proving and verification code from the main prove and verify methods, so that they can be called separately for left and right.

Then to do TIPA for left and a structured scalar message for right, we can first run GIPA using a PlaceholderCommitment that has null commitment keys and null commitments. Then run the TIPA polynomial commitment code on just left.

I haven't made this change yet, but commenting here to make note of this as a possible plan forward.
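
Roughly, the placeholder would look like this (hypothetical sketch, not implemented in this PR; the method names and signatures are stand-ins for the dh_commitments interface):

```rust
use core::marker::PhantomData;

// Hypothetical sketch: a "commitment" whose keys and outputs carry no
// information, so GIPA can fold an uncommitted (plaintext) message while
// still going through the doubly-homomorphic-commitment shape.
pub struct PlaceholderCommitment<M>(PhantomData<M>);

impl<M> PlaceholderCommitment<M> {
    // Null commitment key: nothing to sample.
    pub fn setup(_size: usize) -> Vec<()> {
        Vec::new()
    }

    // Null commitment: ignores the message entirely.
    pub fn commit(_key: &[()], _msg: &[M]) {}

    // Verification is trivially satisfied, since the commitment is null.
    pub fn verify(_key: &[()], _msg: &[M], _com: &()) -> bool {
        true
    }
}
```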

@nirvantyagi (Collaborator, author):

Thanks for the comments! Sorry I wasn't able to get to it this week. I'll respond soon!
