Add an optimization for trivial batch sizes. #151
Conversation
When told to validate zero signatures, there's no need to go through any pre-computation to be sure that there are no invalid signatures in the empty set. When told to validate one signature, a single signature verification call is faster than validating a single signature via the batch-verification mechanism.

On my not-very-reliable desktop, the performance difference is ~99% when there are no signatures, and ~25% when there is one signature.

Why would a programmer use batch verification in this case? Sometimes it makes sense to write code that handles an unknown number of signatures at once: for example, when validating a download that contains multiple objects, between zero and many of which are ed25519-signed objects. Instead of the programmer having to pick which API to use, IMO it's better just to make the "batch" API fast in all cases.

This commit adds cases for 0- and 1-signature batches to the benchmark. It also adds a case for a 2-signature batch, to verify that batch verification is faster for all n > 1.
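To make the trade-off concrete, here is a minimal sketch of the dispatch a caller would otherwise have to write by hand. It is my own illustration against the ed25519-dalek 1.x public API with the `batch` feature enabled; `verify_any` is a hypothetical name, not part of the crate.

```rust
use ed25519_dalek::{verify_batch, PublicKey, Signature, SignatureError, Verifier};

/// Skip the batch machinery for zero signatures, use plain single
/// verification for one, and fall back to verify_batch otherwise.
fn verify_any(
    messages: &[&[u8]],
    signatures: &[Signature],
    public_keys: &[PublicKey],
) -> Result<(), SignatureError> {
    match signatures.len() {
        // The empty set trivially contains no invalid signatures.
        0 => Ok(()),
        // A single signature is cheaper to check directly.
        1 => public_keys[0].verify(messages[0], &signatures[0]),
        // Batch verification wins for every n > 1.
        _ => verify_batch(messages, signatures, public_keys),
    }
}
```

This PR moves that dispatch inside `verify_batch` itself, so callers no longer have to pick an API per batch size.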
Hi @nmathewson! Congratulations on your first Rust patch!
I'm not sure how I feel about mixing the verification equations. Perhaps @valerini, @huitseeker, or @kchaliki have opinions? (Context: we did a bunch of work together documenting all the different types of ed25519 malleabilities.)
return Ok(());
} else if signatures.len() == 1 {
    use crate::ed25519::signature::Verifier;
    return public_keys[0].verify(messages[0], &signatures[0]);
Hmm. Thinking out loud here. I'm not entirely sure if I'm okay with mixing the verification equations because of the subtle differences between the two (but anyone should feel free to try to convince me). See for example my and @valerini's comments on this issue. I suppose in general the single verification is safer than the linearised batching formula, but I still worry that mixing the two behaviours will result in difficult-to-debug verification errors like the following situation:

- I call `verify_batch()` on bad signature `(s1, R1)`, which fails.
- I call `verify_batch()` on bad signature `(s1, R1)` and crafted signature `(s2, R2)`, which probabilistically cancels out the linearisation factors and succeeds.

However, I suppose the above behaviour is better than the current behaviour, where calling `verify_batch()` on `(s2, R2)` alone would probabilistically succeed.
Okay, I think I've convinced myself that this behaviour is safer. My only remaining concern is the debugging confusion it might cause, with a signature not verifying when passed into this function by itself versus in a batch with N > 1.
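For context, a sketch of the two acceptance criteria being mixed, in the usual Ed25519 notation (my own summary, with notation assumed rather than taken from this thread). Cofactorless single verification of $(R, s)$ on message $m$ under key $A$ accepts when

$$[s]B = R + [H(R \,\|\, A \,\|\, m)]\,A,$$

while the linearised batch check draws verifier-chosen random scalars $z_i$ and, writing $h_i = H(R_i \,\|\, A_i \,\|\, m_i)$, accepts when

$$\Big[-\sum_i z_i s_i\Big]B + \sum_i [z_i]\,R_i + \sum_i [z_i h_i]\,A_i = \mathcal{O}$$

(with both sides additionally multiplied by the cofactor 8 in dalek's batch path, as discussed below). If an $R_i$ or $A_i$ carries a small-order component, that component can cancel in the batch sum for some choices of $z_i$ even though the corresponding single equation fails, which is where the "probabilistically" above comes from.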
Hi @isislovecruft - I'm afraid you have the wrong person; you're probably looking for @kchalkias instead.
Table 3 in the paper details the issue at hand:
Dalek has a cofactorless single verification with a cofactored batch verification (column "[2]+[3]"), so we can create what you'd call "false negatives" from the PoV of batch verification as a heuristic: a set S of signatures that will (probabilistically) pass batch verification but fail (iterated) single verification.
The present PR eliminates this possibility when N = |S| = 1 by making sure only single verification applies, but does not change the case |S| > 1, as you mentioned.
I think the way to fix the issue is to:

1. add a cofactored single signature verification (a rough sketch follows below),
2. make sure the batch verification is only available when using this cofactored verification, by uniting them both under a "cofactored" feature flag (which would replace the current "batch" feature),
3. then implement the spirit of this PR: reduce the batch verification call when N = 1 to the (now cofactored) single verification.

I'm happy to help with 1. & 2.
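As a rough illustration of what step 1 might look like, assuming the ed25519-dalek 1.x and curve25519-dalek 3.x APIs (`verify_cofactored` and its internals are illustrative only, not an existing API):

```rust
use curve25519_dalek::edwards::{CompressedEdwardsY, EdwardsPoint};
use curve25519_dalek::scalar::Scalar;
use curve25519_dalek::traits::IsIdentity;
use ed25519_dalek::{PublicKey, Signature, SignatureError};
use sha2::{Digest, Sha512};

/// Hypothetical cofactored single verification: accept iff
/// [8]([s]B - [k]A - R) is the identity, rather than requiring
/// [s]B - [k]A == R exactly, as the current cofactorless check does.
fn verify_cofactored(
    public_key: &PublicKey,
    message: &[u8],
    signature: &Signature,
) -> Result<(), SignatureError> {
    let sig_bytes = signature.to_bytes();
    let (r_bytes, s_bytes) = sig_bytes.split_at(32);

    // Decompress R and A, rejecting off-curve encodings.
    let r = CompressedEdwardsY::from_slice(r_bytes)
        .decompress()
        .ok_or_else(SignatureError::new)?;
    let minus_a = -CompressedEdwardsY::from_slice(&public_key.to_bytes())
        .decompress()
        .ok_or_else(SignatureError::new)?;

    // s must be a canonical scalar.
    let mut s_arr = [0u8; 32];
    s_arr.copy_from_slice(s_bytes);
    let s = Scalar::from_canonical_bytes(s_arr).ok_or_else(SignatureError::new)?;

    // k = H(R || A || M), reduced modulo the group order.
    let mut h = Sha512::new();
    h.update(r_bytes);
    h.update(public_key.to_bytes());
    h.update(message);
    let k = Scalar::from_hash(h);

    // R' = [s]B - [k]A; accept iff R' - R vanishes after clearing the cofactor.
    let r_prime = EdwardsPoint::vartime_double_scalar_mul_basepoint(&k, &minus_a, &s);
    if (r_prime - r).mul_by_cofactor().is_identity() {
        Ok(())
    } else {
        Err(SignatureError::new())
    }
}
```

Steps 2 and 3 would then gate both this function and `verify_batch` behind the same feature, and route the N = 1 batch case through it.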
Actually, features cannot be used here @huitseeker, due to being additive: any crate using "standard" cofactorless would silently be switched to cofactored whenever a crate using cofactored gets included. You could use two different
@burdges I see your point. I note, however, that we could make compilation blow up with something like
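For illustration, one hypothetical way to make such a conflict fail at build time (the feature names are purely illustrative, not the crate's actual features):

```rust
// Hypothetical guard: refuse to build if two mutually exclusive
// verification behaviours are requested at the same time.
#[cfg(all(feature = "cofactored", feature = "cofactorless"))]
compile_error!("enable at most one of the `cofactored` and `cofactorless` features");
```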
But a more interesting alternative is that, since cofactored verification will always accept cofactorless-correct inputs, and the discrepancy in behavior can only be observed on signatures produced outside of any (draft-)standard (or outside this library), we could consider feature unification under cofactorless behavior as a "feature" 😈 To make things compatible with unification, we'd hence define the cofactorless verification function (still the default) under
and the corresponding cofactored verification under
The real risk occurs across libraries / implementations, where you could maliciously craft signatures so that one implementation disagrees with the other, not if all your code is "downgraded" to cofactorless unbeknownst to you. You would be downgraded, after all, to the recommended standard behavior. Because the "cofactored" feature would be non-default, the feature approach gives us a meaningful way to ensure, upon cofactored opt-in, that all the components of a code base are switched to cofactored behavior. At the admitted cost of possibly introducing a discrepancy with another implementation (e.g. ring), but:
You could just name cofactored
I see.
I suspect features cause more harm than good here, but you could have a feature that controls a type alias.
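A self-contained sketch of that idea, with all type and feature names hypothetical:

```rust
// Two stand-in verifier types; in the real crate these would be the
// cofactorless and cofactored implementations.
pub struct CofactorlessVerifier;
pub struct CofactoredVerifier;

// Downstream code always names `DefaultVerifier`; the feature decides
// which behaviour that alias resolves to.
#[cfg(not(feature = "cofactored"))]
pub type DefaultVerifier = CofactorlessVerifier;
#[cfg(feature = "cofactored")]
pub type DefaultVerifier = CofactoredVerifier;
```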
[Hello, fine Dalek maintainers! This is my first open-source Rust patch.]
When told to validate zero signatures, there's no need to go through
any pre-computation to be sure that there are no invalid signatures
in the empty set.
When told to validate one signature, a single signature verification
call is faster than validating a single signature via the
batch-verification mechanism.
On my not-very-reliable desktop, the performance difference is ~99%
when there are no signatures, and ~25% when there is one signature.
Why would a programmer use batch verification in this case?
Sometimes it makes sense to write code that handles an unknown
number of signatures at once: for example, when validating a
download that contains multiple objects, between zero and many of
which are ed25519-signed objects. Instead of the programmer having
to pick which API to use, IMO it's better just to make the "batch"
API fast in all cases.
This commit adds cases for 0- and 1-signature batches to the
benchmark. It also adds a case for a 2-signature batch, to verify
that batch verification is faster for all n > 1.
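A rough sketch of what such benchmark cases could look like (criterion is assumed, and the names below are illustrative rather than the crate's actual bench code):

```rust
// bench/trivial_batches.rs (hypothetical): time verify_batch for n = 0, 1, 2.
use criterion::{criterion_group, criterion_main, Criterion};
use ed25519_dalek::{verify_batch, Keypair, PublicKey, Signature, Signer};
use rand::rngs::OsRng;

fn bench_trivial_batches(c: &mut Criterion) {
    let msg: &[u8] = b"trivial batch sizes";
    let mut csprng = OsRng;

    for &n in &[0usize, 1, 2] {
        // Build n (key, message, signature) triples for this batch size.
        let keypairs: Vec<Keypair> = (0..n).map(|_| Keypair::generate(&mut csprng)).collect();
        let messages: Vec<&[u8]> = (0..n).map(|_| msg).collect();
        let signatures: Vec<Signature> = keypairs.iter().map(|k| k.sign(msg)).collect();
        let public_keys: Vec<PublicKey> = keypairs.iter().map(|k| k.public.clone()).collect();

        c.bench_function(&format!("verify_batch of {} signatures", n), |b| {
            b.iter(|| verify_batch(&messages, &signatures, &public_keys))
        });
    }
}

criterion_group!(benches, bench_trivial_batches);
criterion_main!(benches);
```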