The proof is wrong. It says:
This is not correct:
E.g., take p = 3 and q = 7: we get 0.476 < 0.226, which is clearly false.
To overestimate the fraction, the numerator should be taken as large as possible (2q, assuming p <= q) and the denominator as small as possible (p^2).
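A quick sanity check of the numbers (the bound 2q/p^2 is just my reading of the overestimate above, assuming p <= q):

```python
# Numeric check of the counterexample value and of the corrected
# overestimate (p+q)/(pq) <= 2q/p^2 (assuming p <= q).
p, q = 3, 7

lhs = (p + q) / (p * q)          # (p+q)/pq = 10/21 ~ 0.476
corrected_bound = 2 * q / p**2   # 2q/p^2  = 14/9  ~ 1.556

print(f"(p+q)/pq       = {lhs:.3f}")
print(f"2q/p^2 (bound) = {corrected_bound:.3f}")
assert lhs <= corrected_bound    # the corrected overestimate holds here
```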
The theorem is still obviously true though.
I'm not sure a formal proof is necessary as the result is quite intuitive, but here is my attempt:
Proof:
(p+q)/pq = p/pq + q/pq = 1/q + 1/p.
Both terms tend to 0 as p and q tend to infinity, so the sum does as well.
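If it helps with a patch, here is a rough LaTeX rendering of the same argument; I haven't matched the book's macros or theorem environments, so treat it as a sketch:

```latex
% Rough sketch only; assumes amsmath/amsthm for \[ \] and the proof
% environment, not the book's actual conventions.
\begin{proof}
  Write
  \[
    \frac{p+q}{pq} = \frac{p}{pq} + \frac{q}{pq} = \frac{1}{q} + \frac{1}{p}.
  \]
  Both terms tend to $0$ as $p$ and $q$ tend to infinity, hence so does
  their sum.
\end{proof}
```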
Happy to do a pull request if you give me the .tex code.
I did quite like the proof by security reduction though :)