Tighten language around replay protection #582

Merged 1 commit into main on Sep 25, 2024
Conversation

cjpatton (Collaborator) commented Sep 19, 2024

Closes #442.

At the moment, we have very little normative/informative language around replay protection. The only normative language is buried in the "input share validation" subsection:

> 1. Check if the report has been previously aggregated. If so, the input share
>    MUST be marked as invalid with the error `report_replayed`. A report is
>    considered aggregated if its contribution would be included in a relevant
>    collection job.

Another problem is that we never say explicitly to record the IDs of aggregated reports. Here the Aggregator has a choice: it can avoid excess computation by storing the ID of every report it sees in an aggregation job, thereby ensuring reports that are rejected for some other reason aren't processed again; or it can avoid excess storage by only storing the IDs of output shares it wants to aggregate.
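To make the tradeoff concrete, here is a minimal sketch of the two options, assuming a simple in-memory set keyed by report ID (the `ReplayStore` type and its method names are hypothetical illustrations, not part of the draft):

```rust
use std::collections::HashSet;

/// Stand-in for DAP's 16-byte report ID.
type ReportId = [u8; 16];

/// Hypothetical replay store. A real Aggregator would back this with
/// durable storage shared across aggregation jobs.
struct ReplayStore {
    seen: HashSet<ReportId>,
}

impl ReplayStore {
    /// The replay check: has this report ID been recorded before?
    fn is_replay(&self, id: &ReportId) -> bool {
        self.seen.contains(id)
    }

    /// Option 1 (more storage, less computation): record every report ID
    /// seen in an aggregation job, so a report rejected for any other
    /// reason is never processed again.
    fn record_all_seen(&mut self, ids: &[ReportId]) {
        self.seen.extend(ids.iter().copied());
    }

    /// Option 2 (less storage, possible recomputation): record only the
    /// IDs of reports whose output shares will actually be aggregated.
    fn record_aggregated_only(&mut self, aggregated: &[ReportId]) {
        self.seen.extend(aggregated.iter().copied());
    }
}
```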

This change has several parts to it:

  1. Add a section to the protocol overview about replay protection (why/how) and a paragraph to the aggregation sub-protocol overview.

  2. Remove the replay check from the "input share validation" section.

  3. Have the Leader check for replays right after it has picked a candidate set of reports. Have it store the IDs either before initialization or after completion of the aggregation job (see the sketch after this list).

  4. Have the Helper resolve replays (reject replays and update the stored set) either at the beginning of an aggregation job or just before completing it.

  5. Be explicit about the use of report IDs for replay protection. This only impacts an implementation note, which envisions multiple schemes for replay protection, like hashing the report. This note has been moved from the "input share validation" section to operational considerations.
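As a rough illustration of items 3 and 4 (building on the hypothetical `ReplayStore` above, and not normative text from the draft), the two orderings might look like this:

```rust
/// Leader side: filter replays right after picking the candidate set, then
/// store the IDs either before initializing or after completing the job.
fn leader_run_job(
    store: &mut ReplayStore,
    candidates: Vec<ReportId>,
    store_before_init: bool,
) {
    // Check for replays immediately after candidate selection.
    let fresh: Vec<ReportId> = candidates
        .into_iter()
        .filter(|id| !store.is_replay(id))
        .collect();

    if store_before_init {
        store.record_all_seen(&fresh); // store IDs before initialization...
    }

    // ... run the aggregation job with the Helper (elided) ...

    if !store_before_init {
        store.record_all_seen(&fresh); // ...or after the job completes.
    }
}

/// Helper side: resolve replays (reject them and update the stored set),
/// here at the beginning of the job; doing so just before completing the
/// job would work the same way.
fn helper_resolve_replays(
    store: &mut ReplayStore,
    incoming: &[ReportId],
) -> Vec<ReportId> {
    let mut accepted = Vec::new();
    for id in incoming {
        if store.is_replay(id) {
            continue; // reject with `report_replayed` (error handling elided)
        }
        accepted.push(*id);
    }
    store.record_all_seen(&accepted);
    accepted
}
```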

cjpatton (Collaborator, Author) commented Sep 23, 2024

Took Tim's suggestions, rebased, and squashed.

cjpatton requested a review from branlwyd, September 23, 2024 22:20
The squashed commit message repeats the description above, with the trailer:

Co-authored-by: Brandon Pitman <[email protected]>
cjpatton (Collaborator, Author) commented
Rebased and squashed.

cjpatton merged commit b99bcfd into main on Sep 25, 2024; 2 checks passed.
branlwyd added commits referencing this pull request on Oct 4 and Oct 7, 2024:

The related implementation note is removed, too -- it is nowadays duplicative of the text in the Replay Protection section.

I think this was an intended part of #582, based on the commit text of that PR.
branlwyd mentioned this pull request Oct 11, 2024
Successfully merging this pull request may close these issues:

Replay attack requirements could be tighter (#442)