Utilize NEW_TOKEN frames #1912
base: main
Conversation
Overall this looks pretty good, and seems well motivated. Thanks!
Thanks also for your patience while I got around to this; day job has been very busy lately.
quinn-proto/src/connection/mod.rs
Outdated
let new_tokens_to_send = server_config
    .as_ref()
    .map(|sc| sc.new_tokens_sent_upon_validation)
    .unwrap_or(0);
Should we assert that we don't have both a server config and a token store? Or maybe pass them in an enum?
Check out this: 765a46a
Thoughts?
On one hand, I do like that it moves things in the direction of greater static type checking. On the other hand, it adds a lot more lines of code than it removes.
I do like the look of that. Not only is it less error-prone, it also makes the callsites clearer, which is a major problem with the affected interfaces. A bit of boilerplate is a small price to pay for readability, IMO.
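For illustration, the refactor under discussion might look roughly like the following sketch. All type, field, and variant names here are hypothetical simplifications, not the actual quinn-proto definitions: the idea is that instead of passing an optional server config and an optional token store (where at most one should be `Some`), the caller passes a single enum, so invalid combinations are unrepresentable.

```rust
// Hypothetical stand-ins for the real quinn-proto types.
struct ServerConfig {
    new_tokens_sent_upon_validation: u32,
}

struct TokenStore; // client-side storage for NEW_TOKEN tokens

// One enum instead of (Option<ServerConfig>, Option<TokenStore>):
// the type system now rules out "both" and "neither".
enum SideArgs {
    Server { config: ServerConfig },
    Client { token_store: TokenStore },
}

impl SideArgs {
    // The callsite from the diff above becomes a method with no Option juggling.
    fn new_tokens_to_send(&self) -> u32 {
        match self {
            Self::Server { config } => config.new_tokens_sent_upon_validation,
            Self::Client { .. } => 0,
        }
    }
}

fn main() {
    let server = SideArgs::Server {
        config: ServerConfig { new_tokens_sent_upon_validation: 2 },
    };
    let client = SideArgs::Client { token_store: TokenStore };
    assert_eq!(server.new_tokens_to_send(), 2);
    assert_eq!(client.new_tokens_to_send(), 0);
}
```

The boilerplate cost is the extra enum and `match`, but each callsite now states which side it is on, which is the readability win discussed above.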
Force-pushed from c453b28 to 250e69d
Force-pushed from 2acdfca to 47eb6ed
Previously, retry tokens were encrypted using the retry src cid as the key derivation input. This has been described by a reputable individual as "cheeky" (who, coincidentally, wrote that code in the first place). More importantly, this presents obstacles to using NEW_TOKEN frames. With this commit, tokens carry a random 128-bit value, which is used to derive the key for encrypting the rest of the token. This commit also refactors the token.rs module and makes some decoding code slightly more defensive.

---

Note on nomenclature: RFC 9000 presents some unfortunate complications to naming things. It introduces a concept of a "token" that may cause a connection to be validated early. In some ways, these tokens must be treated differently based on whether they originated from a NEW_TOKEN frame or a Retry packet. It also introduces an unrelated concept of a "stateless reset token". If our code and documentation were to constantly use phrases like "token originating from NEW_TOKEN frame," that would be extremely cumbersome. Moreover, it would risk feeling like leaking spec internals to the user. As such, this commit tries to move things towards the following naming convention:

- A token from a NEW_TOKEN frame is called a "validation token", or "address validation token", although the shorter form should be used most often.
- A token from a Retry packet is called a "retry token". We should avoid saying "stateless retry token" because this phrase is not used at all in RFC 9000 and is confusingly similar to "stateless reset token". This commit changes public usages of that phrase.
- In the generic case of either, we call it a "token".
- We still call a stateless reset token a "reset token" or "stateless reset token".
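The per-token key derivation described above can be illustrated with a toy sketch. This is not the real implementation: `DefaultHasher` stands in for a proper KDF (e.g. HKDF) purely to keep the sketch dependency-free, and it is not cryptographically suitable.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy illustration: each token carries a fresh random 128-bit value in the
// clear, and the key protecting the rest of the token is derived from
// (endpoint master key, per-token random value). NOT real crypto: a stand-in
// hash replaces a real KDF so this compiles with std alone.
fn derive_token_key(master_key: u128, token_rand: u128) -> u64 {
    let mut h = DefaultHasher::new();
    (master_key, token_rand).hash(&mut h);
    h.finish()
}

fn main() {
    let master = 0x0123_4567_89ab_cdef_u128;
    // Distinct random values yield distinct keys, so the key no longer
    // depends on the retry src cid at all.
    let k1 = derive_token_key(master, 1);
    let k2 = derive_token_key(master, 2);
    assert_ne!(k1, k2);
    // Derivation is deterministic: the server re-derives the key from the
    // random value carried in the token it receives back.
    assert_eq!(k1, derive_token_key(master, 1));
}
```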
Force-pushed from 949027a to 87840a9
Whenever a path becomes validated, the server sends the client NEW_TOKEN frames. These may cause an Incoming to be validated.

- Adds dependency on `fastbloom`
- Converts TokenInner to enum with Retry and Validation variants
- Adds relevant configuration to ServerConfig
- Incoming now has `may_retry`
- Adds `TokenLog` object to server to mitigate token reuse
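The `TokenLog` reuse mitigation can be sketched as follows. This is a toy std-only bloom filter (the actual PR uses the `fastbloom` crate, and the real trait shape may differ): a bloom filter can produce false positives (a fresh token misjudged as reused) but never false negatives once a value has actually been recorded.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy bloom-filter token log: records token nonces so that a reused
// validation token can be detected and rejected.
struct BloomTokenLog {
    bits: Vec<u64>,
    hashes: u32,
}

impl BloomTokenLog {
    fn new(num_bits: usize, hashes: u32) -> Self {
        Self { bits: vec![0; (num_bits + 63) / 64], hashes }
    }

    // k hash functions simulated by hashing (nonce, i) for i in 0..k.
    fn indices(&self, nonce: u128) -> Vec<usize> {
        (0..self.hashes)
            .map(|i| {
                let mut h = DefaultHasher::new();
                (nonce, i).hash(&mut h);
                (h.finish() as usize) % (self.bits.len() * 64)
            })
            .collect()
    }

    /// Returns true if the nonce was fresh (token accepted), false if it was
    /// (probably) seen before (token rejected as reused).
    fn check_and_insert(&mut self, nonce: u128) -> bool {
        let mut fresh = false;
        for idx in self.indices(nonce) {
            let (word, bit) = (idx / 64, idx % 64);
            if self.bits[word] & (1u64 << bit) == 0 {
                fresh = true; // at least one unset bit => definitely new
                self.bits[word] |= 1u64 << bit;
            }
        }
        fresh
    }
}

fn main() {
    let mut log = BloomTokenLog::new(1 << 16, 4);
    assert!(log.check_and_insert(42)); // first use: accepted
    assert!(!log.check_and_insert(42)); // reuse: rejected
}
```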
Force-pushed from 370ba70 to bdf45e5
When a client receives a token from a NEW_TOKEN frame, it submits it to a TokenStore object for storage. When an endpoint connects to a server, it queries the TokenStore object for a token applicable to the server name, and uses it if one is retrieved.
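The client-side flow described above can be sketched with a minimal in-memory store. Names here are hypothetical (the real `TokenStore` is a trait in quinn-proto with its own method signatures); the sketch just shows the insert-on-receive, take-on-connect lifecycle.

```rust
use std::collections::HashMap;

// Toy in-memory token store: tokens received in NEW_TOKEN frames are kept
// per server name, and one is taken back out when dialing that server again.
struct InMemTokenStore {
    tokens: HashMap<String, Vec<Vec<u8>>>,
}

impl InMemTokenStore {
    fn new() -> Self {
        Self { tokens: HashMap::new() }
    }

    /// Called when a NEW_TOKEN frame arrives on a connection to `server_name`.
    fn insert(&mut self, server_name: &str, token: Vec<u8>) {
        self.tokens
            .entry(server_name.to_string())
            .or_default()
            .push(token);
    }

    /// Called when connecting; tokens are single-use, so the token is removed.
    fn take(&mut self, server_name: &str) -> Option<Vec<u8>> {
        self.tokens.get_mut(server_name)?.pop()
    }
}

fn main() {
    let mut store = InMemTokenStore::new();
    store.insert("example.com", vec![1, 2, 3]);
    assert_eq!(store.take("example.com"), Some(vec![1, 2, 3]));
    assert_eq!(store.take("example.com"), None); // single use
}
```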
Force-pushed from bdf45e5 to b3a469a
Normally I wouldn't mark your own comments as resolved. But since your comments were from when this was a draft PR, I marked as resolved the ones that seem definitely irrelevant to the current version of it. As mentioned on Discord, the MSRV CI failure does not seem to actually be caused by this PR.
When we first added tests::util::IncomingConnectionBehavior, we opted to use an enum instead of a callback because it seemed cleaner. However, the number of variants has grown, and adding integration tests for validation tokens from NEW_TOKEN frames threatens to make this logic even more complicated. Moreover, there is another advantage to callbacks we have not been exploiting: a stateful FnMut can assert that incoming connection handling within a test follows a certain expected sequence of Incoming properties.

As such, this commit replaces TestEndpoint.incoming_connection_behavior with a handle_incoming callback, modifies some existing tests to exploit this functionality to test more things than they did previously, and adds new integration tests for server and client usage of tokens from NEW_TOKEN frames.
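The "stateful FnMut" idea can be sketched as follows. All names are hypothetical simplifications of the quinn-proto test types; the point is that a closure capturing a counter can both decide how to handle each incoming connection and assert on the sequence it observes.

```rust
// Hypothetical stand-ins for the real test-harness types.
struct Incoming {
    remote_address_validated: bool,
}

enum Decision {
    Accept,
    Retry,
}

struct TestEndpoint {
    handle_incoming: Box<dyn FnMut(&Incoming) -> Decision>,
}

fn main() {
    // A stateful closure: expect the first connection to arrive unvalidated
    // (so we retry it) and any later one to arrive validated (so we accept).
    let mut count = 0;
    let mut endpoint = TestEndpoint {
        handle_incoming: Box::new(move |incoming| {
            count += 1;
            if count == 1 {
                assert!(!incoming.remote_address_validated);
                Decision::Retry
            } else {
                assert!(incoming.remote_address_validated);
                Decision::Accept
            }
        }),
    };

    let first = Incoming { remote_address_validated: false };
    let second = Incoming { remote_address_validated: true };
    assert!(matches!((endpoint.handle_incoming)(&first), Decision::Retry));
    assert!(matches!((endpoint.handle_incoming)(&second), Decision::Accept));
}
```

An enum of fixed behaviors cannot express this "assert on the observed sequence" pattern without growing a variant per scenario.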
Force-pushed from b3a469a to 46e204a
@@ -824,6 +824,8 @@ pub struct ServerConfig {
    pub(crate) incoming_buffer_size_total: u64,
}

const DEFAULT_RETRY_TOKEN_LIFETIME_SECS: u64 = 15;
Why extract this? There is no motivation in this commit. (And at least for this commit it could just be a Duration.)

(If we're going to do this, let's dump the const to the bottom of the module.)
quinn-proto/src/token.rs
Outdated
@@ -12,71 +14,103 @@ use crate::{
    Duration, SystemTime, RESET_TOKEN_SIZE, UNIX_EPOCH,
};

/// A retry token
pub(crate) struct RetryToken {
I think this commit has too much going on at once, it seems to have all of:
- API changes
- Logic changes
- Refactoring/adding comments
Would be nice to split this into smaller pieces that do one thing.
quinn-proto/src/endpoint.rs
Outdated
Token::decode(&*server_config.token_key, &addresses.remote, &header.token)
    .and_then(|token| {
        let TokenInner {
            orig_dst_cid,
            issued,
        } = token.inner;
        if issued + server_config.retry_token_lifetime > SystemTime::now() {
            Ok((Some(header.dst_cid), orig_dst_cid))
        } else {
            Err(TokenDecodeError::InvalidRetry)
        }
    });
Feels weird to use an and_then() combinator on a case where the closure is far larger than the original expression. Maybe this should be a method on Token instead?
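One way the suggested method might look, as a toy sketch with simplified stand-in types (the real `Token`, connection IDs, and error enum differ): the expiry check moves onto `Token`, so the callsite shrinks to a single method call.

```rust
use std::time::{Duration, SystemTime};

// Simplified stand-ins for the quinn-proto types in the snippet above.
struct Token {
    orig_dst_cid: u64, // stand-in for a connection ID
    issued: SystemTime,
}

#[derive(Debug, PartialEq)]
enum TokenDecodeError {
    InvalidRetry,
}

impl Token {
    /// Validate the token's age, yielding the original destination CID.
    /// The callsite becomes `Token::decode(..)?.validate(lifetime)`.
    fn validate(&self, lifetime: Duration) -> Result<u64, TokenDecodeError> {
        if self.issued + lifetime > SystemTime::now() {
            Ok(self.orig_dst_cid)
        } else {
            Err(TokenDecodeError::InvalidRetry)
        }
    }
}

fn main() {
    let fresh = Token { orig_dst_cid: 7, issued: SystemTime::now() };
    assert_eq!(fresh.validate(Duration::from_secs(15)), Ok(7));

    let stale = Token {
        orig_dst_cid: 7,
        issued: SystemTime::now() - Duration::from_secs(60),
    };
    assert_eq!(
        stale.validate(Duration::from_secs(15)),
        Err(TokenDecodeError::InvalidRetry)
    );
}
```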
@@ -0,0 +1,41 @@
//! Limiting clients' ability to reuse tokens from NEW_TOKEN frames
I think we should keep all of this in the token module, which is still pretty small.
@@ -0,0 +1,330 @@
use std::{
This is another commit with a lot of stuff going on. Suggest splitting it up in multiple commits (or PRs). For example, it seems that we could get most of the interfaces/traits in place before we actually slot in BloomTokenLog?
@@ -59,6 +60,7 @@ rcgen = { workspace = true }
tracing-subscriber = { workspace = true }
lazy_static = "1"
wasm-bindgen-test = { workspace = true }
rand_pcg = "0.3"
Nit: keep this in alphabetical order.
The server now sends the client NEW_TOKEN frames, and the client now stores and utilizes them.
The main motivation is that this allows 0.5-RTT data to not be subject to anti-amplification limits. This is a scenario likely to occur in HTTP/3 requests, as one example: a client makes a 0-RTT GET request for something like a jpeg, such that the response will be much bigger than the request. Unless NEW_TOKEN frames are used, the response may begin to be transmitted but then hit the anti-amplification limit and have to pause until the full 1-RTT handshake completes.
For example, here's some experimental data that should be similar in the relevant ways:
```
sudo tc qdisc add dev lo root netem delay 100ms
```

(undone with `sudo tc qdisc del dev lo root netem`). This experiment was performed on Nov/24 with 2edf192 as main and 478b325 as feature.
For responses in a certain size range, avoiding the anti-amplification limits by using NEW_TOKEN frames made the request/response complete in 1 RTT on this branch versus 2 RTT on main.
Reproducible experimental setup:

- `newtoken.rs` can be placed into `quinn/examples/`.
- `science.py` creates the data.
- `graph_it.py` graphs the data, after you've manually renamed the files.
- A nix-shell for the Python graphing is included.
Other motivations may include:

- If a server uses `.retry()`, requests take a minimum of 3 round trips to complete even for 1-RTT data, and 0-RTT becomes impossible. If NEW_TOKENs are used, however, 1-RTT requests can once more be done in only 2 round trips, and 0-RTT requests become possible again.
- The `TokenLog` has false negatives, which may range from "sometimes" to "never," in contrast to the current situation of "always."