fix: Mark all packets TX'ed before PTO as lost #2129
base: main
Conversation
Previously, we'd mark only one or two packets as lost when a PTO fired. That meant we potentially didn't retransmit all the data we could have, i.e., data in lost packets that we didn't mark as lost. This also changes the probing code to suppress redundant keep-alives, i.e., PINGs that we sent for other reasons, which could have doubled as keep-alives but did not. Broken out of mozilla#1998
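To illustrate the change described above, here is a minimal, self-contained sketch; `PacketSpace`, `SentPacket`, and the method bodies are hypothetical stand-ins for illustration, not neqo's actual types or API:

```rust
#[derive(Clone, Debug)]
struct SentPacket {
    pn: u64, // packet number
}

struct PacketSpace {
    unacked: Vec<SentPacket>, // sent but not yet acknowledged
}

impl PacketSpace {
    /// Previous behavior (sketch): only the first `count` packets were treated as lost.
    fn pto_packets_limited(&self, count: usize) -> impl Iterator<Item = &SentPacket> + '_ {
        self.unacked.iter().take(count)
    }

    /// New behavior (sketch): every packet sent before the PTO is treated as lost.
    fn pto_packets(&self) -> impl Iterator<Item = &SentPacket> + '_ {
        self.unacked.iter()
    }
}

fn on_pto(space: &PacketSpace) -> Vec<SentPacket> {
    let mut lost = Vec::new();
    // Clone all PTO packets into the loss list so their frames get retransmitted.
    lost.extend(space.pto_packets().cloned());
    lost
}

fn main() {
    let space = PacketSpace {
        unacked: (0..5u64).map(|pn| SentPacket { pn }).collect(),
    };
    // With the old limit of, say, 2, packets 2..=4 would not have been marked lost.
    assert_eq!(space.pto_packets_limited(2).count(), 2);
    assert_eq!(on_pto(&space).len(), 5);
    println!("marked lost on PTO: {:?}", on_pto(&space));
}
```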
Failed / Succeeded / Unsupported Interop Tests (QUIC Interop Runner, client vs. server; neqo-latest as client and as server): individual test lists omitted; see all results.
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@           Coverage Diff           @@
##             main    #2129   +/-   ##
=======================================
  Coverage   95.37%   95.38%
=======================================
  Files         112      112
  Lines       36569    36565       -4
=======================================
- Hits        34879    34878       -1
+ Misses       1690     1687       -3

☔ View full report in Codecov by Sentry.
Benchmark results: performance differences relative to 5a99e85.

- coalesce_acked_from_zero 1+1 entries: No change in performance detected. time: [86.497 ns 86.824 ns 87.152 ns]; change: [-1.2700% -0.3442% +0.3103%] (p = 0.45 > 0.05)
- coalesce_acked_from_zero 3+1 entries: No change in performance detected. time: [97.350 ns 97.673 ns 98.021 ns]; change: [-0.2516% +0.1198% +0.5384%] (p = 0.56 > 0.05)
- coalesce_acked_from_zero 10+1 entries: No change in performance detected. time: [97.083 ns 97.537 ns 98.206 ns]; change: [-0.2979% +0.4286% +1.3340%] (p = 0.33 > 0.05)
- coalesce_acked_from_zero 1000+1 entries: No change in performance detected. time: [78.408 ns 78.524 ns 78.656 ns]; change: [-1.4893% -0.1113% +1.2854%] (p = 0.88 > 0.05)
- RxStreamOrderer::inbound_frame(): Change within noise threshold. time: [112.43 ms 112.48 ms 112.53 ms]; change: [+0.7550% +0.8189% +0.8821%] (p = 0.00 < 0.05)
- transfer/pacing-false/varying-seeds: No change in performance detected. time: [25.608 ms 26.751 ms 27.877 ms]; change: [-4.9566% +0.2378% +5.9291%] (p = 0.93 > 0.05)
- transfer/pacing-true/varying-seeds: No change in performance detected. time: [35.655 ms 37.493 ms 39.361 ms]; change: [-3.2859% +4.6626% +12.793%] (p = 0.23 > 0.05)
- transfer/pacing-false/same-seed: No change in performance detected. time: [26.037 ms 27.087 ms 28.159 ms]; change: [-3.0411% +1.6048% +6.2549%] (p = 0.51 > 0.05)
- transfer/pacing-true/same-seed: Change within noise threshold. time: [41.997 ms 43.896 ms 45.827 ms]; change: [+0.2773% +7.1136% +14.658%] (p = 0.05 < 0.05)
- 1-conn/1-100mb-resp/mtu-1504 (aka. Download)/client: 💔 Performance has regressed. time: [921.53 ms 930.99 ms 940.52 ms]; thrpt: [106.32 MiB/s 107.41 MiB/s 108.52 MiB/s]; change: time [+1.2845% +2.8142% +4.2275%] (p = 0.00 < 0.05), thrpt [-4.0561% -2.7372% -1.2682%]
- 1-conn/10_000-parallel-1b-resp/mtu-1504 (aka. RPS)/client: No change in performance detected. time: [320.80 ms 324.07 ms 327.42 ms]; thrpt: [30.542 Kelem/s 30.857 Kelem/s 31.172 Kelem/s]; change: time [-1.0000% +0.4036% +1.8189%] (p = 0.58 > 0.05), thrpt [-1.7864% -0.4020% +1.0101%]
- 1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client: No change in performance detected. time: [33.479 ms 33.674 ms 33.884 ms]; thrpt: [29.512 elem/s 29.696 elem/s 29.869 elem/s]; change: time [-1.2013% -0.2657% +0.6182%] (p = 0.56 > 0.05), thrpt [-0.6144% +0.2664% +1.2159%]
- 1-conn/1-100mb-resp/mtu-1504 (aka. Upload)/client: No change in performance detected. time: [1.6249 s 1.6423 s 1.6602 s]; thrpt: [60.235 MiB/s 60.889 MiB/s 61.541 MiB/s]; change: time [-2.4670% -0.9272% +0.5758%] (p = 0.24 > 0.05), thrpt [-0.5725% +0.9359% +2.5294%]

Client/server transfer results: transfer of 33554432 bytes over loopback.
@martinthomson I'd appreciate a review, since the code I am touching is pretty complex.
This makes sense to me. Thanks for extracting it into a smaller pull request.
I am in favor of waiting for Martin's review.
Do we not have tests for this? Should we?
-                .pto_packets(PtoState::pto_packet_count(*pn_space))
-                .cloned(),
-        );
+        lost.extend(space.pto_packets().cloned());
Do we still need pto_packet_count if this is the decision?
The other question I have is whether this is necessary. We're cloning all of the information so that we can process the loss, which means more work on a PTO. Maybe PTO is rare enough that this doesn't matter, but one of the reasons for the limit on number was to avoid the extra work.
> Do we still need pto_packet_count if this is the decision?

We do still need it to limit the number of packets we send on PTO.

> The other question I have is whether this is necessary. We're cloning all of the information so that we can process the loss, which means more work on a PTO. Maybe PTO is rare enough that this doesn't matter, but one of the reasons for the limit on number was to avoid the extra work.

I've been wondering if it would be sufficient to mark n packets per space as lost, instead of all.
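To make the distinction in this exchange concrete, a small illustrative sketch follows; the function names and the probe limit of two (the usual RFC 9002 allowance) are assumptions, not neqo's actual code. The count only caps how many probe packets are sent when the timer fires, while loss marking now covers every packet sent before the PTO:

```rust
/// Illustrative only: a sender typically transmits one or two ack-eliciting
/// probe packets when a PTO fires; this cap is what a pto_packet_count-style
/// limit is still needed for.
fn probe_packet_limit() -> usize {
    2
}

/// Sketch of a PTO firing under this PR's approach: all previously sent
/// (unacked) packet numbers are marked lost, but the number of probe packets
/// actually sent stays bounded.
fn on_pto_fired(unacked: &[u64]) -> (Vec<u64>, usize) {
    let lost = unacked.to_vec(); // mark everything sent before the PTO as lost
    let probes_to_send = probe_packet_limit(); // still send only a couple of probes
    (lost, probes_to_send)
}

fn main() {
    let (lost, probes) = on_pto_fired(&[1, 2, 3, 4, 5]);
    assert_eq!(lost.len(), 5); // all five packets become eligible for retransmission
    assert_eq!(probes, 2); // but only two probe packets go out
    println!("lost: {lost:?}, probes: {probes}");
}
```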
There are tests in #2128, but this PR alone doesn't make them succeed yet.
Signed-off-by: Lars Eggert <[email protected]>
Previously, we'd mark only one or two packets as lost when a PTO fired. That meant we potentially didn't retransmit all the data we could have, i.e., data in lost packets that we didn't mark as lost.
This also changes the probing code to suppress redundant keep-alives, i.e., PINGs that we sent for other reasons, which could have doubled as keep-alives but did not.
Broken out of #1998
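For the keep-alive part of the change, here is a rough sketch of the suppression idea; the struct and field names are hypothetical and not the actual probing code:

```rust
/// Hypothetical state tracked by the probing code (illustrative names only).
#[derive(Default)]
struct ProbeState {
    /// An ack-eliciting packet (e.g., a PING sent for another reason, such as
    /// a PTO probe) is already queued.
    ack_eliciting_packet_queued: bool,
    /// The keep-alive timer says it is time to send a keep-alive.
    keep_alive_due: bool,
}

impl ProbeState {
    /// An extra keep-alive PING is only needed if nothing ack-eliciting is
    /// already on its way out; any such packet can double as the keep-alive.
    fn needs_keep_alive_ping(&self) -> bool {
        self.keep_alive_due && !self.ack_eliciting_packet_queued
    }
}

fn main() {
    let redundant = ProbeState { ack_eliciting_packet_queued: true, keep_alive_due: true };
    assert!(!redundant.needs_keep_alive_ping()); // suppress the redundant keep-alive

    let needed = ProbeState { ack_eliciting_packet_queued: false, keep_alive_due: true };
    assert!(needed.needs_keep_alive_ping()); // nothing else in flight, send the PING
}
```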