
fix: Mark all packets TX'ed before PTO as lost #2129

Open · wants to merge 4 commits into main
Conversation

larseggert
Collaborator

Previously, we would mark only one or two packets as lost when a PTO fired. That meant we potentially didn't retransmit all the data we could have, i.e., data in lost packets that we never marked as lost.

This also changes the probing code to suppress redundant keep-alives, i.e., PINGs that we sent for other reasons and that could have doubled as keep-alives but did not.

Broken out of #1998

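As a rough sketch of the behavioral change (the types and function names below are simplified stand-ins, not neqo's actual API): instead of cloning only a fixed one or two packets on PTO, every packet transmitted before the PTO fired is declared lost, so all of their data becomes eligible for retransmission.

```rust
// Illustrative only: `SentPacket`, the timestamps, and both functions are
// hypothetical simplifications of neqo's loss-recovery state.

#[derive(Clone, Debug, PartialEq)]
struct SentPacket {
    pn: u64,        // packet number
    time_sent: u64, // simplified send timestamp
}

/// Old behavior (sketch): mark at most `count` unacked packets as lost.
fn pto_lost_limited(sent: &[SentPacket], count: usize) -> Vec<SentPacket> {
    sent.iter().take(count).cloned().collect()
}

/// New behavior (sketch): every packet sent before the PTO fired is lost.
fn pto_lost_all(sent: &[SentPacket], pto_fired_at: u64) -> Vec<SentPacket> {
    sent.iter()
        .filter(|p| p.time_sent < pto_fired_at)
        .cloned()
        .collect()
}

fn main() {
    let sent: Vec<SentPacket> = (0..5)
        .map(|pn| SentPacket { pn, time_sent: pn * 10 })
        .collect();
    // Old: only two packets marked lost; data in the rest is not queued for RTX.
    assert_eq!(pto_lost_limited(&sent, 2).len(), 2);
    // New: all packets sent before the PTO (here at t=100) are marked lost.
    assert_eq!(pto_lost_all(&sent, 100).len(), 5);
    println!("ok");
}
```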

github-actions bot commented Sep 19, 2024

Failed Interop Tests

QUIC Interop Runner, client vs. server

neqo-latest as client

neqo-latest as server

All results

Succeeded Interop Tests

QUIC Interop Runner, client vs. server

neqo-latest as client

neqo-latest as server

Unsupported Interop Tests

QUIC Interop Runner, client vs. server

neqo-latest as client

neqo-latest as server


codecov bot commented Sep 19, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 95.38%. Comparing base (5a99e85) to head (4e1b379).

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #2129   +/-   ##
=======================================
  Coverage   95.37%   95.38%           
=======================================
  Files         112      112           
  Lines       36569    36565    -4     
=======================================
- Hits        34879    34878    -1     
+ Misses       1690     1687    -3     


github-actions bot commented Sep 19, 2024

Benchmark results

Performance differences relative to 5a99e85.

coalesce_acked_from_zero 1+1 entries: No change in performance detected.
       time:   [86.497 ns 86.824 ns 87.152 ns]
       change: [-1.2700% -0.3442% +0.3103%] (p = 0.45 > 0.05)

Found 13 outliers among 100 measurements (13.00%)
1 (1.00%) high mild
12 (12.00%) high severe

coalesce_acked_from_zero 3+1 entries: No change in performance detected.
       time:   [97.350 ns 97.673 ns 98.021 ns]
       change: [-0.2516% +0.1198% +0.5384%] (p = 0.56 > 0.05)

Found 11 outliers among 100 measurements (11.00%)
1 (1.00%) high mild
10 (10.00%) high severe

coalesce_acked_from_zero 10+1 entries: No change in performance detected.
       time:   [97.083 ns 97.537 ns 98.206 ns]
       change: [-0.2979% +0.4286% +1.3340%] (p = 0.33 > 0.05)

Found 9 outliers among 100 measurements (9.00%)
1 (1.00%) low mild
2 (2.00%) high mild
6 (6.00%) high severe

coalesce_acked_from_zero 1000+1 entries: No change in performance detected.
       time:   [78.408 ns 78.524 ns 78.656 ns]
       change: [-1.4893% -0.1113% +1.2854%] (p = 0.88 > 0.05)

Found 13 outliers among 100 measurements (13.00%)
4 (4.00%) high mild
9 (9.00%) high severe

RxStreamOrderer::inbound_frame(): Change within noise threshold.
       time:   [112.43 ms 112.48 ms 112.53 ms]
       change: [+0.7550% +0.8189% +0.8821%] (p = 0.00 < 0.05)

Found 9 outliers among 100 measurements (9.00%)
6 (6.00%) low mild
3 (3.00%) high mild

transfer/pacing-false/varying-seeds: No change in performance detected.
       time:   [25.608 ms 26.751 ms 27.877 ms]
       change: [-4.9566% +0.2378% +5.9291%] (p = 0.93 > 0.05)

Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) low mild

transfer/pacing-true/varying-seeds: No change in performance detected.
       time:   [35.655 ms 37.493 ms 39.361 ms]
       change: [-3.2859% +4.6626% +12.793%] (p = 0.23 > 0.05)

Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild

transfer/pacing-false/same-seed: No change in performance detected.
       time:   [26.037 ms 27.087 ms 28.159 ms]
       change: [-3.0411% +1.6048% +6.2549%] (p = 0.51 > 0.05)

Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild

transfer/pacing-true/same-seed: Change within noise threshold.
       time:   [41.997 ms 43.896 ms 45.827 ms]
       change: [+0.2773% +7.1136% +14.658%] (p = 0.05 < 0.05)

1-conn/1-100mb-resp/mtu-1504 (aka. Download)/client: 💔 Performance has regressed.
       time:   [921.53 ms 930.99 ms 940.52 ms]
       thrpt:  [106.32 MiB/s 107.41 MiB/s 108.52 MiB/s]
change:
       time:   [+1.2845% +2.8142% +4.2275%] (p = 0.00 < 0.05)
       thrpt:  [-4.0561% -2.7372% -1.2682%]

1-conn/10_000-parallel-1b-resp/mtu-1504 (aka. RPS)/client: No change in performance detected.
       time:   [320.80 ms 324.07 ms 327.42 ms]
       thrpt:  [30.542 Kelem/s 30.857 Kelem/s 31.172 Kelem/s]
change:
       time:   [-1.0000% +0.4036% +1.8189%] (p = 0.58 > 0.05)
       thrpt:  [-1.7864% -0.4020% +1.0101%]

1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client: No change in performance detected.
       time:   [33.479 ms 33.674 ms 33.884 ms]
       thrpt:  [29.512  elem/s 29.696  elem/s 29.869  elem/s]
change:
       time:   [-1.2013% -0.2657% +0.6182%] (p = 0.56 > 0.05)
       thrpt:  [-0.6144% +0.2664% +1.2159%]

Found 7 outliers among 100 measurements (7.00%)
2 (2.00%) low mild
2 (2.00%) high mild
3 (3.00%) high severe

1-conn/1-100mb-resp/mtu-1504 (aka. Upload)/client: No change in performance detected.
       time:   [1.6249 s 1.6423 s 1.6602 s]
       thrpt:  [60.235 MiB/s 60.889 MiB/s 61.541 MiB/s]
change:
       time:   [-2.4670% -0.9272% +0.5758%] (p = 0.24 > 0.05)
       thrpt:  [-0.5725% +0.9359% +2.5294%]

Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild

Client/server transfer results

Transfer of 33554432 bytes over loopback.

| Client | Server | CC | Pacing | MTU | Mean [ms] | Min [ms] | Max [ms] |
|---|---|---|---|---|---|---|---|
| gquiche | gquiche | | | 1504 | 528.4 ± 21.7 | 512.3 | 586.0 |
| neqo | gquiche | reno | on | 1504 | 786.3 ± 13.1 | 761.7 | 803.0 |
| neqo | gquiche | reno | | 1504 | 755.7 ± 20.7 | 742.6 | 811.3 |
| neqo | gquiche | cubic | on | 1504 | 778.0 ± 16.3 | 739.9 | 796.4 |
| neqo | gquiche | cubic | | 1504 | 766.7 ± 20.1 | 747.9 | 819.4 |
| msquic | msquic | | | 1504 | 103.8 ± 8.4 | 93.0 | 119.9 |
| neqo | msquic | reno | on | 1504 | 213.0 ± 11.7 | 196.2 | 227.6 |
| neqo | msquic | reno | | 1504 | 218.3 ± 15.3 | 202.5 | 252.3 |
| neqo | msquic | cubic | on | 1504 | 217.4 ± 10.3 | 201.1 | 233.8 |
| neqo | msquic | cubic | | 1504 | 208.1 ± 5.9 | 202.2 | 222.5 |
| gquiche | neqo | reno | on | 1504 | 729.9 ± 99.4 | 591.1 | 858.4 |
| gquiche | neqo | reno | | 1504 | 738.0 ± 92.9 | 606.6 | 865.4 |
| gquiche | neqo | cubic | on | 1504 | 759.6 ± 94.1 | 600.3 | 870.0 |
| gquiche | neqo | cubic | | 1504 | 754.1 ± 90.2 | 618.6 | 893.3 |
| msquic | neqo | reno | on | 1504 | 699.6 ± 8.9 | 682.6 | 708.6 |
| msquic | neqo | reno | | 1504 | 707.8 ± 12.9 | 685.9 | 727.7 |
| msquic | neqo | cubic | on | 1504 | 715.8 ± 15.5 | 702.5 | 752.3 |
| msquic | neqo | cubic | | 1504 | 707.0 ± 8.9 | 697.7 | 728.3 |
| neqo | neqo | reno | on | 1504 | 651.0 ± 13.0 | 629.8 | 667.4 |
| neqo | neqo | reno | | 1504 | 651.4 ± 11.4 | 636.0 | 666.2 |
| neqo | neqo | cubic | on | 1504 | 614.8 ± 9.8 | 598.0 | 632.7 |
| neqo | neqo | cubic | | 1504 | 617.2 ± 38.9 | 507.9 | 639.0 |



Firefox builds for this PR

The following builds are available for testing. Crossed-out builds did not succeed.

@larseggert
Collaborator Author

@martinthomson I'd appreciate a review, since the code I am touching is pretty complex.

Collaborator

@mxinden mxinden left a comment


This makes sense to me. Thanks for extracting it into a smaller pull request.

I am in favor of waiting for Martin's review.

Member

@martinthomson martinthomson left a comment


Do we not have tests for this? Should we?

```rust
        .pto_packets(PtoState::pto_packet_count(*pn_space))
        .cloned(),
);
lost.extend(space.pto_packets().cloned());
```
Member


Do we still need pto_packet_count if this is the decision?

The other question I have is whether this is necessary. We're cloning all of the information so that we can process the loss, which means more work on a PTO. Maybe PTO is rare enough that this doesn't matter, but one of the reasons for the limit on number was to avoid the extra work.

Collaborator Author


> Do we still need pto_packet_count if this is the decision?

We do still need it to limit the number of packets we send on PTO.

> The other question I have is whether this is necessary. We're cloning all of the information so that we can process the loss, which means more work on a PTO. Maybe PTO is rare enough that this doesn't matter, but one of the reasons for the limit on number was to avoid the extra work.

I've been wondering if it would be sufficient to mark n packets per space as lost, instead of all.
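That middle ground could look something like the sketch below: mark at most `n` packets per packet-number space as lost on PTO, rather than all of them, bounding the cloning work per space. The function and types here are purely illustrative, not neqo's actual code.

```rust
// Hypothetical sketch: cap loss marking at `n` packets per packet-number
// space. Each inner Vec stands in for the unacked packet numbers of one
// space (Initial, Handshake, ApplicationData).

fn pto_lost_capped(packets_per_space: &[Vec<u64>], n: usize) -> Vec<u64> {
    packets_per_space
        .iter()
        .flat_map(|space| space.iter().take(n)) // at most `n` per space
        .copied()
        .collect()
}

fn main() {
    let spaces = vec![vec![1, 2, 3], vec![10], vec![20, 21, 22, 23]];
    // With n = 2, at most two packets per space are marked lost.
    let lost = pto_lost_capped(&spaces, 2);
    assert_eq!(lost, vec![1, 2, 10, 20, 21]);
    println!("ok");
}
```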

@larseggert
Collaborator Author

> Do we not have tests for this? Should we?

There are tests in #2128, but this PR alone doesn't make them succeed yet.
