
fix data-fin in 'dss' and 'add_addr' tests #17

Merged: 1 commit into multipath-tcp:mptcp-net-next on Sep 11, 2020

Conversation

dcaratti (Collaborator)

On recent kernels (5.9-rc1+), a call to close() causes the transmission of a
regular ACK having the DATA FIN bit set. This bit is acknowledged by the
peer, and the subflows are then closed with TCP FIN: adjust tests in the
'dss' and 'add_addr' folders accordingly.

Closes: multipath-tcp/mptcp_net-next#69
Signed-off-by: Davide Caratti [email protected]

@dcaratti dcaratti requested a review from matttbe August 17, 2020 14:34
@dcaratti dcaratti changed the title fix data-fin in 'dss' tests fix data-fin in 'dss' and 'add_addr' tests Aug 17, 2020
@matttbe matttbe requested a review from mjmartineau August 24, 2020 17:03
matttbe (Member) left a comment:


It looks good to me!
I guess we can merge this; just a few questions and ideas!

Maybe we should add a new data_fin directory with some dedicated tests: verifying we still accept a DATA_FIN in the FIN+ACK like before, checking what happens if everything is closed without the DATA_FIN, etc. :)
Maybe @mjmartineau has some ideas or already has .pkt files, we never know :-)
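
A minimal sketch of what such a dedicated test could look like, assuming the packetdrill MPTCP syntax used in the hunks below; the sequence numbers, timing, and window values are illustrative only and not part of this PR:

// old behaviour: a DATA_FIN carried inside the peer's FIN+ACK should still be accepted
+0.0 < F. 1001:1001(0) ack 101 win 450 <dss dack8=101 dsn8=1001 ssn=0 dll=1 nocs fin, nop, nop>
// both the TCP FIN and the DATA_FIN consume one unit of sequence space: ack 1001+1, dack8 = 1001 + 1
+0.0 > . 101:101(0) ack 1002 <dss dack8=1002 nocs>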

Mat: could you review this, if you don't mind? :)

-// SSN should be 0 for DATA-FIN packets carrying no data at all
-+0.0 > F. 101:101(0) ack 1001 <nop, nop,TS val 100 ecr 700,dss dack8=1001 dsn8=101 ssn=0 dll=1 nocs fin, nop, nop>
++0.0 > . 101:101(0) ack 1001 <nop, nop,TS val 100 ecr 700,dss dack8=1001 dsn8=101 ssn=0 dll=1 nocs fin, nop, nop>
++0.0 < . 1001:1001(0) ack 101 win 450 <dss dack8=101 nocs>
matttbe (Member):

why do we need to receive this ACK twice?


 +0.4 close(3) = 0
-// SSN should be 0 for DATA-FIN packets carrying no data at all
-+0.0 > F. 101:101(0) ack 1001 <nop, nop,TS val 100 ecr 700,dss dack8=1001 dsn8=101 ssn=0 dll=1 nocs fin, nop, nop>
++0.0 > . 101:101(0) ack 1001 <nop, nop,TS val 100 ecr 700,dss dack8=1001 dsn8=101 ssn=0 dll=1 nocs fin, nop, nop>
matttbe (Member):

(I didn't check) do we still have tests validating we can receive a DATA_FIN in the FIN+ACK like before?

@@ -24,4 +24,6 @@
// read 100 bytes from the MPC+data
+0.01 read(4, ..., 100) = 100
+0 close(4) = 0
+0 > . 1001:1001(0) ack 101 <dss dack8=101 dsn8=1001 ssn=0 dll=1 nocs fin, nop, nop>
+0 < F. 101:101(0) ack 1001 win 225 <dss dack8=1001 nocs>
matttbe (Member):

could we receive a DATA+FIN in this FIN+ACK here? (just to make sure it is still accepted, or maybe we should add a dedicated test for that?)


@@ -21,4 +21,4 @@
 +0.0 < . 1:1(0) ack 101 win 256 <nop,nop,TS val 705 ecr 305,add_address addr[saddr] hmac=auto,dss dack8=101 dll=0 nocs>

 +0.4 close(3) = 0
-+0.0 > F. 101:101(0) ack 1 <nop, nop,TS val 494 ecr 700,dss dack4=1 dsn8=101 ssn=0 dll=1 nocs fin, nop, nop>
++0.0 > . 101:101(0) ack 1 <nop, nop,TS val 494 ecr 700,dss dack4=1 dsn8=101 ssn=0 dll=1 nocs fin, nop, nop>
matttbe (Member):

should we add another packet with the > F. to always check we have the FIN, not just the DATA+FIN?
(or maybe we have other tests validating that already? or should we validate that in specific DATA_FIN tests?)

@@ -22,4 +22,4 @@
 +0.3 read(4, ..., 1000) = 1000

 +0 close(4) = 0
-+0 > F. 1:1(0) ack 1001 <dss dack8=1001 dsn8=1 ssn=0 dll=1 nocs fin, nop, nop>
++0 > . 1:1(0) ack 1001 <dss dack8=1001 dsn8=1 ssn=0 dll=1 nocs fin, nop, nop>
matttbe (Member):

same here for the FIN+ACK?

dcaratti (Collaborator, Author) commented Aug 24, 2020 via email

matttbe (Member) commented Aug 24, 2020

> > +0.4 close(3) = 0
> > -// SSN should be 0 for DATA-FIN packets carrying no data at all
> > -+0.0   > F. 101:101(0) ack 1001 <nop, nop,TS val 100 ecr 700,dss dack8=1001 dsn8=101 ssn=0 dll=1 nocs fin, nop, nop>
> > ++0.0   > . 101:101(0) ack 1001 <nop, nop,TS val 100 ecr 700,dss dack8=1001 dsn8=101 ssn=0 dll=1 nocs fin, nop, nop>
> > ++0.0   < . 1001:1001(0) ack 101 win 450 <dss dack8=101 nocs>
> >
> > why do we need to receive this ACK twice?
>
> this was meant to be a packet acking the data-fin. should it be
> dack8 = 102 (as it is dsn8=101 + dll = 1)

OK, that would make more sense (line 24: dack8=101, line 26: dack8=102). I guess you are going to change that, right?
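
For concreteness, with the numbers from the quoted hunk, the corrected exchange would presumably look like this (a sketch, not taken from the PR; the timestamps and win value are copied from the original lines):

+0.0 > . 101:101(0) ack 1001 <nop, nop,TS val 100 ecr 700,dss dack8=1001 dsn8=101 ssn=0 dll=1 nocs fin, nop, nop>
// the DATA_FIN occupies one unit of data-sequence space, so dack8 = dsn8 (101) + dll (1) = 102
+0.0 < . 1001:1001(0) ack 101 win 450 <dss dack8=102 nocs>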

> > (I didn't check) do we still have tests validating we can receive a DATA_FIN in the FIN+ACK like before?
>
> you mean, a test where packetdrill closes MPTCP protocol before the
> kernel under test does the close() ? I think we don't have it (and yes,
> we should add it)

Sorry, no, I mean: do we still have other tests validating we accept DATA_FINs in a dedicated ACK (new behaviour) and in a FIN+ACK (old behaviour, before Mat's modifications)?

So:

+0.0 < F. x:x(0) ack y <(...) nocs fin (...)> // FIN+ACK + DATA_FIN

and:

+0.0 < . x:x(0) ack y <(...) nocs fin (...)> // Dedicated DATA_FIN
+0.0 < F. x:x(0) ack y <(...)> // FIN+ACK

> for tests that end with a call to close(), in the kernel under test, the
> minimum check is to ensure that the next packet is outbound, does not
> have the TCP FIN bit set, but has DSS and DATA FIN set. This would
> probably reduce the diffstat of this patch to the minimum.
>
> Then, in a separate folder, we can add a case for:

Sounds good to me, we can create a new folder later. If needed, we can create a new GH issue not to forget about that.

// SSN should be 0 for DATA-FIN packets carrying no data at all
+0.0 > F. 101:101(0) ack 1001 <nop, nop,TS val 100 ecr 700,dss dack8=1001 dsn8=101 ssn=0 dll=1 nocs fin, nop, nop>
+0.0 > . 101:101(0) ack 1001 <nop, nop,TS val 100 ecr 700,dss dack8=1001 dsn8=101 ssn=0 dll=1 nocs fin, nop, nop>
+0.0 < . 1001:1001(0) ack 101 win 450 <dss dack8=101 nocs>
mjmartineau (Member):

I agree with what @dcaratti and @matttbe mentioned in the overall conversation - this should have dack8=102.

+0.0 > F. 101:101(0) ack 1001 <nop, nop,TS val 100 ecr 700,dss dack8=1001 dsn8=101 ssn=0 dll=1 nocs fin, nop, nop>
+0.0 > . 101:101(0) ack 1001 <nop, nop,TS val 100 ecr 700,dss dack8=1001 dsn8=101 ssn=0 dll=1 nocs fin, nop, nop>
+0.0 < . 1001:1001(0) ack 101 win 450 <dss dack8=101 nocs>
+0.0 > F. 101:101(0) ack 1001 <nop, nop,TS val 100 ecr 700,dss dack8=1001 dsn8=101 ssn=0 dll=1 nocs fin, nop, nop>
mjmartineau (Member):

If this is sent in response to line 27's dack8=102, then it shouldn't need a data mapping (dsn/ssn/dll or the DATA_FIN flag). However, if it was sent before processing the dack8=102 packet, then line 28 could look like this. Is this based on observed packet captures or expected behavior?
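
A hypothetical sketch of the first case Mat describes, reusing the values from the hunk above (not taken from the PR): if the dack8=102 packet was already processed, the closing FIN would carry only a DATA_ACK and no mapping:

// TCP-level FIN only: the DATA_FIN was already acknowledged, so no dsn/ssn/dll mapping and no DATA_FIN flag
+0.0 > F. 101:101(0) ack 1001 <nop, nop,TS val 100 ecr 700,dss dack8=1001 nocs>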


+0 close(4) = 0
+0 > . 3001:3001(0) ack 11 <dss dack8=11 dsn8=3001 ssn=0 dll=1 nocs fin, nop, nop>
// rcv DATA_ACK[DFIN]
+0 < . 11:11(0) ack 3001 win 225 <dss dack8=3001 nocs>
mjmartineau (Member):

Like the ack in dss_ssn_specified_client.pkt line 27, dack8 should move forward if the DATA_FIN was processed. I need to capture some packets with the latest export branch and see what the current behavior is; there were some cases with extra acks.
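
Applied to the hunk above, a DATA_ACK that covers the DATA_FIN would presumably look like this (a sketch, not taken from the PR):

+0 > . 3001:3001(0) ack 11 <dss dack8=11 dsn8=3001 ssn=0 dll=1 nocs fin, nop, nop>
// DATA_ACK covering the DATA_FIN: dack8 = dsn8 (3001) + dll (1) = 3002;
// the TCP-level ack stays at 3001 since no TCP FIN was sent
+0 < . 11:11(0) ack 3001 win 225 <dss dack8=3002 nocs>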

@@ -24,4 +24,6 @@
// read 100 bytes from the MPC+data
+0.01 read(4, ..., 100) = 100
+0 close(4) = 0
+0 > . 1001:1001(0) ack 101 <dss dack8=101 dsn8=1001 ssn=0 dll=1 nocs fin, nop, nop>
+0 < F. 101:101(0) ack 1001 win 225 <dss dack8=1001 nocs>
mjmartineau (Member):

I'd be surprised to see a TCP FIN here without dack8=1002 and a DATA_FIN. As discussed in the top-level pull request conversation, it's fine if the full handshake is tested in another folder.
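
The exchange Mat expects would presumably look like this (a sketch with values inferred from the hunk above, not taken from the PR):

// inbound FIN acking the DATA_FIN (dack8 = 1001 + 1 = 1002) and carrying the peer's own DATA_FIN
+0 < F. 101:101(0) ack 1001 win 225 <dss dack8=1002 dsn8=101 ssn=0 dll=1 nocs fin, nop, nop>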

dcaratti (Collaborator, Author):

> > > +0.4 close(3) = 0
> > > -// SSN should be 0 for DATA-FIN packets carrying no data at all
> > > -+0.0   > F. 101:101(0) ack 1001 <nop, nop,TS val 100 ecr 700,dss dack8=1001 dsn8=101 ssn=0 dll=1 nocs fin, nop, nop>
> > > ++0.0   > . 101:101(0) ack 1001 <nop, nop,TS val 100 ecr 700,dss dack8=1001 dsn8=101 ssn=0 dll=1 nocs fin, nop, nop>
> > > ++0.0   < . 1001:1001(0) ack 101 win 450 <dss dack8=101 nocs>
> > >
> > > why do we need to receive this ACK twice?
> >
> > this was meant to be a packet acking the data-fin. should it be
> > dack8 = 102 (as it is dsn8=101 + dll = 1)
>
> OK, that would make more sense (line 24: dack8=101, line 26: dack8=102). I guess you are going to change that, right?

yes, will re-submit the PR (hopefully, today)

> > > (I didn't check) do we still have tests validating we can receive a DATA_FIN in the FIN+ACK like before?
> >
> > you mean, a test where packetdrill closes MPTCP protocol before the
> > kernel under test does the close() ? I think we don't have it (and yes,
> > we should add it)
>
> Sorry, no, I mean: do we still have other tests validating we accept DATA_FINs in a dedicated ACK (new behaviour) and in a FIN+ACK (old behaviour, before Mat's modifications)?
>
> So:
>
> +0.0 < F. x:x(0) ack y <(...) nocs fin (...)> // FIN+ACK + DATA_FIN
>
> and:
>
> +0.0 < . x:x(0) ack y <(...) nocs fin (...)> // Dedicated DATA_FIN
> +0.0 < F. x:x(0) ack y <(...)> // FIN+ACK

I think we don't have them. I did a quick check of the existing code: tests either end with a close(), or they don't cover cases where the connection is torn down by an inbound packet. Does it make sense to change one of the above test cases to remove the call to close() and replace it with a DATA FIN?
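
For illustration only (a sketch, not part of the PR), such a variant could let the peer initiate the close instead of calling close():

// the peer sends a DATA_FIN in a plain ACK first...
+0.0 < . 1001:1001(0) ack 101 win 450 <dss dack8=101 dsn8=1001 ssn=0 dll=1 nocs fin, nop, nop>
// ...and the kernel under test is expected to DATA_ACK it: dack8 = 1001 + 1 = 1002
+0.0 > . 101:101(0) ack 1001 <dss dack8=1002 nocs>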

> > for tests that end with a call to close(), in the kernel under test, the
> > minimum check is to ensure that the next packet is outbound, does not
> > have the TCP FIN bit set, but has DSS and DATA FIN set. This would
> > probably reduce the diffstat of this patch to the minimum.
> >
> > Then, in a separate folder, we can add a case for:
>
> Sounds good to me, we can create a new folder later. If needed, we can create a new GH issue not to forget about that.

makes sense!

On recent kernels (5.9-rc1+), a call to close() causes the transmission of a
regular TCP ACK having the DATA FIN bit set. Then, the kernel under test
immediately sends a TCP FIN with DACK and DATA FIN: adjust tests in the
'dss' and 'add_addr' folders accordingly.

Closes: multipath-tcp/mptcp_net-next#69
Signed-off-by: Davide Caratti <[email protected]>
matttbe (Member) commented Sep 11, 2020

Again, sorry for the delay! The modifications you did since last time look good to me!

@matttbe matttbe merged commit 679a88c into multipath-tcp:mptcp-net-next Sep 11, 2020
Linked issue: Packetdrill: dss: failing with new DATA_FIN patches