Add MPTCP support with the --multipath flag #1166

Closed

Conversation

pabeni commented Jun 14, 2021

  • Version of iperf3 (or development branch, such as master or
    3.1-STABLE) to which this pull request applies:
    master

  • Issues fixed (if any):
    []

  • Brief description of code changes (suitable for use as a commit message):

Recent versions of the Linux kernel (5.9 and later) gained MPTCP support; this change
allows easy testing of that protocol.
MPTCP is strongly tied to TCP, and the kernel APIs are almost the same,
so this change does not implement a new 'struct protocol'. Instead it just
extends the TCP support to optionally enable multipath via the --multipath or
-m switch.

The only required dependency is the 'IPPROTO_MPTCP' protocol number
definition, which should be provided by the netinet/in.h header.
To keep things simple, the required protocol number is defined
conditionally when the system header does not provide it yet.
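As a rough illustration of the approach described above (a sketch, not the actual patch), the conditional definition and the socket creation could look like the snippet below; 262 is the protocol number Linux assigns to IPPROTO_MPTCP, and `create_stream_socket` is a hypothetical helper name:

```c
#include <netinet/in.h>
#include <sys/socket.h>

/* Provide the protocol number only when netinet/in.h predates MPTCP. */
#ifndef IPPROTO_MPTCP
#define IPPROTO_MPTCP 262
#endif

/* An MPTCP socket is created like a TCP one; only the protocol
 * argument of socket() differs. */
static int
create_stream_socket(int domain, int use_mptcp)
{
    int proto = use_mptcp ? IPPROTO_MPTCP : IPPROTO_TCP;

    return socket(domain, SOCK_STREAM, proto);
}
```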

pgagne added a commit to pgagne/lnst that referenced this pull request Aug 4, 2021
Added code to support setting the `--multipath` flag in iperf3

Note: This requires a special build of iperf:
esnet/iperf#1166
pgagne added a commit to pgagne/lnst that referenced this pull request Sep 12, 2021
Added code to support setting the `--multipath` flag in iperf3 on both client and server

Note: This requires a special build of iperf:
esnet/iperf#1166
olichtne pushed a commit to LNST-project/lnst that referenced this pull request Sep 15, 2021
Added code to support setting the `--multipath` flag in iperf3 on both client and server

Note: This requires a special build of iperf:
esnet/iperf#1166
teto (Contributor) commented Dec 13, 2021

I've tested it locally and it worked fine. Needs a rebase, and maybe a reword of the commit message too.

bmah888 (Contributor) commented Dec 15, 2021

A belated thank-you for this pull request. It sounds useful, although I'm not sure where MPTCP would be used in our (ESnet's) network environment (or, for that matter, how we would test whether it works or not).

The main change I can think of at this point is that you should be testing for MPTCP support (I guess you can check for the presence of IPPROTO_MPTCP), and only allowing someone to pass -m or --multipath if it's actually present. The way it works now, it looks like you just assume that the host supports MPTCP regardless of whether IPPROTO_MPTCP is defined or not, which isn't the way we detect and enable features across different codebases.

If you look in configure.ac you can see an example of this where we test for the presence of various socket options such as SO_MAX_PACING_RATE or SO_BINDTODEVICE.
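A minimal sketch of what that gating could look like on the C side, assuming configure.ac defines a HAVE_IPPROTO_MPTCP symbol (a hypothetical macro name) when the declaration is found, analogous to the existing SO_MAX_PACING_RATE check; this is not the actual iperf3 option-parsing code:

```c
#include <stdio.h>

/* In the real tree this would come from the configure-generated config
 * header; HAVE_IPPROTO_MPTCP is a hypothetical macro name. */

static int
enable_multipath(int *use_mptcp)
{
#if defined(HAVE_IPPROTO_MPTCP)
    *use_mptcp = 1;       /* --multipath / -m accepted */
    return 0;
#else
    fprintf(stderr, "--multipath is not supported on this system\n");
    return -1;            /* reject the option at parse time */
#endif
}
```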

teto (Contributor) commented Dec 29, 2021

> If you look in configure.ac you can see an example of this where we test for the presence of various socket options such as SO_MAX_PACING_RATE or SO_BINDTODEVICE.

Won't that be an issue when cross-compiling? I would think it's not necessary, as one needs to check at runtime anyway.
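For what a runtime check could look like, here is a minimal sketch (not part of the patch) that tries an MPTCP socket first and falls back to plain TCP when the running kernel rejects the protocol, independent of what the build-time headers contained; `open_stream_socket` is a hypothetical helper name:

```c
#include <errno.h>
#include <netinet/in.h>
#include <sys/socket.h>

#ifndef IPPROTO_MPTCP
#define IPPROTO_MPTCP 262   /* Linux protocol number, in case the header lacks it */
#endif

/* Prefer MPTCP, fall back to plain TCP at runtime. */
static int
open_stream_socket(int domain, int want_mptcp)
{
    int fd = -1;

    if (want_mptcp)
        fd = socket(domain, SOCK_STREAM, IPPROTO_MPTCP);

    /* Kernels without MPTCP support typically reject the protocol at
     * socket() time (e.g. EPROTONOSUPPORT), so retry with TCP. */
    if (fd < 0)
        fd = socket(domain, SOCK_STREAM, IPPROTO_TCP);

    return fd;
}
```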

@ZerxXxes

This is great! The first use case I can think of is when both server and client are dual-stack. If I understand it correctly, MPTCP will create a subflow per L3 path, so iperf will push traffic over IPv4 and IPv6 at the same time.
In some cases one of them is a bit slower due to CGNAT or just suboptimal routing (when testing long-haul links, like across continents), and then MPTCP will simply push more of the packets via the path that works best.
Looking forward to testing this!

@voidpointertonull

> It sounds useful, although I'm not sure where MPTCP would be used in our (ESnet's) network environment (or, for that matter, how we would test whether it works or not).

Ideally MPTCP would be used almost everywhere, as it is practically a superset of TCP.
In a theoretical, perfectly behaving, redundant network environment you wouldn't gain much, so if yours is like that or close to it, the upsides may not be immediately obvious. In the wild, however, it most often serves these main purposes:

  • Seamless failover: As long as at least one network path remains intact, MPTCP connections won't get interrupted. OpenMPTCProuter is one example of this with typically multi-WAN setups, and Apple is an example of using it for mobility, avoiding the common mobile data <-> WiFi transition failure. A sysadmin could also enjoy SSH connections staying alive while running around on WiFi, then moving back to the lower-latency Ethernet and avoiding polluting the shared radio space, all without resorting to autossh + cursing, mosh, or any other trickery trying to make up for the lack of fault tolerance.
  • Bandwidth aggregation: The bandwidth of links can be aggregated without the usual link-aggregation restrictions; the links and the networks can be different. For example, a multi-WAN setup would aggregate the bandwidth of all ISPs, or WiFi + Ethernet could be used together. Back to the mobile sysadmin example: a large transfer could be started on the go, and if performance over WiFi isn't satisfactory, an Ethernet connection could be added to speed it up without any interruption.

Testing could still be tricky as support is not yet widespread. I'm not fully confident, but it's possible that with a recent enough kernel and mptcpd/NetworkManager there's already no need for extra configuration. Alternatively, Red Hat has good instructions on getting multiple subflows going as part of its quite decent documentation, and there is also a more recent post describing it.

Worst case, while that's not testing MPTCP features, if the MPTCP handshake fails a regular TCP connection is established, so the handshake attempt could be observed over the network. From that point on the rest can be left up to the system: the program successfully opted into using MPTCP, and further participation is not mandatory for the extra features.
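As an alternative to watching the handshake on the wire, recent kernels (around 5.16 and later, if I recall correctly) also let an application ask directly whether a connection is still MPTCP or has fallen back to plain TCP, via the MPTCP_INFO socket option. A small sketch of how a test could use it; this is my assumption of a possible check, not something from this pull request:

```c
#include <errno.h>
#include <sys/socket.h>
#include <linux/mptcp.h>    /* struct mptcp_info, MPTCP_INFO */

#ifndef SOL_MPTCP
#define SOL_MPTCP 284       /* MPTCP socket level, if the libc headers lack it */
#endif

/* Returns 1 if the connected socket is using MPTCP, 0 if it fell back
 * to plain TCP, -1 on other errors. */
static int
is_mptcp_active(int fd)
{
    struct mptcp_info info;
    socklen_t len = sizeof(info);

    if (getsockopt(fd, SOL_MPTCP, MPTCP_INFO, &info, &len) == 0)
        return 1;
    if (errno == EOPNOTSUPP)    /* connection fell back to TCP */
        return 0;
    return -1;
}
```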

> If I understand it correctly, MPTCP will create a subflow per L3 path, so iperf will push traffic over IPv4 and IPv6 at the same time.

IPv4 and IPv6 mixing was likely not possible yet at the time of your comment; at least the related work seems to be quite recent, as can be seen at https://github.com/multipath-tcp/mptcp_net-next/issues/269.
I hadn't even considered the upside you mentioned. Even if there's just an A DNS record or a "naked" IPv4 address provided, MPTCP could still potentially establish an IPv6 connection.

@geliangtang

#1659

pabeni (Author) commented Mar 13, 2024

obsoleted by #1661

pabeni closed this Mar 13, 2024