roadmap: remove optimizations of the TCP-based handshake #1959
Conversation
I suggest removing the TCP-based optimizations instead of just deprioritizing them: given our current staffing situation and the recent addition of the libp2p+HTTP work, we already have a lot on our plate.
I agree with all these points, especially in the context of work for Q1 and Q2.
As an early result of the swarm metrics effort (#1910), we've discovered that >80% of the connections that a (full) node on the IPFS network establishes / accepts are QUIC connections.
Have these results been shared anywhere? I don't doubt the veracity of the claim, but it links to an issue that is still in progress and has no PR attached.
80% of the connections that a (full) node on the IPFS network establishes
We can expect similar numbers for other libp2p networks that use go-libp2p (and that have enabled the QUIC transport).
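As an illustration of how such a per-transport breakdown could be derived, here is a minimal Go sketch that classifies connections by the transport in their remote multiaddr. The sample addresses are hypothetical, and real code would use the go-multiaddr package to inspect protocol components rather than matching on strings:

```go
package main

import (
	"fmt"
	"strings"
)

// transportOf classifies a multiaddr string by its transport.
// Simplified sketch: string matching stands in for proper
// multiaddr parsing via the go-multiaddr package.
func transportOf(maddr string) string {
	switch {
	case strings.Contains(maddr, "/quic"):
		return "quic"
	case strings.Contains(maddr, "/ws"):
		// Check before plain TCP: websocket addrs also contain /tcp/.
		return "websocket"
	case strings.Contains(maddr, "/tcp/"):
		return "tcp"
	default:
		return "other"
	}
}

func main() {
	// Hypothetical remote addresses of currently open connections.
	conns := []string{
		"/ip4/147.75.83.83/udp/4001/quic",
		"/ip4/104.131.131.82/tcp/4001",
		"/ip4/147.75.77.187/udp/4001/quic",
		"/ip6/2604:1380:2000:7a00::1/udp/4001/quic",
		"/ip4/145.40.118.135/tcp/4001/ws",
	}

	counts := map[string]int{}
	for _, c := range conns {
		counts[transportOf(c)]++
	}
	for t, n := range counts {
		fmt.Printf("%-10s %d (%.0f%%)\n", t, n, 100*float64(n)/float64(len(conns)))
	}
}
```

In a real go-libp2p node the address list would come from iterating the host's open connections instead of a hard-coded slice.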
Since this is a profile of only the IPFS network, I worry we're being myopic about considering our customer needs. What do libp2p consumers that rely purely on TCP (like Prysm) have to say about this? Their input may help us decide whether RTT optimization for TCP should be on the libp2p roadmap sooner rather than later. Also, is there a rough timeline for using QUIC in Eth/Prysm? (Apologies if this has been discussed elsewhere.) cc: @nisdas
Not yet. The code is still a work in progress, and my dev branch is only hooked up to my own Grafana instance at this point. I'm happy to share two screenshots. @mxinden has shared some similar numbers using the brand-new rust-libp2p QUIC implementation.
For what my input is worth, reasoning here makes sense to me 👍
It therefore makes sense to focus our attention on cutting down round trips for QUIC users.