Tailscale over OMR is painfully slow! #3702
Comments
I don't know much about Tailscale.
By local VPS I mean I am using a local provider's VPS. Tailscale is similar to ZeroTier: it is a zero-config VPN based on WireGuard. Essentially it is centralized WireGuard management that allows each node to establish a peer-to-peer network.
I use Tailscale with OpenMPTCProuter and performance is excellent. I have numerous clients and run an exit node behind OpenMPTCProuter with no issues.
Have you tried adjusting (reducing) the MTU value on both sides of the tunnel?
Did you use the direct UDP connection or TCP DERP Relay?
How do I adjust that? What is the recommended value?
Tailscale should get the right value on its own with path MTU discovery, but you must allow ICMP traffic through all your devices. Anyway, in recent versions it's 1280 by default. This size works fine with WireGuard and OMR in my experience.
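In case it helps, a minimal sketch of how to check and temporarily lower the MTU on a Linux host; the interface names `tailscale0` and `wg0` are assumptions and may differ on your devices:

```sh
# Show the current MTU of the tunnel interfaces (interface names are assumptions)
ip link show tailscale0 | grep -o 'mtu [0-9]*'
ip link show wg0 | grep -o 'mtu [0-9]*'

# Temporarily lower the MTU to Tailscale's 1280 default to test whether
# fragmentation is the problem (does not survive a reboot or tunnel restart)
ip link set dev wg0 mtu 1280

# Verify packets of that size pass end-to-end without fragmentation
# (1280 - 28 bytes of IP/ICMP headers = 1252 bytes of payload)
ping -M do -s 1252 <remote-tunnel-ip>
```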
The MTU is locked in WireGuard, but packets can be fragmented. All I can suggest is to double-check your performance between these two sites NOT using Tailscale, as I doubt it is the culprit or even related.
I put up a discussion on VPN performance over on the Discussions tab, as I hadn't yet seen this reported as an issue. I don't think this issue is exclusive to Tailscale, as it can be reproduced across multiple VPN types. I've demonstrated it with WireGuard (which Tailscale is based on), OpenVPN, and even standard SSL VPNs (i.e. Cisco AnyConnect, Palo Alto GlobalProtect). UDP VPNs like WireGuard are most affected, but it also affects VPNs with TCP-based transport.

I'm a network engineer, so I'm well aware of things like MSS/MTU. That isn't the problem, though it would affect "high performance" if you could get "high performance." If you are adjusting it, the correct values for IPv4 are a 1460-byte MTU with a 1420-byte MSS. If you utilize the VPN tunnel (such as with MPTCP over VPN), the correct values would be a 1420-byte MTU with a 1380-byte MSS. These values ensure the final packet size is 1500 bytes.

I do think there might be differences between versions, and that could produce "mine is working fine" types of results. I'm seeing this issue on the latest versions, using the 6.6 kernel. I also don't think it is tied to the proxy type, at least based on my testing. If you haven't done so already, you do need to optimize for UDP if you're going to use WireGuard. This means using a "UDP friendly" proxy like Xray and also selecting the option to use the VPN for UDP traffic. (I suspect this issue may actually be tied to VPN performance.) I'm planning on pulling a packet capture post-OpenMPTCPRouter to see what's going on with the packets.
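For illustration only, here is a sketch of how the MSS values above could be enforced with a manual clamp on a Linux router. The tunnel interface name `tun0` is a placeholder, and OMR has its own MTU/MSS handling in the web interface, so treat this as a testing aid rather than a recommended OMR setting:

```sh
# Clamp TCP MSS on traffic leaving through the tunnel so the final
# encapsulated packet stays at or below 1500 bytes.
# 1380 matches the "VPN inside the tunnel" case above (1420 MTU - 40).
iptables -t mangle -A FORWARD -o tun0 -p tcp --tcp-flags SYN,RST SYN \
  -j TCPMSS --set-mss 1380

# Alternatively, derive the MSS from the path MTU automatically:
iptables -t mangle -A FORWARD -o tun0 -p tcp --tcp-flags SYN,RST SYN \
  -j TCPMSS --clamp-mss-to-pmtu
```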
I think it's fair to assume that anyone using OpenMPTCP has a solid understanding of networking. I also use Cisco AnyConnect, OpenVPN (via Mullvad), and WireGuard (via Mullvad) in my MPTCP setup, and I've found the performance to be excellent across these setups. While I haven't benchmarked performance with Mullvad specifically, my experience suggests it's comparable, with maybe a 20-30% reduction compared to running without Mullvad. For the various sites I connect to using AnyConnect, the performance has been consistently excellent. Of course, there's a slight drop of around 20% in overall performance, but that's expected since I'm essentially running a VPN within another VPN. I've been using OpenMPTCP for about 3 years and currently run the latest release; I can't say I've noticed any difference in VPN performance relative to overall performance in any release, though things have gotten generally faster in the last two releases!
Agreed on those points; most people using a project like this probably have some decent familiarity with networks. My packet capture indicated a lot of packets were using extremely small MTUs, in the 50 to 200 byte range (still not a smoking gun). But I've been banging on this for days now and might have just figured out my problem. I was using the option "V2Ray/XRay UDP" found in System->OpenMPTCPRouter->Advanced Settings. I had turned this on, thinking it would help. It turns out my WireGuard performance is much improved with that off when also using the Xray proxy. I'm not getting full performance, but I can now push 100 Mbit/s+, which is likely fairly normal given the overhead of encryption.

I know for sure I had toggled this in the past and run iperf tests to no avail. However, during my diagnostics it's quite plausible I had something else (perhaps a different proxy, different tunnel type, different MTU/MSS, etc.) set to different values. I know better than to try multiple things at once, but I was at the point where I was certain my various tuning of OpenMPTCPRouter wasn't "just one thing."

To summarize my settings, in case it helps others: Proxy: XRay-VLESS (other XRay proxies should also work well).

For the actual tunnel, the default MTU that Tailscale uses should be OK, as it's smaller than it needs to be. If you're using plain WireGuard like me, it should clamp packets (i.e. the MSS) to around 1420 bytes to allow for two 40-byte headers (i.e. the actual VPN packet plus the header your router will put on top of the encrypted packet); see the sketch below. This is optimal, but smaller values will also work. WireGuard seems to handle fragmenting OK, but it's best to avoid it, as this is extra work your routers have to do and can affect performance slightly.
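One way to keep packets under the limits discussed above with plain WireGuard is to pin the interface MTU in the wg-quick config; a sketch only, with placeholder keys, addresses and endpoint:

```sh
# Sketch: pin the WireGuard interface MTU via wg-quick.
# All keys, addresses and the endpoint below are placeholders.
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
PrivateKey = <private-key>
Address = 10.0.0.2/24
# 1500 minus two 40-byte headers, as discussed above
MTU = 1420

[Peer]
PublicKey = <peer-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.1.0/24
EOF

# Restart the tunnel so the new MTU takes effect
wg-quick down wg0 2>/dev/null; wg-quick up wg0
```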
Great. I think it's best to follow @Ysurac's advice and always use the defaults unless you really know what you are doing and have a very specific reason not to; trust me, I learned this one the hard way. (;
In fairness, at least these days, almost everyone has to tune it, since the current defaults don't work so well. IMO, Shadowsocks-Rust and cubic should not be the modern defaults. You can also eke out a bit more tunnel performance and less added latency by changing the default VPN from OpenVPN to Glorytun.
The default configuration is made to work in most cases. |
I have been doing some extensive testing using iperf3 and found out that the route from the OMR VPS to the other node is really bad. Nothing can be done to fix it other than changing the VPS provider.
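For anyone wanting to reproduce this kind of test, a sketch of the iperf3 runs (hostnames are placeholders): run a server on the far node, then from the OMR VPS compare TCP and UDP throughput in both directions, which isolates the VPS-to-node leg from the tunnel itself.

```sh
# On the remote node
iperf3 -s

# From the OMR VPS: TCP throughput toward the remote node
iperf3 -c remote-node.example.com -t 30

# Same path with UDP at a target rate near the expected aggregate,
# then in reverse (-R) to test the other direction
iperf3 -c remote-node.example.com -u -b 200M -t 30
iperf3 -c remote-node.example.com -u -b 200M -t 30 -R
```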
No, please don't assume this. Some of us just hope for a turnkey solution for better, faster, more reliable internet by combining multiple flaky uplinks, and don't know much about all the little details of what we are doing.
Expected Behavior
I want to take advantage of the bandwidth aggregation done by OMR to get full speed for my site-to-site Tailscale clients. I currently have 3 WANs with speeds of 150 Mbps, 100 Mbps and 20 Mbps respectively. By running Tailscale over OMR, I expect to see at least 200 Mbps between Tailscale clients.
Current Behavior
If Tailscale traffic routes through OMR, regardless of whether it uses a direct UDP connection or the TCP DERP relay, it is painfully slow, only reaching around 2-5 Mbps. However, if I route directly through one of the ISPs, I get at least 50 Mbps (still not that fast, but far better than 2 Mbps over OMR).
I am currently using XRay VMess for the UDP connection. I am also able to get more than 200 Mbps if I use Cloudflare WARP over OMR, which I know for a fact tunnels over UDP, so routing UDP traffic at high speed should not be the bottleneck on OMR itself.
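For reference, the Tailscale CLI can confirm whether traffic between two nodes is taking a direct UDP path or falling back to a DERP relay; a quick sketch, with the peer name as a placeholder:

```sh
# Per-peer view: each peer line shows either "direct <ip:port>" or "relay <region>"
tailscale status

# Ping at the Tailscale layer; the output reports whether replies come via DERP
# and whether the path upgrades to a direct connection
tailscale ping peer-hostname

# Reports UDP reachability, NAT details and DERP region latencies for this node
tailscale netcheck
```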
Specifications