Add command line arguments to control incoming/outgoing send/receive socket OS buffer size #352
Well, it's quite easy. But have you tried and verified that it's working in your environment? Does that really matter for a proxy server?
@zonyitoo Yes, I have tested this very thoroughly and I am using these options myself as a daily driver, 24x7, in a shadowsocks server bridge. I have tested different values and measured the change in TCP behavior with them (e.g., performance of multi-segment TCP vs. single-segment TCP download and upload). Yes, it matters for some advanced use cases, especially when chaining proxies, or chaining a VPN to a proxy or vice versa. It should be optional; most users don't need it for the normal case of a client PC connecting to a shadowsocks server, as the default OS autotuning will handle it correctly.
You can test that it's working by using the Linux ss utility like this:
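For example, an illustrative invocation (the destination address is a placeholder, not the exact command originally posted):

```sh
# -t TCP sockets, -n numeric output, -m socket memory usage, -i internal TCP info
ss -tnmi dst 203.0.113.1
# In the skmem:(...) field of the output, rb<bytes> is the effective
# SO_RCVBUF and tb<bytes> is the effective SO_SNDBUF for each socket.
```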
How about sndbuf / rcvbuf for UDP sockets?
Ah.. tokio didn't provide APIs for that, so I have to implement it from scratch.
For UDP there is no flow control, congestion avoidance, slow start, or back-off mechanism like TCP. In the case of a TCP-inside-UDP flow with the normal OS buffers (note: there is also no OS autotuning for UDP), it will feel any delays normally and I think it will adjust fine. I think the default OS buffer controls for it are sufficient and no controls are necessary.

The problem is specifically for TCP, because in the TPROXY case the two sides of the proxy are highly asymmetrical (one side is very fast access to the chained server, and one side is the slow client). From the OS point of view, the connection between the TPROXY shadowsocks and the chained target server is one connection, and the connection between the client and the main server is a completely different connection, so the OS ramps up the buffers on one side far higher than the other. This leads to a bufferbloat-like phenomenon where packets are queued in the localhost shadowsocks redir buffer instead of being dropped due to congestion on the client side. That confuses the TCP congestion avoidance mechanism, which is especially visible with multiple TCP connections from the same client to the same main server, leading to poor overall network throughput.

@zonyitoo Doesn't tokio let you reach the socket fd and setsockopt on it directly? Or setsockopt before converting the socket object to a stream object?
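For illustration, a minimal sketch of that second approach (setsockopt before converting into a tokio stream) using the socket2 crate; the crate choice, helper name, and buffer sizes here are assumptions for the sketch, not the project's actual implementation:

```rust
// Sketch only: set SO_SNDBUF / SO_RCVBUF via socket2 before handing the
// socket to tokio. The helper name and sizes are illustrative.
use socket2::{Domain, Protocol, Socket, Type};
use std::net::SocketAddr;

fn connect_with_buffers(
    addr: SocketAddr,
    sndbuf: usize,
    rcvbuf: usize,
) -> std::io::Result<tokio::net::TcpStream> {
    let socket = Socket::new(Domain::for_address(addr), Type::STREAM, Some(Protocol::TCP))?;
    // setsockopt(SO_SNDBUF) / setsockopt(SO_RCVBUF); on Linux the kernel
    // doubles the requested value and disables autotuning for this socket.
    socket.set_send_buffer_size(sndbuf)?;
    socket.set_recv_buffer_size(rcvbuf)?;
    socket.connect(&addr.into())?;
    let stream: std::net::TcpStream = socket.into();
    stream.set_nonblocking(true)?;
    // from_std must be called from within a tokio runtime context.
    tokio::net::TcpStream::from_std(stream)
}
```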
@notsure2
Done. But I haven't tested yet. Still waiting for #354.
@manjuprajna If the connection between your PC and your router is faster than the connection between your router and the shadowsocks server, no tuning is needed. But if your connection to your own router is slow and the connection between the router and the remote server is much faster, then tuning is needed. In the config, use the same names as the command line arguments but replace hyphens with underscores, as in the sketch below. Start with a small buffer (512 KB) and increase by multiples; use any site that calculates TCP buffer size to get an estimate, and test. Anyway, let's not hijack this bug report.
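For example, following that hyphen-to-underscore convention, a client config might look like this sketch (the four buffer key names are illustrative, derived from the convention above; 524288 bytes is the 512 KB starting point suggested here):

```json
{
    "server": "203.0.113.1",
    "server_port": 8388,
    "password": "example-password",
    "method": "chacha20-ietf-poly1305",
    "outbound_send_buffer_size": 524288,
    "outbound_recv_buffer_size": 524288,
    "inbound_send_buffer_size": 524288,
    "inbound_recv_buffer_size": 524288
}
```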
* upgrade to tokio v1.0: removed tokio::prelude; upgrade hyper to v0.14, tokio-rustls to v0.22; still working on migrating trust-dns-* and tokio-native-tls (ref #354)
* tokio v1.0 removed with_poll (fix #355, ref #354)
* removed CTRL-BREAK signal handler (ref #354)
* fixes compilation error, add missing return (fixes #355)
* allow setting SO_SNDBUF and SO_RCVBUF for sockets (ref #352)
* completely removed unix socket based DNS resolving (ref shadowsocks/shadowsocks-android#2622)
* fix build issue on Windows
* fixed uds poll_write loop, fixed udp outbound loopback check (ref #355)
* disable default trust-dns resolver for android, macos, ios; this commit also optimized resolve() logging with elapsed time and updated tokio-native-tls
* local-dns removed from default features
* fix rustc version with rust-toolchain
* Bump bytes from 0.6.0 to 1.0.0
* add dependabot badge
* indirectly depend on trust_dns_proto via trust_dns_resolver
* auto reconnect if udp sendto failed
* recreate proxied socket if recv() returns failure
* increase score precision to 0.0001
* example of log4rs configuration
* PingBalancer instance shouldn't kill probing task when dropping (probing task should be controlled by the internal shared state)
* switch to trust-dns main for latest tokio 1.0 support
Please test if the latest master works for you.
OK, I tested ss-local in redir mode talking to a chained ss-server with iperf (TCP and UDP, forward and reverse, single and multiple segments) and all seems to be OK. @zonyitoo
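For reference, a test matrix of that kind could look like the following with iperf3 (the server address is a placeholder, and reading "segments" as parallel streams is an assumption):

```sh
iperf3 -c 203.0.113.1           # TCP, forward, single stream
iperf3 -c 203.0.113.1 -R        # TCP, reverse direction
iperf3 -c 203.0.113.1 -P 8      # TCP, 8 parallel streams
iperf3 -c 203.0.113.1 -u -b 0   # UDP, unlimited target bandwidth
```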
Then problem solved. Will be released with v1.9.0.
@zonyitoo You misunderstood me; I mean that the code works, not that this bug is solved. This issue is still there and the command line arguments are still needed. Control over the TCP send/receive buffers, incoming and outgoing, is still needed to get optimal performance.
Does the code included in the current master solve your problem?
OK, I didn't notice you already included the new options in master, sorry. I will retest with them now. Are they supported in the config JSON, or only on the command line? @zonyitoo
Currently only on the command line.
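An invocation might look like the following sketch (flag names are assumed from the naming pattern discussed in this thread, not copied from the binary's help text):

```sh
# Illustrative; exact flag names may differ from the released binary.
sslocal -c config.json \
    --outbound-send-buffer-size 524288 \
    --outbound-recv-buffer-size 524288
```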
There's a typo in the help text: redv instead of recv.
Oh. That is a bug. I tested it with C-V (copy-paste) and never noticed that :(
The inbound parameter has the same typo. And it accepts the wrong name; if you write the correct name, it rejects execution.
@zonyitoo Thank you so much, sir. The buffer controls are working and I was able to achieve the expected level of performance using chained TPROXY shadowsocks servers.
Hello,
For advanced use case scenarios, such as using shadowsocks as TPROXY in redir mode to bridge servers together, a bufferbloat effect occurs that reduces performance due to incorrect assumptions made by the OS when autotuning buffers. To fix this problem, manual control over the buffers is needed.
In addition, some users may like to have control over this as well for ss-local and ss-server, for manual TCP tuning.
I implemented this for shadowsocks-libev. Is it possible to implement this for shadowsocks-rust?
See shadowsocks/shadowsocks-libev#2781
@zonyitoo