
shoop slower than scp in optimal bandwidth case #28

Open
strangeman opened this issue Aug 12, 2016 · 6 comments

Comments

@strangeman
Contributor

strangeman commented Aug 12, 2016

Hi. I tested shoop in my environment and found it is about 5x slower than scp.

My environment:

Client:

  • Core i5 with 8 GB RAM
  • Debian 8 with latest updates
  • Rust from rustup 0.5.0 (4be1012 2016-07-30)
  • libsodium 1.0.0-1 from debian repos
  • shoop 0.0.1-prealpha.4

Server:

  • Digital Ocean 512 MB droplet
  • Debian 8 with latest updates
  • Rust from rustup 0.5.0 (4be1012 2016-07-30)
  • libsodium 1.0.0-1 from debian repos
  • shoop 0.0.1-prealpha.4

Network and test file:

SCP:

strangeman@strangebook:~/tmp$ time scp [email protected]:/root/1GB.zip .
1GB.zip                                                  100% 1024MB   4.1MB/s   04:08    

real    4m12.082s
user    0m7.052s
sys 0m10.004s

SHOOP:

strangeman@strangebook:~/tmp$ time shoop [email protected]:/root/1GB.zip 
downloading ./1GB.zip (1024.0MB)
   1024.0M / 1024.0M (100%) [ avg 0.8 MB/s ]
shooped it all up in 20m48s

real    20m51.508s
user    13m48.492s
sys 7m9.124s

Why? Is shoop optimized only for slow and unstable connections?

@mcginty
Owner

mcginty commented Aug 12, 2016

Hey @strangeman! Thanks so much for testing.

If possible, can you build the latest HEAD (remember to do cargo build --release to optimize and get rid of debug info) and let me know if it's still that slow?

There was a memory regression, but the fix is in a forked library, so I can't publish a more recent release on crates.io until the dependency merges the fix.

@mcginty
Owner

mcginty commented Aug 12, 2016

Even without that, though, we still need to add multithreading to make the high-speed case actually as high-speed as it wants to be :).

So basically, yeah, it's not currently optimized for the high-speed, high-reliability case, since I'm stuck in a high-ish-speed, low-reliability case :P. You've inspired me to get on the threading business though... I just replicated your test on two VPS's, and it's making me sad.
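A minimal sketch of the threading idea being discussed, assuming nothing about shoop's internals: one thread reads the file in fixed-size chunks and feeds them through a bounded channel to the sender, so disk reads and network sends overlap instead of alternating. Here the source is an in-memory buffer and the "send" just counts bytes; in shoop the consumer would push each chunk onto the UDT socket. All names are illustrative, not shoop's actual code.

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// Read chunks on one thread, "send" them on another, overlapping the two stages.
fn pipelined_copy(data: &[u8], chunk_size: usize) -> usize {
    // Bounded queue: the reader can run a few chunks ahead of the sender,
    // but backpressure stops it from buffering the whole file.
    let (tx, rx) = sync_channel::<Vec<u8>>(4);
    let owned = data.to_vec();
    let reader = thread::spawn(move || {
        for chunk in owned.chunks(chunk_size) {
            if tx.send(chunk.to_vec()).is_err() {
                break; // receiver hung up
            }
        }
        // tx is dropped here, which closes the channel and ends the rx loop.
    });
    let mut sent = 0;
    for chunk in rx {
        sent += chunk.len(); // stand-in for socket.send(&chunk)
    }
    reader.join().unwrap();
    sent
}

fn main() {
    let data = vec![0u8; 1 << 20]; // 1 MiB stand-in for the file
    let sent = pipelined_copy(&data, 64 * 1024);
    println!("sent {} bytes", sent);
}
```

With a real socket on the consuming side, the bounded channel is what keeps memory flat while still letting reads run ahead of sends.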

@strangeman
Contributor Author

Tested with the HEAD version. It's faster (~1.5-2 MB/s) than the old version (~0.8 MB/s), but still slower than scp (~4 MB/s).
But the HEAD version has another annoying bug: sometimes the transmission freezes for 20-30 seconds (it looks like the server process is crashing).

Anyway, this is a great tool! Hope you'll fix these problems in the future. :)

@mcginty
Owner

mcginty commented Aug 12, 2016

Cool, I think with threading and limiting the progressbar stdout insanity we should be closer to parity on stable connections, which should also be our goal (never regress from TCP).
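One way the "progressbar stdout insanity" could be limited, sketched here as an assumption rather than shoop's actual approach: only redraw when a minimum interval has elapsed, so a fast transfer spends its time sending rather than writing to the terminal on every chunk. `ProgressThrottle` and `should_draw` are hypothetical names.

```rust
use std::time::{Duration, Instant};

// Gate progress-bar redraws so they happen at most once per `min_interval`.
struct ProgressThrottle {
    last_draw: Option<Instant>,
    min_interval: Duration,
}

impl ProgressThrottle {
    fn new(min_interval: Duration) -> Self {
        ProgressThrottle { last_draw: None, min_interval }
    }

    // Returns true when enough time has passed to justify another redraw.
    fn should_draw(&mut self) -> bool {
        let now = Instant::now();
        match self.last_draw {
            Some(t) if now.duration_since(t) < self.min_interval => false,
            _ => {
                self.last_draw = Some(now);
                true
            }
        }
    }
}

fn main() {
    let mut throttle = ProgressThrottle::new(Duration::from_millis(100));
    let mut draws = 0;
    for _ in 0..10_000 {
        if throttle.should_draw() {
            draws += 1; // in shoop this would be the actual progress-bar write
        }
    }
    println!("redrew {} time(s) across 10,000 chunks", draws);
}
```

At ~10 redraws per second the bar still looks live, while the per-chunk cost drops to a clock read.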

@mcginty
Owner

mcginty commented Aug 14, 2016

Added basic asynchronous file I/O (client side only right now) and de-crazied stdout, and I'm still only seeing ~40Mbps from two 100Mbps VPSs, so there's something more to it. I haven't profiled the server-side yet, so that's next :).

I'm really excited about https://aturon.github.io/blog/2016/08/11/futures/.

Given time, I'm also really interested in a better benching framework for shoop, at the very least to refactor enough where we can have a "virtual" perfect UDT connection and have a solid measure on our real max bandwidth.
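The "virtual perfect UDT connection" idea above could be sketched like this: hide the socket behind a trait so a benchmark can swap in a lossless, zero-latency in-memory transport, giving an upper bound on what the rest of the pipeline (reads, framing, progress output) can push. Trait and type names here are hypothetical, not shoop's code.

```rust
use std::time::Instant;

// Anything that can accept outgoing bytes: a UDT socket in production,
// or a perfect in-memory sink in a benchmark.
trait Transport {
    fn send(&mut self, buf: &[u8]);
    fn bytes_sent(&self) -> usize;
}

// A lossless, zero-latency stand-in for a UDT connection.
struct PerfectTransport {
    bytes: usize,
}

impl Transport for PerfectTransport {
    fn send(&mut self, buf: &[u8]) {
        self.bytes += buf.len(); // never blocks, never drops
    }
    fn bytes_sent(&self) -> usize {
        self.bytes
    }
}

// Push `data` through any transport in fixed-size chunks and report MB/s.
fn bench<T: Transport>(transport: &mut T, data: &[u8], chunk_size: usize) -> f64 {
    let start = Instant::now();
    for chunk in data.chunks(chunk_size) {
        transport.send(chunk);
    }
    let secs = start.elapsed().as_secs_f64().max(1e-9);
    transport.bytes_sent() as f64 / 1e6 / secs
}

fn main() {
    let data = vec![0u8; 8 << 20]; // 8 MiB
    let mut t = PerfectTransport { bytes: 0 };
    let rate = bench(&mut t, &data, 64 * 1024);
    println!("pipeline ceiling: {:.0} MB/s over a perfect transport", rate);
}
```

If the number over the perfect transport is far above what real runs achieve, the gap is in the network layer (UDT, congestion control); if it isn't, the bottleneck is in shoop's own pipeline.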

@mcginty mcginty changed the title This is regression on just wrong use case? shoop slower than scp in optimal bandwidth case Sep 4, 2016
@mcginty
Owner

mcginty commented Sep 4, 2016

Been making some gradual speed improvements over the last few alpha releases, but there's still a big gap in the case of VPS-to-VPS transfers where we have low-latency 100 Mbps-1 Gbps connections. The weird thing is, on occasion I'm seeing the same transfer speed, but often it will inexplicably drop from 11 MB/s to ~5 MB/s, making me wonder if this is perhaps a UDT congestion-control thing.

Still more digging to be done.
