[Not a real benchmark] 0.4.17 vs 0.4.18: File transfer over LAN #5715
Comments
/cc @Stebalien @magik6k
It would be nice to get a sense for many small files versus a single large file. Bandwidth overhead (that is, ask the OS how much traffic each node is actually sending) would also be great. Note: in these tests, you should probably use:
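A minimal sketch of how the two corpora and the OS-side traffic measurement could look, assuming Linux VMs; the interface name `enp0s8` and the corpus sizes are placeholders, not taken from the thread:

```sh
# Two corpora of equal total size: one 512 MiB file vs. 2048 x 256 KiB files.
dd if=/dev/urandom of=large.bin bs=1M count=512
mkdir -p small
for i in $(seq 1 2048); do
  dd if=/dev/urandom of=small/file-$i bs=64k count=4 2>/dev/null
done

# Read the kernel's interface counters on both VMs before and after each
# transfer; the difference is the traffic each node actually sent/received.
cat /sys/class/net/enp0s8/statistics/tx_bytes
cat /sys/class/net/enp0s8/statistics/rx_bytes
```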
You mean to also run
Ah, no. I mean
Thanks! (We'll probably see more when we bump the wantlist size, but we've been reluctant to do that without fixing sessions first.)
@schomatis where are you running your VMs? And how are you provisioning and driving them? Your timing is pretty stable, which is neat.

Hannah and I have been trying out different styles of benchmark tests (iptb, driving IpfsNodes in Go, and Jeromy's pure bitswap exchange tests) as we investigate bitswap sessions; they all have their merits, but they all test different things. We were thinking of trying roughly what you did here, and maybe using Fargate (or something) to deploy a dozen IPFS containers and point them at each other. I haven't checked on the cost or what we'd use to control them.

I'm curious to try just running a well-connected IPFS instance with experimental changes and enough logging output to evaluate them IRL, and then just making it do a lot of work over and over, and comparing that to the same workload on a stock IPFS node. Some things like bitswap duplicate blocks would be easy to measure. Transfer speed improvements would be all over the map, but maybe over a few days we'd get useful information, especially if they're getting well-provided data repetitively.
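For the duplicate-block measurement, the stock CLI already exposes counters; a minimal check on the fetching node (assuming a 0.4.x binary) would be:

```sh
# Compare output before and after the transfer; "dup blocks received" and
# "dup data received" quantify wasted traffic independently of timing noise.
ipfs bitswap stat
```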
On my local machine, they are just two VMs connected to an internal network (and nothing else) where the nodes find each other fairly quickly. But your proposed solution is much better; this is just an ad hoc test suggested by Steven to get a coarse sense of some of the performance improvements of the latest release, and I'm doing it manually, which is really time consuming, so it doesn't scale at all.
I think that is a key point in the analysis process: the kind of black-box testing we're doing at the moment is valuable, but it has a lot of limitations when it comes to really understanding what's going on. Personally, just placing a lot of
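As a lighter-weight alternative to patching the code, go-ipfs already lets you turn up per-subsystem logging at runtime; a sketch, assuming a 0.4.x daemon (subsystem names come from `ipfs log ls` and can differ between versions):

```sh
# Raise verbosity for the subsystems of interest on a running daemon...
ipfs log level bitswap debug
ipfs log level dht debug

# ...or start the daemon with everything at debug level.
IPFS_LOGGING=debug ipfs daemon
```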
I'm going to close this since the tests I'm doing locally are not conclusive enough to assert that any real performance improvement has been achieved.
(@Stebalien If you can think of a way to make the
@schomatis unfortunately, no.
So, I ended up stabilizing

I'm curious if other local/virtualized tests like
For our iptb sharness test we actually didn't pay much attention to timing; we were just looking for bitswap duplicate blocks, so we didn't get a feel for what was limiting our transfer speeds. I would think that for large blocks bitswap doesn't introduce much overhead (if we ignore duplicates), but for lots of small blocks it might start to get significant.
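To put numbers on "lots of small blocks", the block count and layout behind each test hash can be inspected directly; a sketch, with `<root-hash>` standing in for the hash being transferred:

```sh
# Number of unique blocks the DAG resolves to.
ipfs refs -r --unique <root-hash> | wc -l

# Per-node stats: NumLinks, BlockSize, CumulativeSize, etc.
ipfs object stat <root-hash>
```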
Two Ubuntu 16.04 64-bit VMs connected to an isolated internal network, with their `.ipfs` repo mounted on a `tmpfs`: the "file server" VM adds the files and runs the daemon, and the "client" fetches them (see the sketch below).
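A sketch of the general shape of such a test; the `testdata` directory name and the absence of extra flags are assumptions, not the exact commands used:

```sh
# "file server" VM: add the data, then serve it.
ipfs init
ipfs add -r ./testdata      # note the root hash printed on the last line
ipfs daemon &               # wait for "Daemon is ready"

# "client" VM: on the isolated network the peers find each other via
# local discovery (mDNS), so no explicit `ipfs swarm connect` is needed.
ipfs init
ipfs daemon &               # wait for "Daemon is ready"
time ipfs get <root-hash>   # the wall clock time being compared
```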
I'm getting a wall clock time of 8.5±0.2 seconds for both `v0.4.17` and `v0.4.18`. (I've added the files with `v0.4.18`; I didn't bother to recreate the repo with `v0.4.17` since both report a repo version of `fs-repo@7`.)

This is a very basic test, but I wanted to get some kind of quantification of the performance improvements between versions, as suggested by @Stebalien. Any ideas on variations to the test?