Show that when benchmarking properly there's no difference #1
Conversation
Delta: minor (0.01-ish in all configs)
Delta: -0.2 in Firefox for both Bluebird and native
It fluctuates wildly, which is not a good sign for the reliability of this benchmark
This is not really making a difference
With proper benchmarking there's no difference between the two.
Note that I think benchmark.js is helpful here because, if I understand correctly, it runs until the results start settling down. So the fact that we have e.g. 57 runs sampled vs. 11 runs sampled just means that the JIT needed a few more runs to optimize the promise code than it did the sync code.
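The settle-until-stable idea can be sketched like this. This is a hypothetical Node harness illustrating the principle, not benchmark.js's actual implementation; `sampleUntilStable` and its parameters are made up for the example:

```javascript
// Keep sampling a function until the relative margin of error is small,
// instead of running a fixed number of iterations. Hypothetical sketch of
// the "run until results settle" idea — not how benchmark.js is written.
function sampleUntilStable(fn, { maxSamples = 100, targetRme = 0.05 } = {}) {
  const samples = [];
  while (samples.length < maxSamples) {
    const start = process.hrtime.bigint();
    fn();
    samples.push(Number(process.hrtime.bigint() - start) / 1e6); // ms
    if (samples.length >= 5) {
      const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
      const variance =
        samples.reduce((a, b) => a + (b - mean) ** 2, 0) /
        (samples.length - 1);
      const sem = Math.sqrt(variance / samples.length);
      // Stop once the ~95% confidence interval is narrow relative to the mean.
      if ((1.96 * sem) / mean < targetRme) break;
    }
  }
  return samples;
}
```

A warmed-up, stable function settles after a handful of samples; jittery code (like the early promise runs here) keeps sampling until the noise averages out, which is why the sample counts differ.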
Why is the sync test async? How much does the result change when you change it to sync?
I made the test async so as to give each of them an equal footing. This is supposed to be a test of how long it takes to do 10 reads in each paradigm, not of how many 10-reads you can do in a second. Using different benchmark settings for each result seems like it would give an unfair advantage to the sync one, because e.g. if benchmark.js uses setTimeout(,0) in its deferred implementation, that would obviously dominate the results. (Maybe that is what we are seeing here, given that they are so close...)
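That suspected scheduling overhead is easy to see in isolation. A rough Node sketch (`timeHops` is a made-up helper; the absolute numbers are machine-dependent) comparing the per-hop cost of a timer callback with a microtask:

```javascript
// Measure the fixed per-iteration cost of an async completion mechanism by
// chaining N back-to-back hops through it. If a deferred benchmark completes
// each iteration via setTimeout(,0), this cost is paid on every sample and
// can dwarf the work being measured.
function timeHops(schedule, n) {
  return new Promise((resolve) => {
    const start = process.hrtime.bigint();
    let i = 0;
    function next() {
      if (++i >= n) {
        resolve(Number(process.hrtime.bigint() - start) / n); // ns per hop
      } else {
        schedule(next);
      }
    }
    schedule(next);
  });
}

async function main() {
  const timerNs = await timeHops((f) => setTimeout(f, 0), 200);
  const microNs = await timeHops((f) => Promise.resolve().then(f), 200);
  console.log(`setTimeout(,0) hop: ~${timerNs.toFixed(0)} ns`);
  console.log(`microtask hop:      ~${microNs.toFixed(0)} ns`);
}

main();
```

In Node, `setTimeout(,0)` is clamped to roughly a millisecond per hop, while a microtask hop is orders of magnitude cheaper, so which mechanism the harness uses to signal completion really can dominate a 10-read workload.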
If we are suspicious of benchmark.js we can try evolving https://github.com/domenic/streams-promise-read/tree/no-benchmarkjs. It currently shows a ~3x slowdown for promises in Chrome, ~18x in Firefox, but I still suspect we haven't properly warmed up the JIT.
Is it possible to benchmark inside a fresh web worker? This would be similar to how the Node or Bluebird benchmarks do it. Fresh process -> warmup -> do benchmark -> shutdown -> rinse and repeat for the next benchmark.
I haven't looked at the code yet, but this promise result looks completely bogus to me:
I think the async callback at the end of the sync test is dominating the results with these low numbers of chunks. If you increase the number of chunks to 1000+ you can see the sync case pulling away from promises.
I guess we can debate the reality of being able to read that many chunks synchronously. (Or the wisdom of doing so, given frame deadlines, etc.) I do want to re-run these tests on some mobile devices, though.
The results in the previous comment were for Bluebird. For default Firefox promises... they are much worse:
Merge @domenic's changes to use benchmark.js, prevent the sync loop from being optimized out, etc.
FYI, I opened a bug for Firefox's poor performance here: https://bugzilla.mozilla.org/show_bug.cgi?id=1152875
I can reproduce this in FF release, but I can't explain the insane number of results we get with the promises here.
I was trying to see what micro-optimizations I could make (first few commits). The results were really inconsistent as I reloaded my browser. I added the ratio statistic (fourth commit) to confirm this. It indeed jumped between 0.8x and 100x pretty much at random. So I thought it was time to bring out the big guns and actually use a benchmarking library to get some better statistics.
The fifth commit introduces benchmark.js, and also introduces a check on the .value property (which should avoid JITs optimizing away most of the loop). My results:
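That kind of guard can be sketched as follows (`makeStream` and `drainAndCheck` are hypothetical stand-ins, not the repo's actual code): touching `.value` on every iteration and returning an accumulated result keeps the loop observable, so the JIT cannot prove the body is side-effect free and delete it.

```javascript
// A toy sync stream whose read() returns { value, done } objects.
function makeStream(chunks) {
  let i = 0;
  return {
    read() {
      return i < chunks.length
        ? { value: chunks[i++], done: false }
        : { value: undefined, done: true };
    },
  };
}

// Drain the stream while consuming .value; the returned total is checked
// by the benchmark, which prevents dead-code elimination of the loop.
function drainAndCheck(stream) {
  let total = 0;
  for (;;) {
    const { value, done } = stream.read();
    if (done) break;
    total += value; // using .value keeps the loop's work observable
  }
  return total;
}

console.log(drainAndCheck(makeStream([1, 2, 3]))); // → 6
```

Without the accumulation (or some equivalent check), an optimizing JIT is free to hoist or drop most of the loop, which would make the "fast" side of the benchmark meaningless.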
So, if anything, promises are faster.
Let me know if you see anything obviously stupid... /cc @petkaantonov