Performance regression in v4 #598
Thanks for tracking this down and for the detailed comment, David. The promisification of the VM is definitely still half-baked; I didn't know [...]. Regarding TS's target, maybe @alcuadrado or @krzkaczor can chime in; they have more experience on that front.
Hey @davidmurdoch, thanks for taking the time to do this. I know that profiling the VM can be pretty challenging, so I really appreciate it. I started collaborating on this library right after the TS migration, so I wasn't aware of the performance regression. But what you described matches the performance characteristics I've been observing lately, so I spent some time digging deeper. Here are my findings.

The first thing I did was patching [...]. Then, I patched [...]. Then, I created a branch of the VM and started optimizing those calls.

To measure the impact of the changes, I set up the OpenZeppelin Contracts tests, at version 52c30edab8e872949a, and ran the full test suite after each change. The initial run, without any modification, took 10min.

I cached the [...]. Then, on top of that, I reduced the amount of [...]. Finally, I recompiled the project with both optimizations and TS's [...].

I think the three optimizations are worth implementing, and I'll open a PR for the first two right now. The third one requires a little more consideration about how to implement it. We don't support versions of Node < 8, so we are good on that front, but how will browser support be handled? Publishing an ES2017 package may break some builds, as all of the VM's versions so far have been ES5. I suspect that most people are running bundlers that support ES2017 and are targeting modern browsers, so nothing will break. @yann300 @Aniket-Engg, how does/can Remix handle this?

If we decide that the possibility of a breaking change is not something we want to deal with, I think that we should publish a package with two builds.
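For reference, the ES2017 `target` experiment described above boils down to a single compiler option; a minimal sketch of the relevant `tsconfig.json` fragment (the project's other compiler options are omitted, and `tsconfig.json` permits comments):

```json
{
  "compilerOptions": {
    // With target >= ES2017, tsc emits native async/await
    // instead of the __awaiter/generator helper.
    "target": "ES2017",
    "module": "commonjs"
  }
}
```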
Hi @davidmurdoch, I assume that you haven't released on [...].
We're actually planning to release a tagged version ([...]).

As for releasing to stable: yes, this would be a blocker (especially since we recently shipped a version of ganache-core with its own performance regressions and are working on solving those now -- I don't want to compound these perf regressions :-) ). After releasing a tagged ganache-core/cli with v4.1.0, I'll test the performance of #600 against Ganache's tests to confirm that it solves the majority of the regression; if it does, I don't see any other reason to delay the release to stable any further!
I forgot to mention that publishing packages with multiple TS/Babel builds is a very standard practice. For example, many projects release their packages with both CJS and ESM builds.
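As a sketch of how such dual builds are commonly wired up in `package.json` (the field values and paths here are hypothetical, not the ones this project chose):

```json
{
  "main": "dist/index.js",
  "module": "dist.es2017/index.js",
  "types": "dist/index.d.ts"
}
```

Bundlers that understand the `module` field pick up the modern build, while Node and older toolchains fall back to the ES5 CommonJS entry point in `main`.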
Amazing work, @alcuadrado! I just merged #600.
This sounds like a good plan to me. I don't see any side effects from having multiple builds, and no reason why this should be delayed.
@s1na Great! 😄 As always, I can't do any substantial work myself until February, just give my two cents, which makes it even easier for me to support your great plans!! 😛 😛 😛 Do you think this can be realized quickly enough that we can also put it in the next release? Or should we just do one in between now, with #600 merged?
I created #603, which prepares the tooling to support releasing both builds. I think it's complete, but I haven't been involved in the release process of any of the libraries, so I may be missing something.
Amazing work, @alcuadrado!
@holgerd77 Shipped in https://github.com/trufflesuite/ganache-cli/releases/tag/v6.8.0-istanbul.0 and https://github.com/trufflesuite/ganache-core/releases/tag/v2.9.0-istanbul.0 as tagged releases on npm ([...]).

Thanks everyone for getting on these fixes so quickly!
I initially tracked the majority of the slowdown to this commit: 4678325

My initial findings pointed me to: 4678325#diff-452d330f2397c53c4a10771e89e595b8R254-R256

But that code has since been refactored. The issue here was that the `vm._emit` function was getting "promisified" individually for every single opcode run. On Node 8 on my machine, I saw a 40% performance improvement by memoizing `_emit` in the constructor via `this._emit = promisify(vm.emit.bind(vm));`.

The refactored code seems to have moved the promisification to `VM` itself, but it still does it for every individual `emit` call (https://github.com/ethereumjs/ethereumjs-vm/blob/195bba7645151f975020bebe9f8ef146cc2a1680/lib/index.ts#L193). There are other callback functions getting the `promisify` treatment on every call (`getBlockHash`, `applyTransactions`, `applyBlock`, probably more?). We should probably memoize all `promisify` calls where reasonable and appropriate.

Another potential performance penalty (I haven't measured it myself) is that async/await gets transpiled into TypeScript's `__awaiter` helper, which uses `Promise`'s `then` internally. `Promise#then()` has been reported to be slower than native `await` (https://v8.dev/blog/fast-async and https://mathiasbynens.be/notes/async-stack-traces). `async`/`await` is technically an ES2017 feature, so you'll need to update the TS `target` to use it. Updating may have other side effects that cause incompatibilities in some browsers or Node versions; I haven't checked. Maybe you can configure your linter to disallow most ES2017 features except async/await?
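To illustrate the memoization fix described above: `promisify()` allocates a fresh wrapper function on every call, so wrapping once in the constructor avoids that per-opcode cost. This is only a sketch; `MiniVM` and its members are illustrative stand-ins, not the actual ethereumjs-vm API.

```typescript
import { promisify } from "util";

// Illustrative stand-in for the VM (hypothetical class, not the real API).
class MiniVM {
  private readonly _emit: (event: string) => Promise<void>;

  constructor() {
    // Promisify once here, instead of on every opcode step.
    this._emit = promisify(this.emit.bind(this));
  }

  // Classic Node-style callback API, like the VM had before the refactor.
  emit(event: string, cb: (err: Error | null) => void): void {
    setImmediate(() => cb(null));
  }

  async step(event: string): Promise<void> {
    // Reuses the cached wrapper; no per-call promisify() allocation.
    await this._emit(event);
  }
}

// promisify() builds a new wrapper function on each call, which is the
// repeated allocation being avoided:
const f = (cb: (err: null) => void) => cb(null);
console.log(promisify(f) === promisify(f)); // false: each call allocates a new wrapper
```

The same pattern applies to `getBlockHash`, `applyTransactions`, and the other callback-style functions: wrap each of them once at construction time and reuse the wrapper.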