Refactor opcodes #664
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##           master     #664      +/-   ##
==========================================
+ Coverage   91.00%   91.36%   +0.35%
==========================================
  Files          47       44       -3
  Lines        3046     2778     -268
  Branches      501      433      -68
==========================================
- Hits         2772     2538     -234
+ Misses        157      140      -17
+ Partials      117      100      -17
```
Continue to review full report at Codecov.
@rumkin Apologies for the delay reviewing here, I'll take a look at this today. For background, in #660 you mention that:

Is this PR meant to address a specific issue with that project, or are you mostly interested in performance optimization here?
@cgewecke I'm not trying to hurry you. I closed the original PR and created a new one (to relocate branches), and added you as a reviewer just to notify you about this change.

This PR doesn't affect performance significantly; it's more like a first step toward further optimizations.
Are you able to elaborate a little on this?
@cgewecke This PR obviously reduces pressure on memory and allows V8 to optimize the code. But, as I wrote earlier, to preserve backward compatibility with
I like the idea of doing this kind of change. I think as long as they are kept this focused, they should be manageable. This project is full of code that makes excessive allocations, even in hot paths. For example, we already saw huge performance improvements by caching some

Not all the PRs would bring dramatic changes by themselves. The challenge is to manage expectations when the change is not so big, or when performance comparisons are inconclusive. For instance, I tried multiple things like this and dropped them when building Buidler EVM, as they had almost no impact. Now I regret that.

I think many times doing these changes will still be worth the effort, just not individually. But in conjunction, they can dramatically change the performance of this project.
Agree, and this PR LGTM as well. In principle it could speed things up a bit.

As an aside, the question of measuring performance has come up a few times in recent months here, and I wonder how it might be done. Perhaps there's a way to record the VM inputs of a long Solidity test suite and replay them through a BuidlerEVM-like fixture. Ethereumjs-vm doesn't really npm-install from git because it's a transpiled product, so it would be challenging to swap the changes of a specific branch here into (for example) a clone of Zeppelin's test suite as a benchmark.
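To illustrate the record-and-replay idea, here is a hedged sketch; every name in it (`RecordedCall`, `record`, `replay`) is hypothetical rather than an existing fixture or API:

```typescript
// Hypothetical record/replay harness for benchmarking: capture VM inputs while a
// long Solidity test suite runs, then replay them against the branch under test.
import { writeFileSync, readFileSync } from 'fs'

interface RecordedCall {
  method: 'runTx' | 'runCode' // assumed VM entry points
  input: string               // serialized arguments (JSON/hex), produced by the recorder
}

const recorded: RecordedCall[] = []

// Called from a wrapper around the VM while the reference test suite executes.
function record(call: RecordedCall): void {
  recorded.push(call)
}

function saveRecording(path: string): void {
  writeFileSync(path, JSON.stringify(recorded))
}

// Replays the captured calls through a caller-supplied runner and reports wall-clock time.
async function replay(path: string, run: (call: RecordedCall) => Promise<void>): Promise<number> {
  const calls: RecordedCall[] = JSON.parse(readFileSync(path, 'utf8'))
  const start = Date.now()
  for (const call of calls) {
    await run(call)
  }
  return Date.now() - start
}
```

Comparing replay times for `master` and a feature branch would sidestep the git-install problem, since each branch only needs to be built locally.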
Side note: the monorepo PR #666 merge is imminent; please don't do any other merges right now.
@cgewecke, about benchmarks and performance: I can't say much about how to apply this to this repo.
AFAIK Mocha supports plugins, and it is possible to rewrite module paths; this is how code coverage works.

About @alcuadrado's promise enhancement: why not rewrite the state, trie, and other parts to use async functions instead of callbacks? Is it a backward-compatibility issue? It should improve performance and simplify the code itself. Also, JS engines generate proper stack traces for async functions.
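As an illustration of the callback-to-async suggestion (the trie interfaces below are simplified stand-ins, not the actual merkle-patricia-tree API):

```typescript
// Callback style: every dependent read adds another level of nesting,
// and errors must be threaded through by hand.
interface CallbackTrie {
  get(key: Buffer, cb: (err: Error | null, value: Buffer | null) => void): void
}

function getAccountCb(
  trie: CallbackTrie,
  addr: Buffer,
  cb: (err: Error | null, value?: Buffer | null) => void,
): void {
  trie.get(addr, (err, value) => {
    if (err) return cb(err)
    cb(null, value)
  })
}

// Async/await style: flat control flow, and V8 produces proper stack traces
// across the await points.
interface AsyncTrie {
  get(key: Buffer): Promise<Buffer | null>
}

async function getAccount(trie: AsyncTrie, addr: Buffer): Promise<Buffer | null> {
  return trie.get(addr)
}
```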
@rumkin A trie rewrite to use async functions is in the works (ethereumjs/merkle-patricia-tree#100); if you want to do a code review there, that would be helpful.
@holgerd77 I will look into this.
This LGTM!
I've rewritten the Opcode creation logic, which simplified the Interpreter and made the Opcode object itself easier to use and more informative.

The Opcode object now carries the fields `code` and `fullName`. The class field `fullName` is created for opcodes like PUSH, JUMP, etc., which have similar names, so the name is calculated only once. Classes are also what the V8 optimizer loves, enabling faster function optimization. I protect each instance from accidental changes with an `Object.freeze()` call, so it can be passed around without creating a duplicate. The interface is defined in `lib/evm/opcodes.ts`.
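A minimal sketch of a frozen Opcode class along these lines; `code` and `fullName` come from the description above, while the other field names and the suffix arithmetic are assumptions rather than the PR's actual definition:

```typescript
// Sketch of a frozen Opcode class; only `code` and `fullName` are taken from the
// PR description, everything else is an assumption for illustration.
export class Opcode {
  readonly code: number     // numeric opcode, e.g. 0x60
  readonly name: string     // base mnemonic, e.g. 'PUSH'
  readonly fullName: string // expanded mnemonic, e.g. 'PUSH1', computed once
  readonly fee: number      // base gas fee (assumed field)

  constructor(code: number, name: string, fee: number) {
    this.code = code
    this.name = name
    this.fee = fee
    // PUSH/DUP/SWAP/LOG share a mnemonic and differ only by a numeric suffix;
    // the full name is derived a single time here instead of on every VM step.
    this.fullName = getFullName(code, name)
    // Freeze the instance so it can be shared safely without defensive copies.
    Object.freeze(this)
  }
}

// Assumed helper mirroring the naming behaviour described above.
function getFullName(code: number, name: string): string {
  switch (name) {
    case 'PUSH':
      return name + (code - 0x5f).toString()
    case 'DUP':
      return name + (code - 0x7f).toString()
    case 'SWAP':
      return name + (code - 0x8f).toString()
    case 'LOG':
      return name + (code - 0xa0).toString()
    default:
      return name
  }
}
```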
This simplified `Interpreter#lookupOpInfo()` and removed the recalculation and recreation of opcode objects. Since the opcode set is always the same for a given hardfork, there is no reason to recreate it at runtime on each VM step.
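A hedged sketch of that caching, reusing the `Opcode` class from the previous snippet; `buildOpcodeTable` and the map-based lookup are assumptions, not the PR's actual implementation:

```typescript
// Build the opcode table once per hardfork and reuse it, instead of recreating
// opcode objects on every VM step. `buildOpcodeTable` is an assumed builder.
declare function buildOpcodeTable(hardfork: string): ReadonlyMap<number, Opcode>

const opcodeTables: Map<string, ReadonlyMap<number, Opcode>> = new Map()

function getOpcodesFor(hardfork: string): ReadonlyMap<number, Opcode> {
  let table = opcodeTables.get(hardfork)
  if (table === undefined) {
    table = buildOpcodeTable(hardfork) // runs once per hardfork
    opcodeTables.set(hardfork, table)
  }
  return table
}

// A lookupOpInfo-style helper then reduces to a single, allocation-free map read.
function lookupOpInfo(table: ReadonlyMap<number, Opcode>, op: number): Opcode | undefined {
  return table.get(op)
}
```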
The `Interpreter#_runStepHook()` method still contains code that should be removed in the next major release and replaced with a regular Opcode instance; that will reveal all the benefits of this PR. Now it is:
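(a speculative stand-in, not the PR's actual snippet; the property names `name` and `fee` are assumptions)

```typescript
// Speculative sketch, not the PR's actual code: copy fields from the shared,
// frozen Opcode into a plain object so existing 'step' listeners that expect
// the old shape keep working until the next major release.
function legacyOpcodeInfo(op: Opcode): { name: string; fee: number } {
  return { name: op.fullName, fee: op.fee }
}
```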