Slow execution (in general) #2076
Comments
How is it fuzzed? Can it be profiled?
Running one of the sample tests,
However, if I take that test and change
What geth, besu, nethermind, and now eels have implemented is a batch mode, where the client is fed paths via
To demonstrate what I mean: if you had that, I could rewrite the for-loop above as:
It was initially implemented to get around virtual-machine bootstrap times (nethermind / besu), but with kzg init, I added it to geth too.
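The batch mode described above can be sketched roughly like this. This is an illustrative Go sketch, not the actual geth/goevmlab code; `runBatch` and `execTest` are hypothetical names standing in for the client's real test runner:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// runBatch executes one test per input line inside a single long-lived
// process, so bootstrap costs (VM startup, kzg trusted-setup loading)
// are paid once instead of once per test. It returns the number of
// tests executed.
func runBatch(input string, execTest func(path string) error) (int, error) {
	sc := bufio.NewScanner(strings.NewReader(input))
	n := 0
	for sc.Scan() {
		path := strings.TrimSpace(sc.Text())
		if path == "" {
			continue // skip blank lines
		}
		if err := execTest(path); err != nil {
			fmt.Fprintf(os.Stderr, "test %s failed: %v\n", path, err)
		}
		n++
	}
	return n, sc.Err()
}

func main() {
	// In a real client the paths would arrive on os.Stdin, with the
	// fuzzer piping one state-test filename per line into the
	// already-running process.
}
```

The point is that the expensive one-time initialization happens outside the loop, so per-test cost is dominated by actual execution rather than process startup.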
But there's something more to it, I think, because even if I don't use the batch mode in geth, it's still a lot faster:
Of course, batch-mode is faster still:
Can I have both the Shanghai and Cancun test files? I want to see what kind of regression it is and how to improve it.
It's in here: https://github.com/holiman/goevmlab/tree/master/evms/testdata/cases

```
diff 00000936-mixed-1.json 00000936-mixed-1.json.cancun
53c53
<     "Shanghai": [
---
>     "Cancun": [
```
This is still very much happening, btw. Here's a stack trace from when I Ctrl-C'ed an execution,
So, most definitely the loading of the trusted setup.
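The lazy loading mentioned later in the thread (geth deferring the trusted-setup load until first use) is typically done with a once-guarded initializer. This is only an illustrative sketch; the type and loader names are hypothetical, not geth's actual code:

```go
package main

import "sync"

// trustedSetup stands in for the parsed KZG trusted-setup data.
type trustedSetup struct{ points int }

var (
	setupOnce sync.Once
	setup     *trustedSetup
)

// loadTrustedSetup stands in for the expensive parse of the
// trusted-setup file that the stack trace above points at.
func loadTrustedSetup() *trustedSetup {
	return &trustedSetup{points: 4096}
}

// getTrustedSetup defers the load until the first blob-related
// operation, so runs that never touch blobs pay nothing at startup,
// and concurrent callers share a single load.
func getTrustedSetup() *trustedSetup {
	setupOnce.Do(func() { setup = loadTrustedSetup() })
	return setup
}

func main() {}
```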
OK, now I know what the problem is. It turns out that not only does the trusted-setup loading contribute to the slowness; the real culprit is the JSON tracer. If the JSON tracer is disabled, nimbus evm runs twice as fast as geth evm without a tracer. And geth evm appears faster in non-batch mode because geth lazily loads the trusted setup.
test vector
Another area of optimization is replacing stdlib/json with something faster, for both the reader and the writer.
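The tracer cost described above comes from encoding a JSON log line for every executed opcode, so disabling it removes work from the hottest loop in the VM. A minimal sketch of gating that work behind a flag (hypothetical names, not the nimbus or geth tracer API; a real tracer would stream to a writer rather than buffer internally):

```go
package main

import (
	"bytes"
	"encoding/json"
)

// stepLog mirrors a typical EVM structured-log line.
type stepLog struct {
	Pc    uint64 `json:"pc"`
	Op    string `json:"op"`
	Gas   uint64 `json:"gas"`
	Depth int    `json:"depth"`
}

// jsonTracer emits one JSON object per executed opcode. With
// enabled == false, the per-step call returns immediately, which is
// the difference being measured in the comment above.
type jsonTracer struct {
	enabled bool
	buf     bytes.Buffer // collected here for simplicity
}

func (t *jsonTracer) CaptureStep(l stepLog) error {
	if !t.enabled {
		return nil // no marshaling, no allocation, no I/O
	}
	b, err := json.Marshal(l)
	if err != nil {
		return err
	}
	t.buf.Write(b)
	t.buf.WriteByte('\n')
	return nil
}

func (t *jsonTracer) Output() string { return t.buf.String() }

func main() {}
```

Since `CaptureStep` runs once per opcode, even a cheap `json.Marshal` adds up over millions of steps, which is why swapping `encoding/json` for a faster encoder (or skipping tracing entirely) moves the needle.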
Hm, not sure what you mean. As I reported here, by changing
EDIT: Ah, you were talking about geth. OK, never mind!
Earlier, nimbus-eth1 was one of the fastest evms, but over the last few months it has been by far the slowest.
Here are some stats from 90 hours of fuzzing:
All other clients performed over 1M tests; nimbus-eth1 only 600K. The slowest of the other clients was nearly twice as fast as nimbus-eth1.
I suspect that some regression has been introduced (possibly some kzg initialization?) which adds overhead on startup.