Added benchmarks for full program execution #560
Codecov Report

```
@@            Coverage Diff            @@
##           master     #560    +/-   ##
=========================================
  Coverage   68.19%   68.19%
=========================================
  Files         172      172
  Lines       10573    10573
=========================================
  Hits         7210     7210
  Misses       3363     3363
```

Continue to review the full report at Codecov.
Is it potentially also worth adding some more examples of 'less abstract' JavaScript?
Benchmark for 84a457a
What do you mean? If you find interesting stuff to benchmark, feel free to add it :)
I'll see if I can find some good examples.
This adds benchmarks for full program execution.
Sometimes we have benchmarks that tell us things like "the parser is now 50% slower", but what does that mean for the whole program execution? Maybe it's just a 1% speed reduction.
Also, we have no benchmarks to check improvements if, for example, we create the realm in a new thread in parallel with lexing + parsing. This should reduce the full program execution time at least a bit, but there is currently no way to measure it.
With this PR, all execution benchmarks get duplicated using our `exec()` function, which creates a realm, lexes and parses the source, and finally executes everything. This should give us insight into how each change affects the full workflow.

I also took the opportunity to remove some unneeded black boxes from old benchmarks. They were only preventing optimizations in pre-benchmark code, so they were not useful.
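To illustrate the idea, here is a minimal sketch of what a full-pipeline benchmark measures. The `lex`, `parse`, and `execute` functions below are stand-ins I made up for this example (the real implementations live in Boa's lexer, parser, and interpreter, and the real benchmarks use Criterion); the point is that the whole realm-creation + lexing + parsing + execution path is timed as one unit, with `black_box` preventing the compiler from optimizing the work away:

```rust
use std::hint::black_box;
use std::time::{Duration, Instant};

// Hypothetical stand-ins for the engine's pipeline stages.
fn lex(src: &str) -> Vec<char> {
    src.chars().collect()
}

fn parse(tokens: Vec<char>) -> String {
    tokens.into_iter().collect()
}

fn execute(ast: String) -> usize {
    // Placeholder for interpreting the AST inside a realm.
    ast.len()
}

// Mirrors the shape of an `exec()`-style benchmark: the entire
// lex -> parse -> execute workflow is measured together, rather
// than timing each stage in isolation.
fn bench_full_execution(src: &str, iterations: u32) -> Duration {
    let start = Instant::now();
    for _ in 0..iterations {
        let tokens = lex(black_box(src));
        let ast = parse(tokens);
        black_box(execute(ast));
    }
    start.elapsed()
}

fn main() {
    let elapsed = bench_full_execution("1 + 1", 1_000);
    println!("1000 full-pipeline iterations took {:?}", elapsed);
}
```

With a harness like this, a change that makes one stage 50% slower shows up proportionally to that stage's share of the whole workflow, which is exactly the signal the per-stage benchmarks cannot provide.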
Ideally, this should land before #486/#559, so we can see the impact of the changes in those PRs.