Pool lexers to reduce allocations and improve performance #1610
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master    #1610      +/-   ##
==========================================
+ Coverage   74.73%   74.74%    +0.01%
==========================================
  Files         288      288
  Lines       55340    55356       +16
==========================================
+ Hits        41357    41375       +18
+ Misses      12489    12487        -2
  Partials     1494     1494
Cadence Benchstat comparison: this branch compared with the base branch onflow:master, commit f615bea.
Results
Nice optimization! 👌 Just one question.
LGTM.
Description
@simonhf pointed out that execution is allocating a lot of memory, specifically when emitting tokens into the token stream (`append` on a slice). Pre-allocating an array reduces the overhead of `append`. I then also realized that the token stream itself is allocated and not used after parsing, so we can reduce allocations further by introducing an object pool for lexers.
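The pre-allocation point can be sketched as follows. This is a minimal illustration, not Cadence's actual lexer code; `token`, `emitGrowing`, and `emitPreallocated` are hypothetical names, and `testing.AllocsPerRun` is used to count allocations:

```go
package main

import (
	"fmt"
	"testing"
)

// token is a hypothetical stand-in for the real token type.
type token struct{ kind int }

// emitGrowing appends to a nil slice, so append must repeatedly
// allocate a larger backing array and copy as the stream grows.
func emitGrowing(n int) []token {
	var tokens []token
	for i := 0; i < n; i++ {
		tokens = append(tokens, token{kind: i})
	}
	return tokens
}

// emitPreallocated makes one up-front allocation with enough capacity,
// so append never needs to grow the backing array.
func emitPreallocated(n int) []token {
	tokens := make([]token, 0, n)
	for i := 0; i < n; i++ {
		tokens = append(tokens, token{kind: i})
	}
	return tokens
}

func main() {
	grow := testing.AllocsPerRun(100, func() { emitGrowing(1000) })
	pre := testing.AllocsPerRun(100, func() { emitPreallocated(1000) })
	// The growing version allocates several times; the pre-allocated
	// version allocates once.
	fmt.Println(grow > pre)
}
```

In practice the token count isn't known exactly up front, so a capacity estimate (or the reused capacity of a pooled lexer, as below) stands in for `n`.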
This significantly reduces the amount of allocated memory and also improves performance as a side effect.
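The pooling idea can be sketched with Go's `sync.Pool`. This is a hedged sketch, not the actual Cadence implementation; `lexer`, `token`, `lex`, and `release` are hypothetical names, and the capacity of 256 is an arbitrary assumption:

```go
package main

import (
	"fmt"
	"sync"
)

// token and lexer are hypothetical stand-ins for the real types.
type token struct {
	kind  int
	value string
}

type lexer struct {
	tokens []token
}

// lexerPool reuses lexer instances (and their token slices) across
// parses, so the token stream's backing array is allocated once and
// then recycled instead of being garbage after every parse.
var lexerPool = sync.Pool{
	New: func() any {
		return &lexer{
			// Pre-allocate capacity so append rarely has to grow the slice.
			tokens: make([]token, 0, 256),
		}
	},
}

func lex(input string) *lexer {
	l := lexerPool.Get().(*lexer)
	// ... tokenize input, appending to l.tokens; one placeholder token here ...
	l.tokens = append(l.tokens, token{kind: 1, value: input})
	return l
}

// release returns the lexer to the pool once parsing no longer needs
// the token stream.
func release(l *lexer) {
	l.tokens = l.tokens[:0] // keep capacity, drop contents
	lexerPool.Put(l)
}

func main() {
	l := lex("let x = 1")
	fmt.Println(len(l.tokens), cap(l.tokens) >= 256) // prints "1 true"
	release(l)
}
```

The key observation from the description applies here: because the token stream is not used after parsing, it is safe to truncate and return it to the pool, and the next parse reuses the already-grown backing array.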
Also, improve the benchmarking workflow: compare against the `master` branch.