optimize path input decoding to avoid allocations #3554
Merged
Problem

The decoding of inputs from paths is a hot operation in my benchmark implementation and has a high allocation rate of `Tuple2` instances.

Solution

- Since `decodedPathInputs` follows the same order and has the same size as `pathInputs` at the end of a successful decoding, use an `Array` to collect the decoded inputs and positionally refer to the original path input via `pathInputs` when folding the results.
- Change `matchPathInner` and `foldDecodedPathInputs` to use index-based iteration.
- Avoid `Vector.headAndTail` calls, since each one produces a `Tuple2` allocation. The JIT should have an easier time eliding this allocation, but I think it's best to avoid it given how hot the method is.

Notes
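To illustrate the idea behind the change, here is a minimal, self-contained sketch of index-based decoding into a pre-sized `Array`. The names (`decodeAll`, `DecodedInput`) are hypothetical and not tapir's actual API; the point is that a `while` loop over indices allocates no `Tuple2` per segment, unlike recursively peeling a `Vector` with `headAndTail`.

```scala
object PathDecodingSketch {
  // Pairs each decoded value with its original raw segment, positionally.
  final case class DecodedInput(raw: String, value: Any)

  // Index-based iteration: one pre-sized Array, no per-segment tuple
  // or intermediate-collection allocations. Fails fast on the first
  // segment that cannot be decoded.
  def decodeAll(
      pathInputs: Vector[String],
      decode: String => Option[Any]
  ): Option[Array[DecodedInput]] = {
    val out = new Array[DecodedInput](pathInputs.length)
    var i = 0
    while (i < pathInputs.length) {
      decode(pathInputs(i)) match {
        case Some(v) =>
          out(i) = DecodedInput(pathInputs(i), v)
          i += 1
        case None =>
          return None
      }
    }
    Some(out)
  }

  def main(args: Array[String]): Unit = {
    val decoded = decodeAll(Vector("42", "7"), s => s.toIntOption)
    println(decoded.map(_.toList.map(_.value)))
  }
}
```

Because `out` has the same size and order as `pathInputs`, the fold over results can recover the original input for index `i` as `pathInputs(i)` without carrying pairs through the loop.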