
optimize path input decoding to avoid allocations #3554

Merged: 1 commit, Mar 6, 2024

Conversation

fwbrasil
Contributor

Problem

The decoding of inputs from paths is a hot operation in my benchmark implementation and has a high allocation rate of Tuple2 instances.
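The allocation pattern in question can be sketched roughly as follows. This is a hypothetical reconstruction, not tapir's actual internals: `PathInput`, `decodeAll`, and the headAndTail-style destructuring are illustrative stand-ins for the real path-matching code.

```scala
// Hypothetical sketch of the pre-optimization shape: each decoded segment
// is paired with its input in a Tuple2, and a headAndTail-style
// destructuring allocates another Tuple2 on every iteration.
object TupleHeavyDecoding {
  final case class PathInput(name: String)

  // one (PathInput, decodedValue) Tuple2 is allocated per segment;
  // assumes pathInputs and segments have the same length
  def decodeAll(
      pathInputs: Vector[PathInput],
      segments: Vector[String]
  ): Vector[(PathInput, Any)] = {
    var remaining = segments
    pathInputs.map { in =>
      // (head, tail) is itself a Tuple2 allocation on each step
      val (head, tail) = (remaining.head, remaining.tail)
      remaining = tail
      (in, head) // and a second Tuple2 pairing input with value
    }
  }
}
```

On a hot path, these per-element pairs dominate the allocation profile even though each one is short-lived.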

Solution

  • Since decodedPathInputs follows the same order and has the same size as pathInputs at the end of a successful decoding, use an Array to collect the decoded inputs and positionally refer to the original path input using pathInputs when folding the results.
  • Modify both matchPathInner and foldDecodedPathInputs to use index-based iteration.
  • Avoid Vector.headAndTail calls, since each call produces a Tuple2 allocation. The JIT may be able to elide this allocation, but I think it's best to avoid it given how hot the method is.
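The tuple-free, index-based shape the bullets above describe might look like the following sketch. Names such as `PathInput`, `DecodeResult`, and `matchPath` are placeholders; this assumes tapir's real `matchPathInner`/`foldDecodedPathInputs` follow a similar structure, and is not the library's actual code.

```scala
// Sketch: collect decode results positionally in an Array instead of
// building a collection of (input, result) Tuple2 pairs.
object PathDecodingSketch {
  sealed trait DecodeResult[+A]
  final case class Value[A](v: A) extends DecodeResult[A]
  final case class Failure(msg: String) extends DecodeResult[Nothing]

  // stand-in for a path input segment and its decoder
  final case class PathInput(name: String, decode: String => DecodeResult[Any])

  def matchPath(
      pathInputs: IndexedSeq[PathInput],
      segments: IndexedSeq[String]
  ): Either[String, IndexedSeq[Any]] = {
    // decoded has the same size and order as pathInputs, so index i in
    // the array corresponds to pathInputs(i) -- no Tuple2 per element
    val decoded = new Array[Any](pathInputs.length)
    var i = 0
    while (i < pathInputs.length) { // index-based iteration, no headAndTail
      pathInputs(i).decode(segments(i)) match {
        case Value(v)     => decoded(i) = v
        case Failure(msg) => return Left(s"${pathInputs(i).name}: $msg")
      }
      i += 1
    }
    Right(decoded.toIndexedSeq)
  }
}
```

When folding the results, the original path input is recovered positionally from `pathInputs(i)` rather than carried alongside each value in a pair, which is what makes the Tuple2 allocations unnecessary.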


@kciesielski
Member

LGTM, thanks!

@adamw adamw merged commit fb719fa into softwaremill:master Mar 6, 2024
23 checks passed