cmd/cue: serious performance regression #2243
Comments
Thanks for reporting. v0.5.0-beta.2 is roughly as fast as v0.4.3, so it must have been a recent change that was included in v0.5.0-beta.5. I bisected between the two, which pointed me at https://review.gerrithub.io/c/cue-lang/cue/+/549247. Not entirely surprising: I imagine the more aggressive early evaluation might be causing worse performance in your case.
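For anyone who wants to retrace the bisection, here is a rough sketch of one way to automate it, using a timeout as the pass/fail signal (illustrative only; the path and threshold are placeholders, and the original bisection may well have been done differently):

```sh
# In a checkout of the cue-lang/cue repository:
# v0.5.0-beta.2 is known to be fast ("good"), v0.5.0-beta.5 slow ("bad").
git bisect start v0.5.0-beta.5 v0.5.0-beta.2

# Build and run cmd/cue at each candidate commit against the attached
# testcase (placeholder path). If a run takes longer than ~2s, timeout
# exits non-zero and git bisect marks that commit as bad.
git bisect run sh -c 'timeout 2s go run ./cmd/cue eval /path/to/testcase'

git bisect reset
```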
Reduced it down by quite a bit:
With this shorter config, v0.5.0-beta.5 takes over a second, and v0.4.3 barely takes ten milliseconds.
I believe @tmm1 is running into the same performance regression. Worth noting that beta.2 fails, partly because of #2246, and it's unclear whether the other errors are caused by #2246 as well. But at least the export finishes in about ten seconds instead of taking a long time. Small reproducer:
That smaller reproducer takes 0.08s on v0.4.3, 0.25s on v0.5.0-beta.1, 0.30s on v0.5.0-beta.2, and 4.63s on v0.5.0-beta.5. CUE master (02e19c8) also takes nearly five seconds. I believe it is the same performance regression as the one reported here: both regressed between beta.2 and beta.5, and both involve multiple levels of fields which do computation on top of each other. I bisected between beta.2 and beta.5, and the wall time jumps from 0.3s to 4.5s with https://review.gerrithub.io/c/cue-lang/cue/+/549247, which further convinces me that the two users ran into the same performance regression: they both bisected to the same CUE change.
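To make the "multiple levels of fields which do computation on top of each other" shape concrete, here is a purely illustrative sketch (made-up field names, not the actual reproducer), which can be timed the same way as the real testcases:

```sh
# Write a tiny config where each level is computed from the one below it.
cat > layers.cue <<'EOF'
level1: {
	a: 1
	b: a + 1
}
level2: {
	a: level1.a * level1.b
	b: a + level1.b
}
level3: {
	a: level2.a * level2.b
	b: a + level2.b
}
total: level3.a + level3.b
EOF

# Compare wall time across CUE versions, e.g. v0.4.3 vs v0.5.0-beta.5.
time cue eval layers.cue
```

A config this small will not show a measurable slowdown by itself; the real testcases repeat this kind of layering at a much larger scale.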
Marking this as v0.7.0. If we were to address this by backing out the change that caused the regression, we would likely need an additional fix as part of v0.6.0. As things stand, we don't want to delay v0.6.0 if we can avoid it, and instead prefer to address this along with other performance issues in v0.7.0. This has the added benefit of the fix likely being considerably easier, because of refactors in the evaluation engine as part of v0.7.0 (that are not part of v0.6.0, where the cost of making such changes is higher).
With @mpvl we have confirmed that both of the examples above are reasonably fast with the new evaluator. First, the example at #2243 (comment), reduced from the code provided by @nxcc:
The original code also sees a similar speed-up, after tweaking the top-level
As for #2243 (comment), reduced from @tmm1's code:
Looking at the original project at https://github.com/tmm1/taxes.cue, and working around the regression bug I filed at #3483, the performance there still isn't great when evaluating the entire package:
We are pretty sure that this is caused by increased memory usage; the roughly 2x wall time corresponds to roughly 3x as many Go allocations and 3x the peak memory usage. We think that the planned improvements in work reuse and memory allocations (#2854, #2853) will give better results than the old evaluator. The new evaluator is already evaluating fewer unifications and disjunctions than the old one, which is a good sign.

Since @nxcc's original project is fast now, and @tmm1's performance improvements are already tracked in the issues I listed above, I'm going to close this issue as resolved. Please try the new evaluator on either the latest alpha release or master, and let us know how it works for you :)
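For anyone landing on this issue later, a minimal sketch of how to try the new evaluator on your own package (assuming the CUE_EXPERIMENT=evalv3 opt-in, which may change in future releases):

```sh
# Install CUE from master (or use the latest pre-release binary).
go install cuelang.org/go/cmd/cue@master

# Opt in to the new evaluator, then compare against the default one.
time CUE_EXPERIMENT=evalv3 cue eval ./testcase
time cue eval ./testcase
```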
What version of CUE are you using (cue version)?
Does this issue reproduce with the latest release?
yes
What did you do?
cue eval ./testcase
(see testcase.zip)
What did you expect to see?
the expected output after a fraction of a second (<0.2s), like with cue v0.5.0-beta.2
What did you see instead?
the expected output after more than 12 seconds