
faster inference order #19513

Merged: 1 commit merged into master from jn/faster-inference-order on Dec 7, 2016
Conversation

vtjnash (Member) commented Dec 6, 2016

may fix #15346

the idea here is that we want to avoid accidentally doing a depth-first scan of the AST.
a breadth-first (dominator-order) scan would be better at avoiding rework,
but it is also more expensive to compute.

instead, whenever we hit a goto branch,
we track whether a lower pc is now waiting for revisit.

by preferring lower pcs to larger ones,
we improve the ability of inference to completely finish inferring
all branches of a loop / conditional / try-catch before moving on.

this depends on the fact that lowering generally keeps these blocks
fairly well ordered in the AST. excessive use of `@goto` nodes in the user's code
may reduce the effectiveness of this heuristic, but it won't break the algorithm.
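
to illustrate the scheduling heuristic in isolation, a worklist loop that always resumes at the lowest pending pc might look like the sketch below. this is only a sketch of the idea, not the actual code in `base/inference.jl`; `interpret!` and `successors` are hypothetical stand-ins for the real transfer function and control-flow query.

```julia
# minimal sketch of the "prefer the lowest pending pc" scheduling heuristic;
# `interpret!` and `successors` are hypothetical stand-ins, not real inference APIs
function infer_in_order!(nstmts::Int, interpret!, successors)
    pending = falses(nstmts)       # statements waiting to be (re)visited
    pending[1] = true              # start at the entry statement
    while any(pending)
        pc = findfirst(pending)    # always resume at the lowest pending pc, so the
        pending[pc] = false        # branches of a loop/conditional/try-catch finish first
        changed = interpret!(pc)   # run the transfer function at this statement
        changed || continue
        for succ in successors(pc) # mark successors, including backward goto targets,
            pending[succ] = true   # for (re)visit; the lowest one is picked up next
        end
    end
end
```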
vtjnash (Member, Author) commented Dec 6, 2016

for the test case in #15346 with n=1000, this gave me about a 4x speedup on inference time (80s -> 20s), with a corresponding reduction in memory usage as well!

JeffBezanson (Member):

LGTM.

vtjnash merged commit 07cb7c1 into master on Dec 7, 2016
vtjnash deleted the jn/faster-inference-order branch on December 7, 2016 at 16:00
tkelman (Contributor) commented Mar 1, 2017

This doesn't cherry-pick cleanly on release-0.5. Please open a PR against tk/backports-0.5.1 if this should be included in 0.5.1.

dpsanders (Contributor) commented:

Yes please let's backport this; Base.Test on Julia v0.5 is still unbearably slow for large testsets.

Labels: compiler:inference (Type inference), performance (Must go faster)
Successfully merging this pull request may close this issue:

Time to run N tests grows faster than linear in N: codegen problem?