
compiler performance #14743

Closed
JeffBezanson opened this issue Jan 20, 2016 · 20 comments
Labels: compiler:codegen (Generation of LLVM IR and native code), help wanted (Indicates that a maintainer wants help on an issue or pull request), performance (Must go faster)

@JeffBezanson (Member) commented Jan 20, 2016

This is a tracking issue for work on speeding up the compiler itself. Between the LLVM 3.7 upgrade and the upcoming jb/functions branch we have accumulated significant slowdowns, and dealing with them is becoming quite urgent. All phases of the system could use improvement.

Front end

  • Options for improving flisp performance
    • Try using Gambit-C again
    • Write an flisp bytecode-to-LLVM compiler (can be a static compiler)
    • Hand compile the front end code to C
  • Clean up lowering passes (julia-syntax.scm). Probably at least 2-3 of them can be combined or removed. (simplify and speed up front end #14997)

IR

Type inference

Other

  • Use generated functions less
  • There is sometimes a regression due to precompile (#15934); believed to be largely fixed

Codegen

Some specific issues:

@JeffBezanson JeffBezanson added performance Must go faster compiler:codegen Generation of LLVM IR and native code labels Jan 20, 2016
@Keno (Member) commented Jan 20, 2016

When 2 specializations of a function have the same LLVM IR, reuse the native code

I've been thinking about this. We could try hashing the IR, but we'd have to do some work to avoid spurious differences due to naming, etc.; e.g., we could name all functions after a hash of their IR. Of course, this will also seriously complicate backtraces/debug info.
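The deduplication idea above can be sketched roughly as follows. This is a toy Python illustration, not Julia's actual codegen: the "IR" is plain text, the `julia_`-prefixed function names and the regex normalization are assumptions for the example, and a string stands in for compiled native code.

```python
import hashlib
import re

def normalize_ir(ir: str) -> str:
    """Rewrite function names to a placeholder so two specializations
    that differ only in naming hash identically (toy normalization)."""
    return re.sub(r'@"?julia_[\w.]+"?', "@<fn>", ir)

def ir_key(ir: str) -> str:
    return hashlib.sha256(normalize_ir(ir).encode()).hexdigest()

native_cache: dict[str, str] = {}  # IR hash -> stand-in for native code

def codegen(ir: str) -> str:
    key = ir_key(ir)
    if key not in native_cache:
        # Only compile when no specialization with identical
        # (normalized) IR has been seen before.
        native_cache[key] = f"native({key[:8]})"
    return native_cache[key]

# Two specializations whose IR differs only in the function name:
a = codegen('define i64 @julia_sum_1(i64 %x) { ret i64 %x }')
b = codegen('define i64 @julia_sum_2(i64 %x) { ret i64 %x }')
assert a == b                    # native code is reused
assert len(native_cache) == 1    # codegen ran only once
```

As the comment thread notes, naming every function after its IR hash would make the cache keys stable, at the cost of making backtraces and debug info harder to interpret.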

@JeffBezanson (Member, Author)

this'll also seriously complicate backtraces/debug info

Would this be mitigated if we started by only considering different specializations of the exact same method? I imagine we could do a reasonably quick experiment to see if this might be profitable.

@Keno (Member) commented Jan 20, 2016

Would this be mitigated if we started by only considering different specializations of the exact same method? I imagine we could do a reasonably quick experiment to see if this might be profitable.

Yes for backtraces, no for debug info, but I think it might be fixable.

@StefanKarpinski (Member)

The approach I had contemplated was replacing the actual debug info with some sort of template values; then, on a cache hit, reuse the previously generated code with the debug-info "template" filled in. I'm not sure how well that could be made to work, though.
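That templating scheme might look something like the following toy Python sketch, under the stated assumptions: `compile_body` stands in for real codegen, and the `$name`/`$file` holes are hypothetical debug-info fields invented for the illustration.

```python
from string import Template

# Cache keyed by the debug-info-free body; each specialization re-fills
# only the templated debug fields instead of re-running codegen.
template_cache: dict[str, Template] = {}

def compile_body(body: str) -> Template:
    # Stand-in for codegen: emit code with debug-info placeholders.
    return Template(f"{body} !dbg name=$name file=$file")

def specialize(body: str, name: str, file: str) -> str:
    tmpl = template_cache.get(body)
    if tmpl is None:                 # cache miss: generate once
        tmpl = compile_body(body)
        template_cache[body] = tmpl
    # Cheap per-specialization step: fill in the debug "template".
    return tmpl.substitute(name=name, file=file)

s1 = specialize("add i64 %a, %b", "f_Int", "foo.jl")
s2 = specialize("add i64 %a, %b", "f_Float", "foo.jl")
assert len(template_cache) == 1  # codegen ran once
assert s1 != s2                  # but the debug info differs
```

Note how this matches the trade-off discussed below: each specialization still materializes its own filled-in copy, so it saves compilation time but not memory.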

@Keno (Member) commented Jan 20, 2016

I think the biggest problem is knowing which of the specializations you're in while walking the stack. You could potentially do it by looking at the local variables of the parent frame and then trying to figure out which specialization must have been called.

@StefanKarpinski (Member)

What I was describing would result in different specialized versions (with different debug info) that reuse the generated code, so it would save time but not memory. Of course, that's not as good as using the same generated code, but that seems much harder.

@Keno (Member) commented Jan 20, 2016

Ah, I understand.

@ViralBShah (Member)

Wasn't Gambit a bit buggy when we first tried it in the very early days? I guess it should be easy to try it out and run PkgEvaluator.

@ViralBShah (Member)

An flisp bytecode-to-LLVM compiler could also be a great JSoC project. We need to announce JSoC soon, too.

@StefanKarpinski (Member)

I think that compiler performance is a little too important to hang our hopes on a JSoC project.

@ViralBShah (Member)

Of course we wouldn't hang our hopes on it, but there is no harm in mentioning it as a potential candidate project - in case we don't get around to doing it.

@JeffBezanson JeffBezanson added this to the 0.5.0 milestone Jan 28, 2016
@JeffBezanson JeffBezanson added the priority This should be addressed urgently label Feb 6, 2016
@felipenoris (Contributor)

Is there any update on which approach will be taken to improve flisp performance?

@timholy (Member) commented Mar 7, 2016

Check out https://github.com/JuliaLang/julia/pulls?q=is%3Apr+author%3AJeffBezanson+is%3Aclosed for some of Jeff's PRs, which have already implemented some of the solutions.

@andreasnoack (Member) commented May 26, 2016

On my laptop, generic_matmatmul! takes 0.5 seconds to compile on 0.5 and 0.3 seconds on 0.3 (and 0.4). Even though the function is ~200 lines, that seems too slow in general, and it is probably one of the main time consumers in the linear algebra tests.

It might be related to #16434 and, therefore, we should probably also look into the effects of splitting up the function. It might be much faster to compile six smaller versions.

@JeffBezanson (Member, Author)

With a quick look, a significant amount of compile time for generic_matmatmul! is in alloc_elim_pass.

@JeffBezanson (Member, Author)

Still to go here: #16837

@JeffBezanson (Member, Author)

There's also an especially bad case in #17137 we should fix.

@PallHaraldsson (Contributor) commented Sep 1, 2016

Re "Try using Gambit-C again": it wasn't obvious that Gambit is a Scheme implementation (well, I guess it's implied by flisp):

https://en.wikipedia.org/wiki/Gambit_(scheme_implementation)

Is FemtoLisp on the way out if/when this works? I see recent issues about a REPL for it.

@ViralBShah (Member)

@PallHaraldsson This is just adding noise by asking such questions here. Best to do it on julia-users.

@StefanKarpinski StefanKarpinski added help wanted Indicates that a maintainer wants help on an issue or pull request and removed help wanted Indicates that a maintainer wants help on an issue or pull request labels Oct 27, 2016
@vtjnash (Member) commented May 24, 2017

There doesn't seem to be anything left on this list worth doing or tracking with a meta-issue.

@vtjnash vtjnash closed this as completed May 24, 2017