
merge specializations and tfunc #15918

Merged (1 commit into master, Jun 11, 2016)

Conversation

@vtjnash (Member) commented Apr 18, 2016

by resetting all method table caches and inferring everything in specializations, we can avoid needing to perform two-stage inference (and the associated complexity of allowing a module to be replaced during compilation)

this merges the LambdaInfo tfunc and specializations fields, to slightly reduce duplication. the merged field now contains any lambda (or rettype) that was considered worth creating by either dispatch or inference, without prejudice

intermediate type inference results are fetched from the active queue to accommodate

jl_symbol_name(name));
}
// suppress warning "replacing module Core.Inference" during bootstrapping

Looks like this comment can be removed?

@JeffBezanson (Member) commented Apr 18, 2016

inferring everything in specializations

Isn't there a chance this will still leave un-inferred code that we reach recursively during this inference process?

Admittedly, there don't seem to be any regressions evident in the test timings, at least.

@vtjnash (Member, Author) commented Apr 18, 2016

Isn't there a chance this will still leave un-inferred code that we reach recursively during this inference process?

i couldn't think of anywhere this could miss after setting jl_typeinf_func. i'm not even sure that resetting all method table caches, and forcing them to be rebuilt from the specializations list, was necessary (since everything in a method table cache would also be in a specializations list somewhere). note: part of why this works is that no native code is stored during generation of inference.ji. to combine the inference and sys stages of bootstrapping would require implementing a similar step of resetting all of the llvm-generated code cached in-process.

although i'm thinking i should separate these two changes, since they aren't actually coupled. (tfunc only ever holds inferred information, so it shouldn't matter whether it's been diligent about sharing to specializations)

@JeffBezanson (Member)

although i'm thinking i should separate these two changes

👍

@vtjnash changed the title from "WIP: eliminate stage0 inference and tfunc" to "WIP: merge specializations and tfunc" on Apr 19, 2016
@vtjnash force-pushed the jn/tfunc_spec_stage0 branch from 91b2866 to 958961f on April 19, 2016 19:22
@JeffBezanson (Member)

👍 do want

@StefanKarpinski (Member)

Important because:

  • it helps compiler performance (removes a linear lookup structure)
  • it allows us to delete IR that we no longer need (helps memory)
  • it helps enable more static compilation
  • it eliminates a redundant data structure

@StefanKarpinski (Member)

TODO: @vtjnash, can you rebase this and see if the memory usage problems this previously caused are now improved so that this can be merged? @JeffBezanson did you have other reservations about this?

@JeffBezanson (Member)

Rebase & merge as far as I'm concerned. @vtjnash has indicated that the memory problems have been solved.

@vtjnash force-pushed the jn/tfunc_spec_stage0 branch from ca05bdd to 80f20bb on June 4, 2016 04:21
@vtjnash (Member, Author) commented Jun 4, 2016

just fyi: this will increase the size of the system image by a few megabytes. that's the result of an unintentional bugfix in the creation of the system image, not an effect of this PR itself, nor a runtime change.

@tkelman (Contributor) commented Jun 4, 2016

Could you please explain in more detail what the bug was and how this fixes it?

@vtjnash (Member, Author) commented Jun 4, 2016

There was an unintended conditional check in julia/src/gf.c at line 1459 (as of 31f5a63):

if (spec == NULL)

which artificially reduced the number of specializations that were precompiled.

@tkelman (Contributor) commented Jun 4, 2016

On Travis, was that possibly an OOM while compiling the system image?

@vtjnash (Member, Author) commented Jun 4, 2016

Yes, apparently that's the downside to fixing that bug :). I've been exploring various fixes to this issue over the past month, so I guess it's time to actually go with one of them.

@JeffBezanson (Member)

Although we generally do need more precompilation, it would be nice to have some way to dial it down to find the right tradeoff.

@tkelman (Contributor) commented Jun 9, 2016

hopefully #16835 will fix the OOM, let's see

@vtjnash force-pushed the jn/tfunc_spec_stage0 branch from 80f20bb to ee8e9f8 on June 9, 2016 18:46
@vtjnash changed the title from "WIP: merge specializations and tfunc" to "merge specializations and tfunc" on Jun 9, 2016
@vtjnash force-pushed the jn/tfunc_spec_stage0 branch from ee8e9f8 to 1ec5092 on June 10, 2016 20:56
@vtjnash merged commit 85d098c into master on Jun 11, 2016
@vtjnash deleted the jn/tfunc_spec_stage0 branch on June 11, 2016 19:03
@JeffBezanson (Member)

It looks to me like TypeMap is using jl_args_morespecific for the specializations table on non-leaftypes. This shouldn't be necessary, since it only needs to handle exact match queries. We might be able to gain a bit by skipping morespecific.

@tkelman (Contributor) commented Jun 12, 2016

half a dozen regressions of 15% or worse? 85d098c#commitcomment-17834416

@Keno (Member) commented Jun 16, 2016

Base.visit on meth.specializations now sometimes gives Type{Void}. Can we have it skip those instead?

@vtjnash (Member, Author) commented Jun 17, 2016

That's a feature of jl_prune_tcache

@Keno (Member) commented Jun 17, 2016

I'm not sure what that means.

@vtjnash (Member, Author) commented Jun 17, 2016

It's intentional, although perhaps it would be more useful if visit returned the entry instead of the value:

julia> first(methods(map)).specializations
TypeMapEntry(TypeMapEntry(TypeMapEntry(TypeMapEntry(TypeMapEntry(TypeMapEntry(nothing,Tuple{Base.#map,Base.#esc,Array{Any,1}},svec(),nothing,svec(),Array{Any,1},true,true,false),Tuple{Base.#map,Base.FastMath.#make_fastmath,Array{Any,1}},svec(),nothing,svec(),Array{Any,1},true,true,false),Tuple{Base.#map,Base.#names,Array{Any,1}},svec(),nothing,svec(),Array{Any,1},true,true,false),Tuple{Base.#map,Base.#string,Array{Any,1}},svec(),nothing,svec(),Array{Any,1},true,true,false),Tuple{Base.#map,Base.Docs.##20#21{String},Array{Any,1}},svec(),nothing,svec(),Array{Any,1},true,true,false),Tuple{Base.#map,Base.Docs.##22#23{String},Array{Any,1}},svec(),nothing,svec(),Array{Any,1},true,true,false)

julia> Base.visit(first(methods(map)).specializations) do f; println(f); end
Array{Any,1}
Array{Any,1}
Array{Any,1}
Array{Any,1}
Array{Any,1}
Array{Any,1}
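For callers that do need to skip the pruned entries, a minimal caller-side sketch (against the 0.5-era internals shown above; this is not a supported API, the Type{Void} marker comes from the discussion here, and it will not run on later Julia versions):

```julia
# Sketch: visit specializations but ignore entries pruned by jl_prune_tcache,
# which show up as Type{Void} rather than a useful value.
Base.visit(first(methods(map)).specializations) do f
    f === Type{Void} && return   # skip pruned/placeholder entries
    println(f)                   # handle a real entry (here: its rettype)
end
```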

@Keno (Member) commented Jun 17, 2016

What would be the use of having Base.visit return Void? When would I ever care?

@JeffBezanson (Member)

Curious: why do you want to iterate over specializations?

@Keno (Member) commented Jun 17, 2016

For the debugger, it needs to set a breakpoint in each one.

@JeffBezanson (Member)

Ah, of course. It's probably better to iterate over the method cache instead, since specializations contains lots of entries that are never codegen'd, and it also has entries deleted from it, as you've discovered.

@Keno (Member) commented Jun 17, 2016

I need everything that gets or could potentially get codegened. meth.specializations used to do that for me. If that's no longer how I should do this, please suggest an alternative.

@JeffBezanson (Member)

I would use mt->cache. That plus jl_method_tracer should give you everything I would think.

@Keno (Member) commented Jun 17, 2016

I would use mt->cache

Can you describe the semantics of that field?

@JeffBezanson (Member)

It's a TypeMap just like specializations. The basic idea is every concrete type tuple a function is ever called with gets an entry pointing to a LambdaInfo with the specialized/generated code. In reality, some LambdaInfos are shared by several entries, and some keys might be abstract types.
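As a rough illustration of the structure described above (a hypothetical sketch against the 0.5-era internals discussed in this thread; the field path typeof(map).name.mt is an assumption, and this will not run on later Julia versions):

```julia
# Sketch: walk the method-table cache for `map`.
mt = typeof(map).name.mt      # the function's MethodTable
# mt.cache is a TypeMap keyed (mostly) by concrete argument-type tuples;
# each entry points at a LambdaInfo holding the specialized/generated code.
Base.visit(mt.cache) do li
    println(li)
end
```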
