
fix ordering dependence of method cache insertion #16084

Merged: 1 commit into master from jn/cache-predictable, Apr 29, 2016
Conversation

@vtjnash (Member) commented Apr 27, 2016

Previously, a cache entry could be widened beyond the despecialization heuristic, creating an ordering dependence: after inserting a Function into the cache (as Any) or a Tuple (as DataType), no further methods would attempt to specialize that slot.

@JeffBezanson I think this should fix the issue you noted #15934 (comment)

In particular, given the command sequence:

f(x) = x
f(+)
f(-)
f(2)

before:

julia> typeof(f).name.mt.cache
f(x) at REPL[1]:1

julia> typeof(f).name.mt.cache.next
nothing

after:

julia> typeof(f).name.mt.cache
f(x::Int64) at REPL[1]:1

julia> typeof(f).name.mt.cache.next
f(x::Function) at REPL[1]:1

julia> typeof(f).name.mt.cache.next.next
nothing

@JeffBezanson (Member) commented:

Excellent. Were you able to reproduce the test slowdown?

@vtjnash (Member, Author) commented Apr 27, 2016

No, I was just hoping you would be able to test this somehow :). I recalled noticing this ordering issue in the past, but had forgotten about it again and hadn't previously been able to address it.

@@ -609,6 +615,19 @@ static jl_lambda_info_t *cache_method(jl_methtable_t *mt, union jl_typemap_t *ca
// in the method cache to prevent anything else from matching this entry
origtype = NULL;
}
if (makesimplesig && origtype == NULL) {
// reduce the complexity of rejecting this entry in the cache
Member:
I'm having trouble visualizing when this case happens. Could you give an example?

Member Author:
See the example in the PR. This lets us make an expensive cache entry like (<: Function), while still using the fast-path to reject it (although the (<: Any) it builds isn't a particularly useful example since it only has one argument)

Member:
I tried that, but I always saw ::Function entries in the cache. I didn't see the effect of this replacement with ::Any.

Member Author:
This ends up in the simplified-signature field. The primary sig field still gets ::Function (which is also what we bother to print).

@vtjnash vtjnash force-pushed the jn/cache-predictable branch from ad49a77 to 5c66898 Compare April 28, 2016 02:46
@@ -3702,6 +3703,12 @@ void jl_init_types(void)
slot_sym = jl_symbol("slot");
static_parameter_sym = jl_symbol("static_parameter");
compiler_temp_sym = jl_symbol("#temp#");

tttvar = jl_new_typevar(jl_symbol("T"),
(jl_value_t*)jl_bottom_type,
Contributor:
funny indent?

@vtjnash vtjnash merged commit 83b2fa4 into master Apr 29, 2016
@vtjnash vtjnash deleted the jn/cache-predictable branch April 29, 2016 19:35

3 participants