
Julep: tfuncs for all #36463

Closed
timholy opened this issue Jun 28, 2020 · 27 comments
Labels
compiler:inference (Type inference) · compiler:latency (Compiler latency) · julep (Julia Enhancement Proposal) · speculative (Whether the change will be implemented is speculative)

Comments

@timholy
Member

timholy commented Jun 28, 2020

TL;DR: one of the harder compiler problems is handling inference on abstract types. This proposes to cheat by adding lookup tables for a few functions. This also analyzes how many poorly-inferred calls we actually have.

There's been a lot of thinking lately about how to handle abstract inference, e.g., #34742, "typocalypse" #36208, and many of the invalidation PRs are nibbling at individual pieces of this problem. Some of the discussion in #35904 centered on whether it would be possible for inference to be more assertive about output types, but the cost of running inference on everything is large, and recent progress has focused on doing less inference, not more.

I have a growing sense that the majority of inference problems affect a relatively small fraction of functions. One potential solution (which I'm sure must have been considered before) is to do something akin to the tfunc mechanism† for generic functions, automatically adding type-assertions as needed for particular functions (EDIT: the previous wording was clarified in response to @martinholters). I'm not exactly sure how it would work (and therein lies the rub), but something like "if you're calling a method in ModuleA with abstract types as wide as the method signature itself, then it's permissible to impose custom tfuncs on its callees." This is essentially a strong form of piracy-prevention.

To see how this might work, let's consider the example of eof. We document that it returns a Bool. In a sense, if you're calling one of "our" eof implementations, and your object is a subtype of IO, then this would seem to be a restriction you should have to live with (meaning, we'd enforce it by automatically adding a type-assert). So if you have a method myreader(io::IO, args...) and you call eof(io), then it seems fair to require that this eof call return a Bool. Conversely, if you duck-type the method, myreader(io, args...), and if inference ends up producing a purely-abstract version without knowing the type of io, then the abstract call is on eof(::Any), and in that case we would not make any guarantees about the return type. (Even though Base "owns" Any, we make an exception and pretend it doesn't.) If this is still too restrictive, it could conceivably be loosened by exempting known concrete types from the pseudo-tfunc constraints. I suspect this isn't a good idea, though, as program behavior would change based on the success or failure of inference, and moreover it just seems weird to allow specific concrete subtypes to escape constraints that we otherwise apply to the abstract type itself.
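To make the two cases concrete, here is a hypothetical sketch (the myreader names are placeholders, and the comments describe the proposed behavior, not anything that happens today):

myreader(io::IO, args...)  = eof(io)   # IO-constrained: would effectively become eof(io)::Bool
myreader_any(io, args...)  = eof(io)   # duck-typed: the abstract call is eof(::Any), so no assertion is imposed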

Somewhat in the spirit of #36393, I thought that perhaps the best way to think about this more clearly would be to get a sense for the scale of the problem. So I did an analysis on our current codebase. The script is in https://gist.github.com/timholy/9d2aabaeabb22239b5e7a4e95e35d298, but in rough terms what it does is go through all methods, run inference on them with their declared signatures (i.e., the widest type signature permissible), and then look for poorly inferred calls to a specific subset of functions. A call is exempted from being counted as a "hit" if it is immediately followed by a typeassert. For example, one of the functions on the list is ==, which should return Union{Bool,Missing}. The method

julia> f(x, y) = (x == y)::Bool
f (generic function with 1 method)

julia> code_typed(f, (Any, Any))
1-element Array{Any,1}:
 CodeInfo(
1 ─ %1 = (x == y)::Any
│        Core.typeassert(%1, Main.Bool)::Bool
│   %3 = π (%1, Bool)
└──      return %3
) => Bool

would be deemed OK because of the type-assert that follows the == call (which has an inferred return type of Any). But without that type-assert, this would be deemed a violation of the "contract" that == must satisfy. This Julep is essentially exploring the possibility of adding the missing type-assert systematically to everything that doesn't infer to the expected type and doesn't already have its own equally- or more-restrictive type-assert.

While you'll want to see the code for full details, roughly speaking this will count violations of things that I think most people would agree on, e.g., we expect convert(T, x)::T, isempty(x)::Bool, etc.

The results of the analysis: out of ~16000 methods, about 1500 of them have one or more "violating" calls to one of the functions I put on the tfunc list. That's more than I hoped but fewer than I feared. Here are the functions on the list and a count of the number of calling Methods they have (somewhat like "unrealized backedges" since these depend on the inferred code but not MethodInstances):

!: 163
&: 15
+: 102
-: 186
<: 297
<=: 221
==: 552
>: 217
>=: 102
axes: 144
cconvert: 106
cmp: 21
codeunit: 15
convert: 297
copyto!: 40
displaysize: 11
eof: 16
getindex: 13
isascii: 1
isempty: 170
isequal: 33
iseven: 10
isfinite: 34
isinf: 25
isinteger: 11
isless: 24
ismarked: 3
isnan: 45
isodd: 9
isone: 7
isopen: 2
isperm: 2
ispow2: 2
isreadable: 3
isreal: 10
issorted: 6
issubset: 2
istextmime: 3
isvalid: 12
iswritable: 3
iszero: 36
iterate: 395
leading_zeros: 12
length: 342
ncodeunits: 31
nextind: 25
prevind: 20
readline: 1
resize!: 15
size: 300
sizeof: 29
thisind: 3
unsafe_convert: 63
|: 2

In terms of paths forward (aside from just closing this issue...), we could:

  • implement a tfunc-like mechanism in inference to impose these limits "behind the scenes"
  • write a bot that automatically annotates source code with typeasserts for some or all of these functions
  • continue to add type-asserts to specific locations only when we discover they matter for one reason or another (performance, latency, whatever).

The third is kind of the default, but I was curious to know how many annotations would be required to shut most of these down. It turns out, quite a few! In contrast with my impressions from #36393, where I felt that we seemed to be closing in on eliminating many of the biggest sources of invalidation, from this perspective I think we've barely started: the lists look eerily similar for 1.4 and master.


†If you're unfamiliar with tfunc, it's an inference lookup table for Julia's intrinsic & builtin functions. See https://github.com/JuliaLang/julia/blob/master/base/compiler/tfuncs.jl

@timholy added the speculative, compiler:inference, and compiler:latency labels on Jun 28, 2020
@martinholters
Member

If I understand this correctly, the reference to tfuncs might be misleading. A tfunc tells the compiler the return type of something, which it will blindly trust. Here, on the other hand, it seems you'd like to see type assertions inserted automatically. Of course, the compiler can then assume the value has the desired type after verifying the assertion, but it won't trust it blindly. (It may remove the assertion if it can be proven to always hold, of course.)

That said, being able to impose restrictions on the return type for all methods matching a specific signature does seem useful. It can make APIs more explicit and help inference.

@timholy
Member Author

timholy commented Jun 29, 2020

Good point, I definitely did not mean that the output type should be trusted blindly. I tried to clarify my description.

@timholy
Member Author

timholy commented Jun 29, 2020

For those not steeped in compiler details, it may also help to give a bit more explanation: on master, we have 9 variants of eof, all of which are inferred to have return type Bool:

julia> cts = code_typed(eof, (IO,));

julia> collect(zip(methods(eof), last.(cts)))
9-element Array{Tuple{Method,DataType},1}:
 (eof(pipe::Base64.Base64DecodePipe) in Base64 at /home/tim/src/julia-master/usr/share/julia/stdlib/v1.6/Base64/src/decode.jl:126, Bool)
 (eof(io::Base.SecretBuffer) in Base at secretbuffer.jl:155, Bool)
 (eof(f::Base.Filesystem.File) in Base.Filesystem at filesystem.jl:200, Bool)
 (eof(s::Base.BufferStream) in Base at stream.jl:1354, Bool)
 (eof(s::IOStream) in Base at iostream.jl:232, Bool)
 (eof(::Base.DevNull) in Base at coreio.jl:22, Bool)
 (eof(io::Base.AbstractPipe) in Base at io.jl:414, Bool)
 (eof(io::Base.GenericIOBuffer) in Base at iobuffer.jl:335, Bool)
 (eof(s::Base.LibuvStream) in Base at stream.jl:104, Bool)

(This is not true of earlier Julia versions.) However,

julia> f(io::IO) = eof(io)
f (generic function with 1 method)

julia> code_typed(f, (IO,))
1-element Array{Any,1}:
 CodeInfo(
1 ─ %1 = Main.eof(io)::Any
└──      return %1
) => Any

because 9 methods of eof exceeds the max_methods limit. Consequently, any method in Base that calls eof and for which the concrete io type is not available to inference will not realize it returns a Bool. This issue is essentially a proposal for an escape hatch from the max_methods limit for a specific subset of functions.
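One manual workaround today (and essentially what this Julep would automate) is to assert the documented return type at the call site; g here is just an illustrative name:

g(io::IO) = eof(io)::Bool

With that assert, code_typed(g, (IO,)) infers a Bool return for g, so downstream callers are insulated from the max_methods limit.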

@JeffBezanson
Member

There is a long-standing idea (IIRC, @StefanKarpinski was a proponent, and I could swear there was an issue for it but I can't find it (not unusual)) to allow declaring the return type for an entire generic function, e.g. function eof::Bool end. I think that's a form of this that would be simple, reliable, and efficient. It would be fancy to be able to do that for arbitrary "slices" of generic functions, e.g. eof(::IO)::Bool, but that gets very complex --- we have to handle intersections between that and defined methods.

I think the ideal way to handle it is to insert a typeassert in all matching methods when they are defined, and then the type can simply be assumed at call sites.

We already have return type declarations on some methods, and we could potentially exploit those in inference. But, I suspect the value of that is surprisingly low. There are several reasons:

  1. We would have to add a lot of declarations. If even one method is missing one it poisons the whole pot.
  2. Checking that a large number of methods (e.g. length has 82) declared the same return type seems fairly expensive.
  3. Few functions are as well-behaved as we'd like, e.g. because of Missing, or length returning UInt or BigInt on some ranges, etc.
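A quick example of point 3:

julia> 1 == missing
missing

and, at least at the moment, length on a BigInt range returns a BigInt rather than an Int.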

@StefanKarpinski
Member

Another possible option is to allow "soft annotations" which basically would tell the compiler to "union split" on the asserted type, i.e. when it sees eof(io) do something like this:

x = eof(io)
if x isa Bool
    # fast path
else
    # slow path
end

In other words, this would be a mechanism to tell the compiler what type to expect, although it would still have to generate code that works even if that's wrong. Given that branch prediction is generally fast when it's predictable, the performance could be nearly as good as actually knowing the type.

@timholy
Member Author

timholy commented Jun 29, 2020

Yep, I proposed using return-type annotations in #35904 (comment). One problem though is that currently return-type annotations call convert, and that's one of the methods we'd most like to protect via this mechanism. (EDIT: I said this poorly. An explicit call to convert will, of course, still need this mechanism, and it won't be protected from invalidation. What I was referring to is that we might want to avoid introducing more converts since they would add additional targets of invalidation, even while "protecting" the methods that get called for their output values.) That's why I think the tfunc-like mechanism may be better, but we'd need a syntax for it. I might be able to write a macro version

@returntype eof(::IO)::Bool

that could insert something in a Dict, if someone else could convert that into actual syntax.
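Something like the following very rough sketch (the macro name, the Dict, and the simple f(::T)::R form it handles are all placeholders; to be useful the real mechanism would have to live inside inference):

const RETURN_TYPE_HINTS = IdDict{Any,Vector{Pair{Type,Type}}}()

macro returntype(ex)
    # handles only the simple form `f(::ArgType, ...)::ReturnType`
    ex.head === :(::) || error("expected something like f(::IO)::Bool")
    call, rettype = ex.args[1], ex.args[2]
    fname = call.args[1]
    argts = [esc(a.args[end]) for a in call.args[2:end]]
    quote
        push!(get!(Vector{Pair{Type,Type}}, RETURN_TYPE_HINTS, $(esc(fname))),
              Tuple{$(argts...)} => $(esc(rettype)))
    end
end

@returntype eof(::IO)::Bool    # records Tuple{IO} => Bool under eof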

@DilumAluthge
Member

DilumAluthge commented Jun 29, 2020

One problem though is that currently return-type annotations call convert

Would it be possible and/or desirable to change this? Instead of calling convert, what if return-type annotations were type assertions, i.e. a runtime error would be thrown if the actual return type did not match the asserted return type?

Edit: I think Jeff may have already addressed this above when he wrote "But, I suspect the value of that is surprisingly low."

@JeffBezanson
Member

Current per-method return type declarations call convert, but the new function-wide declaration needn't.

Instead of calling convert, what if return-type annotations were type assertions

We actually do both; call convert plus assert that that returns the expected type.
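Roughly, a per-method declaration like f(x)::Float64 = x + 1 behaves as if it were written (a sketch of the lowering, not the exact lowered code):

function f(x)
    val = x + 1
    return convert(Float64, val)::Float64   # convert first, then assert what convert returned
end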

@DilumAluthge
Member

There is a long-standing idea (IIRC, @StefanKarpinski was a proponent, and I could swear there was an issue for it but I can't find it (not unusual)) to allow declaring the return type for an entire generic function, e.g. function eof::Bool end.

One such issue may be #1090 (comment)

My most recent thinking is that you may actually want two distinct features here:

  1. return annotations on methods implicitly convert the return value to that type
  2. typed generic functions, which raise errors if any method violates the asserted signature

Example of 1:

f()::Float64 = 1

julia> f()
1.0

Example of 2 (very hypothetical syntax):

function f::Float64 end # no methods + guaranteed return type
f() = 1

julia> f()
ERROR: TypeError: typeassert: expected Float64, got Int64

The reasoning is this. There are two reasons you want return-type annotations:

  1. as a shortcut to make sure that every exit point of a method definition returns the same type
  2. to allow type inference to reason about the return type of a generic function better (e.g. convert)

For the former, you want convert behavior, while for the latter convert behavior is bad since it introduces very non-local effects – namely that the behavior of a method can be massively and unexpectedly changed by a type annotation in a totally unrelated place. Arguably, you should also only be able to put a "function type" on a generic function when it's created, not at any point in time.
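To illustrate the non-local effect (a made-up example; Celsius is just a placeholder type):

struct Celsius
    value::Float64
end

g()::Float64 = Celsius(20.0)    # calling g() throws a MethodError: no conversion exists yet

Base.convert(::Type{Float64}, c::Celsius) = c.value    # ...until this is defined somewhere far away

g()    # now quietly returns 20.0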

See also: #210, #964, #8283.

@StefanKarpinski
Member

Excellent GitHub archaeology!

@timholy
Member Author

timholy commented Jun 29, 2020

we have to handle intersections between that and defined methods

I'm guessing I don't fully understand that comment---naively, I imagine that these can be straightforwardly handled as "AND" requirements. To be concrete, let's take a call, f(x::PartialTypeInfoX, y::PartialTypeInfoY), and suppose:

  • while there might be a hundred f methods, only 2 apply given the type info we have. Since we are under the max_methods case, we are obliged to make use of inference results for those methods
  • let's say we also have n "function level" return type declarations that apply, i.e., PartialTypeInfoX <: DeclarationX_i and PartialTypeInfoY <: DeclarationY_i, and so for each such declaration i we'd like to enforce that return_type <: DeclaredReturnType_i.

This seems to set up the conflict that I think you're describing. But wouldn't we just typeintersect the return types from the relevant function-level declarations and then typeassert it? And if all of the method inference info in fact already satisfies those constraints, doesn't it just get elided? And if not, it remains? That is to say, any method that violates an applicable function-level return type declaration might end up throwing an error. (As a consequence, during the 1.x phase we might have to be looser about these declarations than we'd ideally like.)
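For instance, if the applicable declarations promised Real and Union{Bool,Missing}, the enforced assert would be their intersection:

julia> typeintersect(Real, Union{Bool,Missing})
Bool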

All this is to explain why I'm failing to see the conflict that concerns you, not that I think it's not there.

@StefanKarpinski
Member

(As a consequence, during the 1.x phase we might have to be looser about these declarations than we'd ideally like.)

Or, as I suggested, make them non-fatal and just generate two branches: promise satisfied / promise violated.

@timholy
Member Author

timholy commented Jun 29, 2020

Jeff knows this, but to make sure it's clear to everyone: the reason that

We actually do both; call convert plus assert that that returns the expected type.

isn't what we want for this circumstance is that calling convert(T, x) introduces a backedge for the method to convert(::Type{T}, ::WhateverTypeYouveInferredForX). If your inference for x's type is crappy, you'll have a really wide signature (e.g., convert(::Type{T}, ::Any)) that is a big, juicy target for invalidation. However, the typeassert alone introduces no such backedge.

Or, as I suggested, make them non-fatal and just generate two branches: promise satisfied / promise violated.

True, but with a cost: backedges to the method you wish you weren't needing. Not saying we can't do it, but it's not without cost. (Invalidations are on my mind, but I'm not religious about them. It's just that heretofore I've never tried to engineer Julia code to avoid them so this is new territory for me and one worth thinking about carefully even if we eventually decide to live with them for certain methods.)

EDIT: to be clearer,

if !eof(io) & (length(A) > 3)

can be rewritten as

if &(!(eof(io)), length(A) > 3)

Let's say you don't know that eof(io) returns a Bool. Then you're calling !(::Any), which probably produces Any, so you're also calling &(::Any, ::Bool). So that's two potential vulnerabilities to invalidation, for ! and &. If we automatically turned this into

if &(!(convert(Bool, eof(io))::Bool), length(A) > 3)

then we know we're calling !(::Bool) and &(::Bool, ::Bool), so we've protected those methods from invalidation. But it's at the cost of opening up a different vulnerability, convert(::Type{Bool}, ::Any). So it's often progress to add the converts---they protect the methods that come after them---but they themselves then become vulnerable.

And if you have to have a branch that still might call !(::Any) and &(::Any, ::Bool), now you have 3 vulnerabilities to invalidation. You might have fixed some of your performance issues but you've potentially made your latency-risk higher.

@StefanKarpinski
Member

Or, as I suggested, make them non-fatal and just generate two branches: promise satisfied / promise violated.

True, but with a cost: backedges to the method you wish you weren't needing. Not saying we can't do it, but it's not without cost.

I wonder if it's possible to use dynamic method calls in the slow path to avoid backedges. I.e. generate slow, dynamic code that won't be invalidated.

@timholy
Member Author

timholy commented Jun 29, 2020

That's a great idea. It would pretty much get us everything we want: better performance and lower risk to invalidation while ensuring we aren't introducing breaking changes because we've not enforced such constraints previously. The trickiest bit would be handling the potential combinatorial explosion, sort of like now with Union-splitting though I fear it would be much worse. f(g(h(x))) would have to have a branch on each call, not just the h call.

@StefanKarpinski
Member

How about two paths:

  1. all promises kept, things are fast;
  2. any promise is broken, everything is dynamic.

Basically, adding expected type signatures for functions would be a non-breaking change but it would be highly beneficial to follow the signature. We can start opening issues on any packages that violate expected signatures. There could be a mode where a violation is fatal so that we can catch them when debugging.

@StefanKarpinski
Member

The end-game would be that these kinds of type signatures become strict in 2.0, and by then we've already gotten the entire package ecosystem into compliance so it's a smooth transition. So I think this would be something to add as a first-class feature, not just some t-function table interface, although that could be a first cut.

@timholy
Member Author

timholy commented Jun 29, 2020

You'd have to be able to switch to dynamic at any point in the call chain. Though I guess you could do it with

y = f(x)
isa(y, Y) || @goto labelY     # promised type violated: jump to the dynamic path
z = g(y)
isa(z, Z) || @goto labelZ
...

@label labelY
z = @rtinvoke g(y)            # hypothetical macro: force a runtime dispatch (so no backedge)
@label labelZ
...

@tkf
Member

tkf commented Jun 29, 2020

I wonder if this can be piggybacked on a more comprehensive approach to defining an "overload API." I'm thinking something like

@overloadable function push!(xs::T, x) where {T}
    @impl
    return xs::T
end

# lowered to (say)
function push!(xs::T, x) where {T}
    __push!__(xs, x)
    return xs::T
end

to declare overload API which can be used as

@impl push!(xs::MyArray, x) = # implementation

# lowered to
Base.__push!__(xs::MyArray, x) = implementation

This would avoid issues like #34202 where you have implementations that are not compatible with the expected API. It'd also help people overloading a function that requires complex pre- and post-processing (e.g., foldl).

It'd also be nice if this could be used to detect dispatching on an unsupported argument, to avoid introducing method ambiguities:

julia> @impl push!(xs, x::MyType) = ...
ERROR: dispatching on the second argument is not allowed

Of course, we need to be able to seal functions/methods (#31222) for this to work reliably.

e.g. function eof::Bool end

BTW, I think another interpretation of the syntax function f::T end is that f isa T. This can be very useful if you want to create special types of functions:

abstract type Monoid <: Function end

function *::Monoid end

# lowered to:
struct Mul <: Function end
const * = Mul()

@JeffBezanson
Member

I wonder if it's possible to use dynamic method calls in the slow path to avoid backedges. I.e. generate slow, dynamic code that won't be invalidated.

We already do that when we split out some cases with branches, if the slow case was inferred as Any. For example with !:

julia> g(x) = !x;

julia> code_typed(g, (Any,))
1-element Array{Any,1}:
 CodeInfo(
1 ─ %1  = (isa)(x, Missing)::Bool
└──       goto #3 if not %1
2 ─       goto #6
3 ─ %4  = (isa)(x, Bool)::Bool
└──       goto #5 if not %4
4 ─ %6  = π (x, Bool)
│   %7  = Base.not_int(%6)::Bool
└──       goto #6
5 ─ %9  = !x::Any
└──       goto #6
6 ┄ %11 = φ (#2 => $(QuoteNode(missing)), #4 => %7, #5 => %9)::Any
└──       return %11
) => Any

So that code only depends on the methods for Bool and Missing.

@timholy
Member Author

timholy commented Jun 30, 2020

Ah, so is the point of Core.Argument to mark that all that can be done, has been done, and we can assume that any :call expression with Core.Arguments will be made by runtime dispatch?

@JeffBezanson
Member

JeffBezanson commented Jun 30, 2020

Argument just marks that the value is an argument of the enclosing method. All :call expressions will do runtime dispatch.

@StefanKarpinski
Member

The way I'm thinking about it, the point of the type hint is to avoid having to do inference: the hint tells the compiler what type to expect and it can "trust but verify". It doesn't help with not knowing which precise method to call, but if we knew that, then the number of methods present wouldn't matter anyway (since we'd know which one).

timholy added a commit to timholy/MethodAnalysis.jl that referenced this issue Jun 30, 2020
@JeffBezanson
Member

This is an interesting idea to think about. But I'm starting to think it is too fundamental a change to our type paradigm. Currently, any type we have can be assumed to be correct. This would introduce the idea of a "probably correct" type, and they propagate. If f is declared as "probably returns Int", and we have g(x) = f(x), then g also probably returns Int. One could try a non-propagating version (g returns Any), but that violates referential transparency: simply introducing an extra layer of functions totally changes your type information.

Taken to its logical conclusion, this approach leads to basically assuming a fixed world of type signatures, and compiling two versions of everything: one that uses those assumptions, and another that dynamically dispatches everything, with side exits from the first to the second. The first version will sometimes be faster/better-typed than what we have now, but the second version will be slower. When a rogue method definition comes along, instead of recompiling things your program just gets (potentially a lot) slower. So I'm skeptical of whether that is worthwhile.

It seems better to me to try to sharpen invalidation using the inferred type of new methods. I don't yet know exactly how to do that, but it seems likely the payoff would be large. It could also potentially be combined with a type-hinting mechanism like that proposed here --- basically just use the current invalidation system (instead of side exits) if the declaration is violated.

The simplest version of that doesn't even need declarations: we could just infer every method on its declared signature, and if all methods of a function have the same return type save it for quick lookup during inference. The only problem with that is that experiments show it's way too expensive, so one possibility is just to have a hint telling us it's worthwhile to do that for a particular function.
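At the user level, that check would look something like this rough sketch (using code_typed reflection as a stand-in for what would really be an internal inference hook):

function uniform_return_type(f, argtypes::Tuple)
    rts = last.(code_typed(f, argtypes))      # inferred return type of each matching method
    isempty(rts) && return nothing
    T = reduce(typejoin, rts)                 # join them into a single type
    return isconcretetype(T) ? T : nothing    # only worth caching if they agree on one concrete type
end

uniform_return_type(eof, (IO,))    # Bool on master, per the listing earlier in the thread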

@timholy
Member Author

timholy commented Jun 30, 2020

The simplest version of that doesn't even need declarations: we could just infer every method on its declared signature, and if all methods of a function have the same return type save it for quick lookup during inference. The only problem with that is that experiments show it's way too expensive, so one possibility is just to have a hint telling us it's worthwhile to do that for a particular function.

Can we add infer_module(mod::Module) that checks all methods defined in a particular module? If that could be written to the *.ji file we'd basically be set (essentially, a new variant of precompile). And of course do that for Julia's own system image.
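A user-level sketch of what infer_module might do (very rough: it skips where-parameterized methods and just forces inference/compilation on each declared signature):

function infer_module(mod::Module)
    for name in names(mod; all = true)
        isdefined(mod, name) || continue
        f = getfield(mod, name)
        f isa Function || continue
        for m in methods(f)
            m.module === mod || continue
            m.sig isa DataType || continue              # skip methods with `where` parameters
            argtypes = (m.sig.parameters...,)[2:end]    # drop typeof(f); keep the declared argument types
            precompile(f, argtypes)                     # run inference on the widest signature
        end
    end
end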

For the record, the script here really doesn't take very long to run, and it covers all of Base and the stdlibs.

@oscardssmith
Member

Why was this closed?

@vtjnash
Member

vtjnash commented Mar 10, 2023

See Jeff's explanation above #36463 (comment)
