# Inference failure dependent on Tuple signature annotation #29117

In a forthcoming rewrite of Interpolations.jl, I'm basing all operations on a new kind of array index: an abstract `WeightedIndex` type, for which indexing gets converted into a weighted combination of neighboring array elements. Unfortunately I'm running into limitations on inference. Here's a small reproducer (reconstructed below); Julia currently infers the return type as `Any` but actually returns a `Float64`, hence the `@inferred` line throws an error. Some interesting observations:

- In `sendtoback1`, if you change `Tuple{Any}` to `Tuple{Int}`, then this example gets inferred correctly as `Float64`. Of course, I don't want to restrict the weights to be of type `Int`.
- If you restore the commented-out general `sendtoback1` method, it goes back to failing.
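The reproducer itself wasn't preserved in this capture; the following is a reconstruction pieced together from the `WeightedAdjIndex` definition and the diff quoted later in the thread, so details may differ from the original:

```julia
# A concrete subtype used when the indexes are adjacent, e.g., i and i+1
struct WeightedAdjIndex{L,W}
    istart::Int
    weights::NTuple{L,W}
end
weights(wi::WeightedAdjIndex) = wi.weights
indexes(wi::WeightedAdjIndex) = wi.istart

# sendtoback processes `inds` one dimension at a time. Any finished indexes
# are appended to the end of the arguments; when `inds` is exhausted, this exits.
function sendtoback(A::AbstractArray{T,N}, inds, i::Vararg{Int,M}) where {T,N,M}
    wi = inds[1]
    sendtoback1(A, indexes(wi), weights(wi), Base.tail(inds), i)
end
sendtoback(A::AbstractArray{T,N}, ::Tuple{}, i::Vararg{Int,N}) where {T,N} = A[i...]

# sendtoback1 is responsible for processing a single dimension
# sendtoback1(A, j::Int, w::Tuple, inds, i) =
#     w[1]*sendtoback(A, inds, i..., j) + sendtoback1(A, j+1, Base.tail(w), inds, i)
sendtoback1(A, j::Int, w::Tuple{Any}, inds, i) = w[1]*sendtoback(A, inds, i..., j)
sendtoback1(A, j::Int, w::Tuple{}, inds, i) = error("this should never happen")

using Test
A = rand(3,3,3,3)
wis = ntuple(d->WeightedAdjIndex(2, (1,)), ndims(A))
@test @inferred(sendtoback(A, wis)) == A[2,2,2,2]
```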
---

This is intentional. Some limits it's hitting include:

- the vararg of finished indexes grows on each recursive call; inference must prove the recursive signatures converge, and an ever-growing vararg defeats that
So here, we're doing inference on essentially just the declared types. But I don't see an obvious way to either change this, or to add this as a convergent pattern into inference. I tried to consider some re-arrangements, but these didn't seem to bring any simplification so far:

```julia
# sendtoback processes `inds` one dimension at a time. Any finished indexes
# are appended to the end of the arguments; when `inds` is exhausted, this exits.
function sendtoback(A::AbstractArray, inds, i::Dims)
    wi = inds[1]
    return sendtoback1(A, indexes(wi), weights(wi), Base.tail(inds), i)
end
sendtoback(A::AbstractArray{T,N}, ::Tuple{}, i::Dims{N}) where {T,N} = A[i...]

# sendtoback1 is responsible for processing a single dimension
sendtoback1(A, j::Int, w::Tuple{Any, Any, Vararg{Any}}, inds, i::Dims) =
    w[1]*sendtoback(A, inds, (i..., j)) + sendtoback1(A, j+1, Base.tail(w), inds, i)
sendtoback1(A, j::Int, w::Tuple{Any}, inds, i::Dims) = w[1]*sendtoback(A, inds, (i..., j))
sendtoback1(A, j::Int, w::Tuple{}, inds, i::Dims) = error("this should never happen")
```

And an alternative arrangement that passes the weights as trailing varargs:

```julia
function sendtoback(A::AbstractArray, i::Dims, wi, inds...)
    return sendtoback1(A, indexes(wi), inds, i, weights(wi)...)
end
sendtoback(A::AbstractArray{T,N}, i::Dims{N}) where {T,N} = A[i...]

sendtoback1(A, j::Int, inds, i::Dims, w1, w...) =
    w1*sendtoback(A, (i..., j), inds...) + sendtoback1(A, j+1, inds, i, w...)
sendtoback1(A, j::Int, inds, i::Dims, w1) = w1*sendtoback(A, (i..., j), inds...)
sendtoback1(A, j::Int, inds, i::Dims) = error("this should never happen")
```
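For reference, one way to observe the widening directly (a sketch; assumes a fresh session containing the `WeightedAdjIndex` definitions from the OP plus the first rearrangement above):

```julia
A = rand(3,3,3,3)
wis = ntuple(d->WeightedAdjIndex(2, (1,)), ndims(A))
# the rearranged version threads the finished indexes as an explicit tuple,
# so the initial call supplies an empty Dims tuple; per the comment above,
# the inferred return type still comes back as Any
Base.return_types(sendtoback, (typeof(A), typeof(wis), Tuple{}))
```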
---

I suspected this was an inference-must-converge issue, and like you I'm struggling to find a way to express this that's easier on inference. In the short term, is my best course of action to write this as a generated function?
---

I think a version that is currently inferrable would have to take the following form:

```julia
function sendtoback(wi, inds...)
    tail_w, tail_i = sendtoback(inds...) # unwrap the weights and indexing information for the tail of `inds`
    head_w = weights(wi)                 # get the vector of new weights
    head_i = ntuple(j -> indexes(wi) + j - 1, Val(length(head_w))) # get the vector of new indexes
    W, I = # form the cartesian product between head and tail with `w` and `I`
    return W, I
end
sendtoback() = ((1,), ((),))  # base case: a unit weight paired with the empty index tuple

function sendtoback(A::AbstractArray, inds...)
    W, I = sendtoback(inds...)
    return sum(W[j] * A[I[j]...] for j = 1:length(W))
end
```
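Filling in the cartesian-product placeholder, one possible completion (my own sketch, not from the thread; `_flatten` is a hypothetical helper for concatenating a tuple of tuples):

```julia
function sendtoback(wi, inds...)
    tail_w, tail_i = sendtoback(inds...) # weights/indexes for the tail of `inds`
    head_w = weights(wi)
    head_i = ntuple(j -> indexes(wi) + j - 1, Val(length(head_w)))
    # every head weight multiplies every tail weight, and the matching head
    # index is prepended to the corresponding tail index tuple
    W = _flatten(map(hw -> map(tw -> hw * tw, tail_w), head_w))
    I = _flatten(map(hi -> map(ti -> (hi, ti...), tail_i), head_i))
    return W, I
end
sendtoback() = ((1,), ((),))

# concatenate a tuple of tuples into one flat tuple
_flatten(t::Tuple) = (t[1]..., _flatten(Base.tail(t))...)
_flatten(::Tuple{}) = ()
```

With the `sendtoback(A::AbstractArray, inds...)` method from the sketch above, `sendtoback(A, wis...)` then evaluates the full weighted sum.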
---

Thanks for the suggestion, I will try that in the morning. Given that it's mostly about proving convergence, I'm curious whether the following could, in principle, be proved to converge (the diff is relative to my OP):

```diff
diff --git a/inference.jl b/inference.jl
index 3538931..b500c37 100644
--- a/inference.jl
+++ b/inference.jl
@@ -8,7 +8,7 @@ indexes(wi::WeightedAdjIndex) = wi.istart
 
 # sendtoback processes `inds` one dimension at a time. Any finished indexes
 # are appended to the end of the arguments; when `inds` is exhausted, this exits.
-function sendtoback(A::AbstractArray{T,N}, inds, i::Vararg{Int,M}) where {T,N,M}
+function sendtoback(A::AbstractArray{T,N}, inds, i::Vararg{Union{Int,Nothing},N}) where {T,N}
     wi = inds[1]
     sendtoback1(A, indexes(wi), weights(wi), Base.tail(inds), i)
 end
@@ -17,14 +17,14 @@ sendtoback(A::AbstractArray{T,N}, ::Tuple{}, i::Vararg{Int,N}) where {T,N} = A[i...]
 
 # sendtoback1 is responsible for processing a single dimension
 # sendtoback1(A, j::Int, w::Tuple, inds, i) =
 #     w[1]*sendtoback(A, inds, i..., j) + sendtoback1(A, j+1, Base.tail(w), inds, i)
-sendtoback1(A, j::Int, w::Tuple{Any}, inds, i) = w[1]*sendtoback(A, inds, i..., j)
+sendtoback1(A, j::Int, w::Tuple{Any}, inds, i) = w[1]*sendtoback(A, inds, Base.tail(i)..., j)
 sendtoback1(A, j::Int, w::Tuple{}, inds, i) = error("this should never happen")
 
 using Test
 A = rand(3,3,3,3)
 wis = ntuple(d->WeightedAdjIndex(2, (1,)), ndims(A))
-
-@test @inferred(sendtoback(A, wis)) == A[2,2,2,2]
+filler = ntuple(d->nothing, ndims(A))
+@test @inferred(sendtoback(A, wis, filler...)) == A[2,2,2,2]
 # m = @which sendtoback(A, wis)
 # Core.Compiler.typeinf_code(m, typeof((A,wis)), Core.svec(Float64,4,0), true, Core.Compiler.Params(typemax(UInt)))
```

As I'm sure will be immediately evident to you, the design principle here was to avoid having the vararg grow (your first point in #29117 (comment)).
---

Somewhat minor point compared to the importance of inferrability, but computing the cartesian product of the coefficients first and then aggregating with the array values involves more multiplications. You can see this just for 2d linear interpolation:

```julia
c11 = a1*a2
c12 = a1*b2
c21 = b1*a2
c22 = b1*b2
c11*A[i,j] + c12*A[i,j+1] + c21*A[i+1,j] + c22*A[i+1,j+1]
```

(8 multiplies, 3 adds if you discount the additions for the indexes) vs

```julia
a1*(a2*A[i,j] + b2*A[i,j+1]) + b1*(a2*A[i+1,j] + b2*A[i+1,j+1])
```

(6 multiplies, 3 adds). For cubic interpolation in 3d it's 84 multiplies for the nested form vs 192 for the cartesian-product form; the ratio is bounded above by the dimensionality but can become a sizable fraction of it (e.g., in 20 dimensions it's a factor of ~15, not that anyone should be doing cubic interpolation in 20 dimensions).
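To make the counting concrete, here is a small sketch of the two multiply counts for k points per dimension in d dimensions (the formulas are inferred from the examples above, not stated in the thread):

```julia
# nested (factored) evaluation: k multiplies at the innermost level repeated
# k^(d-1) times, then k^(d-1) at the next level, etc. => k + k^2 + ... + k^d
nested_muls(k, d) = sum(k^i for i in 1:d)

# cartesian-product form: d-1 multiplies to build each of the k^d coefficients,
# plus one multiply per array value => d * k^d
cartesian_muls(k, d) = d * k^d

nested_muls(2, 2), cartesian_muls(2, 2)    # (6, 8): 2d linear, as above
nested_muls(4, 3), cartesian_muls(4, 3)    # (84, 192): 3d cubic, as above
cartesian_muls(4, 20) ÷ nested_muls(4, 20) # 15: the ~15x factor in 20 dimensions
```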
---

Ah, interesting. I think you could even use any in-domain value; then it's even more clear that each recursive call is strictly a function of its caller's arguments:
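The snippet that followed wasn't captured here; presumably it was the earlier diff with the `Union{Int,Nothing}` placeholders replaced by plain `Int`s, something like the following (my reconstruction; only the single-weight `sendtoback1` method is shown):

```julia
# the placeholders are real Ints (any in-domain value, e.g. 1), so the
# signature stays a fixed-length Vararg{Int,N} throughout the recursion
function sendtoback(A::AbstractArray{T,N}, inds, i::Vararg{Int,N}) where {T,N}
    wi = inds[1]
    sendtoback1(A, indexes(wi), weights(wi), Base.tail(inds), i)
end
sendtoback(A::AbstractArray{T,N}, ::Tuple{}, i::Vararg{Int,N}) where {T,N} = A[i...]
# each step drops one placeholder from the front and appends one real index,
# so every recursive call's signature is fully determined by its caller's
sendtoback1(A, j::Int, w::Tuple{Any}, inds, i) = w[1]*sendtoback(A, inds, Base.tail(i)..., j)

filler = ntuple(d -> 1, ndims(A))
sendtoback(A, wis, filler...)
```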
---

I don't usually like using (abusing?) `nothing` this way anyway.
---

Good idea. Here's some further changes I incorporated:

- moved the finished-index tuple `i` to the front of the argument list
- annotated essentially every argument with a concrete (or explicitly-lengthed) type
- added `@inline` annotations
Consequently I had high hopes for this version:

```julia
# A concrete subtype used when the indexes are adjacent, e.g., i and i+1
struct WeightedAdjIndex{L,W}
    istart::Int
    weights::NTuple{L,W}
end
weights(wi::WeightedAdjIndex) = wi.weights
indexes(wi::WeightedAdjIndex) = wi.istart

# sendtoback processes `inds` one dimension at a time. Any finished indexes
# are appended to the end of the arguments; when `inds` is exhausted, this exits.
function sendtoback(A::AbstractArray{T,N}, i::NTuple{N,Int}, wi, inds::Vararg{Any,M}) where {T,N,M}
    sendtoback1(A, i, indexes(wi), weights(wi), inds)
end
@inline sendtoback(A::AbstractArray{T,N}, i::NTuple{N,Int}) where {T,N} = A[i...]

# sendtoback1 is responsible for processing a single dimension
# sendtoback1(A::AbstractArray{T,N}, i::NTuple{N,Int}, j::Int, w::NTuple{L,Any}, inds::NTuple{M,Any}) where {T,N,L,M} =
#     w[1]*sendtoback(A, (Base.tail(i)..., j), inds...) + sendtoback1(A, i, j+1, Base.tail(w), inds)
@inline sendtoback1(A::AbstractArray{T,N}, i::NTuple{N,Int}, j::Int, w::Tuple{W}, inds::NTuple{M,Any}) where {T,N,W,M} =
    w[1]*sendtoback(A, (Base.tail(i)..., j), inds...)
sendtoback1(A::AbstractArray{T,N}, i::NTuple{N,Int}, j::Int, w::Tuple{}, inds::NTuple{M,Any}) where {T,N,M} = error("this should never happen")

using Test
A = rand(3,3,3,3)
wis = ntuple(d->WeightedAdjIndex(2, (1,)), ndims(A))
filler = ntuple(d->0, ndims(A))
@test @inferred(sendtoback(A, filler, wis...)) == A[2,2,2,2]
```

However, it didn't work, and other forms I tried fared no better. With this much manual annotation I'm starting to get a bit surprised this still doesn't work.
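One quick way to see where the widening kicks in (a sketch; the output format varies by Julia version):

```julia
using InteractiveUtils  # for @code_warntype outside the REPL

# prints the statement-by-statement inferred types; the recursive
# sendtoback call is where the result degrades to Any
@code_warntype sendtoback(A, filler, wis...)
```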
---

Ordering (mostly) doesn't matter, but depth (the number of tuple wrappers) does.

Regarding the annotations: yes, that'll probably help some with ensuring inference doesn't look at too many unreasonable signatures, but it's mostly not allowed to improve the result (I discovered this fact in #28517).

Regarding `@inline`: that'll alter the compiler heuristics, but it's ignored for inference (it may actually worsen inference, by creating a more complex type that we'll then end up needing to widen even faster; when possible, though, it tries not to distinguish between alternative versions of equivalent types).