Allow arrays to customize nonscalar indexing return type #20334
Conversation
This is a very small change (really only ±5 LOC) that adds a hook for custom arrays to customize the array storage that gets used for non-scalar indexing, based upon the indices. There is no functional change to Base here, but the extensibility is quite powerful. I've added a very basic Interpolations.jl-like array to the AbstractArray tests. With just a few extra definitions, it's able to fully participate in non-scalar indexing and views.

```julia
julia> v = InterpolatedVector([2^i for i=0:10])
11-element InterpolatedVector{Int64,Array{Int64,1}}:
 1
 2
 4
 ⋮

julia> v[2.5:5.5]
4-element Array{Float64,1}:
  3.0
  6.0
 12.0
 24.0

julia> @view v[2.5:5.5]
4-element SubArray{Float64,1,InterpolatedVector{Int64,Array{Int64,1}},Tuple{StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},false}:
  3.0
  6.0
 12.0
 24.0

julia> using DualNumbers

julia> v[[Dual(2.5,1),Dual(3.5,1),Dual(5.1,1)]]
3-element Array{DualNumbers.Dual{Float64},1}:
  3.0 + 2.0ɛ
  6.0 + 4.0ɛ
 17.6 + 16.0ɛ

julia> @view v[[Dual(2.5,1),Dual(3.5,1),Dual(5.1,1)]]
3-element SubArray{DualNumbers.Dual{Float64},1,InterpolatedVector{Int64,Array{Int64,1}},Tuple{Array{DualNumbers.Dual{Float64},1}},false}:
  3.0 + 2.0ɛ
  6.0 + 4.0ɛ
 17.6 + 16.0ɛ
```
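For readers who want to reproduce something like this locally, here is a rough sketch of what such an Interpolations.jl-like wrapper might look like. The names and the interpolation code are my own guesses, not the code added to the test suite, and only plain scalar indexing is shown; the non-scalar hooks appear in the diff excerpt quoted further down.

```julia
# Rough sketch (assumed names and implementation, not the actual test code) of
# an Interpolations.jl-like wrapper. Integer indices read straight from the
# underlying data; non-integer indices interpolate linearly between the two
# neighboring samples.
struct InterpolatedVector{T,A<:AbstractVector{T}} <: AbstractVector{T}
    data::A
end
InterpolatedVector(data::AbstractVector{T}) where {T} =
    InterpolatedVector{T,typeof(data)}(data)

Base.size(V::InterpolatedVector) = size(V.data)
Base.getindex(V::InterpolatedVector, i::Integer) = V.data[i]

function Base.getindex(V::InterpolatedVector, x::Number)
    lo = floor(Int, x)                   # sample at or below x
    hi = min(lo + 1, length(V.data))     # sample above x (clamped at the end)
    w = x - lo                           # fractional distance between them
    return (1 - w) * V.data[lo] + w * V.data[hi]
end
```

Indexing with `v[2.5]` then interpolates between `v[2]` and `v[3]`, which is how `v[2.5:5.5]` above produces `3.0, 6.0, 12.0, 24.0`; making the non-scalar forms (`v[2.5:5.5]`, `@view v[...]`) work is exactly what the hooks in this PR are for.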
The only trouble here is that inference really doesn't like inferring these `view`s:

```julia
julia> @time Base.unsafe_view(v, 1.5:6.5);
  0.256642 seconds (139.62 k allocations: 6.678 MiB)

julia> @time view(v, 1.5:6.5);
 39.703160 seconds (65.02 M allocations: 2.917 GiB, 3.72% gc time)

julia> @time view(v, 1.5:6.5);
  0.000120 seconds (22 allocations: 1.203 KiB)
```

Fortunately, this only seems to affect the first call.
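Not from the original thread, but for anyone investigating the slowdown: two generic ways to see what inference concludes about the expensive call are shown below.

```julia
# Generic inference diagnostics (my addition, not part of the PR discussion):
# inspect what return type inference settles on for the slow call, and where
# non-concrete intermediate types show up.
Base.return_types(view, Tuple{typeof(v), typeof(1.5:6.5)})
@code_warntype view(v, 1.5:6.5)
```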
```julia
Base.size(V::InterpolatedVector) = size(V.data)
# This custom type supports all Numbers as scalar indices; protect them in a 0-d array for non-scalar methods
Base.to_index(::InterpolatedVector, x::Number) = fill(x) # could be a specialized immutable
Base.indexed_eltype{T}(V::InterpolatedVector{T}, xs...) = _reduce_promote_eltype(T, xs...)
```
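My reading of what the hook buys you, sketched under that assumption (this is not the actual Base diff): non-scalar indexing asks the array which element type its output should have, given the index collections it was handed, and allocates the destination accordingly.

```julia
# Sketch of the mechanism as I understand it (not the actual Base diff).
# The fallback reproduces today's behavior; custom arrays override it to widen
# the output eltype based on the index collections.
indexed_eltype(A::AbstractArray, inds...) = eltype(A)

# Hypothetical 1-d consumer: allocate the destination with the customized
# eltype, then let scalar getindex (e.g. interpolation) fill each slot.
function nonscalar_getindex(A::AbstractVector, I::AbstractVector)
    dest = similar(A, indexed_eltype(A, I), size(I))
    for k in 1:length(I)
        dest[k] = A[I[k]]
    end
    return dest
end
```

With the `to_index`/`indexed_eltype` definitions above, an `Int`-eltype `InterpolatedVector` indexed with `Float64`s would allocate a `Float64` destination, matching the `Array{Float64,1}` results in the description.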
You could use `Base.promote_eltype(T, xs...)` instead of `_reduce_promote_eltype`. Unfortunately, that doesn't help with the slowness of inference over the `view`s you're seeing.
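For reference (my own illustration, not from the thread), `Base.promote_eltype` promotes the element types of its arguments, so it computes the same widening here:

```julia
# Base.promote_eltype promotes the eltypes of whatever it is given; e.g. an
# Int-eltype array indexed by a Float64 range yields a Float64 output eltype.
Base.promote_eltype(Int, 2.5:5.5)   # == Float64
```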
Yeah, I'm aware of that method, but I figured I'd make this test code only against the publicly available interface. Thanks for checking to see if it made a difference!
Really interesting case. @vtjnash @JeffBezanson @yuyichao, is this simple enough to be useful? Or does its dependence on
Care checking whether #20874 helps with the inference of
This was just a cute little feature that I thought would be fun, but I don't really need it myself and it doesn't seem to have others terribly excited either, so closing.
cc: @tlycken