setindex #75
Hi, looks like this package could indeed help you! If they're in an array, it should be quite a bit easier to modify the values.
Ah, great! I plan to add the Cholesky factorization; I am also using FixedSizeArrays for filtering and MCMC in the widest sense. I could also add the …
@michaellindon the `destructure` function may help here:

```
julia> using FixedSizeArrays

help?> destructure
search: destructure

  Destructure the elements of an array A to create M=ndims(eltype(A))
  additional dimensions prepended to the dimensions of A. The returned
  array is a view onto the original elements; additional dimensions occur
  first for consistency with the natural memory ordering.

  For example, AbstractArray{F<:FixedArray{T,M,SIZE},N} appears as an
  AbstractArray{T,M+N} after destructuring.

julia> v = [Vec(i,-i,1) for i=1:10]
10-element Array{FixedSizeArrays.Vec{3,Int64},1}:
 Vec(1,-1,1)
 Vec(2,-2,1)
 Vec(3,-3,1)
 Vec(4,-4,1)
 Vec(5,-5,1)
 Vec(6,-6,1)
 Vec(7,-7,1)
 Vec(8,-8,1)
 Vec(9,-9,1)
 Vec(10,-10,1)

julia> destructure(v)
3x10 Array{Int64,2}:
  1   2   3   4   5   6   7   8   9   10
 -1  -2  -3  -4  -5  -6  -7  -8  -9  -10
  1   1   1   1   1   1   1   1   1    1

julia> @fslice v[1:2,:]
2x10 Array{Int64,2}:
  1   2   3   4   5   6   7   8   9   10
 -1  -2  -3  -4  -5  -6  -7  -8  -9  -10
```

When writing the …
I'm not sure it's exactly what I am looking for. I have n 2-vectors and n 2x2 matrices. At the moment I am storing them as Arrays like `m = Array{FixedSizeArrays.Vec{2,Float64},1}()`. Note that if I specify the size, these elements get initialized and I cannot change their values. Nor can I create an empty array and use `push!(x, Mat(...))`, because the algorithm requires that I start at the end of `x` and go backwards: `x[n-1]` is computed from `x[n]`.

I did away with the Arrays and replaced them with Dicts, so I have `x[n-1] = f(x[n])` for `f` some linear function. Under profiling my code is spending most of its time in constructors.jl, which is frustrating, because the only way I can change these elements is an assignment from some `Vec((a,b))`, `Mat((a,b),(c,d))` type thing, as opposed to setting the indices of `x[n-1]`. In addition, `x`, `m`, `M` change at every iteration, and profiling my code shows memory allocation on the order of gigabytes. Overall my code takes 7x as long using FixedSizeArrays because I cannot preallocate things in advance.
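For readers following along: the backward pass described above does not actually require `push!` or `Dict`s. In current Julia syntax one can preallocate a vector of immutable fixed-size values with `undef` and overwrite whole elements while iterating from the end. A minimal sketch, with plain `NTuple`s standing in for `Vec` and a made-up linear update `f`:

```julia
# Preallocate n slots for immutable 2-element values (nothing is
# constructed yet), then fill them back-to-front.
n = 10
x = Vector{NTuple{2,Float64}}(undef, n)
x[n] = (1.0, 0.0)                         # seed the final state

# made-up stand-in for the linear update x[i] = f(x[i+1])
f(v::NTuple{2,Float64}) = (0.5*v[1] + v[2], 0.25*v[2] + 1.0)

for i in n-1:-1:1
    x[i] = f(x[i+1])   # whole-element assignment replaces the immutable value
end
```

No element is constructed twice, and the only heap allocation is the single `Vector` itself.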
@c42f it seems I missed your point about `fslice`, but does it work for matrices?
Your problem is that the list comprehension has failed type inference, so the element type of `v` is not concrete. For now it's easy enough to work around:

```
v = Mat{2,2,Float64}[Mat(rand(2,2)) for i=1:10]
@fslice v[1:2,1:2,:]
```
By the way, you can definitely change the values inside your `m` array:

```
m = Array{Vec{2,Float64},1}()
for i=1:10
    push!(m, Vec(1.0,i))
end
for j=1:10
    m[j] = Vec(2.0,j)
end
```
@c42f Thanks!! I've been able to implement it now as I had hoped, preallocating everything in advance, but sadly it still runs 1 second slower than using native linear algebra operations :/ The profile indicates the most burdensome lines are:

and

I feel like wrapping things in `Vec` and `Mat`, especially for the submatrix views, may be slow. Is there anything that jumps out at you for why this is slow? All my matrices are 2x2.
Native Julia:

Timing:

FixedSizeArrays:

Timing:
That's rather a lot of symbols to digest, and it's a bit hard to diagnose without any type information for the inputs to FFBS. To focus our attention, could we reduce the test case to just the second loop, and get a fully runnable snippet (including allocation of representative inputs, which could be randomly generated or whatever)? To give you some pointers, consider the expression

```
x[i] = m[i] + M[i]*A[i]'*\(AMAQ[i], x[i+1]-A[i]*m[i]) + chol(Σ + Mat(0.0000000001*eye(d)))'*Vec(rand(Normal(0,1),d))
```
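As a point of comparison, that update can be written against plain Arrays and the standard library, which makes each sub-expression easy to time in isolation. A sketch in current Julia syntax with randomly generated 2x2 inputs; every name here is a stand-in for a single step `i` (e.g. `xnext` for `x[i+1]`), and `cholesky` with a `1e-10` jitter stands in for `chol(Σ + Mat(0.0000000001*eye(d)))`:

```julia
using LinearAlgebra, Random

Random.seed!(0)
d = 2
m = randn(d)                    # stand-in for the filtered mean m[i]
Y = randn(d, d)
M = Y'Y + I                     # stand-in for the filtered covariance M[i] (positive definite)
A = randn(d, d)                 # stand-in for the transition matrix A[i]
X = randn(d, d)
Σ = Symmetric(X'X + I)          # an arbitrary valid covariance
AMAQ = A*M*A' + I               # stand-in for A[i]*M[i]*A[i]' + Q (positive definite)
xnext = randn(d)                # stand-in for x[i+1]

# The backward-sampling update, split into its two terms:
mean_part  = m + M*A' * (AMAQ \ (xnext - A*m))
noise_part = cholesky(Σ + 1e-10 * I).L * randn(d)
x = mean_part + noise_part
```

With the pieces separated like this, `@time`/`@allocated` on each line reveals which term dominates.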
I can answer some of this from memory:
Ah, now how did I not know about … For …:

```
immutable RandFunctor2{T} ; dist::T ; end
@inline call{T}(rf::RandFunctor2{T}, i...) = rand(rf.dist)
Base.rand{F<:FixedArray}(d::Distribution, ::Type{F}) = map(RandFunctor2(d), F)

# After which you can call
rand(Normal(0,1), Vec{2,Float64})
```

I should do a PR with something like the above, but FSAs probably shouldn't depend on Distributions.jl, and the existing RandFunctor doesn't make sense for use with Distributions.Distribution at the moment.
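A note for readers: `immutable` and `call` overloading are Julia 0.4-era syntax. The same idea, building an immutable fixed-size value by mapping a sampler over its components, can be sketched in current Julia with `ntuple`; `sample_ntuple` here is a hypothetical helper, not part of any package:

```julia
# Hypothetical helper: build an immutable fixed-size sample by applying
# a zero-argument sampler once per slot. Passing the length as Val(N)
# keeps it a compile-time constant, so the ntuple call can be unrolled.
sample_ntuple(draw, ::Val{N}) where {N} = ntuple(_ -> draw(), Val(N))

v = sample_ntuple(() -> randn(), Val(3))   # an NTuple{3,Float64}
```

Any sampler works for `draw`, e.g. a closure over a Distributions.jl distribution, without the fixed-size-array code depending on Distributions.jl.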
Ah, yes. For now I would rather add …
@michaellindon You see, it is a work in progress, but your "trying the ice to see if it holds" is very helpful for us.
Not true:

```
julia> using FixedSizeArrays

julia> a = Vec{2,Float64}[rand(2), rand(2)]
2-element Array{FixedSizeArrays.Vec{2,Float64},1}:
 Vec(0.4211134132103669,0.015044855516675781)
 Vec(0.6822470581398004,0.6019663948045095)

julia> a[1] = Vec(rand(2))
FixedSizeArrays.Vec{2,Float64}((0.2785146610286493,0.9090981367087467))

julia> a
2-element Array{FixedSizeArrays.Vec{2,Float64},1}:
 Vec(0.2785146610286493,0.9090981367087467)
 Vec(0.6822470581398004,0.6019663948045095)
```

For the bigger question, I fear your code will be pretty hard for anyone but you to debug: it's complicated, it's not complete, and we don't know the types of the inputs you're using. But generally speaking, …
Hi everyone, thanks for your comments. I profiled my code and it is spending the majority of its time in constructors.jl. The other blocks on that line are `ctranspose`, line 100 of ops.jl (this block occurs 4 times), and `convert`, line 9 of indexing.jl.

In short, the most expensive operations are:

As for the time spent constructing: at every iteration, `transition(Δ,ł)` and `innovation(Δ,ł)` each return a 3x3 `Mat`, for example:
`ctranspose` I cannot explain; I would have hoped that `A*B'` is as fast as `A*B`, like in BLAS.

`convert`: …

@mschauer, can you push your `randn(Vec{d,Float64})` function to the repo?
@timholy I just noticed something from your post: you specify the return type. I just tried the following:
constructors.jl line 52 seems to be the one for …
And the conversion code doesn't seem to be optimal either ;) I could try to make it faster.
Supplying the type of the `Mat` helps because the dimensionality can then be fixed at compile time; otherwise it would depend on the dimension of the array.
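To illustrate the point in current Julia syntax: when the dimension is a type parameter (as in `NTuple` or a fixed-size array type), the compiler specializes each method per size, whereas a `Vector`'s length is only known at run time. The helper names below are made up for the example:

```julia
# Dimension known at compile time: N is a type parameter, so the
# compiler specializes (and can fully unroll) this method per tuple size.
sumsq_static(t::NTuple{N,Float64}) where {N} = sum(t[i]^2 for i in 1:N)

# Dimension known only at run time: the length lives in the data,
# not the type, so one generic loop must serve every size.
sumsq_dynamic(v::Vector{Float64}) = sum(x^2 for x in v)

sumsq_static((1.0, 2.0, 3.0))   # the size 3 is part of the tuple's type
```

Both return the same value; the difference is that the static version is compiled once per size, with no length check or heap-allocated container involved.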
Okay, never mind, the typed code path is fairly optimal :D
@SimonDanisch, if you got rid of the size check then you could construct a …

@michaellindon, with that much red at the top of your profile, you have big bad type-stability problems. Specifying the sizes of your …
I'll make a PR this weekend if @c42f does not beat me to it.
@timholy I'm not sure either... It's super convenient in OpenGL ;) The best reason for it would be that it is a valid use case, and it will be hard to support this use case in a concise way otherwise.
@mschauer thanks for making that PR with randn; I've been away on holiday for the last couple of weeks or so.

@SimonDanisch I agree with Tim regarding size checks. IMO GLSL conversion semantics only make sense in the context of the specific things you're likely to do in a shading language (and closely related graphics tasks on the CPU side). For generality, I think FixedSizeArrays should aim for API compatibility with other dense arrays as much as possible, which would suggest explicit slicing for changing shape. (But until Julia has some combination of constant propagation and inlining mixed into type inference, that's not going to work well - ref #17.)
I have an application where I'm doing an enormous amount of computation on 3x3 and 4x4 matrices: multiplications, transposes, Cholesky factorizations, determinants, etc. My implementation in Julia is very vanilla, and I guess that all of these operations are being sent off to BLAS or LAPACK, which have quite a large overhead for such small matrices. I considered rewriting these computationally expensive functions in Eigen, because it has small vectorized fixed-size types for matrices and vectors. I was linked to this Julia package from my question on Stack Overflow. It looks like exactly what I'm looking for, except for the Cholesky or LDL factorization, but I can't edit the elements of a vector or matrix once it has been created (they are immutable?). I'm hoping to allocate memory for an array of fixed-size matrices or vectors and then change them at each iteration. It looks like setindex is on the to-do list. Is there any other way to change the elements without setindex? How hard would it be to implement, or are there other packages suitable for my application? Thanks!

For context, I'm implementing a forward filtering backward sampling algorithm and my chain is very long, and this happens at every iteration of an MCMC algorithm. The dimensionality of my state space is either 3 or 4, meaning that for each time point in my chain I have to do the linear algebra to generate a 4-dimensional random vector from a conditional normal distribution.
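The BLAS/LAPACK overhead described here is exactly what hand-unrolled small-matrix kernels avoid. As an illustration only (these helpers are hypothetical, not part of FixedSizeArrays), a fully unrolled 2x2 solve via Cramer's rule on plain tuples:

```julia
# A 2x2 matrix stored row-major as an NTuple: (a11, a12, a21, a22).
det2(A::NTuple{4,Float64}) = A[1]*A[4] - A[2]*A[3]

# Cramer's-rule solve of A*x = b, fully unrolled: no BLAS/LAPACK call,
# no heap allocation, just a handful of scalar operations.
function solve2(A::NTuple{4,Float64}, b::NTuple{2,Float64})
    d = det2(A)
    ((A[4]*b[1] - A[2]*b[2]) / d,
     (A[1]*b[2] - A[3]*b[1]) / d)
end

solve2((2.0, 0.0, 0.0, 4.0), (2.0, 8.0))   # → (1.0, 2.0)
```

For 3x3 and 4x4 the same unrolling applies; this per-size specialization is what a fixed-size-array package generates for you from the size encoded in the type.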