Simplify some functions #19
Conversation
@inbounds return $TensorType($expr)
end
@inline function otimes{dim}(S1::SecondOrderTensor{dim}, S2::SecondOrderTensor{dim})
    TensorType = getreturntype(otimes, get_base(typeof(S1)), get_base(typeof(S2)))
This will only be fast if getreturntype is @pure.
Edit: Should have kept scrolling.
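For context, the fast path presumably looks something like this (hypothetical sketch in the package's 0.5/0.6 syntax; the actual getreturntype methods may differ):

Base.@pure getreturntype{dim}(::typeof(otimes), ::Type{Tensor{2, dim}}, ::Type{Tensor{2, dim}}) = Tensor{4, dim}

With Base.@pure, inference can constant-fold the returned type, so the closure-based constructor above stays type stable.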
We could perhaps have some "reduce" function to simplify other things.
Tt = get_base(typeof(S))
tr = trace(S) / 3
Tt(
    @inline function(i, j)
This function will return a different eltype for Int tensors.
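The culprit, presumably: for an Int tensor, tr = trace(S) / 3 is a Float64, so the diagonal entries S[i,j] - tr become Float64 while the off-diagonal entries stay Int, producing a mixed-type tuple:

julia> (1 - 2/3, 1)   # mixed Float64/Int, like the tuple in the error below
(0.33333333333333337, 1)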
julia> Tensor{2, 3}((i, j)-> i == j ? 1 : 1.0)
ERROR: MethodError: Cannot `convert` an object of type Tuple{Int64,Float64,Float64,Float64,Int64,Float64,Float64,Float64,Int64} to an object of type Tensors.Tensor{2,3,T<:Real,M}
This may have arisen from a call to the constructor Tensors.Tensor{2,3,T<:Real,M}(...),
since type constructors fall back to convert methods.
in Tensors.Tensor{2,3,T<:Real,M}(::Tuple{Int64,Float64,Float64,Float64,Int64,Float64,Float64,Float64,Int64}) at ./sysimg.jl:53
in macro expansion at /home/fredrik/Dropbox/Programming/Tensors.jl/src/constructors.jl:15 [inlined]
in Tensors.Tensor{2,3,T<:Real,M}(::##1#2) at /home/fredrik/Dropbox/Programming/Tensors.jl/src/constructors.jl:5
Yep. I tried:
@inline function dev(S::SecondOrderTensor)
    Tt = get_base(typeof(S))
    tr = trace(S) / 3
    T = typeof(one(eltype(S)) / 3)
    Tt(
        @inline function(i, j)
            @inbounds if i == j
                return T(S[i,j] - tr)
            else
                return T(S[i,j])
            end
        end
    )
end
but it is slow for some reason.
Could do this:
@inline function dev{dim, T}(S::SecondOrderTensor{dim, T})
    Tt = get_base(typeof(S))
    tr = trace(S) / 3
    R = promote_type(T, typeof(tr))
    Tt(
        @inline function(i, j)
            @inbounds if i == j
                return R(S[i,j] - tr)
            else
                return R(S[i,j])
            end
        end
    )
end
Edit: yeah, it's really slow.
Looks like Tt is not inferred...
It doesn't work on master either, so:
julia> A = rand(Tensor{2, 3, Int});
julia> dev(A)
ERROR: MethodError: Cannot `convert` an object of type Tuple{Float64,Int64,Int64,Int64,Float64,Int64,Int64,Int64,Float64} to an object of type Tensors.Tensor{2,3,T<:Real,M}
This may have arisen from a call to the constructor Tensors.Tensor{2,3,T<:Real,M}(...),
since type constructors fall back to convert methods.
in macro expansion at /home/fredrik/Dropbox/Programming/Tensors.jl/src/math_ops.jl:275 [inlined]
in dev(::Tensors.Tensor{2,3,Int64,9}) at /home/fredrik/Dropbox/Programming/Tensors.jl/src/math_ops.jl:267
I put it back since it fails on master
We could perhaps limit the eltype for this method to AbstractFloat?
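Something like this, perhaps (a sketch, reusing the @inboundsret helper from this diff; unverified):

@inline function dev{dim, T <: AbstractFloat}(S::SecondOrderTensor{dim, T})
    Tt = get_base(typeof(S))
    tr = trace(S) / 3
    Tt(@inline function(i, j) @inboundsret i == j ? S[i,j] - tr : S[i,j]; end)
end

For T <: AbstractFloat, S[i,j] - tr and S[i,j] have the same type, so the eltype problem goes away.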
Ideally it should just promote, but I don't know why it gets slow then...
src/transpose.jl
$(Expr(:meta, :inline))
@inbounds return Tensor{4, dim}($expr)
end
@inline function majortranspose{dim}(S::Tensor{4, dim})
This method should have S::FourthOrderTensor{dim} as input but always return Tensor{4, dim}, so just change the input type signature.
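I.e., something like (sketch of the suggested signature change, assuming majortranspose swaps the two index pairs):

@inline function majortranspose{dim}(S::FourthOrderTensor{dim})
    Tensor{4, dim}(@inline function(i, j, k, l) @inbounds v = S[k,l,i,j]; v; end)
end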
Did some more functions. Using the function stuff is really powerful.
Jesus, we have a lot of benchmarks...
src/transpose.jl
@inbounds return Tensor{2, dim}($expr)
end
@inline function Base.transpose{dim}(S::Tensor{2, dim})
    Tensor{2, dim}(@inline function(i, j) @inbounds v = S[j,i]; v; end)
@inboundsret?
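I.e. (sketch of the suggested change):

@inline function Base.transpose{dim}(S::Tensor{2, dim})
    Tensor{2, dim}(@inline function(i, j) @inboundsret S[j,i]; end)
end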
This is great!
Yea, better safe than sorry :p
Benchmarks here: https://gist.github.com/KristofferC/8f423c6d4b2c1f614b3ff7f03fa7c394. I did not find any difference in the generated code for anything that looked like it could be a real regression.
I tested some cases too, didn't see any difference. Benchmarks really are weird...
Codecov Report
@@            Coverage Diff             @@
##           master      #19      +/-   ##
==========================================
- Coverage   97.62%   97.44%   -0.19%
==========================================
  Files          14       14
  Lines         970      899      -71
==========================================
- Hits          947      876      -71
  Misses         23       23
Continue to review full report at Codecov.
Pfft, 36 min on Travis 0.6. Is this due to a change in Base or this PR?
Locally on v0.6:
Yuck... hmm.
JuliaLang/julia#18077? But we upgraded to 0.6 from 0.5 :)
Force-pushed b76a616 to ad54bed.
Force-pushed ad54bed to aa4e141.
symmetric/skew-symmetric: 600 seconds! On master: 3.52 seconds. Construct from array: 13.8s -> 281s.
Seems to be extremely hard on inference with this method...
I don't get it though... on 0.5.1:
so why is it fast when doing the tests?
That time is just ridiculous LOL
I tried typeasserting the functions, but it didn't make a difference.
This is fast though:

immutable Tensor{order, dim, T <: Real, M}
    data::NTuple{M, T}
end

@generated function gg{order, dim}(S::Type{Tensor{order, dim}}, f::Function)
    exp = Expr(:tuple, [:(f($i,$j,$k,$l)) for i=1:dim, j=1:dim, k=1:dim, l=1:dim]...)
    return quote
        return $exp
    end
end

gg(Tensor{4,3}, (i,j,k,l) -> i+j+k+l)
With JuliaLang/julia#21085, tests on this branch are slightly faster than on master.
It seems that we can generally avoid using the […]

@generated function dcontract_muladd{dim}(S1::Tensor{2, dim}, S2::Tensor{2, dim})
    ex1 = Expr[:(S1[$i, $j]) for i in 1:dim, j in 1:dim][:]
    ex2 = Expr[:(S2[$i, $j]) for i in 1:dim, j in 1:dim][:]
    exp = reducer(ex1, ex2, true)
    return quote
        $(Expr(:meta, :inline))
        @inbounds return $exp
    end
end

produces just as good code as the one with […].
Yea, but for […]
Why not?
the […]
Ah, no. This also seems to be more expensive for compilation...
@inbounds return $TensorType($expr)
end
end
Base.fill{T <: AbstractTensor}(el::Number, S::Type{T}) = apply_all(get_base(T), i -> el)
This became really neat.
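Usage would look something like this (a sketch, assuming apply_all evaluates the function once per stored component):

fill(2.0, Tensor{2, 3})             # Tensor{2,3} with every component equal to 2.0
fill(1.0f0, SymmetricTensor{4, 2})  # works for any AbstractTensor subtype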
end
@inline Base.det(t::SecondOrderTensor{1}) = @inboundsret t[1,1]
@inline Base.det(t::SecondOrderTensor{2}) = @inboundsret (t[1,1] * t[2,2] - t[1,2] * t[2,1])
@inline function Base.det(t::SecondOrderTensor{3})
Will this give the same code even for SymmetricTensor?
Appears so, yes.
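For reference, the truncated 3×3 body above is presumably the standard cofactor expansion along the first row, something like:

@inline function Base.det(t::SecondOrderTensor{3})
    @inboundsret (t[1,1] * (t[2,2]*t[3,3] - t[2,3]*t[3,2]) -
                  t[1,2] * (t[2,1]*t[3,3] - t[2,3]*t[3,1]) +
                  t[1,3] * (t[2,1]*t[3,2] - t[2,2]*t[3,1]))
end

which is index access only, so it should lower identically for Tensor and SymmetricTensor once compute_index is elided.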
end
@inline function otimes{dim}(S1::SecondOrderTensor{dim}, S2::SecondOrderTensor{dim})
    TensorType = getreturntype(otimes, get_base(typeof(S1)), get_base(typeof(S2)))
    TensorType(@inline function(i,j,k,l) @inboundsret S1[i,j] * S2[k,l]; end)
Same here; results in the same code for SymmetricTensor?
I'm pretty sure I checked that, but I can check again.
As long as compute_index via getindex is elided for Tensor, there is no reason it shouldn't be for SymmetricTensor as well. Just making sure.
Yes, exact same code
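One way to verify is to compare the emitted IR for both types:

julia> @code_llvm otimes(rand(Tensor{2, 3}), rand(Tensor{2, 3}))
julia> @code_llvm otimes(rand(SymmetricTensor{2, 3}), rand(SymmetricTensor{2, 3}))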
@@ -7,7 +7,7 @@ end
@testsection "tensor ops" begin
    for T in (Float32, Float64, F64), dim in (1,2,3)
        println("T = $T, dim = $dim")
Perhaps this should be removed? It was included in #23
It is there because this section sometimes timed out when nothing got printed for 10 minutes, I think maybe when Travis is extra slow. So it's good to have something that prints, maybe?
But #23 results in prints just as often?
There is no printing there from what I can see?
Ah, right. It was removed in #23 :) https://github.com/KristofferC/Tensors.jl/pull/23/files#diff-78a3de5eaf1c4bcf42b32436cba105ceL8
Good to go?
Nice code deletion!
I tried this (#19 (comment)) for all functions. Test time (i.e. compilation time) went from 120s to 750s. Computing the linear index ourselves really helps the compiler. Sadly, since the closure approach led to some nice code deletion.
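Roughly, the idea is to go back to generation-time linear indexing (hypothetical sketch; assumes a get_data accessor for the backing column-major tuple):

@generated function Base.transpose{dim}(S::Tensor{2, dim})
    # index the data tuple directly with precomputed linear indices,
    # so inference never has to look through the (i, j) closure
    expr = Expr(:tuple, [:(get_data(S)[$(dim * (i - 1) + j)]) for i in 1:dim, j in 1:dim]...)
    return quote
        $(Expr(:meta, :inline))
        @inbounds return Tensor{2, dim}($expr)
    end
end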
Perhaps we can make an issue about it...
TODO: