[WIP] Optimizing {+,-,*} for structured matrices #28883

Merged

@mcognetta (Contributor) commented on Aug 24, 2018

This is a work in progress to get binary operations on structured matrix types to work as intended. Currently there are several inefficiencies (usually due to fallbacks, the algorithm chosen to perform the binary operation, or an undesirable conversion mid-operation) as well as missing or broken methods.

Current Status:

In the tables below, ❌ marks errors (where the operation cannot even be evaluated) and ✔️ marks problems that already have a PR open to fix them.

First, we look at the current state of matrix addition and multiplication:

| Type of A | Type of B | Type of A+B | Type of A*B | Notes |
| --- | --- | --- | --- | --- |
| UpperTriangular | UpperTriangular | UpperTriangular | UpperTriangular | |
| UpperTriangular | LowerTriangular | Array | Array | |
| UpperTriangular | Diagonal | UpperTriangular | UpperTriangular | |
| UpperTriangular | U Bidiagonal | Array | Array | Add and Mul should be UpperTriangular |
| UpperTriangular | L Bidiagonal | Array | Array | |
| UpperTriangular | SymTridiagonal | Array | Array | |
| UpperTriangular | UniformScaling | UpperTriangular | UpperTriangular | |
| UpperTriangular | Array | Array | Array | |
| LowerTriangular | UpperTriangular | Array | Array | |
| LowerTriangular | LowerTriangular | LowerTriangular | LowerTriangular | |
| LowerTriangular | Diagonal | LowerTriangular | LowerTriangular | |
| LowerTriangular | U Bidiagonal | Array | Array | |
| LowerTriangular | L Bidiagonal | Array | Array | Add and Mul should be LowerTriangular |
| LowerTriangular | SymTridiagonal | Array | Array | |
| LowerTriangular | UniformScaling | LowerTriangular | LowerTriangular | |
| LowerTriangular | Array | Array | Array | |
| Diagonal | UpperTriangular | UpperTriangular | UpperTriangular | |
| Diagonal | LowerTriangular | LowerTriangular | LowerTriangular | |
| Diagonal | Diagonal | Diagonal | Diagonal | |
| Diagonal | U Bidiagonal | Bidiagonal | SparseMatrixCSC | Mul should be Bidiagonal |
| Diagonal | L Bidiagonal | Tridiagonal | SparseMatrixCSC | Add should be Bidiagonal, Mul can be Bidiagonal |
| Diagonal | SymTridiagonal | SymTridiagonal | SparseMatrixCSC | Mul should be Tridiagonal |
| Diagonal | UniformScaling | Diagonal | Diagonal | |
| Diagonal | Array | Array | Array | |
| U Bidiagonal | UpperTriangular | Array | ERROR ❌ | Add should be UpperTriangular, Mul should be UpperTriangular |
| U Bidiagonal | LowerTriangular | Array | ERROR ❌ | Mul should be Array |
| U Bidiagonal | Diagonal | Bidiagonal | SparseMatrixCSC | Mul should be U Bidiagonal |
| U Bidiagonal | U Bidiagonal | Bidiagonal | Array | Mul should be UpperTriangular or Sparse |
| U Bidiagonal | L Bidiagonal | Tridiagonal | Array | Mul should be Tridiagonal |
| U Bidiagonal | SymTridiagonal | ERROR ❌ | Array | Add should be Tridiagonal ✔️, Mul should be Sparse |
| U Bidiagonal | UniformScaling | Bidiagonal | Bidiagonal | |
| U Bidiagonal | Array | Array | Array | |
| L Bidiagonal | UpperTriangular | Array | ERROR ❌ | Mul should be Array |
| L Bidiagonal | LowerTriangular | Array | ERROR ❌ | Add and Mul should be LowerTriangular |
| L Bidiagonal | Diagonal | Tridiagonal | SparseMatrixCSC | Mul should be L Bidiagonal |
| L Bidiagonal | U Bidiagonal | Tridiagonal | Array | Mul should be Tridiagonal |
| L Bidiagonal | L Bidiagonal | Bidiagonal | Array | Mul should be LowerTriangular or Sparse |
| L Bidiagonal | SymTridiagonal | ERROR ❌ | Array | Add should be Tridiagonal ✔️, Mul should be Sparse |
| L Bidiagonal | UniformScaling | Bidiagonal | Bidiagonal | |
| L Bidiagonal | Array | Array | Array | |
| SymTridiagonal | UpperTriangular | Array | Array | |
| SymTridiagonal | LowerTriangular | Array | Array | |
| SymTridiagonal | Diagonal | SymTridiagonal | SparseMatrixCSC | Mul should be Tridiagonal |
| SymTridiagonal | U Bidiagonal | ERROR ❌ | Array | Add should be Tridiagonal ✔️ |
| SymTridiagonal | L Bidiagonal | ERROR ❌ | Array | Add should be Tridiagonal ✔️ |
| SymTridiagonal | SymTridiagonal | SymTridiagonal | Array | Mul should be Sparse |
| SymTridiagonal | UniformScaling | SymTridiagonal | SymTridiagonal | |
| SymTridiagonal | Array | Array | Array | |
| UniformScaling | UpperTriangular | UpperTriangular | UpperTriangular | |
| UniformScaling | LowerTriangular | LowerTriangular | LowerTriangular | |
| UniformScaling | Diagonal | Diagonal | Diagonal | |
| UniformScaling | U Bidiagonal | Bidiagonal | Bidiagonal | |
| UniformScaling | L Bidiagonal | Bidiagonal | Bidiagonal | |
| UniformScaling | SymTridiagonal | SymTridiagonal | SymTridiagonal | |
| UniformScaling | UniformScaling | UniformScaling | UniformScaling | |
| UniformScaling | Array | Array | Array | |
| Array | UpperTriangular | Array | Array | |
| Array | LowerTriangular | Array | Array | |
| Array | Diagonal | Array | Array | |
| Array | U Bidiagonal | Array | SparseMatrixCSC | Mul should be Array |
| Array | L Bidiagonal | Array | SparseMatrixCSC | Mul should be Array |
| Array | SymTridiagonal | Array | SparseMatrixCSC | Mul should be Array |
| Array | UniformScaling | Array | Array | |
| Array | Array | Array | Array | |

Below is a reduced table listing only the cases that need to be fixed:

| Type of A | Type of B | Type of A+B | Type of A*B | Notes |
| --- | --- | --- | --- | --- |
| UpperTriangular | U Bidiagonal | Array | Array | Add and Mul should be UpperTriangular |
| LowerTriangular | L Bidiagonal | Array | Array | Add and Mul should be LowerTriangular |
| Diagonal | U Bidiagonal | Bidiagonal | SparseMatrixCSC | Mul should be Bidiagonal |
| Diagonal | L Bidiagonal | Tridiagonal | SparseMatrixCSC | Add should be Bidiagonal, Mul can be Bidiagonal |
| Diagonal | SymTridiagonal | SymTridiagonal | SparseMatrixCSC | Mul should be Tridiagonal |
| U Bidiagonal | UpperTriangular | Array | ERROR ❌ | Add should be UpperTriangular, Mul should be UpperTriangular |
| U Bidiagonal | LowerTriangular | Array | ERROR ❌ | Mul should be Array |
| U Bidiagonal | Diagonal | Bidiagonal | SparseMatrixCSC | Mul should be U Bidiagonal |
| U Bidiagonal | U Bidiagonal | Bidiagonal | Array | Mul should be UpperTriangular or Sparse |
| U Bidiagonal | L Bidiagonal | Tridiagonal | Array | Mul should be Tridiagonal |
| U Bidiagonal | SymTridiagonal | ERROR ❌ | Array | Add should be Tridiagonal ✔️, Mul should be Sparse |
| L Bidiagonal | UpperTriangular | Array | ERROR ❌ | Mul should be Array |
| L Bidiagonal | LowerTriangular | Array | ERROR ❌ | Add and Mul should be LowerTriangular |
| L Bidiagonal | Diagonal | Tridiagonal | SparseMatrixCSC | Mul should be L Bidiagonal |
| L Bidiagonal | U Bidiagonal | Tridiagonal | Array | Mul should be Tridiagonal |
| L Bidiagonal | L Bidiagonal | Bidiagonal | Array | Mul should be LowerTriangular or Sparse |
| L Bidiagonal | SymTridiagonal | ERROR ❌ | Array | Add should be Tridiagonal ✔️, Mul should be Sparse |
| SymTridiagonal | Diagonal | SymTridiagonal | SparseMatrixCSC | Mul should be Tridiagonal |
| SymTridiagonal | U Bidiagonal | ERROR ❌ | Array | Add should be Tridiagonal ✔️ |
| SymTridiagonal | L Bidiagonal | ERROR ❌ | Array | Add should be Tridiagonal ✔️ |
| SymTridiagonal | SymTridiagonal | SymTridiagonal | Array | Mul should be Sparse |
| Array | U Bidiagonal | Array | SparseMatrixCSC | Mul should be Array |
| Array | L Bidiagonal | Array | SparseMatrixCSC | Mul should be Array |
| Array | SymTridiagonal | Array | SparseMatrixCSC | Mul should be Array |

These errors can be divided into roughly three categories (I am working on a list of examples for each):

  1. Improper conversion during evaluation
  2. Sub-optimal algorithm
  3. Missing implementation or some conversion error

Type 1 often happens when a catch-all fallback is hit. Many of the multiplication cases have very efficient implementations (thanks to #15505) but convert to Array or SparseMatrixCSC somewhere along the way, which eliminates the performance benefit. See bidiag.jl for most of those.
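To make the Type 1 problem concrete, here is a minimal sketch (hypothetical helper names, not the actual stdlib code): a catch-all fallback densifies both operands, whereas a specialized method can combine the stored bands directly and keep an O(n) result.

```julia
using LinearAlgebra

# Catch-all fallback: always densifies, so all structure (and the O(n)
# storage/runtime advantage) is lost. Hypothetical name, not stdlib code.
fallback_add(A::AbstractMatrix, B::AbstractMatrix) = Matrix(A) + Matrix(B)

# Specialized method: combine the stored bands directly. Assumes plain
# Vector storage and matching eltypes for simplicity.
function banded_add(A::Bidiagonal, B::Bidiagonal)
    if A.uplo == B.uplo                  # same orientation: still Bidiagonal
        Bidiagonal(A.dv + B.dv, A.ev + B.ev, Symbol(A.uplo))
    else                                 # mixed orientation: Tridiagonal
        lo, up = A.uplo == 'L' ? (A, B) : (B, A)
        Tridiagonal(Vector(lo.ev), A.dv + B.dv, Vector(up.ev))
    end
end
```

For example, `banded_add(Bidiagonal(rand(4), rand(3), :U), Bidiagonal(rand(4), rand(3), :L))` stays a `Tridiagonal` instead of falling back to a dense `Matrix`.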

Type 2 usually occurs with highly structured upper/lower triangular matrices. I am compiling a list of these, but I have also noticed some for addition with diagonal-like matrices.

Type 3 seems to be the most pressing, since the other two still produce correct results, just not as quickly. These errors often appear when symmetric types are involved. For example:

```julia
julia> S = SymTridiagonal(rand(4), rand(3))
4×4 SymTridiagonal{Float64,Array{Float64,1}}:
 0.285757  0.905595   ⋅          ⋅      
 0.905595  0.517686  0.618745    ⋅      
  ⋅        0.618745  0.0932621  0.333928
  ⋅         ⋅        0.333928   0.154753

julia> U = Bidiagonal(rand(4,4), :U)
4×4 Bidiagonal{Float64,Array{Float64,1}}:
 0.511746  0.344284   ⋅         ⋅      
  ⋅        0.380827  0.856202   ⋅      
  ⋅         ⋅        0.916337  0.528734
  ⋅         ⋅         ⋅        0.235697

julia> U+S
ERROR: ArgumentError: matrix cannot be represented as SymTridiagonal
Stacktrace:
 [1] SymTridiagonal(::Bidiagonal{Float64,Array{Float64,1}}) at /home/mc/github/julia-stable/usr/share/julia/stdlib/v1.0/LinearAlgebra/src/special.jl:18
 [2] convert at /home/mc/github/julia-stable/usr/share/julia/stdlib/v1.0/LinearAlgebra/src/special.jl:64 [inlined]
 [3] +(::Bidiagonal{Float64,Array{Float64,1}}, ::SymTridiagonal{Float64,Array{Float64,1}}) at /home/mc/github/julia-stable/usr/share/julia/stdlib/v1.0/LinearAlgebra/src/special.jl:105
 [4] top-level scope at none:0

julia> S+U
ERROR: ArgumentError: matrix cannot be represented as SymTridiagonal
Stacktrace:
 [1] SymTridiagonal(::Bidiagonal{Float64,Array{Float64,1}}) at /home/mc/github/julia-stable/usr/share/julia/stdlib/v1.0/LinearAlgebra/src/special.jl:18
 [2] convert at /home/mc/github/julia-stable/usr/share/julia/stdlib/v1.0/LinearAlgebra/src/special.jl:64 [inlined]
 [3] +(::SymTridiagonal{Float64,Array{Float64,1}}, ::Bidiagonal{Float64,Array{Float64,1}}) at /home/mc/github/julia-stable/usr/share/julia/stdlib/v1.0/LinearAlgebra/src/special.jl:106
 [4] top-level scope at none:0
```
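A fix along these lines is a dedicated method that widens to `Tridiagonal` instead of trying to convert the `Bidiagonal` to `SymTridiagonal`. This is only a sketch with a hypothetical helper name and assumes matching eltypes; the actual method added for this case lives in LinearAlgebra's `special.jl` and may differ in detail.

```julia
using LinearAlgebra

# Widen to Tridiagonal, which can hold the sum exactly, rather than
# converting the Bidiagonal to SymTridiagonal (which throws above).
function bidiag_plus_symtri(A::Bidiagonal, S::SymTridiagonal)
    dv = A.dv + S.dv
    if A.uplo == 'U'
        Tridiagonal(Vector(S.ev), dv, A.ev + S.ev)   # sub, diag, super
    else
        Tridiagonal(A.ev + S.ev, dv, Vector(S.ev))
    end
end

S = SymTridiagonal(rand(4), rand(3))
U = Bidiagonal(rand(4), rand(3), :U)
bidiag_plus_symtri(U, S)    # 4×4 Tridiagonal, no conversion error
```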

Progress:

Currently, I have three related open PRs: #28343, #28345, #28534. My hope is that this PR subsumes them. I also have a patch that fixes most of the multiplication issues, but it fails for symmetric matrices when the eltype is BigFloat/BigInt. This seems to be a problem with the matrix constructors accessing undefined values.
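My reading of that constructor problem (an assumption, not verified against the patch) is the usual pitfall with `undef` allocation of non-bitstype elements: the entries are unassigned references, so anything that reads them before writing them throws.

```julia
# For a non-bitstype eltype such as BigFloat, an `undef`-allocated matrix
# holds unassigned references ...
M = Matrix{BigFloat}(undef, 2, 2)

# ... so reading an entry before it has been written throws:
try
    M[1, 1] + M[2, 2]
catch err
    @show err            # UndefRefError()
end

# For a bitstype like Float64 the same pattern silently reads garbage instead:
F = Matrix{Float64}(undef, 2, 2)
F[1, 1] + F[2, 2]        # some arbitrary value, but no error
```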

Related Issues and PRs:

#28507, #28534, #28343, #28345, JuliaLang/LinearAlgebra.jl#548, #27405, JuliaLang/LinearAlgebra.jl#533, JuliaLang/LinearAlgebra.jl#515, JuliaLang/LinearAlgebra.jl#525, JuliaLang/LinearAlgebra.jl#136, #28451, #29045


Benchmarks:

All matrices are 1000×1000 with element type Float64.
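The exact benchmarking script is not part of this PR; the sketch below is a hypothetical reconstruction of the setup (using BenchmarkTools and a subset of the types) to show how timings of this kind can be reproduced.

```julia
using LinearAlgebra, BenchmarkTools, Random

Random.seed!(1)
n = 1000
mats = Dict(
    "UpperTriangular" => UpperTriangular(rand(n, n)),
    "Diagonal"        => Diagonal(rand(n)),
    "UBidiagonal"     => Bidiagonal(rand(n), rand(n - 1), :U),
    "SymTridiagonal"  => SymTridiagonal(rand(n), rand(n - 1)),
)

for (na, A) in mats, (nb, B) in mats, (opname, op) in (("+", +), ("*", *))
    t = try
        @belapsed $op($A, $B)       # minimum elapsed time in seconds
    catch
        missing                     # combinations that currently error
    end
    label = t === missing ? "ERROR" : string(round(t * 1e3, digits = 3), " ms")
    println(rpad(na, 16), rpad(nb, 16), opname, "  ", label)
end
```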

| Type of A | Type of B | A+B (stable) | A+B (dev) | A*B (stable) | A*B (dev) |
| --- | --- | --- | --- | --- | --- |
| UpperTriangular | UpperTriangular | 3.140 ms | 1.907 ms | 23.460 ms | 23.612 ms |
| UpperTriangular | LowerTriangular | 9.440 ms | 7.338 ms | 23.715 ms | 23.459 ms |
| UpperTriangular | Tridiagonal | 8.985 ms | 2.601 ms | 34.949 ms | 3.526 ms |
| UpperTriangular | Diagonal | 4.355 ms | 2.225 ms | 2.660 ms | 2.717 ms |
| UpperTriangular | UBidiagonal | 6.929 ms | 4.946 ms | 34.524 ms | 2.523 ms |
| UpperTriangular | LBidiagonal | 7.400 ms | 5.375 ms | 34.767 ms | 3.536 ms |
| UpperTriangular | SymTridiagonal | 6.773 ms | 2.596 ms | 34.804 ms | 3.492 ms |
| UpperTriangular | UniformScaling | 1.522 ms | 1.206 ms | 1.907 ms | 1.907 ms |
| UpperTriangular | Array | 6.013 ms | 2.388 ms | 22.329 ms | 22.384 ms |
| LowerTriangular | UpperTriangular | 8.453 ms | 7.371 ms | 23.541 ms | 23.811 ms |
| LowerTriangular | LowerTriangular | 2.007 ms | 1.870 ms | 23.263 ms | 23.722 ms |
| LowerTriangular | Tridiagonal | 6.274 ms | 2.514 ms | 34.425 ms | 3.581 ms |
| LowerTriangular | Diagonal | 3.985 ms | 2.031 ms | 2.664 ms | 2.745 ms |
| LowerTriangular | UBidiagonal | 6.362 ms | 5.392 ms | 34.189 ms | 3.567 ms |
| LowerTriangular | LBidiagonal | 6.273 ms | 5.354 ms | 34.527 ms | 2.492 ms |
| LowerTriangular | SymTridiagonal | 6.314 ms | 2.599 ms | 34.567 ms | 3.524 ms |
| LowerTriangular | UniformScaling | 1.222 ms | 1.209 ms | 1.919 ms | 1.915 ms |
| LowerTriangular | Array | 5.455 ms | 2.408 ms | 22.034 ms | 21.916 ms |
| Tridiagonal | UpperTriangular | 6.299 ms | 2.650 ms | 19.080 ms | 3.093 ms |
| Tridiagonal | LowerTriangular | 6.267 ms | 2.836 ms | 19.246 ms | 3.115 ms |
| Tridiagonal | Tridiagonal | 4.152 μs | 4.166 μs | 33.673 ms | 430.877 μs |
| Tridiagonal | Diagonal | 6.736 μs | 1.385 μs | 7.756 ms | 7.777 ms |
| Tridiagonal | UBidiagonal | 5.449 μs | 2.795 μs | 33.311 ms | 356.477 μs |
| Tridiagonal | LBidiagonal | 5.393 μs | 2.790 μs | 33.282 ms | 352.214 μs |
| Tridiagonal | SymTridiagonal | 4.981 μs | 4.109 μs | 33.282 ms | 427.668 μs |
| Tridiagonal | UniformScaling | 9.864 μs | 9.550 μs | 4.154 μs | 4.174 μs |
| Tridiagonal | Array | 4.205 ms | 2.282 ms | 1.976 ms | 1.964 ms |
| Diagonal | UpperTriangular | 4.199 ms | 2.239 ms | 3.626 ms | 3.639 ms |
| Diagonal | LowerTriangular | 4.183 ms | 2.909 ms | 4.281 ms | 4.304 ms |
| Diagonal | Tridiagonal | 6.635 μs | 1.386 μs | 9.383 ms | 6.226 ms |
| Diagonal | Diagonal | 1.383 μs | 1.384 μs | 1.437 μs | 1.405 μs |
| Diagonal | UBidiagonal | 4.045 μs | 1.389 μs | 11.821 ms | 7.316 ms |
| Diagonal | LBidiagonal | 2.672 μs | 1.391 μs | 11.960 ms | 6.878 ms |
| Diagonal | SymTridiagonal | 4.018 μs | 1.399 μs | 9.405 ms | 7.506 ms |
| Diagonal | UniformScaling | 805.068 ns | 807.778 ns | 1.414 μs | 1.381 μs |
| Diagonal | Array | 4.225 ms | 1.895 ms | 2.642 ms | 2.629 ms |
| UBidiagonal | UpperTriangular | 6.415 ms | 5.384 ms | ERROR | 3.396 ms |
| UBidiagonal | LowerTriangular | 6.417 ms | 5.337 ms | ERROR | 3.172 ms |
| UBidiagonal | Tridiagonal | 5.488 μs | 2.777 μs | 33.413 ms | 343.670 μs |
| UBidiagonal | Diagonal | 4.110 μs | 1.388 μs | 10.203 ms | 10.370 ms |
| UBidiagonal | UBidiagonal | 2.775 μs | 2.738 μs | 33.442 ms | 276.346 μs |
| UBidiagonal | LBidiagonal | 1.402 μs | 1.392 μs | 52.543 ms | 276.267 μs |
| UBidiagonal | SymTridiagonal | ERROR | 2.779 μs | 33.886 ms | 352.516 μs |
| UBidiagonal | UniformScaling | 11.390 μs | 11.472 μs | 2.788 μs | 2.763 μs |
| UBidiagonal | Array | 4.243 ms | 4.995 ms | 1.955 ms | 1.965 ms |
| LBidiagonal | UpperTriangular | 6.428 ms | 5.348 ms | ERROR | 3.098 ms |
| LBidiagonal | LowerTriangular | 6.357 ms | 5.346 ms | ERROR | 3.371 ms |
| LBidiagonal | Tridiagonal | 5.389 μs | 2.784 μs | 33.407 ms | 344.530 μs |
| LBidiagonal | Diagonal | 2.719 μs | 1.400 μs | 10.341 ms | 10.431 ms |
| LBidiagonal | UBidiagonal | 1.415 μs | 1.422 μs | 33.344 ms | 274.719 μs |
| LBidiagonal | LBidiagonal | 2.771 μs | 2.811 μs | 33.455 ms | 272.700 μs |
| LBidiagonal | SymTridiagonal | ERROR | 2.863 μs | 33.383 ms | 356.776 μs |
| LBidiagonal | UniformScaling | 11.502 μs | 11.527 μs | 2.793 μs | 2.755 μs |
| LBidiagonal | Array | 4.290 ms | 5.222 ms | 1.951 ms | 1.968 ms |
| SymTridiagonal | UpperTriangular | 6.403 ms | 2.761 ms | 19.208 ms | 3.197 ms |
| SymTridiagonal | LowerTriangular | 6.535 ms | 2.765 ms | 19.505 ms | 3.137 ms |
| SymTridiagonal | Tridiagonal | 5.069 μs | 4.270 μs | 33.756 ms | 431.154 μs |
| SymTridiagonal | Diagonal | 4.030 μs | 1.447 μs | 8.017 ms | 7.793 ms |
| SymTridiagonal | UBidiagonal | ERROR | 2.843 μs | 33.779 ms | 349.050 μs |
| SymTridiagonal | LBidiagonal | ERROR | 2.821 μs | 33.868 ms | 362.691 μs |
| SymTridiagonal | SymTridiagonal | 2.773 μs | 2.784 μs | 33.553 ms | 425.618 μs |
| SymTridiagonal | UniformScaling | 2.837 μs | 2.810 μs | 2.782 μs | 2.768 μs |
| SymTridiagonal | Array | 4.339 ms | 2.292 ms | 1.951 ms | 1.935 ms |
| UniformScaling | UpperTriangular | 1.215 ms | 1.224 ms | 1.924 ms | 2.029 ms |
| UniformScaling | LowerTriangular | 1.230 ms | 1.222 ms | 1.909 ms | 1.925 ms |
| UniformScaling | Tridiagonal | 9.894 μs | 9.896 μs | 4.201 μs | 4.146 μs |
| UniformScaling | Diagonal | 828.440 ns | 822.809 ns | 1.403 μs | 1.394 μs |
| UniformScaling | UBidiagonal | 11.560 μs | 11.629 μs | 2.771 μs | 2.777 μs |
| UniformScaling | LBidiagonal | 11.547 μs | 11.531 μs | 2.781 μs | 2.755 μs |
| UniformScaling | SymTridiagonal | 2.830 μs | 2.836 μs | 2.772 μs | 2.751 μs |
| UniformScaling | UniformScaling | 2.039 ns | 1.964 ns | 2.040 ns | 2.129 ns |
| UniformScaling | Array | 1.258 ms | 1.202 ms | 1.923 ms | 1.931 ms |
| Array | UpperTriangular | 5.527 ms | 2.342 ms | 19.707 ms | 19.521 ms |
| Array | LowerTriangular | 5.614 ms | 2.343 ms | 20.081 ms | 19.349 ms |
| Array | Tridiagonal | 4.355 ms | 2.224 ms | 245.244 ms | 245.412 ms |
| Array | Diagonal | 4.376 ms | 1.934 ms | 2.653 ms | 2.658 ms |
| Array | UBidiagonal | 4.367 ms | 5.081 ms | 245.247 ms | 244.493 ms |
| Array | LBidiagonal | 4.368 ms | 5.197 ms | 251.998 ms | 242.040 ms |
| Array | SymTridiagonal | 4.378 ms | 2.224 ms | 245.548 ms | 242.296 ms |
| Array | UniformScaling | 1.215 ms | 1.213 ms | 1.925 ms | 1.985 ms |
| Array | Array | 1.890 ms | 1.893 ms | 32.862 ms | 30.270 ms |

To do:

  1. [x] Combine the 3 PRs that I have open into this one and upload the other fixes I have.
  2. Benchmark all of the changes made so far.
  3. Find a workaround to move some matrix multiplication methods out of SparseArrays.

```julia
julia> versioninfo()
Julia Version 1.0.0
Commit 5d4eaca0c9 (2018-08-08 20:58 UTC)
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.0 (ORCJIT, skylake)
```

@chriscoey

Thank you!!! How can I/others best help with this?

@mcognetta
Contributor Author

@chriscoey There are a few places to help other than what I have already started (which I encourage people to check out and make more elegant/efficient!). The two that come to mind most are (1) checking other combinations of structured matrix types that aren't covered here and seeing if there is a way to speed them up, and (2) working with Adjoint/Transpose matrices. For the former, it seems like a mostly manual process of trying all the combinations and making sure they output the best possible type in the best possible way (for multiplication, the specialized methods are pretty efficient, but there is some work that can be done on +/-). As for Adjoint/Transpose, I have precisely zero experience working with those in Julia, so any help at all on that front is greatly appreciated.
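For anyone who wants to pick up the first item, a small audit helper along these lines (just a sketch, not part of the PR) makes the manual process less tedious: it records the result type, or the error, for every pair and operation.

```julia
using LinearAlgebra

n = 4
samples = [
    UpperTriangular(rand(n, n)),
    LowerTriangular(rand(n, n)),
    Diagonal(rand(n)),
    Bidiagonal(rand(n), rand(n - 1), :U),
    Bidiagonal(rand(n), rand(n - 1), :L),
    SymTridiagonal(rand(n), rand(n - 1)),
]

# Record what typeof(op(A, B)) currently is, or which error it throws, so
# result-type regressions are easy to spot when methods change.
for A in samples, B in samples, op in (+, -, *)
    result = try
        string(nameof(typeof(op(A, B))))
    catch err
        "ERROR: " * string(nameof(typeof(err)))
    end
    println(nameof(typeof(A)), "  ", op, "  ", nameof(typeof(B)), "  ->  ", result)
end
```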

The other thing that I briefly mentioned above is removing the matrix multiplication methods from SparseArrays. There are a few combinations that output sparse but unstructured matrices (at least not structured like anything supported in the stdlib; the ones that come to mind are things like upper bidiagonal * tridiagonal, which would be a great candidate for BandedMatrices.jl but currently returns a sparse matrix). Currently, the multiplication methods for these live in SparseArrays, not LinearAlgebra. Someone suggested having a fallback method inside LinearAlgebra that outputs a suboptimal type and then having it overwritten in SparseArrays, which seems reasonable, but I don't know how the core contributors feel about that. One upside of this approach comes from the discussion on Discourse about removing SparseArrays from the stdlib: https://discourse.julialang.org/t/future-of-sparse-matrices-in-base-stdlib/11914
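The fallback-then-override idea would look roughly like the sketch below. Everything here is hypothetical (the module and function names are made up, and this is not how the stdlib is actually organized); it only illustrates the pattern of a structure-losing fallback that a sparse-aware module replaces when it is loaded.

```julia
module StructuredFallbacks
    using LinearAlgebra
    # Dependency-free fallback: structurally suboptimal (dense) result.
    banded_mul(A::Bidiagonal, B::Tridiagonal) = Matrix(A) * Matrix(B)
end

module SparseStructured
    using LinearAlgebra, SparseArrays
    import ..StructuredFallbacks
    # If SparseArrays is available, overwrite the fallback with a method
    # that keeps the result sparse.
    StructuredFallbacks.banded_mul(A::Bidiagonal, B::Tridiagonal) =
        sparse(A) * sparse(B)
end

A = Bidiagonal(rand(5), rand(4), :U)
B = Tridiagonal(rand(4), rand(5), rand(4))
StructuredFallbacks.banded_mul(A, B)   # SparseMatrixCSC once the override is loaded
```

Whether overwriting a method owned by another stdlib module is acceptable is exactly the open question mentioned above.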

Looking forward to working with anyone who is interested!

@chriscoey

Thanks! And what about ldiv/rdiv methods for structured matrices? I opened up an issue on that yesterday #28864

@mcognetta
Contributor Author

Ah right, in my head that was a totally different beast. If you have some progress there and want to combine it into this one, that would be fine. Or they can proceed independently.

Don't the solvers/div methods usually end up calling C or Fortran code? I thought most aren't implemented in pure Julia, unlike the + or * implementations.

@chriscoey

I have been looking at div with a triangular matrix, and there are already pure Julia implementations in use for that (they have been there for years), e.g.

`function rdiv!(A::StridedMatrix, B::UpperTriangular)` and
`function ldiv!(transA::Transpose{<:Any,<:LowerTriangular}, b::AbstractVector, x::AbstractVector)`.

Also see the relevant discussion at JuliaLang/LinearAlgebra.jl#366.

So I think there is as much to gain from structured div methods as from the +, -, * methods you listed, and it is probably worth considering them at the same time. Can anyone else chime in to agree/disagree?
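For readers who haven't looked at those methods: the triangular solves are essentially O(n^2) substitution loops, so they benefit from structure in the same way +, -, and * do. A minimal forward-substitution sketch (assumptions: square, nonsingular, non-unit diagonal; the real `LinearAlgebra.ldiv!` methods are more general):

```julia
using LinearAlgebra

# Solve L*x = b by forward substitution, never touching the zero upper part.
function forward_solve!(x::AbstractVector, L::LowerTriangular, b::AbstractVector)
    n = length(b)
    for i in 1:n
        s = b[i]
        for j in 1:i-1
            s -= L[i, j] * x[j]
        end
        x[i] = s / L[i, i]
    end
    return x
end

L = LowerTriangular(rand(4, 4) + 4I)   # well-conditioned example
b = rand(4)
x = forward_solve!(similar(b), L, b)
L * x ≈ b    # true
```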

@mcognetta
Contributor Author

Thanks for the info. In that case, I think they should be done at the same time. If you have some progress on these, we can continue from there.

Commits pushed (reverting the sparse changes):

- Revert of 3a58908 (reversing changes made to 0facd1d): "… These should go in another PR so this one can be merged more quickly."
- Revert "added sparse multiplication and division for triangular matrices. Fix JuliaLang#28451": "This reverts commit 11c1d1d."
@mcognetta
Contributor Author

So as to keep this PR simple, I tried to remove the commits (by @KlausC) related to sparse matrix multiplication and division, but I messed it up. I could use a bit of help with it. I think division and sparse matrix operations should be in a separate PR (maybe more than one).


The benchmarks have been updated, and all of the regressions for structured +/- have been fixed. There are a few more small optimizations that can be made (adding specialized +/- methods for some of the triangular matrices, similar to how it is done for multiplication), but they aren't major performance increases.
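As an example of the kind of remaining optimization (a sketch with a hypothetical helper name, not code from this PR): for two `UpperTriangular` operands only the upper triangle needs to be visited, rather than adding the two full parent matrices and rewrapping.

```julia
using LinearAlgebra

# Add only the stored (upper) triangles and wrap the result.
function upper_add(A::UpperTriangular, B::UpperTriangular)
    n = size(A, 1)
    C = zeros(promote_type(eltype(A), eltype(B)), n, n)
    @inbounds for j in 1:n, i in 1:j   # column-major walk over the upper triangle
        C[i, j] = A[i, j] + B[i, j]
    end
    return UpperTriangular(C)
end

A = UpperTriangular(rand(4, 4)); B = UpperTriangular(rand(4, 4))
upper_add(A, B) == A + B    # true, while touching only the stored triangle
```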

@KlausC (Contributor) commented Sep 15, 2018 via email

@mcognetta
Contributor Author

@andreasnoack @Sacha0 Sorry to ping you but I believe this is ready for review.

There is one part in particular that I think may have a better solution which I discuss in #28883 (comment)

Thanks!

@andreasnoack
Member

I have a deadline tomorrow but will try to find time to review later this week.

@mcognetta
Contributor Author

This was discussed in Slack but there is a class of errors in this PR that are related to JuliaLang/LinearAlgebra.jl#562.

```julia
julia> A = Bidiagonal(1:3, 1:2, 'U')
3×3 Bidiagonal{Int64,UnitRange{Int64}}:
 1  1  ⋅
 ⋅  2  2
 ⋅  ⋅  3

julia> B = Bidiagonal(rand(3), rand(2), 'L')
3×3 Bidiagonal{Float64,Array{Float64,1}}:
 0.0447606   ⋅         ⋅      
 0.185935   0.930153   ⋅      
  ⋅         0.951137  0.979777

julia> A+B
ERROR: MethodError: no method matching Tridiagonal(::Array{Float64,1}, ::Array{Float64,1}, ::UnitRange{Int64})
Closest candidates are:
  Tridiagonal(::V<:AbstractArray{T,1}, ::V<:AbstractArray{T,1}, ::V<:AbstractArray{T,1}) where {T, V<:AbstractArray{T,1}} at /home/mc/github/julia-dev/usr/share/julia/stdlib/v1.1/LinearAlgebra/src/tridiag.jl:452
  Tridiagonal(::V<:AbstractArray{T,1}, ::V<:AbstractArray{T,1}, ::V<:AbstractArray{T,1}, ::V<:AbstractArray{T,1}) where {T, V<:AbstractArray{T,1}} at /home/mc/github/julia-dev/usr/share/julia/stdlib/v1.1/LinearAlgebra/src/tridiag.jl:453
Stacktrace:
 [1] +(::Bidiagonal{Int64,UnitRange{Int64}}, ::Bidiagonal{Float64,Array{Float64,1}}) at /home/mc/github/julia-dev/usr/share/julia/stdlib/v1.1/LinearAlgebra/src/bidiag.jl:307
 [2] top-level scope at none:0

julia> B = Bidiagonal(rand(Int64,3), rand(Int64,2), 'L')
3×3 Bidiagonal{Int64,Array{Int64,1}}:
 9095805711555887097                    ⋅                    ⋅
 5092396833962771884  1172413147679226839                    ⋅
                   ⋅   964896152414013730  3531136677195223251

julia> A = Bidiagonal(rand(3), rand(2), 'U')
3×3 Bidiagonal{Float64,Array{Float64,1}}:
 0.415024  0.392009   ⋅      
  ⋅        0.902526  0.119168
  ⋅         ⋅        0.682695

julia> A+B
ERROR: MethodError: no method matching Tridiagonal(::Array{Int64,1}, ::Array{Float64,1}, ::Array{Float64,1})
Closest candidates are:
  Tridiagonal(::V<:AbstractArray{T,1}, ::V<:AbstractArray{T,1}, ::V<:AbstractArray{T,1}) where {T, V<:AbstractArray{T,1}} at /home/mc/github/julia-dev/usr/share/julia/stdlib/v1.1/LinearAlgebra/src/tridiag.jl:452
  Tridiagonal(::V<:AbstractArray{T,1}, ::V<:AbstractArray{T,1}, ::V<:AbstractArray{T,1}, ::V<:AbstractArray{T,1}) where {T, V<:AbstractArray{T,1}} at /home/mc/github/julia-dev/usr/share/julia/stdlib/v1.1/LinearAlgebra/src/tridiag.jl:453
Stacktrace:
 [1] +(::Bidiagonal{Float64,Array{Float64,1}}, ::Bidiagonal{Int64,Array{Int64,1}}) at /home/mc/github/julia-dev/usr/share/julia/stdlib/v1.1/LinearAlgebra/src/bidiag.jl:307
 [2] top-level scope at none:0
```

So this is not ready for review just yet.

Note that this also happens in v1.
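The underlying issue is that the `Tridiagonal` constructor hit at `bidiag.jl:307` requires all three band vectors to have the same concrete vector type, while the mixed-type sums above produce different vector types. A workaround in the spirit of promoting the bands first could look like this (a sketch with a hypothetical helper name; the eventual fix for JuliaLang/LinearAlgebra.jl#562 may take a different form):

```julia
using LinearAlgebra

# Promote the band element types explicitly before building the Tridiagonal.
function mixed_bidiag_add(A::Bidiagonal, B::Bidiagonal)
    A.uplo == B.uplo && return A + B            # same orientation already works
    T = promote_type(eltype(A), eltype(B))
    lo, up = A.uplo == 'L' ? (A, B) : (B, A)
    Tridiagonal(convert(Vector{T}, lo.ev),
                convert(Vector{T}, A.dv + B.dv),
                convert(Vector{T}, up.ev))
end

A = Bidiagonal(1:3, 1:2, :U)
B = Bidiagonal(rand(3), rand(2), :L)
mixed_bidiag_add(A, B)     # 3×3 Tridiagonal{Float64,Array{Float64,1}}
```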

@andreasnoack
Member

Sorry for the long delay here. Great work.

@andreasnoack merged commit 469fa36 into JuliaLang:master on Dec 11, 2018
@andreasnoack
Member

I forgot to add that I don't think the errors you mention should hold this back further, since they are already errors. I'd prefer to get this PR in and then proceed from there.
