
Optimizing some structured matrix multiply methods #32521

Closed · wants to merge 7 commits

Conversation

@mcognetta (Contributor) commented Jul 8, 2019:

This PR addresses #28345 and JuliaLang/LinearAlgebra.jl#525, as well as some optimizations missed in #28883.

A summary of the changes:

  • Dense × structured products now return dense matrices (not SparseMatrixCSC).
  • Bidiagonal×Tridiagonal, SymTridiagonal×SymTridiagonal, and similar products return SparseMatrixCSC (not dense). These changes were mentioned in [WIP] Optimizing {+,-,*} for structured matrices #28883, but I apparently missed them in that PR.
  • Bidiagonal×Bidiagonal returns Tridiagonal or SparseMatrixCSC, depending on whether A.uplo == B.uplo (differing uplo keeps the product tridiagonal; equal uplo produces a second sub- or superdiagonal, hence sparse). Previously it returned SparseMatrixCSC or dense.

(EDIT: Closes JuliaLang/LinearAlgebra.jl#741, closes JuliaLang/LinearAlgebra.jl#525.)
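As a quick illustration of the intended result types, here is a REPL sketch (not from the PR; the uplo mapping in the comments follows the bandwidth arithmetic above):

```julia
using LinearAlgebra

D  = Diagonal(rand(4))
Bu = Bidiagonal(rand(4), rand(3), :U)  # upper bidiagonal
Bl = Bidiagonal(rand(4), rand(3), :L)  # lower bidiagonal

typeof(D * Bu)   # Bidiagonal: structured output instead of SparseMatrixCSC
typeof(Bu * Bl)  # Tridiagonal: mixed uplo keeps bandwidth (1, 1)
typeof(Bu * Bu)  # SparseMatrixCSC: equal uplo adds a second superdiagonal
```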

Below are some benchmarks as well as the input/output types:


Before:

| A | B | A*B | Time (s) |
| --- | --- | --- | --- |
| Array | Diagonal | Array | 0.029855 |
| Array | Bidiagonal | SparseMatrixCSC | 2.767608 |
| Array | Tridiagonal | SparseMatrixCSC | 2.813555 |
| Array | SymTridiagonal | SparseMatrixCSC | 2.826780 |
| Diagonal | Array | Array | 0.030034 |
| Diagonal | Diagonal | Diagonal | 0.000007 |
| Diagonal | Bidiagonal | SparseMatrixCSC | 0.053484 |
| Diagonal | Tridiagonal | SparseMatrixCSC | 0.060171 |
| Diagonal | SymTridiagonal | SparseMatrixCSC | 0.059392 |
| Bidiagonal | Array | Array | 0.013068 |
| Bidiagonal | Diagonal | SparseMatrixCSC | 0.052733 |
| Bidiagonal | Bidiagonal | Array | 0.344665 |
| Bidiagonal | Tridiagonal | Array | 0.345147 |
| Bidiagonal | SymTridiagonal | Array | 0.344073 |
| Tridiagonal | Array | Array | 0.014226 |
| Tridiagonal | Diagonal | SparseMatrixCSC | 0.057033 |
| Tridiagonal | Bidiagonal | Array | 0.346379 |
| Tridiagonal | Tridiagonal | Array | 0.345191 |
| Tridiagonal | SymTridiagonal | Array | 0.347181 |
| SymTridiagonal | Array | Array | 0.011845 |
| SymTridiagonal | Diagonal | SparseMatrixCSC | 0.057408 |
| SymTridiagonal | Bidiagonal | Array | 0.349033 |
| SymTridiagonal | Tridiagonal | Array | 0.347890 |
| SymTridiagonal | SymTridiagonal | Array | 0.348104 |
```julia
julia> versioninfo()
Julia Version 1.1.1
Commit 55e36cc308 (2019-05-16 04:10 UTC)
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.1 (ORCJIT, skylake)
```

This PR:

| A | B | A*B | Time (s) |
| --- | --- | --- | --- |
| Array | Diagonal | Array | 0.013508 |
| Array | Bidiagonal | Array | 0.024095 |
| Array | Tridiagonal | Array | 0.007269 |
| Array | SymTridiagonal | Array | 0.024687 |
| Diagonal | Array | Array | 0.011103 |
| Diagonal | Diagonal | Diagonal | 0.000009 |
| Diagonal | Bidiagonal | Bidiagonal | 0.000103 |
| Diagonal | Tridiagonal | Tridiagonal | 0.000106 |
| Diagonal | SymTridiagonal | Tridiagonal | 0.000108 |
| Bidiagonal | Array | Array | 0.013104 |
| Bidiagonal | Diagonal | Bidiagonal | 0.000111 |
| Bidiagonal | Bidiagonal | SparseMatrixCSC | 0.002295 |
| Bidiagonal | Tridiagonal | SparseMatrixCSC | 0.002942 |
| Bidiagonal | SymTridiagonal | SparseMatrixCSC | 0.003073 |
| Tridiagonal | Array | Array | 0.022158 |
| Tridiagonal | Diagonal | Tridiagonal | 0.000108 |
| Tridiagonal | Bidiagonal | SparseMatrixCSC | 0.002936 |
| Tridiagonal | Tridiagonal | SparseMatrixCSC | 0.003766 |
| Tridiagonal | SymTridiagonal | SparseMatrixCSC | 0.003762 |
| SymTridiagonal | Array | Array | 0.011779 |
| SymTridiagonal | Diagonal | Tridiagonal | 0.000117 |
| SymTridiagonal | Bidiagonal | SparseMatrixCSC | 0.003116 |
| SymTridiagonal | Tridiagonal | SparseMatrixCSC | 0.003845 |
| SymTridiagonal | SymTridiagonal | SparseMatrixCSC | 0.003791 |
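For reference, a minimal sketch of how timings like these can be reproduced (an assumed setup; the matrix size and methodology are guesses, not the author's actual benchmark script):

```julia
using LinearAlgebra

n = 1000
A = Bidiagonal(rand(n), rand(n - 1), :U)
B = Tridiagonal(rand(n - 1), rand(n), rand(n - 1))

A * B        # warm up so compilation is excluded from the measurement
@time A * B  # report the timing; typeof(A * B) shows the output type
```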

@mcognetta (Contributor, Author):
attn: @ViralBShah This is the new version of #28345.

@StefanKarpinski (Member):
Let's get this thing merged this time!

@mcognetta (Contributor, Author) commented Jul 9, 2019:

I resolved the ambiguities by restricting the left side to be `::Matrix`. I don't think this is a particularly good solution (since there could be other subtypes of `AbstractMatrix` that need consideration), but the alternative involved untangling a ton of ambiguous matrix multiply definitions. Suggestions/comments are appreciated.
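Schematically, the restriction looks like the following (a hypothetical sketch with an invented helper name, not the PR's actual methods; `Bidiagonal` stands in for the structured types involved):

```julia
using LinearAlgebra

# A concrete Matrix on the left picks out exactly the dense case, so this
# signature cannot be ambiguous with structured-times-structured methods,
# whereas a bare AbstractMatrix would collide with them.
function dense_times_structured(A::Matrix{T}, B::Bidiagonal{T}) where {T}
    C = similar(A, T, size(A, 1), size(B, 2))  # dense (Array) output
    return mul!(C, A, B)
end

dense_times_structured(rand(4, 4), Bidiagonal(rand(4), rand(3), :U))
```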

@andreasnoack added the "linear algebra" and "performance" labels on Jul 24, 2019
@andreasnoack (Member):
> I resolved the ambiguities by restricting the left side to be `::Matrix`

Did you try with `StridedMatrix`?

@mcognetta (Contributor, Author):
@andreasnoack Yes, that seems more appropriate. Thanks.
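The same hypothetical sketch with the suggested widening:

```julia
using LinearAlgebra

# StridedMatrix also admits views and other strided dense types, so the
# fast path applies more broadly than plain Matrix while still avoiding
# the ambiguities that AbstractMatrix would reintroduce.
function dense_times_structured(A::StridedMatrix{T}, B::Bidiagonal{T}) where {T}
    return mul!(similar(A, T, size(A, 1), size(B, 2)), A, B)
end
```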

@mcognetta (Contributor, Author):
Fixed merge conflicts.

@andreasnoack (Member):
Looks like something broke.

@mcognetta (Contributor, Author):
Seems like I accidentally deleted the relevant code. Should be fixed now.

@andreasnoack (Member):
Thanks. One concern here is that Bidiagonal×Tridiagonal returning a sparse matrix implies a dependency of LinearAlgebra on SparseArrays. We already have a few of these, so maybe we just want to accept it, but we have been thinking about making the sparse stdlibs more stand-alone and pluggable, in which case it would be unfortunate to add another such dependency. An alternative could be to add an informative error message for Bidiagonal×Tridiagonal asking the user to convert to sparse matrices before multiplying. I doubt that this will be a common operation, so I'd expect few users to hit it.
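The error-message alternative could look roughly like this (purely hypothetical, with an invented helper name; the PR does not implement it):

```julia
using LinearAlgebra

# Refuse the sparse-producing product and point the user at the explicit
# conversion, instead of making LinearAlgebra depend on SparseArrays.
function bidiag_times_tridiag(A::Bidiagonal, B::Tridiagonal)
    throw(ArgumentError("Bidiagonal * Tridiagonal has a sparse result; " *
                        "load SparseArrays and compute sparse(A) * sparse(B)"))
end
```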

@mcognetta (Contributor, Author):
@andreasnoack Yes, I have noticed this in several places with structured matrices. Would you be OK with a warning instead of an error? (Bidiag times Tridiag runs before this patch, so having it error would be breaking.)

@andreasnoack (Member):
I.e. dense output with a warning?

@mcognetta (Contributor, Author):
I had thought sparse output with a warning (which I guess is still breaking, but better than interrupting execution).

@andreasnoack (Member):
The problem is that sparse output with a warning would still introduce the dependency. We'd need to make a general decision about whether we can allow LinearAlgebra to depend on SparseArrays.

@mcognetta (Contributor, Author):
I had it backwards: I thought LinearAlgebra already had a dependency on SparseArrays, but it is the other way around. In that case, is it best to throw an error and move the method definition to SparseArrays?
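Moving the definition might look roughly like this if it lived in SparseArrays (a hypothetical sketch, not actual SparseArrays code):

```julia
using LinearAlgebra, SparseArrays

# With the method defined on the SparseArrays side, LinearAlgebra stays
# free of the dependency; as noted later in this thread, the definition
# would still be type piracy on LinearAlgebra-owned argument types.
Base.:*(A::Bidiagonal, B::Tridiagonal) = sparse(A) * sparse(B)
```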

@andreasnoack (Member):
I guess we could also continue to return a dense array?

@dkarrasch (Member):
I kind of agree that, if the result type is none of LinearAlgebra's, the result should be dense. So if someone wants to invoke the sparse machinery, they would need to make at least one of the factors sparse. I remember I optimized the sparse constructors for structured matrices over at #32492, so hopefully that shouldn't be a computational bottleneck.
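Opting in explicitly would then look like this (illustrative):

```julia
using LinearAlgebra, SparseArrays

B   = Bidiagonal(rand(4), rand(3), :U)
Tri = Tridiagonal(rand(3), rand(4), rand(3))

# The caller chooses the sparse machinery; the structured-to-sparse
# constructors optimized in #32492 keep these conversions cheap.
sparse(B) * sparse(Tri)  # SparseMatrixCSC
```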

But I have to admit that I'm confused. In a fresh Julia session:

```julia
using LinearAlgebra
B = Bidiagonal(rand(4), rand(3), 'U')
@which similar(B, Float64, (3,3))
```

returns

```
similar(B::Bidiagonal, ::Type{T}, dims::Union{Tuple{Int64}, Tuple{Int64,Int64}}) where T in SparseArrays at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.3/SparseArrays/src/SparseArrays.jl:50
```

I couldn't find where LinearAlgebra imports this `similar` method from SparseArrays. Shouldn't it throw a `MethodError`? I was digging a bit but couldn't find any explicit reference to `sparse` or `Sparse` in LinearAlgebra, so I assume the only dependency is via these `similar` methods, right? Also, in the Project.toml of LinearAlgebra, SparseArrays occurs only in the extras and for testing.

@ranocha (Member) commented Mar 21, 2020:

Is there any update on this? Or could I help in moving this PR forward?

@mcognetta (Contributor, Author):
Yeah, this sort of stalled. Happy to keep working on it, perhaps with @ranocha if there is interest.

@dkarrasch (Member):
`BiTriSym * BiTriSym` is currently governed by

```julia
function *(A::BiTriSym, B::BiTriSym)
    TS = promote_op(matprod, eltype(A), eltype(B))
    mul!(similar(A, TS, size(A)...), A, B)
end
```

I don't understand how this is "known" to Julia without declaring `using SparseArrays`. Anyway, this PR doesn't change the sparse return type; it only removes the case of different (in the sense of uplo) bidiagonal matrices from that list and makes it return a Tridiagonal. So we could (i) go with it but remove the tests for sparse return types, or (ii) remove that special casing and only go with the `*(::StridedMatrix, ::BiTriSym)` part. Personally, I think adding the bidiag-bidiag special case is much less important than disentangling the stdlibs, so I'd favor option (ii).

@KristofferC (Member) commented May 4, 2021:

> I don't understand how this is "known" to Julia without declaring `using SparseArrays`.

Type piracy, combined with the fact that SparseArrays exists in the sysimage.
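Concretely, on Julia versions of that era, where SparseArrays ships in the default sysimage:

```julia
using LinearAlgebra  # deliberately no `using SparseArrays`

B = Bidiagonal(rand(4), rand(3), 'U')

# SparseArrays extends similar(::Bidiagonal, ...), a method on types it
# does not own (type piracy); because the package sits in the sysimage,
# the method already exists, so this returns a SparseMatrixCSC, not an Array.
similar(B, Float64, (3, 3))
```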

@mcognetta (Contributor, Author):
It seems all of these optimizations have been added in other PRs, so I will close this one.

@mcognetta closed this Oct 10, 2021
Labels: linear algebra, performance

Linked issues that merging may have closed: "Dense * Tridiagonal = Sparse?"; "dense*tridiag yields sparse rather than dense"