Make LinearAlgebra.jl independent of SparseArrays.jl
dkarrasch authored and LilithHafner committed Feb 22, 2022
1 parent f4da8c2 commit 11758c3
Showing 24 changed files with 487 additions and 353 deletions.
15 changes: 14 additions & 1 deletion NEWS.md
@@ -95,11 +95,24 @@ Standard library changes
#### Package Manager

#### LinearAlgebra
* The BLAS submodule now supports the level-2 BLAS subroutine `spr!` ([#42830]).

* The BLAS submodule now supports the level-2 BLAS subroutine `spr!` ([#42830]).
* `cholesky[!]` now supports `LinearAlgebra.PivotingStrategy` (singleton type) values
as its optional `pivot` argument: the default is `cholesky(A, NoPivot())` (vs.
`cholesky(A, RowMaximum())`); the former `Val{true/false}`-based calls are deprecated ([#41640]).
* The standard library `LinearAlgebra.jl` is now completely independent of `SparseArrays.jl`,
both in source code and in unit tests ([#43127]). As a consequence,
sparse arrays are no longer (silently) returned by methods from `LinearAlgebra` applied
to `Base` or `LinearAlgebra` objects. Specifically, this results in the following breaking
changes:

  * Concatenations involving special "sparse" matrices (`*diagonal`) now return dense matrices;
    as a consequence, the `D1` and `D2` fields of `GeneralizedSVD` objects, constructed upon
    `getproperty` calls, are now dense matrices.
  * 3-arg `similar(::SpecialSparseMatrix, ::Type, ::Dims)` returns a dense zero matrix.
    As a consequence, products of bi-, tri- and symmetric tridiagonal matrices with each
    other result in dense output. Moreover, constructing 3-arg `similar` matrices of special
    "sparse" matrices of (nonstatic) matrices now fails for lack of `zero(::Type{Matrix{T}})`
    (see the sketch below).

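Below is a minimal sketch of the new behavior described in the list above, assuming a Julia build that contains this change; the concrete matrices are illustrative only.

```julia
using LinearAlgebra

D   = Diagonal([1.0, 2.0])
Tri = Tridiagonal([1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0])

# Concatenating special matrices now yields a dense Matrix, not a SparseMatrixCSC.
hcat(D, D) isa Matrix{Float64}                      # true

# 3-arg `similar` on a structured matrix now returns a dense zero matrix.
similar(Tri, Float64, (3, 2)) isa Matrix{Float64}   # true
all(iszero, similar(Tri, Float64, (3, 2)))          # true
```
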
#### Markdown

3 changes: 1 addition & 2 deletions stdlib/LinearAlgebra/Project.toml
@@ -8,7 +8,6 @@ libblastrampoline_jll = "8e850b90-86db-534c-a0d3-1478176c7d93"
[extras]
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"

[targets]
test = ["Test", "Random", "SparseArrays"]
test = ["Test", "Random"]
6 changes: 4 additions & 2 deletions stdlib/LinearAlgebra/docs/src/index.md
@@ -1,7 +1,7 @@
# [Linear Algebra](@id man-linalg)

```@meta
DocTestSetup = :(using LinearAlgebra, SparseArrays, SuiteSparse)
DocTestSetup = :(using LinearAlgebra)
```

In addition to (and as part of) its support for multi-dimensional arrays, Julia provides native implementations
@@ -308,7 +308,9 @@ of the Linear Algebra documentation.

## Standard functions

Linear algebra functions in Julia are largely implemented by calling functions from [LAPACK](http://www.netlib.org/lapack/). Sparse matrix factorizations call functions from [SuiteSparse](http://suitesparse.com). Other sparse solvers are available as Julia packages.
Linear algebra functions in Julia are largely implemented by calling functions from [LAPACK](http://www.netlib.org/lapack/).
Sparse matrix factorizations call functions from [SuiteSparse](http://suitesparse.com).
Other sparse solvers are available as Julia packages.

```@docs
Base.:*(::AbstractMatrix, ::AbstractMatrix)
5 changes: 2 additions & 3 deletions stdlib/LinearAlgebra/src/LinearAlgebra.jl
@@ -372,15 +372,14 @@ algorithm.
See also: `copy_similar`, `copy_to_array`.
"""
copy_oftype(A::AbstractArray, ::Type{T}) where {T} = copyto!(similar(A,T), A)
copy_oftype(A::AbstractArray, ::Type{T}) where {T} = copyto!(similar(A, T), A)

"""
copy_similar(A, T)
Copy `A` to a mutable array with eltype `T` based on `similar(A, T, size(A))`.
Compared to `copy_oftype`, the result can be more flexible. For example,
supplying a tridiagonal matrix results in a sparse array. In general, the type
Compared to `copy_oftype`, the result can be more flexible. In general, the type
of the output corresponds to that of the three-argument method `similar(A, T, size(A))`.
See also: `copy_oftype`, `copy_to_array`.
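As a rough sketch of the distinction drawn in these two docstrings (`copy_oftype` and `copy_similar` are unexported internal helpers; the values are illustrative):

```julia
using LinearAlgebra
using LinearAlgebra: copy_oftype, copy_similar

Tri = Tridiagonal([1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0])

copy_oftype(Tri, Float32)  isa Tridiagonal{Float32}  # 2-arg similar: structure preserved
copy_similar(Tri, Float32) isa Matrix{Float32}       # 3-arg similar: dense result after this change
```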
2 changes: 1 addition & 1 deletion stdlib/LinearAlgebra/src/adjtrans.jl
@@ -227,7 +227,7 @@ _adjoint_hcat(avs::Union{Number,AdjointAbsVec}...) = adjoint(vcat(map(adjoint, a
_transpose_hcat(tvs::Union{Number,TransposeAbsVec}...) = transpose(vcat(map(transpose, tvs)...))
typed_hcat(::Type{T}, avs::Union{Number,AdjointAbsVec}...) where {T} = adjoint(typed_vcat(T, map(adjoint, avs)...))
typed_hcat(::Type{T}, tvs::Union{Number,TransposeAbsVec}...) where {T} = transpose(typed_vcat(T, map(transpose, tvs)...))
# otherwise-redundant definitions necessary to prevent hitting the concat methods in sparse/sparsevector.jl
# otherwise-redundant definitions necessary to prevent hitting the concat methods in LinearAlgebra/special.jl
hcat(avs::Adjoint{<:Any,<:Vector}...) = _adjoint_hcat(avs...)
hcat(tvs::Transpose{<:Any,<:Vector}...) = _transpose_hcat(tvs...)
hcat(avs::Adjoint{T,Vector{T}}...) where {T} = _adjoint_hcat(avs...)
11 changes: 6 additions & 5 deletions stdlib/LinearAlgebra/src/bidiag.jl
@@ -204,12 +204,8 @@ AbstractMatrix{T}(A::Bidiagonal) where {T} = convert(Bidiagonal{T}, A)

convert(T::Type{<:Bidiagonal}, m::AbstractMatrix) = m isa T ? m : T(m)

# For B<:Bidiagonal, similar(B[, neweltype]) should yield a Bidiagonal matrix.
# On the other hand, similar(B, [neweltype,] shape...) should yield a sparse matrix.
# The first method below effects the former, and the second the latter.
similar(B::Bidiagonal, ::Type{T}) where {T} = Bidiagonal(similar(B.dv, T), similar(B.ev, T), B.uplo)
# The method below is moved to SparseArrays for now
# similar(B::Bidiagonal, ::Type{T}, dims::Union{Dims{1},Dims{2}}) where {T} = spzeros(T, dims...)
similar(B::Bidiagonal, ::Type{T}, dims::Union{Dims{1},Dims{2}}) where {T} = zeros(T, dims...)


###################
@@ -706,6 +702,11 @@ function *(A::SymTridiagonal, B::Diagonal)
A_mul_B_td!(Tridiagonal(zeros(TS, size(A, 1)-1), zeros(TS, size(A, 1)), zeros(TS, size(A, 1)-1)), A, B)
end

function *(A::BiTriSym, B::BiTriSym)
TS = promote_op(matprod, eltype(A), eltype(B))
mul!(similar(A, TS, size(A)...), A, B)
end
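
A brief usage sketch of the method added above (values illustrative): the output is allocated via 3-arg `similar`, so products of bi-, tri- and symmetric tridiagonal matrices are now dense.

```julia
using LinearAlgebra

A = Tridiagonal([1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0])
S = SymTridiagonal([1.0, 2.0, 3.0], [0.5, 0.5])

A * S isa Matrix{Float64}        # true: dense output (was sparse before this change)
A * S ≈ Matrix(A) * Matrix(S)    # true
```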

function dot(x::AbstractVector, B::Bidiagonal, y::AbstractVector)
require_one_based_indexing(x, y)
nx, ny = length(x), length(y)
10 changes: 3 additions & 7 deletions stdlib/LinearAlgebra/src/diagonal.jl
@@ -87,12 +87,8 @@ Construct an uninitialized `Diagonal{T}` of length `n`. See `undef`.
"""
Diagonal{T}(::UndefInitializer, n::Integer) where T = Diagonal(Vector{T}(undef, n))

# For D<:Diagonal, similar(D[, neweltype]) should yield a Diagonal matrix.
# On the other hand, similar(D, [neweltype,] shape...) should yield a sparse matrix.
# The first method below effects the former, and the second the latter.
similar(D::Diagonal, ::Type{T}) where {T} = Diagonal(similar(D.diag, T))
# The method below is moved to SparseArrays for now
# similar(D::Diagonal, ::Type{T}, dims::Union{Dims{1},Dims{2}}) where {T} = spzeros(T, dims...)
similar(::Diagonal, ::Type{T}, dims::Union{Dims{1},Dims{2}}) where {T} = zeros(T, dims...)

copyto!(D1::Diagonal, D2::Diagonal) = (copyto!(D1.diag, D2.diag); D1)

@@ -114,8 +110,8 @@ end
end
r
end
diagzero(::Diagonal{T},i,j) where {T} = zero(T)
diagzero(D::Diagonal{<:AbstractMatrix{T}},i,j) where {T} = zeros(T, size(D.diag[i], 1), size(D.diag[j], 2))
diagzero(::Diagonal{T}, i, j) where {T} = zero(T)
diagzero(D::Diagonal{<:AbstractMatrix{T}}, i, j) where {T} = zeros(T, size(D.diag[i], 1), size(D.diag[j], 2))

function setindex!(D::Diagonal, v, i::Int, j::Int)
@boundscheck checkbounds(D, i, j)
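For `Diagonal` matrices with matrix-valued entries, the `diagzero` methods above determine what `getindex` returns off the (block) diagonal: a zero block sized to match its position. A small sketch (illustrative values):

```julia
using LinearAlgebra

blocks = [rand(2, 2), rand(3, 3)]
D = Diagonal(blocks)

D[1, 1] == blocks[1]       # the stored diagonal block
size(D[1, 2]) == (2, 3)    # off-diagonal entry: a zero matrix of matching size
all(iszero, D[1, 2])       # true, supplied by diagzero
```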
24 changes: 24 additions & 0 deletions stdlib/LinearAlgebra/src/special.jl
@@ -339,3 +339,27 @@ end

==(A::Bidiagonal, B::SymTridiagonal) = iszero(_evview(B)) && iszero(A.ev) && A.dv == B.dv
==(B::SymTridiagonal, A::Bidiagonal) = A == B

# concatenation
const _SpecialArrays = Union{Diagonal, Bidiagonal, Tridiagonal, SymTridiagonal}
const _Symmetric_DenseArrays{T,A<:Matrix} = Symmetric{T,A}
const _Hermitian_DenseArrays{T,A<:Matrix} = Hermitian{T,A}
const _Triangular_DenseArrays{T,A<:Matrix} = AbstractTriangular{T,A}
const _Annotated_DenseArrays = Union{_SpecialArrays, _Triangular_DenseArrays, _Symmetric_DenseArrays, _Hermitian_DenseArrays}
const _Annotated_Typed_DenseArrays{T} = Union{_Triangular_DenseArrays{T}, _Symmetric_DenseArrays{T}, _Hermitian_DenseArrays{T}}
const _DenseConcatGroup = Union{Number, Vector, Adjoint{<:Any,<:Vector}, Transpose{<:Any,<:Vector}, Matrix, _Annotated_DenseArrays}
const _TypedDenseConcatGroup{T} = Union{Vector{T}, Adjoint{T,Vector{T}}, Transpose{T,Vector{T}}, Matrix{T}, _Annotated_Typed_DenseArrays{T}}

promote_to_array_type(::Tuple{Vararg{Union{_DenseConcatGroup,UniformScaling}}}) = Matrix

Base._cat(dims, xs::_DenseConcatGroup...) = Base.cat_t(promote_eltype(xs...), xs...; dims=dims)
vcat(A::Vector...) = Base.typed_vcat(promote_eltype(A...), A...)
vcat(A::_DenseConcatGroup...) = Base.typed_vcat(promote_eltype(A...), A...)
hcat(A::Vector...) = Base.typed_hcat(promote_eltype(A...), A...)
hcat(A::_DenseConcatGroup...) = Base.typed_hcat(promote_eltype(A...), A...)
hvcat(rows::Tuple{Vararg{Int}}, xs::_DenseConcatGroup...) = Base.typed_hvcat(promote_eltype(xs...), rows, xs...)
# For performance, specially handle the case where the matrices/vectors have homogeneous eltype
Base._cat(dims, xs::_TypedDenseConcatGroup{T}...) where {T} = Base.cat_t(T, xs...; dims=dims)
vcat(A::_TypedDenseConcatGroup{T}...) where {T} = Base.typed_vcat(T, A...)
hcat(A::_TypedDenseConcatGroup{T}...) where {T} = Base.typed_hcat(T, A...)
hvcat(rows::Tuple{Vararg{Int}}, xs::_TypedDenseConcatGroup{T}...) where {T} = Base.typed_hvcat(T, rows, xs...)
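
These unions route concatenations of structured and annotated (dense-backed) matrices through `Base`'s dense `cat` machinery, and `promote_to_array_type` makes `UniformScaling` blocks materialize as dense matrices. A sketch of the resulting behavior (illustrative values):

```julia
using LinearAlgebra

D = Diagonal([1, 2])
U = UpperTriangular([1 2; 0 3])

vcat(D, U) isa Matrix{Int}   # structured + triangular concatenation is dense
[D U; U D] isa Matrix{Int}   # hvcat goes through typed_hvcat with a promoted eltype
[D I; I D] isa Matrix        # UniformScaling blocks are expanded to dense blocks
```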
16 changes: 8 additions & 8 deletions stdlib/LinearAlgebra/src/svd.jl
@@ -333,13 +333,13 @@ Q factor:
1.0 0.0
0.0 1.0
D1 factor:
2×2 SparseArrays.SparseMatrixCSC{Float64, Int64} with 2 stored entries:
 0.707107   ⋅
  ⋅         0.707107
2×2 Matrix{Float64}:
0.707107 0.0
0.0 0.707107
D2 factor:
2×2 SparseArrays.SparseMatrixCSC{Float64, Int64} with 2 stored entries:
 0.707107   ⋅
  ⋅         0.707107
2×2 Matrix{Float64}:
0.707107 0.0
0.0 0.707107
R0 factor:
2×2 Matrix{Float64}:
1.41421 0.0
@@ -352,8 +352,8 @@ julia> F.U*F.D1*F.R0*F.Q'
julia> F.V*F.D2*F.R0*F.Q'
2×2 Matrix{Float64}:
0.0 1.0
1.0 0.0
-0.0 1.0
1.0 0.0
```
"""
struct GeneralizedSVD{T,S} <: Factorization{T}
12 changes: 2 additions & 10 deletions stdlib/LinearAlgebra/src/tridiag.jl
@@ -148,12 +148,8 @@ function size(A::SymTridiagonal, d::Integer)
end
end

# For S<:SymTridiagonal, similar(S[, neweltype]) should yield a SymTridiagonal matrix.
# On the other hand, similar(S, [neweltype,] shape...) should yield a sparse matrix.
# The first method below effects the former, and the second the latter.
similar(S::SymTridiagonal, ::Type{T}) where {T} = SymTridiagonal(similar(S.dv, T), similar(S.ev, T))
# The method below is moved to SparseArrays for now
# similar(S::SymTridiagonal, ::Type{T}, dims::Union{Dims{1},Dims{2}}) where {T} = spzeros(T, dims...)
similar(S::SymTridiagonal, ::Type{T}, dims::Union{Dims{1},Dims{2}}) where {T} = zeros(T, dims...)

copyto!(dest::SymTridiagonal, src::SymTridiagonal) =
(copyto!(dest.dv, src.dv); copyto!(dest.ev, _evview(src)); dest)
@@ -584,12 +580,8 @@ end
Matrix(M::Tridiagonal{T}) where {T} = Matrix{T}(M)
Array(M::Tridiagonal) = Matrix(M)

# For M<:Tridiagonal, similar(M[, neweltype]) should yield a Tridiagonal matrix.
# On the other hand, similar(M, [neweltype,] shape...) should yield a sparse matrix.
# The first method below effects the former, and the second the latter.
similar(M::Tridiagonal, ::Type{T}) where {T} = Tridiagonal(similar(M.dl, T), similar(M.d, T), similar(M.du, T))
# The method below is moved to SparseArrays for now
# similar(M::Tridiagonal, ::Type{T}, dims::Union{Dims{1},Dims{2}}) where {T} = spzeros(T, dims...)
similar(M::Tridiagonal, ::Type{T}, dims::Union{Dims{1},Dims{2}}) where {T} = zeros(T, dims...)

# Operations on Tridiagonal matrices
copyto!(dest::Tridiagonal, src::Tridiagonal) = (copyto!(dest.dl, src.dl); copyto!(dest.d, src.d); copyto!(dest.du, src.du); dest)
2 changes: 1 addition & 1 deletion stdlib/LinearAlgebra/src/uniformscaling.jl
@@ -391,7 +391,7 @@ end
# so that the same promotion code can be used for hvcat. We pass the type T
# so that we can re-use this code for sparse-matrix hcat etcetera.
promote_to_arrays_(n::Int, ::Type, a::Number) = a
promote_to_arrays_(n::Int, ::Type{Matrix}, J::UniformScaling{T}) where {T} = copyto!(Matrix{T}(undef, n,n), J)
promote_to_arrays_(n::Int, ::Type{Matrix}, J::UniformScaling{T}) where {T} = Matrix(J, n, n)
promote_to_arrays_(n::Int, ::Type, A::AbstractVecOrMat) = A
promote_to_arrays(n,k, ::Type) = ()
promote_to_arrays(n,k, ::Type{T}, A) where {T} = (promote_to_arrays_(n[k], T, A),)
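
The rewritten helper above materializes the block directly with the `Matrix(::UniformScaling, m, n)` constructor instead of `copyto!` into an `undef` matrix; the result is the same. For reference:

```julia
using LinearAlgebra

Matrix(2.5I, 3, 3)     # 3×3 Matrix{Float64} with 2.5 on the diagonal, zeros elsewhere
Matrix{Int}(I, 2, 2)   # 2×2 identity matrix with Int entries
```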
1 change: 0 additions & 1 deletion stdlib/LinearAlgebra/test/addmul.jl
@@ -6,7 +6,6 @@ using Base: rtoldefault
using Test
using LinearAlgebra
using LinearAlgebra: AbstractTriangular
using SparseArrays
using Random

_rand(::Type{T}) where {T <: AbstractFloat} = T(randn())
10 changes: 1 addition & 9 deletions stdlib/LinearAlgebra/test/adjtrans.jl
@@ -2,7 +2,7 @@

module TestAdjointTranspose

using Test, LinearAlgebra, SparseArrays
using Test, LinearAlgebra

const BASE_TEST_PATH = joinpath(Sys.BINDIR, "..", "share", "julia", "test")

@@ -354,14 +354,6 @@ end
@test broadcast(+, Transpose(vec), 1, Transpose(vec))::Transpose{Complex{Int},Vector{Complex{Int}}} == tvec + tvec .+ 1
@test broadcast(+, Adjoint(vec), 1im, Adjoint(vec))::Adjoint{Complex{Int},Vector{Complex{Int}}} == avec + avec .+ 1im
@test broadcast(+, Transpose(vec), 1im, Transpose(vec))::Transpose{Complex{Int},Vector{Complex{Int}}} == tvec + tvec .+ 1im
# ascertain inference friendliness, ref. https://github.com/JuliaLang/julia/pull/25083#issuecomment-353031641
sparsevec = SparseVector([1.0, 2.0, 3.0])
@test map(-, Adjoint(sparsevec), Adjoint(sparsevec)) isa Adjoint{Float64,SparseVector{Float64,Int}}
@test map(-, Transpose(sparsevec), Transpose(sparsevec)) isa Transpose{Float64,SparseVector{Float64,Int}}
@test broadcast(-, Adjoint(sparsevec), Adjoint(sparsevec)) isa Adjoint{Float64,SparseVector{Float64,Int}}
@test broadcast(-, Transpose(sparsevec), Transpose(sparsevec)) isa Transpose{Float64,SparseVector{Float64,Int}}
@test broadcast(+, Adjoint(sparsevec), 1.0, Adjoint(sparsevec)) isa Adjoint{Float64,SparseVector{Float64,Int}}
@test broadcast(+, Transpose(sparsevec), 1.0, Transpose(sparsevec)) isa Transpose{Float64,SparseVector{Float64,Int}}
end

@testset "Adjoint/Transpose-wrapped vector multiplication" begin
2 changes: 1 addition & 1 deletion stdlib/LinearAlgebra/test/ambiguous_exec.jl
@@ -1,6 +1,6 @@
# This file is a part of Julia. License is MIT: https://julialang.org/license

using Test, LinearAlgebra, SparseArrays
using Test, LinearAlgebra
let ambig = detect_ambiguities(LinearAlgebra; recursive=true)
@test isempty(ambig)
ambig = Set{Any}(((m1.sig, m2.sig) for (m1, m2) in ambig))
8 changes: 3 additions & 5 deletions stdlib/LinearAlgebra/test/bidiag.jl
@@ -2,7 +2,7 @@

module TestBidiagonal

using Test, LinearAlgebra, SparseArrays, Random
using Test, LinearAlgebra, Random
using LinearAlgebra: BlasReal, BlasFloat

const BASE_TEST_PATH = joinpath(Sys.BINDIR, "..", "share", "julia", "test")
@@ -98,8 +98,8 @@ Random.seed!(1)
@test similar(ubd).uplo == ubd.uplo
@test isa(similar(ubd, Int), Bidiagonal{Int})
@test similar(ubd, Int).uplo == ubd.uplo
@test isa(similar(ubd, (3, 2)), SparseMatrixCSC)
@test isa(similar(ubd, Int, (3, 2)), SparseMatrixCSC{Int})
@test isa(similar(ubd, (3, 2)), Matrix)
@test isa(similar(ubd, Int, (3, 2)), Matrix{Int})

# setindex! when off diagonal is zero bug
Bu = Bidiagonal(rand(elty, 10), zeros(elty, 9), 'U')
@@ -432,9 +432,7 @@ using LinearAlgebra: fillstored!, UnitLowerTriangular
exotic_arrays = Any[Tridiagonal(randn(3), randn(4), randn(3)),
Bidiagonal(randn(3), randn(2), rand([:U,:L])),
SymTridiagonal(randn(3), randn(2)),
sparse(randn(3,4)),
Diagonal(randn(5)),
sparse(rand(3)),
# LowerTriangular(randn(3,3)), # AbstractTriangular fill! deprecated, see below
# UpperTriangular(randn(3,3)) # AbstractTriangular fill! deprecated, see below
]
21 changes: 6 additions & 15 deletions stdlib/LinearAlgebra/test/diagonal.jl
@@ -2,7 +2,7 @@

module TestDiagonal

using Test, LinearAlgebra, SparseArrays, Random
using Test, LinearAlgebra, Random
using LinearAlgebra: BlasFloat, BlasComplex

n=12 #Size of matrix problem to test
@@ -147,7 +147,6 @@ Random.seed!(1)
@test_throws DimensionMismatch ldiv!(D, fill(elty(1), n + 1))
@test_throws SingularException ldiv!(Diagonal(zeros(relty, n)), copy(v))
b = rand(elty, n, n)
b = sparse(b)
@test ldiv!(D, copy(b)) Array(D)\Array(b)
@test_throws SingularException ldiv!(Diagonal(zeros(elty, n)), copy(b))
b = view(rand(elty, n), Vector(1:n))
@@ -157,7 +156,6 @@ Random.seed!(1)
@test c d
@test_throws SingularException ldiv!(Diagonal(zeros(elty, n)), b)
b = rand(elty, n+1, n+1)
b = sparse(b)
@test_throws DimensionMismatch ldiv!(D, copy(b))
b = view(rand(elty, n+1), Vector(1:n+1))
@test_throws DimensionMismatch ldiv!(D, b)
@@ -190,7 +188,7 @@ Random.seed!(1)
end

if relty <: BlasFloat
for b in (rand(elty,n,n), sparse(rand(elty,n,n)), rand(elty,n), sparse(rand(elty,n)))
for b in (rand(elty,n,n), rand(elty,n))
@test lmul!(copy(D), copy(b)) Array(D)*Array(b)
@test lmul!(transpose(copy(D)), copy(b)) transpose(Array(D))*Array(b)
@test lmul!(adjoint(copy(D)), copy(b)) Array(D)'*Array(b)
@@ -388,8 +386,8 @@ Random.seed!(1)
@testset "similar" begin
@test isa(similar(D), Diagonal{elty})
@test isa(similar(D, Int), Diagonal{Int})
@test isa(similar(D, (3,2)), SparseMatrixCSC{elty})
@test isa(similar(D, Int, (3,2)), SparseMatrixCSC{Int})
@test isa(similar(D, (3,2)), Matrix{elty})
@test isa(similar(D, Int, (3,2)), Matrix{Int})
end

# Issue number 10036
Expand Down Expand Up @@ -605,10 +603,10 @@ end
mul!(D2, D, D)
@test D2 == D * D

D2[diagind(D2)] .= D[diagind(D)]
copyto!(D2, D)
lmul!(D, D2)
@test D2 == D * D
D2[diagind(D2)] .= D[diagind(D)]
copyto!(D2, D)
rmul!(D2, D)
@test D2 == D * D
end
@@ -651,13 +649,6 @@ end

@test tr(D) == 10
@test det(D) == 4

# sparse matrix block diagonals
s = SparseArrays.sparse([1 2; 3 4])
D = Diagonal([s, s])
@test D[1, 1] == s
@test D[1, 2] == zero(s)
@test isa(D[2, 1], SparseMatrixCSC)
end

@testset "linear solve for block diagonal matrices" begin