Improve sparse vcat #10206
Conversation
Cc: @Jutho

good catch

Fix the type instability causing slowdown and extra memory allocation in sparse vcat. Fixes #7926. Add a test for vcat of sparse matrices of different element/index types.
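The kind of test added here might look like the sketch below; it assumes a current Julia where the sparse code lives in the SparseArrays stdlib (at the time of this PR it was in Base), and the specific matrices are made up for illustration.

```julia
using SparseArrays, Test

# Two sparse matrices with different element (Tv) and index (Ti) types.
A = sparse(Int32[1, 2], Int32[1, 2], Float32[1.0, 2.0], 2, 2)  # SparseMatrixCSC{Float32,Int32}
B = sparse(Int64[1, 2], Int64[2, 1], Float64[3.0, 4.0], 2, 2)  # SparseMatrixCSC{Float64,Int64}

C = vcat(A, B)

# The result should use the promoted element/index types and stack the rows.
@test C isa SparseMatrixCSC{Float64,Int64}
@test Matrix(C) == vcat(Matrix(A), Matrix(B))
```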
Tv = promote_type(map(x->eltype(x.nzval), X)...)
Ti = promote_type(map(x->eltype(x.rowval), X)...)

vcat(map(x->convert(SparseMatrixCSC{Tv,Ti}, x), X)...)
Rather than effectively converting the input arrays twice, another approach might be to break out the "stuff a single matrix into its slot" algorithm into a subfunction. That way the performance-intensive part will be type-stable.
I didn't fully understand. For input arrays that are already of type {Tv, Ti}, convert is a no-op. In the worst case, most of them get converted once and then passed to vcat, which stuffs them in. How do we end up converting twice?
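A quick way to see the claim on a current Julia (the example is illustrative, not from the PR): convert returns its argument unchanged when the target type already matches, and only allocates for a genuinely different type.

```julia
using SparseArrays

A = sprand(Float64, 4, 4, 0.5)   # a SparseMatrixCSC{Float64,Int}

convert(SparseMatrixCSC{Float64,Int}, A) === A   # true: no copy when the type already matches
convert(SparseMatrixCSC{Float32,Int}, A) === A   # false: a new matrix had to be allocated
```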
I shouldn't have said "convert twice," I should have said "copy twice."
To be clearer, my suggestion is:
function vcat(X::SparseMatrixCSC...)
    Tv = promote_type(map(x->eltype(x.nzval), X)...)
    Ti = promote_type(map(x->eltype(x.rowval), X)...)
    # <compute sizes and allocate outputs here>
    moffset = 0
    for x in X
        stuff!(colptr, rowval, nzval, moffset, x)
        moffset += size(x,1)
    end
    # <pack as sparse matrix>
end
That way, even though x might be type-unstable, you don't really care, because it will dispatch to a type-specialized implementation of stuff!, and it's that function that has all the compute-intensive operations.
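What is being described is the usual function-barrier pattern; a minimal stand-alone illustration (not the sparse code itself) could look like this:

```julia
# The outer loop iterates over a heterogeneous tuple, so `x` is not inferred to a
# concrete type. Each iteration pays one dynamic dispatch into `kernel`, but the
# kernel itself runs type-specialized, so the compute-heavy work stays fast.
function total(xs)
    s = 0.0
    for x in xs            # x::Any as far as inference is concerned
        s += kernel(x)     # the barrier: dispatch to a specialized method
    end
    return s
end

kernel(v::AbstractVector) = sum(v)

total(([1, 2, 3], Float32[0.5, 1.5]))   # mixed input types, fast inner work
```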
I am not really familiar with the sparse code, but I agree that @timholy is probably right. It should not be necessary to convert a complete array. By allocating the output array with the promoted type and then assigning into it, the conversion is taken care of during assignment, at the level of single elements.
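A tiny sketch of that idea in current syntax (the names are made up): allocate the destination with the promoted element type and let each assignment convert one element at a time.

```julia
src = Float32[1.5, 2.5, 3.5]
dst = Vector{Float64}(undef, length(src))   # output allocated in the promoted element type

for i in eachindex(src)
    dst[i] = src[i]   # setindex! converts each Float32 to Float64 on assignment
end

dst   # the Float64 result, with no whole-array convert step
```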
Converting is a pretty uncommon case, which is why I thought it was worth the simplicity to just convert. Anyway, I tried implementing stuffcol! (code below in comments), but it is still slower than my current implementation and takes more memory.
Travis seems to have succeeded completely on Mac and Linux, but AppVeyor has failed on 32-bit. It is not clear whether the failure is related to this PR.
This is the code I tried:

function vcat(X::SparseMatrixCSC...)
    num = length(X)
    mX = [ size(x, 1) for x in X ]
    nX = [ size(x, 2) for x in X ]
    m = sum(mX)
    n = nX[1]
    for i = 2 : num
        if nX[i] != n
            throw(DimensionMismatch("All inputs to vcat should have the same number of columns"))
        end
    end
    Tv = promote_type(map(x->eltype(x.nzval), X)...)
    Ti = promote_type(map(x->eltype(x.rowval), X)...)
    nnzX = [ nnz(x) for x in X ]
    nnz_res = sum(nnzX)
    colptr = Array(Ti, n + 1)
    rowval = Array(Ti, nnz_res)
    nzval = Array(Tv, nnz_res)
    colptr[1] = 1
    for c = 1:n
        mX_sofar = 0
        ptr_res = colptr[c]
        for i = 1 : num
            colptrXi = X[i].colptr
            col_length = (colptrXi[c + 1] - 1) - colptrXi[c]
            ptr_Xi = colptrXi[c]
            stuffcol!(X[i], colptr, rowval, nzval,
                      ptr_res, ptr_Xi, col_length, mX_sofar)
            ptr_res += col_length + 1
            mX_sofar += mX[i]
        end
        colptr[c + 1] = ptr_res
    end
    SparseMatrixCSC(m, n, colptr, rowval, nzval)
end

function stuffcol!(Xi::SparseMatrixCSC, colptr, rowval, nzval,
                   ptr_res, ptr_Xi, col_length, mX_sofar)
    colptrXi = Xi.colptr
    rowvalXi = Xi.rowval
    nzvalXi = Xi.nzval
    for k = ptr_res:(ptr_res + col_length)
        @inbounds rowval[k] = rowvalXi[ptr_Xi] + mX_sofar
        @inbounds nzval[k] = nzvalXi[ptr_Xi]
        ptr_Xi += 1
    end
end
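For reference, a rough way to exercise both paths on a current Julia (sizes and densities are arbitrary, and the timings discussed in this thread were on the Julia of early 2015, so numbers will differ):

```julia
using SparseArrays

A = sprand(10_000, 10_000, 1e-3)            # SparseMatrixCSC{Float64,Int}
B = sprand(Float32, 10_000, 10_000, 1e-3)   # SparseMatrixCSC{Float32,Int}

@time vcat(A, A)   # homogeneous types: the common fast path
@time vcat(A, B)   # mixed element types: exercises the promotion logic
```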
I see: since you have to iterate over columns, you end up accessing each matrix many times. Your way seems reasonable. However, I can also get better performance if I make the type-promotion logic look like this:

Tv = eltype(X[1].nzval)
Ti = eltype(X[1].rowval)
for i = 2:length(X)
    Tv = promote_type(Tv, eltype(X[i].nzval))
    Ti = promote_type(Ti, eltype(X[i].rowval))
end
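One way to check what this is getting at is to inspect the inferred result of the map-based promotion; the helper name below is made up, and note that on the Julia of this era anonymous functions were not specialized, which is what made the map version costly.

```julia
using SparseArrays

# The map-based promotion, pulled into a helper so it can be inspected.
promote_tv_map(X...) = promote_type(map(x -> eltype(x.nzval), X)...)

A = sprand(4, 4, 0.5)
B = sprand(Float32, 4, 4, 0.5)

@code_warntype promote_tv_map(A, B)   # shows whether the promoted type is inferred concretely
```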
Thanks @timholy. I would have thought that the
Avoid the extra convert step.
The for loop in

It's because the return value of anonymous functions (the

Not sure, but it's possible that the presence of two different index objects is confusing LLVM. Maybe try representing

Also a pretty good chance LLVM 3.5 would do better here with SIMD than 3.3.
Fix the type instability causing slowdown and extra memory allocation in sparse vcat. Fixes JuliaLang/LinearAlgebra.jl#135.