
Change concatenation involving sparse matrices, sparse vectors and dense vectors to return sparse arrays #16722

Merged (2 commits) on Jun 10, 2016

Conversation

@pkofod (Contributor) commented Jun 2, 2016

…nse vectors to return sparse arrays.

Fixes #16661 and also makes concatenation of sparse arrays with dense vectors return sparse arrays. This implementation makes some unnecessary allocations that I can take a look at before a merge.
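For context, the rule this PR implements is promotion toward sparse: a concatenation returns a sparse result whenever any argument is sparse, which means dense-vector arguments first get converted to a sparse representation (the source of the extra allocations mentioned above). A minimal pure-Python stand-in for that dense-to-sparse step (`dense_to_sparse` is a hypothetical illustrative helper, not the Julia code in this PR):

```python
def dense_to_sparse(v, zero=0.0):
    """Collect the nonzero entries of a dense vector into an
    (indices, values) pair, the usual sparse-vector layout."""
    idx = [i for i, x in enumerate(v) if x != zero]
    vals = [v[i] for i in idx]
    return idx, vals

idx, vals = dense_to_sparse([0.0, 7.0, 0.0, 2.5])
# idx == [1, 3] and vals == [7.0, 2.5]
```

Each converted vector allocates two fresh lists here, which mirrors why a naive convert-then-concatenate path costs more than assembling the sparse result directly.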

@tkelman added the "sparse" (Sparse arrays) label on Jun 2, 2016
@@ -967,3 +967,15 @@ for Tv in [Float32, Float64, Int64, Int32, Complex128]
end
end
end

# Matrix vector cat not supported for sparse #13130 and #16661
Inline review comment from a Contributor:
it is now, right?

Reply from the Contributor Author:

It is; fixed below.

@pkofod (Contributor Author) commented Jun 3, 2016

I added Vector to the hvcat(X::Union{Matrix, SparseMatrixCSC}...) method, making it hvcat(X::Union{Vector, Matrix, SparseMatrixCSC}...).

Should I write more efficient methods for all the different cats? I mean versions along the lines of

function hcat{Tv,Ti}(X::AbstractSparseVector{Tv,Ti}...)

that don't do unnecessary conversions and copies? I'd be happy to do it; it will save some allocations.

Also, everything seems to work, but should I add hvcat methods as well?
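As an aside, the kind of specialized method proposed above can assemble the CSC result in a single pass over the inputs, with no intermediate conversions or copies of whole matrices. A pure-Python sketch of that bookkeeping (illustrative only; `hcat_sparse_vectors` is a hypothetical name and this is not Julia's actual hcat implementation):

```python
def hcat_sparse_vectors(n, vecs):
    """Concatenate sparse vectors, each given as an (indices, values)
    pair over a length-n index space, into a CSC matrix of shape
    (n, len(vecs)). Returns (colptr, rowval, nzval) with 0-based
    colptr offsets; column j's entries span colptr[j]:colptr[j+1]."""
    colptr = [0]
    rowval, nzval = [], []
    for idx, vals in vecs:
        assert len(idx) == len(vals)
        rowval.extend(idx)           # each input vector becomes one column
        nzval.extend(vals)
        colptr.append(len(rowval))   # record where the next column starts
    return colptr, rowval, nzval

# Example: two length-4 sparse vectors -> a 4x2 CSC matrix
colptr, rowval, nzval = hcat_sparse_vectors(
    4, [([0, 2], [1.0, 3.0]), ([1], [5.0])])
# colptr == [0, 2, 3], rowval == [0, 2, 1], nzval == [1.0, 3.0, 5.0]
```

The point of such a specialization is exactly the allocation savings mentioned above: the three output arrays are the only allocations, sized by the total number of stored entries.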

@tkelman (Contributor) commented Jun 3, 2016

Up to you, but one thing we should do is carefully check whether there are any performance implications for non-sparse cases. Ref #16128 (comment): would your previous PR have influenced how dense hcat or hvcat get dispatched at all?

@pkofod (Contributor Author) commented Jun 3, 2016

That would be unfortunate! I did add these (e280a27#diff-8cc03187983013adb308460f8365e1d0R720); could those be the reason? If so, I'm not sure how.

I will have to check out the commits just before and after the merge to be sure.

@pkofod (Contributor Author) commented Jun 3, 2016

One commit before mine

julia,fib,0.042998,0.128659,0.046870,0.002392
julia,parse_int,0.242335,2.028559,0.268087,0.078824
julia,mandel,0.136463,0.198907,0.138009,0.003110
julia,quicksort,0.316253,0.580323,0.334022,0.013232
julia,pi_sum,40.930918,41.945148,41.060571,0.152101
julia,rand_mat_stat,17.376123,20.571832,18.228378,0.572528
julia,rand_mat_mul,44.568483,56.915150,47.338284,2.601669
julia,printfd,20.593159,21.428679,20.693425,0.141257
julia,micro.mem,270.566406,270.566406,270.566406,0.000000

mine

julia,fib,0.042982,0.124890,0.046680,0.002268
julia,parse_int,0.251515,1.852985,0.279358,0.079412
julia,mandel,0.136527,0.205742,0.138098,0.002956
julia,quicksort,0.313997,0.465756,0.332772,0.007824
julia,pi_sum,40.936211,41.703891,41.049687,0.113519
julia,rand_mat_stat,39.680477,46.019009,41.446778,1.222534
julia,rand_mat_mul,44.922529,55.156609,47.373832,2.114620
julia,printfd,20.579925,21.573189,20.672500,0.166164
julia,micro.mem,271.417969,271.417969,271.417969,0.000000

almost latest master (from yesterday I think)

julia,fib,0.044243,0.119606,0.046837,0.003259
julia,parse_int,0.289132,7.768136,0.322119,0.152336
julia,mandel,0.136763,0.250258,0.138992,0.006331
julia,quicksort,0.318064,0.534072,0.334189,0.016374
julia,pi_sum,40.511055,55.959972,46.687331,4.624115
julia,rand_mat_stat,50.049147,76.699907,59.645703,10.819121
julia,rand_mat_mul,71.592445,157.663542,89.907365,20.233438
julia,printfd,23.629916,42.501379,30.292729,5.762164
julia,micro.mem,268.761719,268.761719,268.761719,0.000000

It does seem that there's a doubling (ouch) from my commit. However, since then it seems to have gotten even worse (especially rand_mat_mul). This is just one run of test/perf/micro/perf.jl on each commit.

@tkelman (Contributor) commented Jun 8, 2016

The issue was most-of-the-way fixed by #16741, right? Does this PR's change make anything worse, or is it good to merge?

@pkofod (Contributor Author) commented Jun 8, 2016

I'll rebase and see if these changes introduce perf regressions.

@pkofod (Contributor Author) commented Jun 8, 2016

Is there a preferred way to test this? I got

pkm@pkm:~/julia5/julia/test/perf/micro$ julia5 perf.jl
julia,fib,0.042720,1.306022,0.047831,0.008625
julia,parse_int,0.290898,1.702319,0.321545,0.090161
julia,mandel,0.136706,0.289263,0.145230,0.014145
julia,quicksort,0.317058,0.534886,0.345690,0.022850
julia,pi_sum,42.172793,45.845542,42.654163,0.729085
julia,rand_mat_stat,25.078313,29.350104,26.802243,0.950726
julia,rand_mat_mul,37.935638,52.054462,42.789682,2.751409
julia,printfd,21.270570,24.687512,21.830449,0.430672
julia,micro.mem,268.531250,268.531250,268.531250,0.000000

pkm@pkm:~/julia5/julia/test/perf/micro$ julia5 perf.jl
julia,fib,0.042923,0.133529,0.049572,0.011022
julia,parse_int,0.289910,1.947488,0.369711,0.136813
julia,mandel,0.136767,0.265466,0.148014,0.016032
julia,quicksort,0.318611,0.504198,0.357842,0.034795
julia,pi_sum,42.116520,43.955154,42.997052,0.433688
julia,rand_mat_stat,24.955700,33.386973,27.649122,2.278243
julia,rand_mat_mul,39.459221,90.426123,57.806531,13.358714
julia,printfd,20.883406,26.952122,22.168338,1.468372
julia,micro.mem,270.500000,270.500000,270.500000,0.000000

Those are two runs on this branch. rand_mat_mul, for example, seems to vary by quite a bit!

Edit: here are the numbers from two runs on master without these changes:

pkm@pkm:~/julia5/julia/test/perf/micro$ julia5 perf.jl
julia,fib,0.042726,0.126508,0.048676,0.008815
julia,parse_int,0.290436,1.801650,0.329631,0.107630
julia,mandel,0.136968,0.243548,0.147561,0.015236
julia,quicksort,0.317342,0.488691,0.350085,0.020635
julia,pi_sum,41.430584,44.633807,42.719802,0.837564
julia,rand_mat_stat,25.223977,29.263104,26.444987,0.696015
julia,rand_mat_mul,39.219704,74.863053,56.828978,11.324896
julia,printfd,20.955399,23.558197,21.545105,0.529952
julia,micro.mem,269.328125,269.328125,269.328125,0.000000
pkm@pkm:~/julia5/julia/test/perf/micro$ julia5 perf.jl
julia,fib,0.042935,0.239299,0.047400,0.007143
julia,parse_int,0.288526,1.706215,0.326151,0.096476
julia,mandel,0.136805,0.248370,0.144293,0.012237
julia,quicksort,0.317265,0.517223,0.347484,0.026142
julia,pi_sum,41.242639,44.630416,42.685517,0.899982
julia,rand_mat_stat,25.018778,29.455444,26.452839,0.898054
julia,rand_mat_mul,38.703395,54.602920,46.244349,3.945921
julia,printfd,20.578417,26.599364,21.420759,0.832918
julia,micro.mem,274.039063,274.039063,274.039063,0.000000

I don't really see any perf regressions from this PR. There's still some way to go to get back to what it was, but I believe that is due to changes after my first sparse cat PR.

@tkelman (Contributor) commented Jun 8, 2016

The output there could use a header, but I think what it's printing is min, max, mean (?), and variance. We can also ask
@nanosoldier runbenchmarks(ALL, vs = ":master")
to be on the safe side.

@nanosoldier (Collaborator) commented:

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels

@pkofod (Contributor Author) commented Jun 8, 2016

Timing in the array setindex benchmark was slower.

@tkelman (Contributor) commented Jun 8, 2016

Does that setindex benchmark exercise any of the code you're changing here, or is that noise?

@tkelman (Contributor) commented Jun 9, 2016

I'm inclined to merge this. Dissenting opinions?

@pkofod (Contributor Author) commented Jun 10, 2016

Feel free to do so. These methods can be optimized, but I can come back to them later.

@tkelman (Contributor) commented Aug 7, 2016

@ViralBShah, what are you tagging this "backport pending 0.5" for? This was merged months ago.
