Sparse determinants are slow #5402
Comments
comment:1
I would tend to avoid converting to a dense matrix by default, since one often uses sparse matrices because they don't have enough memory to store the dense ones.
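The memory argument can be made concrete with a quick back-of-the-envelope sketch (plain Python, not Sage; the dict-of-keys storage is an assumption for illustration only):

```python
# A dense n x n matrix stores n^2 entries no matter how many are zero;
# a dict-backed sparse matrix stores only the nonzeros. For a
# 10000 x 10000 identity matrix the dense form needs 10^8 entries,
# the sparse form only 10^4 -- so converting to dense can exhaust
# memory even when the sparse object fits comfortably.
n = 10_000
sparse_identity = {(i, i): 1 for i in range(n)}  # nonzero entries only
dense_entry_count = n * n

print(len(sparse_identity))   # 10000
print(dense_entry_count)      # 100000000
```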
comment:2
Sure, but as of right now, I've been waiting over fifteen minutes for
whereas
So, currently, for a matrix large enough to have memory issues, determinants are already completely infeasible, and I don't think anything would be lost by the one-line change William proposed. Of course, writing a sparse algorithm that is competitive speed-wise, or that could be used only for very large matrices, would be ideal. But it took me some time today to figure out why computing ~1000 determinants of 8x8 sparse matrices was taking so long, and if there could be a quick fix to spare others from my pain, it would be nice. :)
comment:3
Yep, I agree with Jason's position. We are doing the same for some other linear-algebra bits where the sparse method takes longer and uses more memory than the dense method in many cases. Cheers, Michael
comment:4
Apparently, this issue has been fixed. The determinant that used to take 15 minutes now takes 0.05s...
comment:5
I wouldn't say that it's fixed. Note that the comparison is better, but the two timings are still not that close.
It's just that everything is faster now. How much faster would Moore's law say these should be after four years?
comment:6
Replying to @kcrisman:
In comment 2, someone complains about this test running for 15 minutes. On your machine, it runs in 228ms (sparse case). That is about 4000x faster. Moore's law tells us that the number of transistors on a chip doubles every 18 months, so in 4 years we would get at the very most a 10x speedup. So there has clearly been some algorithmic improvement.

However, if you look into it a bit deeper, you'll see that in the sparse case, computing the determinant triggers the computation of charpoly(), and the computation of charpoly creates a dense version of the matrix and then runs the dense charpoly algorithm. I thus agree that this is suboptimal (in particular because LinBox has fancy iterative methods for this kind of computation on sparse matrices).

Anyway, why don't we: (a) close this ticket, considering that it is outdated and that the situation has changed, and (b) open a "feature request" ticket for sparse matrices over exact rings? I would be happy to look into this.
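The Moore's-law estimate above works out as follows (a rough sketch; the 228 ms and 15-minute figures are taken from the comments in this thread):

```python
# Transistor counts doubling every 18 months gives, over 4 years
# (48 months), a hardware speedup of at most about 2^(48/18) ~ 6.3x.
# The observed speedup (15 minutes down to 228 ms) is ~4000x, so the
# bulk of the improvement must be algorithmic, not hardware.
hardware_speedup = 2 ** (48 / 18)
observed_speedup = (15 * 60) / 0.228

print(round(hardware_speedup, 1))  # 6.3
print(round(observed_speedup))     # 3947
```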
comment:7
Interestingly, other than that one, the rest indeed show a 4-10x speedup.
Well, given that we already create the dense matrix, William's fix seems like it would speed things up at least a little, or at worst be no slower. I don't see why we (by which I mean you and other people who compute such determinants!) couldn't try it out, as well as, of course, creating a new ticket to use the fancy methods. Seem reasonable?
comment:8
Fair enough. I'll post a patch this weekend.
comment:9
Attachment: 5402_faster_dumber_det.patch.gz

With the patch applied:
So now the sparse case is faster! I really don't understand why, though, because it converts the matrix to a dense representation before computing the (dense) det...
Author: Charles Bouillaguet
comment:10
I get the same thing. With a heavy load on the same computer (accounting for non-sparse slowdown),
Maybe it's the way we make dictionary-defined matrices dense that accounts for it. I like the patch, though it would be nice if you could find someone who has worked with caching before, just to check that the code is correct - I haven't really used it in the past and don't have time to find similar examples in the code now; my apologies.
comment:11
There is an example of caching (the one I just mimicked...) in matrix_sparse.pyx, in the definition of charpoly(). I checked that caching works by computing a large sparse determinant twice: the first call takes a few seconds, while the second call returns instantly. For the sake of easy reviewing :-) here is the code of charpoly():

    def charpoly(self, var='x', **kwds):
        f = self.fetch('charpoly')
        if f is not None:
            return f.change_variable_name(var)
        f = self.dense_matrix().charpoly(var=var, **kwds)
        self.cache('charpoly', f)
        return f
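The two-call check described above (first call slow, second call instant) can be sketched outside Sage with a minimal stand-in class. The names below are hypothetical; only the fetch/cache pattern mirrors the snippet above, not Sage's real implementation:

```python
# Minimal sketch of the cache-then-delegate pattern: the first call
# runs the (expensive) dense computation, later calls hit the cache.
class CachedDetMatrix:
    def __init__(self, dense_det_fn):
        self._dense_det_fn = dense_det_fn  # stand-in for the dense algorithm
        self._cache = {}
        self.dense_calls = 0               # instrumentation for the check

    def fetch(self, key):
        return self._cache.get(key)

    def cache(self, key, value):
        self._cache[key] = value

    def det(self):
        d = self.fetch('det')
        if d is not None:        # second call returns "instantly"
            return d
        self.dense_calls += 1
        d = self._dense_det_fn()
        self.cache('det', d)
        return d

m = CachedDetMatrix(lambda: 42)
print(m.det(), m.det(), m.dense_calls)  # 42 42 1
```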
comment:12
Well, there you have it. And the way you tested it was perfect. Okay, open that other ticket, and then you have yourself a positive review. Still unclear as to why it's faster than the dense case...
comment:13
#13915 is now open.
Reviewer: Karl-Dieter Crisman
Merged: sage-5.6.beta3
The following timings should be improvable.
William suggested:
Likely the fix will be to implement
a 1-line function that is just
until there is somebody who wants to implement a sparse algorithm.
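The elided one-liner itself was not preserved in the migrated ticket. A plausible reconstruction of the idea, using toy stand-in classes rather than Sage's real Matrix API, would be:

```python
# Toy stand-ins, NOT Sage's actual classes: the point is only that the
# sparse det() becomes a one-line delegation to the dense algorithm.
class DenseMatrix:
    def __init__(self, rows):
        self.rows = rows

    def det(self):
        # Naive 2x2 determinant, standing in for the real dense code.
        (a, b), (c, d) = self.rows
        return a * d - b * c

class SparseMatrix:
    def __init__(self, entries, n):
        self.entries = entries  # {(i, j): value}, zeros omitted
        self.n = n

    def dense_matrix(self):
        rows = [[self.entries.get((i, j), 0) for j in range(self.n)]
                for i in range(self.n)]
        return DenseMatrix(rows)

    def det(self):
        # The proposed 1-line fix: fall back to the dense algorithm.
        return self.dense_matrix().det()

m = SparseMatrix({(0, 0): 3, (0, 1): 2, (1, 1): 4}, 2)
print(m.det())  # 12
```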
CC: @jbandlow
Component: linear algebra
Keywords: determinant
Author: Charles Bouillaguet
Reviewer: Karl-Dieter Crisman
Merged: sage-5.6.beta3
Issue created by migration from https://trac.sagemath.org/ticket/5402