RFC: factorize() and \ #3315
Conversation
This is really cool! I'm gonna let some of the numerical linear algebra experts take a look though.
It seems more sensible to have both Symmetric and Hermitian types.
I remember that a colleague of mine once checked, and LU was actually faster than Bunch-Kaufman for symmetric-indefinite matrices, although the latter requires less memory. Is this still true?
@stevengj Thank you for the comments. I think you are right about the Symmetric/Hermitian thing. I hadn't tested LU vs Bunch-Kaufman before your post but expected the LAPACK defaults to be good choices. Based on a few tests, it seems that for the complete solution of an indefinite problem, i.e. factorize+solve, Bunch-Kaufman is fastest. However, the factorization is not as clean as the LU and therefore each application of the factorization will be slightly slower. I haven't looked into the relative memory consumption of the two procedures.
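A comparison along these lines can be set up as follows (a minimal sketch; n is arbitrary, and bkfact is the constructor added in this pull request, so its exact signature here is an assumption):

```julia
n = 2000
A = randn(n, n); S = A + A'        # real symmetric indefinite matrix
b = randn(n)

@time F = bkfact(S); @time F \ b   # Bunch-Kaufman: factor, then solve
@time G = lufact(S); @time G \ b   # LU for comparison
```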
We should perhaps also check for other special matrix structures. I wonder if we can rewrite things to classify a matrix in the minimal number of sweeps over the matrix.
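For instance, a single-sweep classification could look something like this (an illustrative sketch only, not the code in this pull request):

```julia
# Classify square A in one pass: symmetric, hermitian, upper/lower triangular.
function classify(A::AbstractMatrix)
    n = size(A, 1)
    sym = herm = isupper = islower = true
    for j = 1:n, i = 1:j-1
        sym     &= A[i,j] == A[j,i]        # symmetric pairs
        herm    &= A[i,j] == conj(A[j,i])  # hermitian pairs
        isupper &= A[j,i] == 0             # strictly lower part is zero
        islower &= A[i,j] == 0             # strictly upper part is zero
    end
    return sym, herm, isupper, islower
end
```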
Let's merge this as soon as it is stable, and more intelligence for various matrix types can be added over time.
@ViralBShah I tried to write a classification along those lines.

@dmbates It would be good if you could have a look at the Julia implementation of gelsy.
The time may not matter for factorize, since the factorization itself will dominate the classification.
That is right. I'll finish the changes now so that we can get this merged.
@andreasnoackjensen The Julia implementation of xgelsy looks very good. I appreciate your taking the initiative. A couple of things occur to me. I'm not sure of the motivation of your check on abs(r[1]).

Two parenthetical remarks here. First, your code calculates abs(r[1]) three different times; you should only do that once (the compiler may recognize this, but it is better to avoid it in the code). Secondly, I saw an idiom in code by Kevin or Jameson of writing checks on argument values like abs(r[1]) >= rcond || error("Left hand side is zero"), which I rather like; I often use the same idiom myself. And while I am being pedantic and picky, the error message in that check could be more precise about what failed.

One thing about calling xgelsy is that the transformed response could be worth returning as well. Is it necessary to extract it?
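For concreteness, here is that idiom next to its long form (r and rcond as in the gelsy code):

```julia
# Short-circuit argument check: error unless the condition holds.
abs(r[1]) >= rcond || error("Left hand side is zero")

# Equivalent long form:
if !(abs(r[1]) >= rcond)
    error("Left hand side is zero")
end
```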
@dmbates Thanks for the comments. They are appreciated. I'll follow up on them as soon as possible.
@andreasnoackjensen You're welcome. And my thanks to you for all the work you have put into the linear algebra facilities. I think the community and especially the core developers are building something quite marvelous here!
A parallel test failed for clang, but gcc passed.
I think this one is about to be ready to get merged. The suggestions of @dmbates have been incorporated, except for the option of returning the transformed response.
I haven't touched the parallel stuff so I think it must be a Travis thing.
Please merge when you are ready to do so. I think it is ok to not have that option for now.
transpose(A::Symmetric) = A

*(A::Symmetric, B::Symmetric) = *(full(A), full(B))
*(A::Symmetric, B::StridedMatrix) = *(full(A), B)
Was there a reason for the fallbacks on this and the next line, allocating a new array rather than calling BLAS.symm!?
Only the fallback from the line below is still on master; should that fallback be removed? How else would you be able to dispatch here or here?
cc @andreasnoack
I'm not completely sure, but I think OpenBLAS's symmetric kernels used to be quite a bit slower than gemm, and maybe that is the reason for this. We should revisit and probably use symm if it is not slower.
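A sketch of what that could look like, assuming the Symmetric wrapper stores its data and its stored-triangle flag in fields named S and uplo (the field names are an assumption):

```julia
# Dispatch to the BLAS symmetric kernel instead of densifying with full(A).
# BLAS.symm(side, uplo, alpha, A, B) computes alpha*A*B for symmetric A.
*(A::Symmetric, B::StridedMatrix) = BLAS.symm('L', A.uplo, one(eltype(B)), A.S, B)
```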
A new general matrix factorization method. Some changes to the treatment of symmetric and hermitian matrices and some bug fixes. And some tests.
This is a follow-up on the discussion in JuliaLang/LinearAlgebra.jl#16. The main new thing is factorize, which tries to guess a good factorization depending on the structure of the matrix; an example is sketched after the next paragraph.

For complex indefinite matrices it is necessary to distinguish between symmetric and hermitian matrices (the Bunch-Kaufman case), and therefore I have added a Symmetric type which also works for real matrices. Hence Hermitian is now a complex-matrix-only type. I have also added the bkfact method for constructing the Bunch-Kaufman decomposition directly.
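For example (a hedged sketch; which factorization factorize selects depends on the classification, and the bkfact signature is as assumed above):

```julia
A = randn(5, 5)
S = A + A'           # real symmetric indefinite
F = factorize(S)     # should select a Bunch-Kaufman factorization here
x = F \ randn(5)     # solve using whatever factorization was chosen

B = bkfact(S)        # or construct the Bunch-Kaufman factorization directly
y = B \ randn(5)
```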
To handle least squares, I had to reimplement the functionality of xgelsy in Julia because I wanted to be able to solve rank-deficient problems. You can now reuse a QR-based "predictor" across right-hand sides; the pattern is sketched below.

I got some random failures in the tests, but I couldn't reproduce them when I ran the tests with include from a Julia session.
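A sketch of the reuse pattern (assuming qrfact and backslash on the factorization object, as in Base at the time; the pivoted variant applies to rank-deficient problems):

```julia
A = randn(10, 3)               # overdetermined least-squares problem
F = qrfact(A)                  # factor once
b1, b2 = randn(10), randn(10)
x1 = F \ b1                    # least-squares solution for b1
x2 = F \ b2                    # same factorization reused for b2
```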