
QMR #255

Merged
merged 5 commits into JuliaLinearAlgebra:master on Oct 23, 2019
Conversation

platawiec
Contributor

I've finished a bare-bones QMR implementation. There is no preconditioning or look-ahead Lanczos (yet). Please have a look and let me know what you think.

I've mostly followed the style for MINRES. I split the Lanczos bi-orthogonalization process from the QMR iteration, in keeping with how the algorithms are abstracted in the original papers.

The algorithm should work on GPUs, though that remains untested.
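
For reference, the process being split out is the standard two-sided (bi-orthogonal) Lanczos recurrence without look-ahead. Below is a rough standalone sketch for the real case; the function and variable names are illustrative and do not necessarily match those in src/qmr.jl.

```julia
using LinearAlgebra

# Rough sketch of the basic (no look-ahead) Lanczos bi-orthogonalization,
# real case, using the usual three-term recurrences. Names are illustrative
# and need not match src/qmr.jl. On exit, W'V ≈ I and W'AV ≈ T (tridiagonal),
# up to rounding and breakdown.
function lanczos_biorth(A::AbstractMatrix{T}, v1::AbstractVector{T},
                        w1::AbstractVector{T}, m::Integer) where {T<:Real}
    n = size(A, 1)
    @assert dot(w1, v1) != 0 "starting vectors must not be bi-orthogonal"
    V, W = zeros(T, n, m), zeros(T, n, m)
    V[:, 1] = v1 / norm(v1)
    W[:, 1] = w1 / dot(w1, V[:, 1])              # enforce (w1, v1) = 1
    α = zeros(T, m)                              # diagonal of T
    β = zeros(T, m + 1)                          # superdiagonal: β[j+1] = T[j, j+1]
    δ = zeros(T, m + 1)                          # subdiagonal:   δ[j+1] = T[j+1, j]
    for j in 1:m
        Av = A * V[:, j]
        α[j] = dot(W[:, j], Av)
        v_next = Av           - α[j] * V[:, j] - (j > 1 ? β[j] * V[:, j-1] : zeros(T, n))
        w_next = A' * W[:, j] - α[j] * W[:, j] - (j > 1 ? δ[j] * W[:, j-1] : zeros(T, n))
        d = dot(v_next, w_next)
        if j == m || iszero(d)                   # done, or serious breakdown
            return V[:, 1:j], W[:, 1:j], Tridiagonal(δ[2:j], α[1:j], β[2:j])
        end
        δ[j+1] = sqrt(abs(d))
        β[j+1] = d / δ[j+1]
        V[:, j+1] = v_next / δ[j+1]
        W[:, j+1] = w_next / β[j+1]
    end
end
```

The sketch stores full V and W only so that W' * V ≈ I and W' * A * V ≈ T are easy to check; the iterable in the PR advances the recurrence a step at a time instead, which is presumably what keeps allocations low.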

@codecov-io

codecov-io commented Aug 25, 2019

Codecov Report

Merging #255 into master will increase coverage by <.01%.
The diff coverage is 90.56%.


@@            Coverage Diff             @@
##           master     #255      +/-   ##
==========================================
+ Coverage   90.52%   90.53%   +<.01%     
==========================================
  Files          17       18       +1     
  Lines        1077     1130      +53     
==========================================
+ Hits          975     1023      +48     
- Misses        102      107       +5
Impacted Files Coverage Δ
src/qmr.jl 90.56% <90.56%> (ø)

Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@platawiec
Contributor Author

Ref Issue #1

Added a bit of documentation and renamed the iterable construction to keep it in line with the syntax of the other methods.
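
Concretely, the call is meant to mirror the other solvers in the package. A hedged usage sketch follows; the qmr / qmr! names and the maxiter keyword follow the package's usual conventions but are assumptions here, not a documented contract.

```julia
using LinearAlgebra, SparseArrays
using IterativeSolvers

# Illustrative only: mirrors the calling style of minres, gmres, etc.
n = 100
A = sprandn(n, n, 5 / n) + 10I       # non-symmetric, diagonally dominated
b = randn(n)

x = qmr(A, b)                        # allocating variant
x = zeros(n)
qmr!(x, A, b; maxiter = 2n)          # in-place variant; keyword name assumed
@show norm(A * x - b) / norm(b)
```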

@mschauer
Member

Thank you. How would you assess this code yourself? Do you think it is good to go once checked?

@platawiec
Contributor Author

Yes, I think so. I haven't built the documentation, so I don't know whether there will be any issues there, but the core code works with minimal allocations. I have used it successfully where idrs was failing.

As I said, it is missing preconditioners, but so are other methods.

One could argue for eliminating the (very small) vectors that store the relevant Hessenberg and right-hand-side values (H and g in my implementation) and storing those entries directly in QMRIterable, but I chose to follow the pattern from MINRES for consistency (fields H and rhs in MINRESIterable). The overhead of any of these smaller design decisions is trivial compared to the matrix-vector product anyway.
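
Purely for illustration, the layout under discussion is roughly the following; apart from H and g, the field names are hypothetical and the real list lives in src/qmr.jl.

```julia
# Hypothetical field layout, echoing MINRESIterable's H and rhs.
mutable struct QMRWorkspaceSketch{matT, vecT, T <: Number}
    A::matT
    x::vecT              # current iterate
    H::Vector{T}         # active column of the small banded Hessenberg matrix
    g::Vector{T}         # rotated right-hand side of the small least-squares problem
    maxiter::Int
    tol::Float64
end
```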

One potentially dicey issue is convergence for low-precision dense matrices. In the dense-matrix tests I relaxed the tolerance by a factor of 10: even though the solve would report convergence, the directly computed residual for Float32 and Complex{Float32} would sometimes fail the original tolerance. I chalk it up to numerical issues and to the fact that we do not explicitly track the residual. If this turns out to be a bigger issue than I think, we can discuss a strategy to mitigate it.
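
The dense-matrix check described above therefore looks roughly like the sketch below. The qmr signature and the tol keyword name are assumptions; the factor-of-10 relaxation on the directly computed residual is the point being made.

```julia
using Test, LinearAlgebra
using IterativeSolvers

# Sketch of the dense test: the solver is asked for tol, but the directly
# computed residual is only required to be within 10*tol, which is where
# Float32 / Complex{Float32} otherwise fail intermittently.
@testset "QMR dense, eltype $T" for T in (Float32, Float64, Complex{Float32}, Complex{Float64})
    n = 10
    A = rand(T, n, n) + 15I          # diagonally dominant, so well-conditioned
    b = rand(T, n)
    tol = sqrt(eps(real(T)))
    x = qmr(A, b; tol = tol, maxiter = 2n)   # keyword names assumed
    @test norm(A * x - b) / norm(b) ≤ 10tol  # relaxed by 10× as in the PR tests
end
```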

@mschauer
Member

Thank you, this is really helpful.

@mschauer mschauer left a comment
Member

I think we can merge this.

@platawiec
Contributor Author

Can this be merged? Are there any open items left to address?

@mschauer mschauer merged commit e090fc6 into JuliaLinearAlgebra:master Oct 23, 2019