reordering #168

Closed
gdmcbain opened this issue Apr 8, 2019 · 7 comments

Comments

@gdmcbain
Contributor

gdmcbain commented Apr 8, 2019

In reviewing the example of forced convection #162, some suggestions were made to reduce the five-minute run-time. These concern the Navier–Stokes solver rather than forced convection per se, so new issues are being launched to take them up. The first suggestion was:

Maybe precalculating Reverse Cuthill-McKee reordering for the rows and columns of self.S would help a bit but can't say for sure.

Does reordering help in general? Does it help specifically with meshes imported from Gmsh and systems solved with scipy.sparse.linalg.spsolve? Does it depend on the block structure of the matrix? (The Stokes problem as assembled in ex24 has the classic symmetric indefinite saddle-point structure of the Taylor–Hood velocity–pressure mixed finite elements.) What about iterative solvers like GMRES, as in the second suggestion?

The reordering is implemented in skfem.utils.rcm, but is not yet used in the examples.
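
For concreteness, here is a minimal sketch of such a reordering using SciPy's own RCM routine, scipy.sparse.csgraph.reverse_cuthill_mckee, applied to a random stand-in matrix (not to an skfem assembly, and not via skfem.utils.rcm itself):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee
from scipy.sparse.linalg import spsolve

# Stand-in for an assembled symmetric system matrix.
rng = np.random.default_rng(0)
A = sp.random(500, 500, density=0.01, format='csr', random_state=rng)
A = (A + A.T + 500 * sp.eye(500)).tocsr()
b = rng.standard_normal(500)

# The permutation depends only on the sparsity pattern.
perm = reverse_cuthill_mckee(A, symmetric_mode=True)

# Apply it to rows and columns of the matrix and to the right-hand side.
A_rcm = A[perm][:, perm]
b_rcm = b[perm]

# Solve the reordered system, then undo the permutation.
x_rcm = spsolve(A_rcm.tocsc(), b_rcm)
x = np.empty_like(x_rcm)
x[perm] = x_rcm   # solution in the original ordering

# The bandwidth of A_rcm is typically much smaller than that of A,
# which reduces fill-in in a subsequent direct factorization.
```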

@gdmcbain
Contributor Author

gdmcbain commented Apr 8, 2019

I hadn't bothered much about reordering hitherto, as I recall reading a while ago a reply from one of the Gmsh authors on why Gmsh didn't reorder nodes. The rationale was that the way to do this is so dependent on the solver that it was best left to the latter. I got the impression that many modern canned solvers know what ordering is best for them and take care of it themselves. I haven't looked into this for scipy.sparse.linalg.spsolve or the related Krylov solvers.

@gdmcbain
Contributor Author

gdmcbain commented Apr 8, 2019

The quote was (C. Geuzaine, 2017-07-28, [Gmsh] Mesh ordering):

No, we leave this to each solver, as each has different requirements.

@kinnala
Owner

kinnala commented Apr 8, 2019

I think you're correct. If you plan to reuse a single LU decomposition then you should reorder, because it leads to less fill-in and hence less memory usage and faster back-substitution. But here you have a different matrix each time, so it does not help.
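
To illustrate the distinction (a sketch on a stand-in matrix, not from the thread): with scipy.sparse.linalg.splu the fill-reducing ordering is computed once inside the factorization and then reused for every right-hand side, whereas a matrix that changes at every step has to be refactorized anyway:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spl

# Stand-in for an assembled system matrix.
n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
rng = np.random.default_rng(0)

# Reusing one factorization: the fill-reducing ordering is computed once
# inside splu and amortized over many back substitutions.
lu = spl.splu(A)
for _ in range(10):
    x = lu.solve(rng.standard_normal(n))

# Different matrix at every Newton step / Reynolds number: each solve has to
# refactorize anyway, so a precomputed ordering buys little.
for k in range(10):
    A_k = (A + sp.diags(0.1 * rng.random(n))).tocsc()
    x_k = spl.spsolve(A_k, rng.standard_normal(n))
```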

@kinnala kinnala closed this as completed Apr 8, 2019
@gdmcbain
Contributor Author

gdmcbain commented Apr 9, 2019

Ah. Although the matrix changes each time (each Reynolds number, and each Newton iteration within that), I had been thinking that the sparsity pattern would be the same, so that the reordering could be shared, or even applied to the mesh (and the basis regenerated). I'll leave this for the moment anyway.
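
A sketch of what sharing the reordering could look like, assuming the sparsity pattern really is fixed between reassemblies; assemble below is a stand-in for the actual assembly, not skfem API:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee
from scipy.sparse.linalg import spsolve

n = 200
pattern = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')

def assemble(step):
    """Stand-in for reassembly: new values, same sparsity pattern."""
    A = pattern.copy()
    A.data = A.data * (1.0 + 0.1 * step)
    return A, np.ones(n)

# Compute the RCM permutation once, from the shared pattern.
perm = reverse_cuthill_mckee(pattern, symmetric_mode=True)
inv = np.empty_like(perm)
inv[perm] = np.arange(n)

for step in range(5):
    A, b = assemble(step)
    x_perm = spsolve(A[perm][:, perm].tocsc(), b[perm])
    x = x_perm[inv]   # back to the original ordering
```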

@kinnala
Owner

kinnala commented Apr 13, 2019

What you say is most certainly true, but I suppose scipy.sparse.linalg.spsolve will compute some (possibly better) reordering quite quickly before each solve anyway.
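
For what it's worth, the SuperLU backend behind scipy.sparse.linalg.spsolve does apply a fill-reducing column ordering ('COLAMD' by default) before factorizing; the permc_spec argument of splu exposes it, as in this sketch on a stand-in matrix:

```python
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Stand-in matrix with some off-diagonal structure.
n = 2000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
A = (A + sp.random(n, n, density=0.001, format='csc', random_state=0)
       + 4.0 * sp.eye(n)).tocsc()

# SuperLU's own fill-reducing column permutation; 'COLAMD' is the default,
# 'NATURAL' disables the reordering.
for spec in ('NATURAL', 'MMD_AT_PLUS_A', 'COLAMD'):
    lu = splu(A, permc_spec=spec)
    print(spec, 'nnz(L) + nnz(U) =', lu.L.nnz + lu.U.nnz)
```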

@gdmcbain
Contributor Author

The issue of reordering arose in the discussion of ‘Laplace “without” boundary conditions’ #592: when ostensibly the same matrix was compared with one generated by another code, the latter had a tidier sparsity pattern.

@kinnala
Owner

kinnala commented Mar 31, 2021

I suppose it could be possible to have an option to reorder the mesh during initialization as an attempt to reduce fill-in. However, I think it should not be enabled by default, because it would make the interpretation of the rows/columns of the resulting matrices surprising.
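
A rough sketch of what such an opt-in reordering could look like on the user side, assuming vertices stored as p (dim × nverts) and connectivity as t (vertices per element × nelems) as in skfem meshes; the rcm_reorder helper below is hypothetical, not library API:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

def rcm_reorder(p, t):
    """Hypothetical helper: return (p, t) with vertices renumbered by RCM.

    p is the vertex array (dim x nverts) and t the element connectivity
    (vertices per element x nelems), as in skfem meshes.
    """
    nverts = p.shape[1]
    # Vertex adjacency graph: vertices sharing an element are connected.
    rows, cols = [], []
    for elem in t.T:
        for i in elem:
            for j in elem:
                rows.append(i)
                cols.append(j)
    graph = sp.coo_matrix((np.ones(len(rows)), (rows, cols)),
                          shape=(nverts, nverts)).tocsr()
    perm = reverse_cuthill_mckee(graph, symmetric_mode=True)
    inv = np.empty_like(perm)
    inv[perm] = np.arange(nverts)
    return p[:, perm], inv[t]

# Usage (assuming a skfem-style mesh with .p and .t):
# p_new, t_new = rcm_reorder(mesh.p, mesh.t)
```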
