Assembly directly to HYPREMatrix. #5
Conversation
Codecov Report: Base: 74.39% // Head: 75.97% // Increases project coverage by +1.58%.

Additional details and impacted files:

@@            Coverage Diff             @@
##           master       #5      +/-   ##
==========================================
+ Coverage   74.39%   75.97%   +1.58%
==========================================
  Files           4        4
  Lines         984     1053      +69
==========================================
+ Hits          732      800      +68
- Misses        252      253       +1
☔ View full report at Codecov.
With this you can either assemble every element contribution directly (might be too slow due to MPI communication, but requires less memory):

```julia
A = HYPREMatrix(comm, proc_row_first, proc_row_last)
hypre_assembler = HYPRE.start_assemble!(A)
for cell in cell_for_proc
    ke = ...
    global_dofs_for_cell = ...
    HYPRE.assemble!(hypre_assembler, global_dofs_for_cell, ke)
end
HYPRE.finish_assemble!(hypre_assembler)
```

or assemble the processor-local matrix first and insert it all at once:

```julia
A = HYPREMatrix(comm, proc_row_first, proc_row_last)
hypre_assembler = HYPRE.start_assemble!(A)
Alocal = process_local_sparsity_pattern(...)
ferrite_assembler = start_assemble(Alocal)
for cell in cell_for_proc
    ke = ...
    local_dofs_for_cell = ...
    Ferrite.assemble!(ferrite_assembler, local_dofs_for_cell, ke)
end
global_dofs_for_proc = ...
HYPRE.assemble!(hypre_assembler, global_dofs_for_proc, Alocal)
HYPRE.finish_assemble!(hypre_assembler)
```
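Presumably the same pattern also works for the right-hand side, since this patch targets HYPRE(Vector|Matrix). A hedged sketch, assuming a HYPREVector constructor with the same row partitioning (the `fe` and dof placeholders are not part of the patch):

```julia
# Assumed: HYPREVector takes the same (comm, first_row, last_row) arguments
b = HYPREVector(comm, proc_row_first, proc_row_last)
vector_assembler = HYPRE.start_assemble!(b)
for cell in cell_for_proc
    fe = ...                     # local load vector for this cell (placeholder)
    global_dofs_for_cell = ...   # mapping to global dofs (placeholder)
    HYPRE.assemble!(vector_assembler, global_dofs_for_cell, fe)
end
HYPRE.finish_assemble!(vector_assembler)
```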
Nice. But would the first example really be that slow? HYPRE_IJMatrixAddToValues does not do any MPI communication, right? It is only at assemble_matrix(A.A) where the MPI communication happens? I think I like the first approach better since it more closely resembles the current assembler in Ferrite and does not require local dof ids.
Yeah, you are right, it is actually documented to be
This patch adds an assembler interface such that it is possible to directly assemble HYPRE(Vector|Matrix) without going through a sparse matrix data structure in Julia. This is done as follows:
1. Create a new (empty) matrix/vector using the constructor.
2. Create an assembler and initialize the assembly using HYPRE.start_assemble!.
3. Assemble all contributions using HYPRE.assemble!.
4. Finalize the assembly using HYPRE.finish_assemble!.
The assembler caches some buffers that are (re)used by every call to HYPRE.assemble!, so this should be efficient. All MPI communication should happen in the finalization step.
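As a minimal sketch of these four steps for a matrix (the row range `ilower:iupper` and the element loop are placeholders, not part of the patch):

```julia
# 1. Create a new (empty) matrix; each rank owns rows ilower:iupper (assumed)
A = HYPREMatrix(comm, ilower, iupper)
# 2. Create an assembler and initialize the assembly
assembler = HYPRE.start_assemble!(A)
# 3. Assemble all contributions (element matrix and dofs are placeholders)
for element in elements
    ke = ...
    dofs = ...
    HYPRE.assemble!(assembler, dofs, ke)
end
# 4. Finalize the assembly (MPI communication should happen here)
HYPRE.finish_assemble!(assembler)
```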
Force-pushed from f0366fb to 9a91f0b.