Assembly directly to HYPREMatrix. #5

Merged: 1 commit merged into master from fe/assembly on Oct 12, 2022
Conversation

fredrikekre (Owner)

No description provided.

@fredrikekre (Owner, Author)

cc @lijas @termi-official

codecov bot commented Oct 11, 2022

Codecov Report

Base: 74.39% // Head: 75.97% // Increases project coverage by +1.58% 🎉

Coverage data is based on head (9a91f0b) compared to base (19bfeaf).
Patch coverage: 98.48% of modified lines in pull request are covered.

Additional details and impacted files
@@            Coverage Diff             @@
##           master       #5      +/-   ##
==========================================
+ Coverage   74.39%   75.97%   +1.58%     
==========================================
  Files           4        4              
  Lines         984     1053      +69     
==========================================
+ Hits          732      800      +68     
- Misses        252      253       +1     
Impacted Files     Coverage Δ
src/HYPRE.jl       98.80% <98.48%> (-0.08%) ⬇️
src/LibHYPRE.jl    48.27% <0.00%> (+5.96%) ⬆️


@fredrikekre (Owner, Author)

With this you can either assemble every element contribution directly (might be too slow due to MPI communication, but requires less memory):

A = HYPREMatrix(comm, proc_row_first, proc_row_last)
hypre_assembler = HYPRE.start_assemble!(A)
for cell in cell_for_proc
    ke = ...                    # local element matrix
    global_dofs_for_cell = ...  # global dof indices for this cell
    HYPRE.assemble!(hypre_assembler, global_dofs_for_cell, ke)
end
HYPRE.finish_assemble!(hypre_assembler)

or assemble the processor-local matrix in one go:

A = HYPREMatrix(comm, proc_row_first, proc_row_last)
hypre_assembler = HYPRE.start_assemble!(A)
Alocal = process_local_sparsity_pattern(...)  # sparse matrix over the process-local dofs
ferrite_assembler = start_assemble(Alocal)
for cell in cell_for_proc
    ke = ...                   # local element matrix
    local_dofs_for_cell = ...  # process-local dof indices for this cell
    Ferrite.assemble!(ferrite_assembler, local_dofs_for_cell, ke)
end
global_dofs_for_proc = ...  # local-to-global dof mapping for this process
HYPRE.assemble!(hypre_assembler, global_dofs_for_proc, Alocal)
HYPRE.finish_assemble!(hypre_assembler)
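
The same assembler pattern should also cover the right-hand side (the commit message below mentions HYPRE(Vector|Matrix)). A minimal sketch, assuming a HYPREVector constructor that takes the same (comm, first_row, last_row) arguments and that HYPRE.assemble! accepts the global dof indices together with the local load vector fe:

A = HYPREMatrix(comm, proc_row_first, proc_row_last)
b = HYPREVector(comm, proc_row_first, proc_row_last)
A_assembler = HYPRE.start_assemble!(A)
b_assembler = HYPRE.start_assemble!(b)
for cell in cell_for_proc
    ke, fe = ...               # local element matrix and load vector
    global_dofs_for_cell = ...
    HYPRE.assemble!(A_assembler, global_dofs_for_cell, ke)
    HYPRE.assemble!(b_assembler, global_dofs_for_cell, fe)
end
HYPRE.finish_assemble!(A_assembler)
HYPRE.finish_assemble!(b_assembler)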

lijas commented Oct 12, 2022

Nice. But would the first example really be that slow? HYPRE_IJMatrixAddToValues does not do any MPI communication, right? It is only at assemble_matrix(A.A) that the MPI communication happens?

I think I like the first approach better since it more closely resembles the current assembler in Ferrite and does not require local dof ids.

@fredrikekre (Owner, Author)

Yeah, you are right; it is actually documented to be

Not collective

See HYPRE_IJMatrixAddToValues.

This patch adds an assembler interface such that it is possible to
directly assemble HYPRE(Vector|Matrix) without going through a sparse
matrix data structure in Julia. This is done as follows:

 1. Create a new (empty) matrix/vector using the constructor.
 2. Create an assembler and initialize the assembly using
    HYPRE.start_assemble!.
 3. Assemble all contributions using HYPRE.assemble!.
 4. Finalize the assembly using HYPRE.finish_assemble!.

The assembler caches some buffers that are (re)used by every call to
HYPRE.assemble!, so this should be efficient. All MPI communication
should happen in the finalization step.
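
As a rough sketch, the four steps above map to code along the following lines (comm, first_row, last_row, element_matrices, and global_dofs are illustrative placeholders, not names from the package):

# 1. Create a new (empty) matrix for the rows owned by this process.
A = HYPREMatrix(comm, first_row, last_row)
# 2. Create an assembler and initialize the assembly.
assembler = HYPRE.start_assemble!(A)
# 3. Assemble all contributions.
for (ke, dofs) in zip(element_matrices, global_dofs)
    HYPRE.assemble!(assembler, dofs, ke)
end
# 4. Finalize the assembly (this is where the MPI communication should happen).
HYPRE.finish_assemble!(assembler)
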
fredrikekre changed the title from "WIP: Assembly directly to HYPREMatrix." to "Assembly directly to HYPREMatrix." on Oct 12, 2022
fredrikekre merged commit 3247480 into master on Oct 12, 2022
fredrikekre deleted the fe/assembly branch on Oct 12, 2022 at 15:34