Implement a proper sparsity pattern #888
Conversation
Supersedes #596?
Thanks for this one Fredrik! It looks really awesome. :) I roughly went over the implementation and have some things I would like to discuss.
Since I could not annotate the files: I think you missed updating the DG example and the integration test.
Codecov Report
Attention: Patch coverage is

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##           master     #888      +/-   ##
==========================================
- Coverage   93.79%   93.69%   -0.11%
==========================================
  Files          36       38       +2
  Lines        5499     5723     +224
==========================================
+ Hits         5158     5362     +204
- Misses        341      361      +20
```

☔ View full report in Codecov by Sentry.
Before cleaning up and finishing this PR there are some things I would like input on:
I think this is a good idea not only for this reason. New users might not understand the difference between sparsity pattern and the actual sparse matrix and might try to pass the pattern to the assembler anyway. Alternatively we can also not error in this case and just assemble into a standard matrix format, which can be queried by
I think this is a great convenience for the large portion of our user base.
I think I am not understanding the issue here. If we want to have a very specific matrix, then we should be specific here. Not sure if
Nice work @fredrikekre!
Perhaps a stupid question, but would it not make sense to support
The issue is that it looks ugly to have something like

```julia
A = create_matrix(BlockMatrix{Float64, Matrix{SparseMatrixCSR{1, Float64, Int64}}}, sparsity_pattern)
```

But with aliases we could have e.g.

```julia
A = create_matrix(MatrixTypes.BlockMatrix{MatrixTypes.CSRMatrix{Float64, Int}}, sparsity_pattern)
```

A method for that would still be defined in an extension. The point is that type parameters aren't always "user-friendly", so with MatrixTypes we could normalize it a bit. But we can do this later, as Knut points out.
The matrix type doesn't necessarily map directly to one pattern type. And if you want a specific pattern type, why not call that constructor directly? As it is right now,
If
`BlockSparsityPattern` is in Ferrite now, but even with the hack you could still call the constructor (though you would get an error telling you to load the needed package, which you would have to do anyway in order to pass the matrix type?).
Note that the assembler itself is still tightly coupled to `SparseMatrixCSC`.
Very nice description (and work on this PR, looking forward to the benchmarks 🚀)
Also added some suggestions to make it faster to get to the main points at the end of the docs!
> ## Sparsity pattern
>
> The sparse structure of the linear system depends on many factors such as e.g. the weak
I'm a bit torn here since the derivative is not always the definition of the matrix (but implicitly assumed in the text below, e.g. the reference to Poisson's equation). Yet, I find it logical to introduce the coupling via the derivative, e.g.:

> In general, a finite element problem requires solving an equation system $f(u) = 0$ to find $u$, or, for linear problems, a system matrix is constructed as the derivative $K = \partial f / \partial u$. Each entry, $K_{ij} = \partial f_i / \partial u_j$, is then structurally nonzero only if the residual component $f_i$ depends on the degree of freedom $u_j$.

With the current text, it is for example not obvious why an entry can only be nonzero if dofs $i$ and $j$ couple (i.e. what it means that "$i$ and $j$ couple").

(Not sure if it makes sense to discuss cases where the stiffness matrix is defined via alternative techniques, such as quasi-Newton methods, here; perhaps just mention that when discussing the `coupling` keyword?)
This point also caught my attention. I would not even go down to nonlinear problems here. The point is really about linear operators, which either appear naturally as part of a PDE description (see e.g. the Navier-Stokes example, the Poisson problem, or even linear elasticity) or in the linearization (e.g. the second variation in energy-based elasticity). Hence the sparse matrix is not directly about linear systems, but about the representation of the discretization of an operator, which allows e.g. efficient linear solves and fast simulations. So I have something in this direction in my head:

```diff
- The sparse structure of the linear system depends on many factors such as e.g. the weak
+ The sparse structure of linear operators in the context of the simulation of partial differential equations depends on many factors such as e.g. the weak
```

but it needs further refinement. Does this make sense for you guys?
I have another consideration which came to my mind: is
To me, it makes sense to have this function do the "right thing" (such that it will work) given the inputs. I thought that if you give it a dofhandler with DG interpolations it will call. For advanced use-cases (for example if you have additional info that can increase the sparsity), I think the separation with the extra functions is really nice, but I don't think we need to add extra convenience for many different cases (at least not yet).
Yea, right now
This only happens if you also pass the topology. You cannot differentiate between DG and "normal" P0 on the interpolation level, since the coupling is a property of the weak form and not of the interpolation.
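As a sketch of this point (hedged: the function names below are the ones introduced by this PR, but the exact signatures are my assumption), cross-element coupling for DG is requested explicitly by passing the topology when adding interface entries:

```julia
# Hedged sketch -- function names from this PR, signatures assumed.
dsp = init_sparsity_pattern(dh)
add_cell_entries!(dsp, dh)                 # within-cell coupling only
add_interface_entries!(dsp, dh, topology)  # facet-neighbor coupling for DG
K = allocate_matrix(dsp)
```

Without the `add_interface_entries!` call, a P0 discretization and a DG discretization would produce the same (block-diagonal) pattern.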
Any thoughts on #888 (comment)? Perhaps also change
I am not sure if there is really any ambiguity in the word
I think
I agree with @lijas that I find that
Force-pushed from 14c5073 to eb77210.
Force-pushed from 7b82c7f to 7220095.
Looks really cool already! Found some things which I want to discuss or which I think need some clarification.
I am a bit uneasy about the HeapAllocator stuff. While I understand it was fun to write, is there really no way that whatever this is used for can be done in pure, safe Julia?
Benchmark results (the result tables were not captured in this export):

PR, one matrix
master, one matrix
PR, two matrices
master, two matrices
(70x70x70 OOM killed)

Code for PR:

```julia
using Ferrite

n = 10
const grid = generate_grid(Hexahedron, (n, n, n))
const top = ExclusiveTopology(grid)
const dh = DofHandler(grid)
add!(dh, :u, Lagrange{RefHexahedron, 2}()^3)
add!(dh, :p, Lagrange{RefHexahedron, 1}())
close!(dh)
const ch = ConstraintHandler(dh)
const facets = collect_periodic_facets(grid)
add!(ch, PeriodicDirichlet(:u, facets))
close!(ch)

function single_matrix(dh, ch)
    dsp = init_sparsity_pattern(dh)
    add_cell_entries!(dsp, dh, ch)
    add_constraint_entries!(dsp, ch)
    K = allocate_matrix(dsp)
    return K
end

function double_matrix(dh, ch)
    dsp = init_sparsity_pattern(dh)
    add_cell_entries!(dsp, dh, ch)
    add_constraint_entries!(dsp, ch)
    K = allocate_matrix(dsp)
    M = allocate_matrix(dsp)
    return K, M
end

@time single_matrix(dh, ch)
@time double_matrix(dh, ch)
```

Code for master:

```julia
using Ferrite

n = 10
const grid = generate_grid(Hexahedron, (n, n, n))
const top = ExclusiveTopology(grid)
const dh = DofHandler(grid)
add!(dh, :u, Lagrange{RefHexahedron, 2}()^3)
add!(dh, :p, Lagrange{RefHexahedron, 1}())
close!(dh)
const ch = ConstraintHandler(dh)
const facets = collect_periodic_facets(grid)
add!(ch, PeriodicDirichlet(:u, facets))
close!(ch)

function single_matrix(dh, ch)
    K = create_sparsity_pattern(dh, ch)
    return K
end

function double_matrix(dh, ch)
    K = create_sparsity_pattern(dh, ch)
    M = create_sparsity_pattern(dh, ch)
    return K, M
end

@time single_matrix(dh, ch)
@time double_matrix(dh, ch)
```
I simplified it greatly and in particular removed the manual malloc and pointer computation. Now it is just a way to batch allocate. It is very similar to what Knut did in #974, except that it batches based on size so that it becomes a bit easier to reuse memory. I think we can merge the two implementations in the future.
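A pure-Julia sketch of the size-batched reuse idea described here (illustrative only, not the PR's actual implementation; the names `acquire!`/`release!` are hypothetical):

```julia
# Buffers are pooled by power-of-two size class, so a freed buffer can be
# handed out again for any request that rounds up to the same class.
struct BatchAllocator
    pools::Dict{Int, Vector{Vector{Int}}}  # size class => free buffers
end
BatchAllocator() = BatchAllocator(Dict{Int, Vector{Vector{Int}}}())

function acquire!(a::BatchAllocator, n::Int)
    sz = nextpow(2, max(n, 1))              # batch by size class
    free = get!(() -> Vector{Int}[], a.pools, sz)
    return isempty(free) ? Vector{Int}(undef, sz) : pop!(free)
end

# Return a buffer to the pool for its size class.
release!(a::BatchAllocator, buf::Vector{Int}) =
    push!(get!(() -> Vector{Int}[], a.pools, length(buf)), buf)
```

The point is only that batching by size makes reuse trivial: every buffer in a pool is interchangeable.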
Force-pushed from c732b6c to 1aa0d84.
This patch changes the sparsity pattern interface to decouple pattern creation from matrix instantiation. This will make it easier to support many different types of matrices from the same sparsity pattern data structure. Main additions:

- New sparsity pattern data structures `SparsityPattern` and `BlockSparsityPattern`.
- New functions for adding entries: `add_cell_entries!`, `add_interface_entries!`, `add_constraint_entries!`, and `Ferrite.add_entry!`. There is also `add_sparsity_entries!`, which calls the first three methods.
- New function `allocate_matrix` which instantiates a matrix of the chosen type from a sparsity pattern.
- `allocate_matrix` also has a convenience method that can be used when no customization of the pattern is necessary; therefore, the new `allocate_matrix` can be used as a direct replacement for the old `create_sparsity_pattern`.
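Roughly, the old one-liner and the new decoupled workflow compare as follows (a sketch based on the function names in this PR; exact signatures may differ):

```julia
# Old: pattern creation and matrix instantiation fused into one call.
# K = create_sparsity_pattern(dh)

# New: build the pattern explicitly, then instantiate the matrix from it.
dsp = init_sparsity_pattern(dh)
add_sparsity_entries!(dsp, dh)  # cell + interface + constraint entries
K = allocate_matrix(dsp)        # instantiate, e.g. a SparseMatrixCSC

# Or the convenience method, a drop-in replacement for the old call:
K = allocate_matrix(dh)
```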
Thanks for the comprehensive benchmarks!
And another question: why is the sparsity pattern row-major? Seems like this adds a small perf hit with the extra buffer for standard CSC matrices?
It is a trade-off between memory and time -- master uses quite a lot more memory, and in my experience that has been the limiting factor. Compared to a linear solve of this size I am not too worried. I suppose one should also factor in that we now have quite a lot more flexibility. We can keep the old memory-hungry alg for
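For context on the "extra buffer": a row-major pattern stores, per row, a list of column indices, while CSC needs column pointers, so conversion takes a counting pass plus a scatter pass. A minimal, self-contained sketch (not Ferrite's code):

```julia
# Convert a row-major pattern (sorted column indices per row) into CSC
# index arrays. The `colptr` counting buffer is the extra cost discussed.
function rows_to_csc(rows::Vector{Vector{Int}}, ncols::Int)
    colptr = zeros(Int, ncols + 1)
    for cols in rows, j in cols
        colptr[j + 1] += 1          # count entries per column
    end
    colptr[1] = 1
    for j in 2:(ncols + 1)
        colptr[j] += colptr[j - 1]  # prefix sum -> column pointers
    end
    rowval = Vector{Int}(undef, colptr[end] - 1)
    next = copy(colptr)             # next free slot per column
    for (i, cols) in enumerate(rows), j in cols
        rowval[next[j]] = i         # scatter row index into its column
        next[j] += 1
    end
    return colptr, rowval
end

# e.g. rows_to_csc([[1, 2], [2]], 2) == ([1, 2, 4], [1, 1, 2])
```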
That does look a bit strange, I will have a look again.
I didn't run the function first, so almost all the allocations are from compilation. I can update the table later.
I also don't think that is worth it (and there is no doubt that this PR is an improvement; I was more curious whether something could be done to get the same perf).
Without compilation (result tables were not captured in this export):

PR, one matrix
master, one matrix
PR, two matrices
master, two matrices
This PR introduces

```julia
struct SparsityPattern
```

as a way to work with the pattern directly instead of producing a matrix directly. Some advantages of this are:

- The pattern is `ndofs(dh) x ndofs(dh)` so it is easy to insert new dofs.
- Previously everything was tied to `SparseMatrixCSC` (e.g. how element connections and constraint condensation were implemented). With this patch everything is handled on the `SparsityPattern` level, independent of what type of matrix will be instantiated in the end. All that is needed to support a new matrix type is now to write a `MatrixType(::SparsityPattern)` constructor.
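As an illustration of that last point (hedged: the internal layout of `SparsityPattern` is not specified in this thread, so a toy stand-in with explicit fields is used; only the constructor pattern itself is the point):

```julia
using SparseArrays

# Toy stand-in for the sparsity pattern: column indices per row.
struct ToyPattern
    rows::Vector{Vector{Int}}  # sorted column indices of each row
    ncols::Int
end

# "All that is needed to support a new matrix type is a constructor":
# here, building a SparseMatrixCSC whose structural entries are stored zeros.
function (::Type{SparseMatrixCSC{Float64, Int}})(p::ToyPattern)
    I, J = Int[], Int[]
    for (i, cols) in enumerate(p.rows), j in cols
        push!(I, i)
        push!(J, j)
    end
    return sparse(I, J, zeros(Float64, length(I)), length(p.rows), p.ncols)
end
```

A generic `allocate_matrix(::Type{M}, pattern)` can then simply call `M(pattern)`, and packages add support for their matrix types by defining such a constructor in an extension.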