More efficient way of creating condensed sparsity pattern #436
Conversation
Codecov Report
Base: 91.37% // Head: 92.00% // Increases project coverage by +0.63%.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #436      +/-   ##
==========================================
+ Coverage   91.37%   92.00%    +0.63%
==========================================
  Files          22       22
  Lines        3258     3703      +445
==========================================
+ Hits         2977     3407      +430
- Misses        281      296       +15
☔ View full report at Codecov.
```julia
# Store linear constraint index for each constrained dof
distribute = Dict{Int,Int}(acs[c].constrained_dof => c for c in 1:length(acs))

# Adding new entries to K is extremely slow, so create a new sparsity triplet
# for the condensed sparsity pattern
N = length(acs)*2 # TODO: Better size estimate for additional condensed sparsity pattern.
I = Int[]; resize!(I, N)
```
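For context, the pre-allocate-and-index pattern started above can be sketched roughly as follows. This is an illustrative sketch, not the actual implementation in the patch; `new_entries` and `collect_triplets` are made-up names standing in for the (row, col) pairs produced by the condensation:

```julia
# Rough sketch (not the actual patch): fill the triplet buffers up to the size
# estimate and grow them manually if the estimate turns out to be too small.
function collect_triplets(new_entries)
    N = 2 * length(new_entries)       # rough size estimate, as in the patch
    I = Int[]; resize!(I, N)
    J = Int[]; resize!(J, N)
    cnt = 0
    for (r, c) in new_entries
        cnt += 1
        if cnt > length(I)            # estimate exceeded: grow the buffers
            resize!(I, 2 * length(I))
            resize!(J, 2 * length(J))
        end
        I[cnt] = r
        J[cnt] = c
    end
    resize!(I, cnt); resize!(J, cnt)  # trim to the number of entries actually added
    return I, J
end

collect_triplets([(1, 5), (5, 1), (2, 7), (7, 2)])  # dummy data for illustration
```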
Could use `sizehint!` here instead and then use `push!` instead of `setindex!`.
Yes, and perhaps in `_create_sparsity_pattern()` as well? How much does `push!` grow the vector if it grows beyond the size hint?
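For reference, the suggested variant would look roughly like this (a sketch under the same made-up names as above, not the actual code). Once the hint is exceeded, `push!` falls back to the usual amortized buffer growth, so correctness is unaffected:

```julia
# Sketch of the reviewer's suggestion: reserve capacity with sizehint! and
# append with push!, which removes the manual counter/resize bookkeeping.
function collect_triplets_pushed(new_entries)
    I = Int[]; J = Int[]
    sizehint!(I, 2 * length(new_entries))  # same rough size estimate as before
    sizehint!(J, 2 * length(new_entries))
    for (r, c) in new_entries
        push!(I, r)   # grows the buffer automatically if the hint is exceeded
        push!(J, c)
    end
    return I, J
end
```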
This patch removes some unnecessary allocations of a sparse matrix column which were needed before, when the sparse matrix was modified in place. Since #436 we instead create a new matrix for the new entries and add it to the original matrix at the end.
After #436 the condensation of the pattern is done by creating a new matrix which is then added to the original matrix. Before this patch there was still a check for whether each new entry already exists in the original matrix before adding it to the new matrix. With the approach implemented in #436 this seems unnecessary, and this patch removes the check, which also removes some extra complexity from the code. A data point which supports this is the Stokes flow example in the documentation. In that problem, the condensation adds 32k entries, of which 30k are new and 2k already exist in the original matrix. Checking whether all 32k elements exist is much more expensive than simply including the extra 2k entries in the new matrix. The new approach reduces the time for creating the combined matrix from 4ms to 3.2ms. Matrix creation isn't a bottleneck by any means, but it is nice to see that simpler code also gives better performance.
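To illustrate why the pre-check can be dropped, here is a small sketch with made-up 3x3 matrices (not the Stokes problem): adding the matrix holding the new entries to the original one merges any overlapping entries, so the combined pattern is simply the union of the two.

```julia
using SparseArrays

K    = sparse([1, 2], [1, 2], [1.0, 1.0], 3, 3)  # original pattern: (1,1), (2,2)
Knew = sparse([2, 3], [2, 3], [1.0, 1.0], 3, 3)  # new entries: (2,2) already exists in K
Kc   = K + Knew                                  # union of the two patterns
@assert length(nonzeros(Kc)) == 3                # stored entries: (1,1), (2,2), (3,3)
```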
Creating the sparsity pattern with affine constraints was extremely slow.
There is still a lot of %gc time, which I don't understand.