Specialisation of mul! for left multiplication with scalars #173
Conversation
This allows the linear combination to inherit the potentially rich structure of its terms (e.g. when the terms are BlockArrays).
Codecov Report
@@            Coverage Diff             @@
##           master     #173      +/-   ##
==========================================
+ Coverage   98.40%   98.54%   +0.13%
==========================================
  Files          15       15
  Lines        1193     1307     +114
==========================================
+ Hits         1174     1288     +114
  Misses         19       19
Continue to review full report at Codecov.
Sounds like a good idea. Perhaps, the …
Also, since this is a new feature, feel free to bump the minor version right away, and leave a comment in the …
I am getting some interesting results. The method at LinearMaps.jl/src/conversion.jl, line 76 (commit f8eaf98), is only called when all coefficients in the linear combination are unity and when the linear combination is entered by hand, i.e. when the storage type is … However, the conditions for entering this code path are very brittle, and in all other cases the fallback implementation is about 300× slower. In terms of memory, my method is on par.
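For concreteness, a small illustrative sketch (not the benchmark from this thread) of the two situations being contrasted, using only the public LinearMaps API:

using LinearAlgebra, LinearMaps

n = 256
A, B = rand(n, n), rand(n, n)

unit_lc   = LinearMap(A) + LinearMap(B)          # all coefficients are one
scaled_lc = 2 * LinearMap(A) + 3 * LinearMap(B)  # non-unit coefficients

M1 = Matrix(unit_lc)    # may hit the specialised conversion method
M2 = Matrix(scaled_lc)  # takes the generic fallback path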
This is getting very close to what I imagined. I think we should make the most generic …
Fully agree. Came to the same conclusion as I progressed. I will implement these suggestions in the next couple of days.
There is actually one thing I can't figure out: if a user defines a custom LinearMap CustomMap in an external package, the docs say that implementing mul! suffices. However, assume that this custom map is then scaled, giving rise to a scaled wrapper around CustomMap. In other words, when the custom map appears at a lower level of a more complicated LinearMap, the user-supplied mul! is not reached. Am I missing something here? Currently I am working around this by not only supplying mul! but also _unsafe_mul! in LiftedMaps.jl.
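For reference, a minimal sketch of that kind of workaround; MyLifted is a made-up stand-in for the actual LiftedMaps.jl type, and _unsafe_mul! is an unexported LinearMaps internal:

using LinearAlgebra, LinearMaps

struct MyLifted{T} <: LinearMaps.LinearMap{T}
    A::Matrix{T}
end
Base.size(L::MyLifted) = size(L.A)

# documented user interface: the 3-arg mul!
LinearAlgebra.mul!(y::AbstractVector, L::MyLifted, x::AbstractVector) = mul!(y, L.A, x)

# workaround: also provide the unexported internal entry point
LinearMaps._unsafe_mul!(y::AbstractVector, L::MyLifted, x::AbstractVector) = mul!(y, L.A, x)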
Can you provide a minimal example? Because of LinearMaps.jl/src/LinearMaps.jl, lines 246 to 247 (commit aa54eb3), a "missing" _unsafe_mul! falls through to LinearMaps.jl/src/LinearMaps.jl, lines 248 to 250 (commit aa54eb3), so, unless I am missing some path, it should really end up with the user-defined mul!.
I see... I didn't bother drilling down deeper after encountering … The problem still exists for mapmat and mapnum, I think. For example, MWE:
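A hedged sketch of this kind of MWE, where the illustrative CustomMap only implements the documented 3-arg mul!; the print statement makes it visible whether that method is actually reached:

using LinearAlgebra, LinearMaps

struct CustomMap{T} <: LinearMaps.LinearMap{T}
    A::Matrix{T}
end
Base.size(M::CustomMap) = size(M.A)

function LinearAlgebra.mul!(y::AbstractVector, M::CustomMap, x::AbstractVector)
    println("user-defined mul! called")
    return mul!(y, M.A, x)
end

M = CustomMap(rand(4, 4))
x = rand(4)
mul!(zeros(4), M, x)       # reaches the user-defined method and prints

Y, X = zeros(4, 4), rand(4, 4)
mul!(Y, M, X)              # map-times-matrix: is the user method still reached?
mul!(Y, 2M, X)             # same question for the scaled map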
The same is true for matnum, which is also written in terms of … It seems to me that …
Awesome, thanks for finding and sharing that. There are two things that bite each other here. The …
Yea, perhaps only …
Yes, that seems to be the way out. Just a shame …
I left a few remarks and suggestions. This looks pretty good, and we should get it done and out soon (edit: this is referring to myself and not putting pressure on you, of course!).
A few more comments, sorry for being so picky. I'm not 100% sure, but I believe we should provide targeted methods for 3-arg multiplication, because some map types may not come with 5-arg implementations, and calling 5-arg methods (even with default values) requires great care to ensure that no intermediate arrays are allocated.
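To illustrate the concern, a hypothetical sketch (made-up helper names, not LinearMaps.jl internals) of why routing everything through the 5-arg form y = α*A*x + β*y can allocate where a dedicated 3-arg method does not:

using LinearAlgebra

# 3-arg form: overwrite y in place; a map only needs to know how to do y = A*x
threearg!(y, A, x) = mul!(y, A, x)

# naive 5-arg fallback for a map that only provides y = A*x: since β may be
# nonzero, the old contents of y are needed, so an intermediate copy is made
function fivearg!(y, A, x, α, β)
    tmp = similar(y)            # intermediate allocation
    mul!(tmp, A, x)
    y .= α .* tmp .+ β .* y
    return y
end

y, A, x = zeros(3), rand(3, 3), rand(3)
threearg!(y, A, x)              # allocation-free
fivearg!(y, A, x, 1, 0)         # still allocates tmp, even with default α, β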
I need to fix the blockmul case, but otherwise this is really close to being finished. It is somewhat unfortunate that even for matrices, the blockmul code falls back to …
For me, all your benchmarks from #173 (comment) are now on the order of milliseconds, with very small overhead for the linear combination of the doubled maps. I used this code for benchmarking:

using LinearAlgebra, LinearMaps, BenchmarkTools
n = 1024
A = [rand(n,n) for i in 1:10]
LC1 = sum((LinearMap(a) for a in A)) # creates a LinearMapTuple
LC2 = sum((2*LinearMap(a) for a in A)) # creates a LinearMapTuple
LC3 = sum([LinearMap(a) for a in A]) # creates a LinearMapVector
LC4 = sum([2*LinearMap(a) for a in A]) # creates a LinearMapVector
@btime B1 = Matrix{Float64}(LC1);
@btime B2 = Matrix{Float64}(LC2);
@btime B3 = Matrix{Float64}(LC3);
@btime B4 = Matrix{Float64}(LC4);
12.059 ms (29 allocations: 8.00 MiB)
12.252 ms (29 allocations: 8.00 MiB)
12.147 ms (39 allocations: 8.00 MiB)
12.884 ms (39 allocations: 8.00 MiB)

I agree that the slight regression for the one special case is acceptable, given that this "recurses" into the …
In Base.LinearAlgebra, mul! has specialisations for when s in mul!(C, s, B, alpha, beta) is a scalar. This is needed to make mul! essentially act like in-place addition (add! does not seem to exist, and add!(C, B) is equivalent to mul!(C, 1, B, 1, 1)). I suggest echoing the treatment of this special but important case in LinearMaps as well. As a motivation, what I want is the conversion of a LinearCombination to a Matrix with minimal overhead, as opposed to an implementation that does a lot of spurious copying and allocating and does not allow for clever specialisations in case map is a WrappedMap around a sparse matrix.

Closes #181.
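A rough sketch of the contrast described above, under the assumption that each term of the linear combination wraps a plain matrix; the helper names dense_accumulate and dense_naive, and the coefficients/matrices arguments, are illustrative and not part of LinearMaps.jl:

using LinearAlgebra

# accumulate every term in place via the scalar 5-arg mul!: M = c*A*1 + M*1
function dense_accumulate(coefficients, matrices)
    M = zeros(size(first(matrices))...)
    for (c, A) in zip(coefficients, matrices)
        mul!(M, c, A, true, true)   # no temporaries per term
    end
    return M
end

# naive alternative: every term is copied and scaled out of place before summing
dense_naive(coefficients, matrices) =
    sum(c * Matrix(A) for (c, A) in zip(coefficients, matrices))

As = [rand(100, 100) for _ in 1:4]
cs = [1.0, 2.0, 3.0, 4.0]
dense_accumulate(cs, As) ≈ dense_naive(cs, As)   # same result, fewer allocations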