Request For Comment: Arithmetic between Operators and LazyOperators #86
```diff
@@ -11,7 +11,7 @@ must be sorted.
 Additionally, a factor is stored in the `factor` field which allows for fast
 multiplication with numbers.
 """
-mutable struct LazyTensor{BL,BR,F,I,T} <: AbstractOperator{BL,BR}
+mutable struct LazyTensor{BL,BR,F,I,T} <: LazyOperator{BL,BR}
     basis_l::BL
     basis_r::BR
     factor::F
```
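The re-parenting in the hunk above assumes an abstract `LazyOperator` type introduced elsewhere in this PR; presumably it is declared along these lines (hypothetical sketch, not quoted from the diff):

```julia
# Hypothetical definition assumed by the diff above: a common abstract
# supertype for the lazy operator types (LazyTensor, LazySum, LazyProduct),
# sitting under the existing AbstractOperator hierarchy.
abstract type LazyOperator{BL,BR} <: AbstractOperator{BL,BR} end
```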
```diff
@@ -96,21 +96,45 @@ isequal(x::LazyTensor, y::LazyTensor) = samebases(x,y) && isequal(x.indices, y.indices)
 # Arithmetic operations
 -(a::LazyTensor) = LazyTensor(a, -a.factor)

-function +(a::LazyTensor{B1,B2}, b::LazyTensor{B1,B2}) where {B1,B2}
-    if length(a.indices) == 1 && a.indices == b.indices
+const single_dataoperator{B1,B2} = LazyTensor{B1,B2,F,I,Tuple{T}} where {B1,B2,F,I,T<:DataOperator}
+function +(a::T1,b::T2) where {T1 <: single_dataoperator{B1,B2},T2 <: single_dataoperator{B1,B2}} where {B1,B2}
+    if a.indices == b.indices
         op = a.operators[1] * a.factor + b.operators[1] * b.factor
         return LazyTensor(a.basis_l, a.basis_r, a.indices, (op,))
     end
-    throw(ArgumentError("Addition of LazyTensor operators is only defined in case both operators act nontrivially on the same, single tensor factor."))
+    LazySum(a) + LazySum(b)
 end
```

**Comment:** This is the only place I am not sure about defaulting to laziness. It's quite a special case, but I encounter it quite a bit. I suppose the reason to do lazy summing here is mainly to be consistent with the laziness-preserving principle. I have some code that makes use of the existing behavior, but of course I can still do this kind of concrete summing manually if I want to, so I'm not arguing hard to keep it. What are your thoughts?

**Reply:** My experience was that the previous implementation was very limiting, especially since the custom operators I have been playing around with were not `DataOperator`s but `AbstractOperator`s, where the operation is defined via a function rather than a matrix. These cannot be trivially added (except by using `LazySum`), so the above implementation fails. Also, `length(a.indices) == 1` is required, and I could imagine situations where one would like to add `LazyTensor`s containing more than one operator. However, one could perhaps keep the original behavior by dispatching on `LazyTensor`s containing only one `DataOperator`, that is, by adding a function like this (a draft; I'm not entirely sure it works):

```julia
const single_dataoperator{B1,B2} = LazyTensor{B1,B2,ComplexF64,Vector{Int64},Tuple{T}} where {B1,B2,T<:DataOperator}
function +(a::T1,b::T2) where {T1 <: single_dataoperator{B1,B2},T2 <: single_dataoperator{B1,B2}}
    if length(a.indices) == 1 && a.indices == b.indices
        op = a.operators[1] * a.factor + b.operators[1] * b.factor
        return LazyTensor(a.basis_l, a.basis_r, a.indices, (op,))
    end
    throw(ArgumentError("Addition of LazyTensor operators is only defined in case both operators act nontrivially on the same, single tensor factor."))
end
```

**Reply:** @mabuni1998 I think it's worth trying to keep the original behavior intact, as you suggest. If we can handle it via dispatch, we won't lose anything. Or am I missing some case here?

**Reply:** No, I don't think we will lose anything. I have implemented the above as:

**Reply:** Thanks for finding a way to keep the original behavior. This is not type-stable, but I can't think of an obvious way to make it otherwise, except by letting `LazyTensor` …

**Reply:** Probably won't be performance-critical, no, as you are most likely creating the operators once at the beginning of the simulation and then not changing them as you do multiplications etc.
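As a hedged usage sketch of the dispatch settled on above (assuming the `LazyTensor(basis_l, basis_r, indices, operators)` constructor used in the diff; the basis and operator choices are illustrative, not taken from the PR):

```julia
using QuantumOpticsBase

bf = FockBasis(2)
bs = SpinBasis(1//2)
bc = bf ⊗ bs

# Two LazyTensors each holding a single DataOperator on the SAME factor:
a = LazyTensor(bc, bc, [1], (destroy(bf),))
b = LazyTensor(bc, bc, [1], (create(bf),))
a + b  # single_dataoperator method: eagerly sums into one LazyTensor

# Single DataOperators on DIFFERENT factors: the same method now falls
# back to laziness instead of throwing, returning LazySum(a) + LazySum(c).
c = LazyTensor(bc, bc, [2], (sigmaz(bs),))
a + c
```

This is exactly the type instability mentioned in the thread: the method returns either a `LazyTensor` or a `LazySum` depending on runtime index values.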
```diff
-function -(a::LazyTensor{B1,B2}, b::LazyTensor{B1,B2}) where {B1,B2}
-    if length(a.indices) == 1 && a.indices == b.indices
+function -(a::T1,b::T2) where {T1 <: single_dataoperator{B1,B2},T2 <: single_dataoperator{B1,B2}} where {B1,B2}
+    if a.indices == b.indices
         op = a.operators[1] * a.factor - b.operators[1] * b.factor
         return LazyTensor(a.basis_l, a.basis_r, a.indices, (op,))
     end
-    throw(ArgumentError("Subtraction of LazyTensor operators is only defined in case both operators act nontrivially on the same, single tensor factor."))
+    LazySum(a) - LazySum(b)
 end
```
```diff
+function tensor(a::LazyTensor{B1,B2},b::AbstractOperator{B3,B4}) where {B1,B2,B3,B4}
+    if isequal(b,identityoperator(b))
+        btotal_l = a.basis_l ⊗ b.basis_l
+        btotal_r = a.basis_r ⊗ b.basis_r
+        LazyTensor(btotal_l,btotal_r,a.indices,(a.operators...,),a.factor)
+    elseif B3 <: CompositeBasis || B4 <: CompositeBasis
+        throw(ArgumentError("tensor(a::LazyTensor{B1,B2},b::AbstractOperator{B3,B4}) is not implemented for B3 or B4 being CompositeBasis unless b is identityoperator"))
+    else
+        a ⊗ LazyTensor(b.basis_l,b.basis_r,[1],(b,),1)
+    end
+end
```

**Comment:** How should we treat `LazyTensor` with non-equivalent BL and BR? The first if statement is only well defined for BL == BR, it seems.

**Reply:** Yeah, so `LazyTensor` with non-rectangular factors in the composite basis implicitly defines simple isometries, like you constructed in your new implementation of … I have an … Perhaps a nicer solution is to introduce a proper `LazyIdentity`/`LazyIsometry` operator that gets materialized only when necessary. Then we could implement …

**Reply:** @amilsted, could you put up a gist with … This discussion and …

**Reply:** I'd also say we can ignore the special … One alternative way of solving this would be doing something like #90, but that's just an idea. For now I'd say don't special-handle it, and remove the new … By the way, thanks for all the work @mabuni1998, and thanks for the reviews @amilsted @Krastanov. I appreciate it!

**Reply:** You're welcome! I have now pushed a new version with the above things fixed.

**Comment:** I would like to say that originally I put the check for `identityoperator` there because, in my own code, I noticed performance drops when doing stuff like the following (`WaveguideBasis` is a custom basis with operators defined as functions, not as matrices, so they have to be lazy): … Because here, `identityoperator(bc)` is explicitly stored in the resulting `LazyTensor` (there is a custom method creating the `LazyTensor` here). Probably this is better solved by allowing operators spanning multiple bases to be stored in `LazyTensor`. Right now, we explicitly check for this case and don't allow it. However, this would require extra work (and I don't know how much) in the `mul!` method, so let's leave it. This problem (I think) would not be solved by implementing `LazyIdentity`, since `identityoperator(bc)` would not return such an object. Also, I'm aware that in the above one could just use `embed` or explicitly define the `LazyTensor`; I simply wanted to make it easier for the user to get good performance.
```diff
+function tensor(a::AbstractOperator{B1,B2},b::LazyTensor{B3,B4}) where {B1,B2,B3,B4}
+    if isequal(a,identityoperator(a))
+        btotal_l = a.basis_l ⊗ b.basis_l
+        btotal_r = a.basis_r ⊗ b.basis_r
+        LazyTensor(btotal_l,btotal_r,b.indices.+length(a.basis_l.shape),(b.operators...,),b.factor)
+    elseif B1 <: CompositeBasis || B2 <: CompositeBasis
+        throw(ArgumentError("tensor(a::AbstractOperator{B1,B2},b::LazyTensor{B3,B4}) is not implemented for B1 or B2 being CompositeBasis unless a is identityoperator"))
+    else
+        LazyTensor(a.basis_l,a.basis_r,[1],(a,),1) ⊗ b
+    end
+end
```
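A hedged usage sketch of the two `tensor` methods above (illustrative names; the behavior of `LazyTensor ⊗ LazyTensor` in the final branch is assumed to be defined elsewhere in the PR):

```julia
using QuantumOpticsBase

bf = FockBasis(3)
bs = SpinBasis(1//2)
a = LazyTensor(bf ⊗ bs, bf ⊗ bs, [1], (destroy(bf),))

# An identity factor is absorbed into the index bookkeeping: the result is
# a LazyTensor on the enlarged basis, and no identity operator is stored.
a ⊗ identityoperator(bs)

# A general operator on a non-composite basis is wrapped as a one-factor
# LazyTensor and composed lazily.
a ⊗ sigmax(bs)
```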
```diff
 function *(a::LazyTensor{B1,B2}, b::LazyTensor{B2,B3}) where {B1,B2,B3}
     indices = sort(union(a.indices, b.indices))
```
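The multiplication merges the sets of factor indices on which the two operands act nontrivially; a minimal sketch of that bookkeeping in plain Julia:

```julia
# Index bookkeeping when composing two LazyTensors (illustration only):
# a acts on factors 1 and 3, b on factors 2 and 3; the product acts on all three.
a_indices = [1, 3]
b_indices = [2, 3]
indices = sort(union(a_indices, b_indices))  # [1, 2, 3]
```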
**Comment:** This is strange... I am probably missing something obvious, but it looks like these two variants should give exactly equal results for `CompositeBasis`?

**Reply:** Yes, I agree they should, but from my testing they don't. Try running the following example: (edit: I should specify that with the above addition the below example is obviously true, while in the current release version of QuantumOpticsBase, the below example won't return true.)

**Reply:** That's odd. Could it be that the eltype is different? Maybe that `identityoperator()` method is broken?

**Reply:** It's the data inside; they do not represent the same matrix. Maybe Julia's `UniformScaling` follows another convention than we expect, or maybe the example I'm making shouldn't return true? To make it even clearer, consider:

**Reply:** Oh... Yeah, I think in this case they actually should not be equal. It's kind of funny to even define `identityoperator()` in the rectangular case. Clearly it's not the identity!

**Reply:** To summarize, the two methods of `identityoperator()` above are only equivalent when the left and right composite bases have the same length, the same number of factors, and matching factor dimensions. Otherwise the first definition (the original one) is correct - it actually produces an identity operator in the square case - where the second is not. The second version also fails when the left basis has more factors than the right. Should we consider raising a warning if `identityoperator()` is used in rectangular cases?

**Reply:** +1 for a warning

**Reply:** I see, good point. But then I would like some input on how we should treat `LazyTensor` with non-equivalent left and right bases. See below.

**Reply:** Okay, so that method here should be removed again.
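The summary above can be made concrete with plain Julia sparse matrices (no QuantumOpticsBase needed). Take a left space of dimension 4 = 2·2 and a right space of dimension 6 = 2·3, and build a rectangular "identity" both ways:

```julia
using LinearAlgebra, SparseArrays

# Variant 1: one rectangular "identity" on the full 4x6 space.
A = sparse(1.0I, 4, 6)   # ones at (1,1), (2,2), (3,3), (4,4)

# Variant 2: tensor (Kronecker) product of per-factor rectangular
# "identities", with factor shapes 2->2 and 2->3.
B = kron(sparse(1.0I, 2, 2), sparse(1.0I, 2, 3))  # ones at (1,1), (2,2), (3,4), (4,5)

A == B  # false: the nonzeros land in different columns
```

In the square case (matching factor dimensions) the two constructions coincide; in the rectangular case neither is a true identity and they generally disagree, which is exactly why the thread leans toward warning on, or removing, the rectangular method.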