Always return sparse matrices for `spre`, `spost`, and `sprepost` #162
Conversation
Codecov Report

Attention: Patch coverage is …

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #162      +/-   ##
==========================================
+ Coverage   92.89%   92.94%   +0.05%
==========================================
  Files          27       28       +1
  Lines        1956     1971      +15
==========================================
+ Hits         1817     1832      +15
  Misses        139      139
```

☔ View full report in Codecov by Sentry.
Seems that …
So the main change is to define `sparse(transpose(sparse(…)))` or `transpose(sparse(…))` depending on the Julia version? And also to convert a dense matrix to a sparse one. Are these the main changes?

Are these changes compatible with GPU arrays? As far as I remember, `sparse(transpose(sparse(…)))` was needed for GPUs.
The main change is to remove the unnecessary `sparse` conversions. That is, instead of calling `kron(sparse(transpose(sparse(A))), B)` like we did before, we can directly call `kron(transpose(sparse(A)), B)` starting from a newer Julia version, and we can even drop the inner conversion: `kron(transpose(A), B)`. But we still need to keep the old Julia versions supported, so I made multiple dispatch for the different versions.

As for the GPU support, I have merged from … Maybe they extend some methods in …
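The simplification can be illustrated with a small self-contained sketch using plain `SparseArrays` (not QuantumToolbox's actual internals); on recent Julia versions `kron` accepts a lazy `Transpose` of a sparse matrix directly:

```julia
using SparseArrays, LinearAlgebra

A = sprand(4, 4, 0.5)
B = sprand(4, 4, 0.5)

# Old pattern: an extra sparse conversion wrapped around the transpose
old = kron(sparse(transpose(sparse(A))), B)

# Newer pattern: pass the lazy transpose straight to kron,
# avoiding the intermediate copies
new = kron(transpose(A), B)

old ≈ new   # same superoperator building block
```

Both calls produce the same values; the second simply skips materializing the transposed sparse matrix before the Kronecker product.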
Ok. Is it also supported for GPUs?
I also tried the following cases:

```julia
using CUDA, QuantumToolbox

Xs = cu(sigmax())                   # sparse
Xd = cu(sparse_to_dense(sigmax()))  # dense

spre(Xs)
spre(Xd)
spost(Xs)
spost(Xd)
```

They all worked and return … on my local PC.
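The same behavior can be checked on the CPU without a GPU. A minimal sketch, assuming the `QuantumObject` returned by `spre`/`spost` stores its underlying matrix in the `data` field:

```julia
using QuantumToolbox, SparseArrays

Xs = sigmax()                   # sparse Operator
Xd = sparse_to_dense(sigmax())  # dense Operator

# With this PR, the superoperator data should be sparse
# regardless of whether the input was dense or sparse
issparse(spre(Xs).data)
issparse(spre(Xd).data)
issparse(spost(Xs).data)
issparse(spost(Xd).data)
```

All four checks should hold once `spre` no longer mirrors the input's storage type.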
Perfect. Let’s wait for the tests to run, and then we can merge it.
@albertomercurio
Currently, the return matrix type of `spre` depends on the input, but `spost` and `sprepost` always return a sparse matrix. For superoperators, sparse matrices are more practical for large systems, so I suggest always returning a sparse matrix for `spre` as well.

Also, I found out that if `A` is already a sparse matrix, `sparse(A)` makes a copy of it. Since we perform the Kronecker product right away when generating superoperators, I don't think we need this copy.

Therefore, I made intrinsic functions for `spre`, `spost`, and `sprepost` with multiple dispatch, to deal with the different types (dense or sparse) of the input `Operator`.

I believe this could improve the performance when creating the Liouvillian.
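The dispatch idea can be sketched as follows; `_spre` and the identity construction here are illustrative stand-ins, not the PR's actual implementation:

```julia
using SparseArrays, LinearAlgebra

# spre(A) builds I ⊗ A, since vec(A*ρ) == kron(I, A) * vec(ρ).
# For a dense A we convert to sparse once; for an already-sparse A,
# sparse(A) would only make a needless copy, so a separate method skips it.
_spre(A::AbstractMatrix)  = kron(sparse(I, size(A, 1), size(A, 1)), sparse(A))
_spre(A::SparseMatrixCSC) = kron(sparse(I, size(A, 1), size(A, 1)), A)

A_dense  = [0.0 1.0; 1.0 0.0]   # σx as a dense matrix
A_sparse = sparse(A_dense)

# Both methods produce the same sparse superoperator matrix
_spre(A_dense) == _spre(A_sparse)
```

The point of the two methods is that the caller always gets a sparse result, while the sparse-input path avoids the `sparse(A)` copy entirely.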