Ensure adjoint allocates memory for max concurrent observables only #221
Conversation
Hello. You may have forgotten to update the changelog!
Codecov Report
@@            Coverage Diff             @@
##           master     #221      +/-   ##
==========================================
+ Coverage   99.70%   99.72%   +0.01%
==========================================
  Files           4        4
  Lines         344      358      +14
==========================================
+ Hits          343      357      +14
  Misses          1        1
Continue to review full report at Codecov.
This reverts commit 2f48486. Favour chunking at the Python layer due to its configurability.
@@ -65,6 +67,12 @@
    def _chunk_iterable(it, num_chunks):
        "Lazy-evaluted chunking of given iterable from https://stackoverflow.com/a/22045226"

Suggested change:
-       "Lazy-evaluted chunking of given iterable from https://stackoverflow.com/a/22045226"
+       "Lazy-evaluated chunking of given iterable from https://stackoverflow.com/a/22045226"
Nice work @mlxd! I'm happy to approve!
Awesome work! 🚀 I've had only a few comments...
Co-authored-by: Ali Asadi <[email protected]>
Nice work! Don't see any problem beyond what Ali pointed out.
- All new features must include a unit test. If you've fixed a bug or added code that should be tested, add a test to the tests directory!
- All new functions and code must be clearly commented and documented. If you do make documentation changes, make sure that the docs build and render correctly by running make docs.
- Ensure that the test suite passes, by running make test.
- Add a new entry to the .github/CHANGELOG.md file, summarizing the change, and including a link back to the PR.
- Ensure that code is properly formatted by running make format.
- When all the above are checked, delete everything above the dashed line and fill in the pull request template.
Context: The current implementation of the adjoint Jacobian method in Lightning parallelizes over observables, with each observable handled by a separate OpenMP thread. This works fine in practice unless a large number of observables is required, since a new statevector memory block is allocated upfront for every observable. This causes out-of-memory (OOM) errors for large numbers of observables, even at modest qubit counts. The solution is to process the requested observables in batches of at most OMP_NUM_THREADS concurrent executions, so that the number of statevectors allocated is limited by the number of executing threads. This imposes a small overhead from repeating some computation across batches, but lets the user reach much higher qubit counts for large numbers of concurrent executions.
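The lazy chunking discussed in the review thread can be sketched as follows. This is an illustrative version based on the Stack Overflow answer cited in the PR's docstring; the name and signature here are stand-ins, not necessarily the exact ones merged.

```python
from itertools import islice

def chunk_iterable(iterable, chunk_size):
    """Lazily yield tuples of at most ``chunk_size`` items.

    Based on https://stackoverflow.com/a/22045226: repeatedly slice the
    iterator until an empty tuple (the sentinel) is produced.
    """
    it = iter(iterable)
    return iter(lambda: tuple(islice(it, chunk_size)), ())

# Batching 10 observables into groups of at most 4 concurrent executions:
batches = list(chunk_iterable(range(10), 4))
# batches == [(0, 1, 2, 3), (4, 5, 6, 7), (8, 9)]
```

Because the chunking is lazy, no batch is materialized before it is needed, which fits the goal of only allocating statevectors for the currently executing batch.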
Description of the Change: The adjoint Jacobian calculation is now batched at the maximum number of OpenMP threads, and allocates enough memory for a given batch only.
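A minimal sketch of the batched driver described above, assuming a hypothetical `compute_batch` callback standing in for the per-batch adjoint pass (the real work happens in the C++ layer across OMP_NUM_THREADS OpenMP threads):

```python
import os
from itertools import islice

def process_in_batches(observables, compute_batch, max_concurrent=None):
    """Hypothetical driver: handle at most ``max_concurrent`` observables
    at once, so only that many statevector copies are live at any time."""
    max_concurrent = max_concurrent or (os.cpu_count() or 1)
    it = iter(observables)
    results = []
    for batch in iter(lambda: tuple(islice(it, max_concurrent)), ()):
        # Stand-in for the adjoint pass over one batch; in Lightning this
        # allocates len(batch) statevectors rather than one per observable.
        results.extend(compute_batch(batch))
    return results
```

Memory now scales with the batch size rather than the total observable count, at the cost of re-running shared setup work once per batch.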
Benefits: Reduces the overall memory footprint of a given series of concurrent calculations, allowing larger qubit counts and preventing the OOM errors encountered in large workflows.
Possible Drawbacks: Imposes additional overhead due to repeated computation across batches.
Related GitHub Issues: