
Turn caching off by default when max_diff==1 #5243

Merged: 8 commits into master from default-caching, Feb 23, 2024
Conversation

@albi3ro (Contributor) commented Feb 21, 2024

Context:

Recent benchmarks (see #5211 (comment)) have shown that caching adds massive classical overheads, but often does not actually reduce the number of executions in normal workflows.

Because of this, we want to make smart choices about when to use caching. Higher-order derivatives often produce duplicate circuits, so caching should stay on when calculating higher-order derivatives; for normal workflows, however, caching can become opt-in. This reduces overheads in the vast majority of workflows (see the illustrative example below).
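As an illustration (example ours, not from the PR), this is the kind of second-order workflow where caching stays worthwhile, since the nested parameter-shift tapes contain duplicates:

import pennylane as qml
from pennylane import numpy as pnp

dev = qml.device("default.qubit", wires=1)

# max_diff=2 is required for second derivatives with parameter-shift;
# the nested shifted tapes it generates often contain duplicates,
# which is exactly where caching pays off.
@qml.qnode(dev, diff_method="parameter-shift", max_diff=2)
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.Z(0))

x = pnp.array(0.5, requires_grad=True)
second_derivative = qml.grad(qml.grad(circuit))(x)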

Description of the Change:

The QNode keyword argument cache now defaults to None, which is interpreted as True if max_diff > 1 and False otherwise.
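In pseudocode, the described default resolution amounts to something like this (a minimal sketch of the stated behavior; the helper name _resolve_cache is ours, not the actual PennyLane source):

def _resolve_cache(cache, max_diff):
    """Interpret the cache=None default at execution time."""
    if cache is None:
        # Higher-order derivatives commonly generate duplicate tapes,
        # so caching stays on for them; otherwise caching is opt-in.
        return max_diff > 1
    return cache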

Benefits:

Vastly reduced classical overheads in most cases.

Possible Drawbacks:

Increased number of executions in a few edge cases. These edge cases would be fairly convoluted, however: a transform would somehow have to turn the starting tape into two identical tapes (a hypothetical sketch of such a transform follows).
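A hypothetical sketch of such a convoluted transform, using the qml.transform decorator (tape in, batch of tapes plus postprocessing function out); this is illustration only, not something in the PR:

import pennylane as qml

@qml.transform
def duplicate_tape(tape):
    """Map one tape to two identical copies and average the results.
    With the new cache=False default both copies execute; with
    cache=True the second would be served from the cache."""
    def postprocessing(results):
        # Average the two (identical) results.
        return 0.5 * (results[0] + results[1])
    return [tape.copy(), tape.copy()], postprocessing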

Related GitHub Issues:

Performance Numbers:

import pennylane as qml

n_layers = 1
n_wires = 20  # varied per benchmark below (10 or 20)

dev = qml.device('lightning.qubit', wires=n_wires)

shape = qml.StronglyEntanglingLayers.shape(n_wires=n_wires, n_layers=n_layers)
rng = qml.numpy.random.default_rng(seed=42)
params = rng.random(shape)

@qml.qnode(dev, cache=False)
def circuit(params):
    qml.StronglyEntanglingLayers(params, wires=range(n_wires))
    return qml.expval(qml.Z(0))

@qml.qnode(dev, cache=True)
def circuit_cache(params):
    qml.StronglyEntanglingLayers(params, wires=range(n_wires))
    return qml.expval(qml.Z(0))
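The screenshots below appear to be timing measurements; a harness along these lines could reproduce them (our assumption; the PR does not show the timing code):

import timeit

# Assumed benchmarking harness (not shown in the PR): time a gradient
# computation for each QNode. Wrapping params makes trainability explicit.
trainable = qml.numpy.array(params, requires_grad=True)
t_no_cache = timeit.timeit(lambda: qml.grad(circuit)(trainable), number=10)
t_cache = timeit.timeit(lambda: qml.grad(circuit_cache)(trainable), number=10)
print(f"cache=False: {t_no_cache:.3f} s, cache=True: {t_cache:.3f} s")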

For n_wires = 20:
[screenshot: timing comparison, cache=False vs. cache=True]

But for n_wires = 10:
[screenshot: timing comparison, cache=False vs. cache=True]

For n_wires = 20, n_layers = 5, we have:
[screenshot: timing comparison, cache=False vs. cache=True]

While the cached version does appear faster here, that is likely just statistical fluctuation.

For n_wires = 10, n_layers = 20:
[screenshot: timing comparison, cache=False vs. cache=True]


codecov bot commented Feb 21, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 99.67%. Comparing base (eb03750) to head (ebcb269).
Report is 2 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5243      +/-   ##
==========================================
- Coverage   99.68%   99.67%   -0.01%     
==========================================
  Files         399      399              
  Lines       36853    36575     -278     
==========================================
- Hits        36736    36457     -279     
- Misses        117      118       +1     


@albi3ro requested a review from josh146, February 22, 2024 16:10
@josh146 (Member) left a comment

Thanks @albi3ro!

Out of curiosity, I wonder if the caching affected any first-order gradient workflows that used qml.jacobian()? That is, any workflows where multiple identical tapes would be executed to build up the Jacobian?

@albi3ro (Contributor, Author) commented Feb 23, 2024

> Thanks @albi3ro!
>
> Out of curiosity, I wonder if the caching affected any first-order gradient workflows that used qml.jacobian()? That is, any workflows where multiple identical tapes would be executed to build up the Jacobian?

We add an additional mandatory level of caching for autograd, so cache_execute was not influencing it.
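For context, an illustrative first-order workflow of the kind being asked about (example ours): with autograd, the repeated backward passes needed to build the Jacobian are served by that always-on internal cache, regardless of the QNode's cache setting.

import pennylane as qml
from pennylane import numpy as pnp

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, diff_method="parameter-shift", cache=False)
def f(x):
    qml.RX(x[0], wires=0)
    qml.RY(x[1], wires=0)
    return qml.probs(wires=0)

x = pnp.array([0.3, 0.7], requires_grad=True)
jac = qml.jacobian(f)(x)  # identical shifted tapes deduplicated internally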

@albi3ro requested a review from a team, February 23, 2024 14:05
@timmysilv (Contributor) commented

I'm not sure what the current state of the benchmarking suite is, but perhaps this PR would be a good opportunity to test it out?

@albi3ro (Contributor, Author) commented Feb 23, 2024

[sc-57404]

@trbromley added this to the v0.35 milestone, Feb 23, 2024
@AmintorDusko (Contributor) left a comment

Good idea! Thank you for that!

@trbromley added the "merge-ready ✔️" label (All tests pass and the PR is ready to be merged.), Feb 23, 2024
@albi3ro enabled auto-merge (squash), February 23, 2024 19:27
@albi3ro disabled auto-merge, February 23, 2024 19:50
@albi3ro (Contributor, Author) commented Feb 23, 2024

> I'm not sure what the current state of the benchmarking suite is, but perhaps this PR would be a good opportunity to test it out?

The current benchmarks run on a GitHub runner and are too noisy to extract reliable data.

@albi3ro merged commit 880b9da into master, Feb 23, 2024 (38 checks passed)
@albi3ro deleted the default-caching branch, February 23, 2024 19:55