Turn caching off by default when max_diff==1 (#5243)
**Context:**

Recent benchmarks (see
#5211 (comment))
have shown that caching adds massive classical overheads but often does
not actually reduce the number of executions in normal workflows.

Because of this, we want to make smarter choices about when to use
caching. Higher-order derivatives often result in duplicate circuits, so
caching is kept on when calculating them, but it becomes opt-in for
normal workflows. This reduces classical overheads in the vast majority
of cases.

**Description of the Change:**

The `QNode` keyword argument `cache` now defaults to `"auto"`. This is
interpreted as `True` if `max_diff > 1` and `False` otherwise.
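
As a user-facing sketch of the new defaults (the circuits here are illustrative only, not taken from the PR):

```
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

# max_diff defaults to 1, so cache="auto" resolves to False: no cache is created
# and no lookup overhead is paid on each execution.
@qml.qnode(dev)
def first_order(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.Z(0))

# With second-order derivatives requested, cache="auto" resolves to True, since
# gradient-of-gradient tapes often contain duplicate circuits.
@qml.qnode(dev, diff_method="parameter-shift", max_diff=2)
def second_order(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.Z(0))

# Caching can still be requested explicitly for any workflow.
@qml.qnode(dev, cache=True)
def explicitly_cached(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.Z(0))
```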

**Benefits:**

Vastly reduced classical overheads in most cases.

**Possible Drawbacks:**

Increased number of executions in a few edge cases, but these would be
fairly convoluted: a transform would somehow have to turn the starting
tape into two identical tapes, as sketched below.
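
For illustration only, such a transform could look like the following sketch; `duplicate_tape` is hypothetical and not part of PennyLane, and explicitly re-enabling the cache is what would let the identical tapes collapse into a single execution:

```
from typing import Callable, Sequence

import pennylane as qml

# Hypothetical transform: maps one tape onto two identical tapes. With cache=True
# the duplicate would be expected to hit the cache; with the new default it runs twice.
@qml.transform
def duplicate_tape(tape: qml.tape.QuantumTape) -> (Sequence[qml.tape.QuantumTape], Callable):
    def postprocessing(results):
        # The two tapes are identical, so simply average their (equal) results.
        return 0.5 * (results[0] + results[1])

    return [tape, tape.copy()], postprocessing

dev = qml.device("default.qubit", wires=1)

@duplicate_tape
@qml.qnode(dev, cache=True)  # opt back in so the identical tapes can share one execution
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.Z(0))
```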

**Related GitHub Issues:**

**Performance Numbers:**

```
import pennylane as qml

n_wires = 20  # the benchmarks below use n_wires = 10 or 20 and vary n_layers
n_layers = 1

dev = qml.device('lightning.qubit', wires=n_wires)

shape = qml.StronglyEntanglingLayers.shape(n_wires=n_wires, n_layers=n_layers)
rng = qml.numpy.random.default_rng(seed=42)
params = rng.random(shape)

@qml.qnode(dev, cache=False)
def circuit(params):
    qml.StronglyEntanglingLayers(params, wires=range(n_wires))
    return qml.expval(qml.Z(0))

@qml.qnode(dev, cache=True)
def circuit_cache(params):
    qml.StronglyEntanglingLayers(params, wires=range(n_wires))
    return qml.expval(qml.Z(0))
```
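
The screenshots below compare gradient runtimes with and without caching. The exact benchmarking commands are not shown here, so the following timing harness is only an assumed way to reproduce the comparison, reusing `circuit`, `circuit_cache`, and `params` from the snippet above:

```
import timeit

import pennylane as qml

# Mark the parameters as trainable so qml.grad differentiates with respect to them.
params = qml.numpy.array(params, requires_grad=True)

for label, qnode in [("cache=False", circuit), ("cache=True", circuit_cache)]:
    runtime = timeit.timeit(lambda: qml.grad(qnode)(params), number=10)
    print(f"{label}: {runtime / 10:.4f} s per gradient evaluation")
```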

For `n_wires = 20`:
![Screenshot 2024-02-21 at 5 19 49 PM](https://github.com/PennyLaneAI/pennylane/assets/6364575/8b94647c-b0f3-4b0b-8fc7-62c26ac06d70)

But for `n_wires = 10`:
![Screenshot 2024-02-21 at 5 20 43 PM](https://github.com/PennyLaneAI/pennylane/assets/6364575/04248f9e-c949-49a3-a125-21025fd030d6)

For `n_wires = 20, n_layers = 5`, we have:
![Screenshot 2024-02-21 at 6 02 40 PM](https://github.com/PennyLaneAI/pennylane/assets/6364575/feeccd11-cc52-440d-8655-14a2d15ed93d)

While the cached version does appear to be faster here, that is likely just statistical fluctuation.

For `n_wires = 10, n_layers = 20`:

![Screenshot 2024-02-21 at 6 09 04 PM](https://github.com/PennyLaneAI/pennylane/assets/6364575/64801dd4-a1ce-40c5-84ae-367ddf66aa26)
albi3ro authored Feb 23, 2024
1 parent d3e1bdb commit 880b9da
Showing 3 changed files with 29 additions and 3 deletions.
4 changes: 4 additions & 0 deletions doc/releases/changelog-dev.md
@@ -396,6 +396,10 @@

<h3>Breaking changes 💔</h3>

* Caching of executions is now turned off by default when `max_diff == 1`, as the classical overhead cost
outweighs the probability that duplicate circuits exist.
[(#5243)](https://github.com/PennyLaneAI/pennylane/pull/5243)

* The entry point convention registering compilers with PennyLane has changed.
[(#5140)](https://github.com/PennyLaneAI/pennylane/pull/5140)

8 changes: 5 additions & 3 deletions pennylane/workflow/qnode.py
@@ -183,8 +183,9 @@ class QNode:
Only applies if the device is queried for the gradient; gradient transform
functions available in ``qml.gradients`` are only supported on the backward
pass. The 'best' option chooses automatically between the two options and is default.
cache (bool or dict or Cache): Whether to cache evaluations. This can result in
a significant reduction in quantum evaluations during gradient computations.
cache="auto" (str or bool or dict or Cache): Whether to cache evalulations.
``"auto"`` indicates to cache only when ``max_diff > 1``. This can result in
a reduction in quantum evaluations during higher order gradient computations.
If ``True``, a cache with corresponding ``cachesize`` is created for each batch
execution. If ``False``, no caching is used. You may also pass your own cache
to be used; this can be any object that implements the special methods
@@ -415,7 +416,7 @@ def __init__(
expansion_strategy="gradient",
max_expansion=10,
grad_on_execution="best",
cache=True,
cache="auto",
cachesize=10000,
max_diff=1,
device_vjp=False,
@@ -483,6 +484,7 @@ def __init__(
self.diff_method = diff_method
self.expansion_strategy = expansion_strategy
self.max_expansion = max_expansion
cache = (max_diff > 1) if cache == "auto" else cache

# execution keyword arguments
self.execute_kwargs = {
20 changes: 20 additions & 0 deletions tests/test_qnode.py
@@ -77,6 +77,26 @@ def test_copy():
assert copied_qn.expansion_strategy == qn.expansion_strategy


class TestInitialization:
    def test_cache_initialization_maxdiff_1(self):
        """Test that when max_diff = 1, the cache initializes to False."""

        @qml.qnode(qml.device("default.qubit"), max_diff=1)
        def f():
            return qml.state()

        assert f.execute_kwargs["cache"] is False

    def test_cache_initialization_maxdiff_2(self):
        """Test that when max_diff = 2, the cache initializes to True."""

        @qml.qnode(qml.device("default.qubit"), max_diff=2)
        def f():
            return qml.state()

        assert f.execute_kwargs["cache"] is True


# pylint: disable=too-many-public-methods
class TestValidation:
"""Tests for QNode creation and validation"""
