StochasticSwap pass does not get strictly better with increased trials #4094

Open
ajavadia opened this issue Apr 7, 2020 · 8 comments
Labels
type: enhancement (It's working, but needs polishing)

Comments

@ajavadia
Member

ajavadia commented Apr 7, 2020

@mtreinish reported a case where increasing the number of trials in the StochasticSwap pass resulted in longer circuits. This should not happen with a fixed seed, because the pass should just pick the best of the trials. Unless something is going on where individual layers get better with more trials, but those per-layer choices combine to make the overall circuit worse. I would like to understand what's going on here, and whether we can ensure that increasing the trials gives strictly better circuits (at the cost of longer transpilation time).

I don't have a particular example to reproduce here; I think the original example was a quantum volume circuit. But really, a variety of examples should be investigated.
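
A minimal way to probe this in isolation (a sketch only; the circuit and coupling map below are placeholders, not the original failing case) would be to run StochasticSwap by itself on the same circuit at different trial counts and compare depths:

from qiskit.circuit.random import random_circuit
from qiskit.converters import circuit_to_dag, dag_to_circuit
from qiskit.transpiler import CouplingMap
from qiskit.transpiler.passes import StochasticSwap

# Placeholder inputs: a 20-qubit line device and a random circuit with only
# 1- and 2-qubit gates.
coupling = CouplingMap.from_line(20)
circ = random_circuit(20, 10, max_operands=2, seed=0)
dag = circuit_to_dag(circ)

# Same circuit, same seed, only the trial count changes.
for trials in (20, 50):
    swapped = StochasticSwap(coupling, trials=trials, seed=0).run(dag)
    print(trials, "trials ->", dag_to_circuit(swapped).depth())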

@mtreinish
Member

I meant to open an issue about this, thanks for doing it. The two cases where this happens are in the QV benchmarks (there might be other cases, but that was the most obvious). With a 50x20 model circuit and a level 1 transpile, the depth increased on the commit that changed the number of trials from 20 to 50: https://qiskit.github.io/qiskit/#transpiler_levels.TranspilerLevelBenchmarks.track_depth_quantum_volume_transpile_50_x_20?commits=a8dcbe11&p-transpiler%20optimization%20level=1

The same was true on the QV 14x14 model circuit benchmark for the same commit, on both level 1 and level 2 transpiles: https://qiskit.github.io/qiskit/#transpiler_levels.TranspilerLevelBenchmarks.track_depth_transpile_qv_14_x_14?commits=a8dcbe11&p-transpiler%20optimization%20level=1&p-transpiler%20optimization%20level=2

The circuits being transpiled there are generated using:
https://github.com/Qiskit/qiskit/blob/master/test/benchmarks/utils.py#L120-L137
with a fixed seed of 0. The transpile calls are also fixed with a seed of 0:

https://github.com/Qiskit/qiskit/blob/master/test/benchmarks/transpiler_levels.py#L159-L163
and
https://github.com/Qiskit/qiskit/blob/master/test/benchmarks/transpiler_levels.py#L214-L216
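
For reference, the transpile calls there are roughly of this shape (a sketch of the pattern only, not the exact benchmark code; qv_circuit and rochester_coupling_map stand in for the objects built in those files):

from qiskit import transpile

# Illustrative only; the real arguments live in the linked benchmark files.
out = transpile(qv_circuit,
                coupling_map=rochester_coupling_map,
                basis_gates=['u1', 'u2', 'u3', 'cx', 'id'],
                optimization_level=1,
                seed_transpiler=0)
print(out.depth())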

@nonhermitian
Contributor

It would also be good to get an aggregate view of this on multiple instances to see if the statement is true on average.

@mtreinish
Member

My only thought here, besides a bug in stochastic swap, is that the depth measurements in those benchmarks are not isolated to stochastic swap; it's a full transpile with the preset pass managers. Maybe the stochastic swap output finds something slightly more efficient on its own, but it moves things around in a way that later optimizations in the pipeline can't improve as much. I'm going to do some more in-depth debugging on this today and see if I can figure out what's going on.

@mtreinish
Member

> It would also be good to get an aggregate view of this on multiple instances to see if the statement is true on average.

I just ran a sweep of 100 50x20 quantum volume model circuits (without a seed set) through the level 1 transpiler with stochastic swap at 20 and 50 trials. It was pretty evenly split: 48 of the circuits had a smaller depth with 20 trials and 52 had a smaller depth with 50 trials.

The script I ran is:

from qiskit import QuantumCircuit
import numpy as np
from qiskit.quantum_info.random import random_unitary
from qiskit.transpiler import preset_passmanagers
from qiskit.transpiler import PassManagerConfig
from qiskit.transpiler import CouplingMap
from qiskit.tools.parallel import parallel_map


def build_qv_model_circuit(width, depth, seed=None):
    """
    The model circuits consist of layers of Haar random
    elements of SU(4) applied between corresponding pairs
    of qubits in a random bipartition.
    """
    np.random.seed(seed)
    circuit = QuantumCircuit(width)
    # For each layer
    for _ in range(depth):
        # Generate uniformly random permutation Pj of [0...n-1]
        perm = np.random.permutation(width)
        # For each pair p in Pj, generate Haar random SU(4)
        for k in range(int(np.floor(width/2))):
            U = random_unitary(4)
            pair = int(perm[2*k]), int(perm[2*k+1])
            circuit.append(U, [pair[0], pair[1]])
    return circuit


rochester_coupling_map = [
    [0, 5],
    [0, 1],
    [1, 2],
    [1, 0],
    [2, 3],
    [2, 1],
    [3, 4],
    [3, 2],
    [4, 6],
    [4, 3],
    [5, 9],
    [5, 0],
    [6, 13],
    [6, 4],
    [7, 16],
    [7, 8],
    [8, 9],
    [8, 7],
    [9, 10],
    [9, 8],
    [9, 5],
    [10, 11],
    [10, 9],
    [11, 17],
    [11, 12],
    [11, 10],
    [12, 13],
    [12, 11],
    [13, 14],
    [13, 12],
    [13, 6],
    [14, 15],
    [14, 13],
    [15, 18],
    [15, 14],
    [16, 19],
    [16, 7],
    [17, 23],
    [17, 11],
    [18, 27],
    [18, 15],
    [19, 20],
    [19, 16],
    [20, 21],
    [20, 19],
    [21, 28],
    [21, 22],
    [21, 20],
    [22, 23],
    [22, 21],
    [23, 24],
    [23, 22],
    [23, 17],
    [24, 25],
    [24, 23],
    [25, 29],
    [25, 26],
    [25, 24],
    [26, 27],
    [26, 25],
    [27, 26],
    [27, 18],
    [28, 32],
    [28, 21],
    [29, 36],
    [29, 25],
    [30, 39],
    [30, 31],
    [31, 32],
    [31, 30],
    [32, 33],
    [32, 31],
    [32, 28],
    [33, 34],
    [33, 32],
    [34, 40],
    [34, 35],
    [34, 33],
    [35, 36],
    [35, 34],
    [36, 37],
    [36, 35],
    [36, 29],
    [37, 38],
    [37, 36],
    [38, 41],
    [38, 37],
    [39, 42],
    [39, 30],
    [40, 46],
    [40, 34],
    [41, 50],
    [41, 38],
    [42, 43],
    [42, 39],
    [43, 44],
    [43, 42],
    [44, 51],
    [44, 45],
    [44, 43],
    [45, 46],
    [45, 44],
    [46, 47],
    [46, 45],
    [46, 40],
    [47, 48],
    [47, 46],
    [48, 52],
    [48, 49],
    [48, 47],
    [49, 50],
    [49, 48],
    [50, 49],
    [50, 41],
    [51, 44],
    [52, 48]]
rochester_cmap = CouplingMap(rochester_coupling_map)
basis_gates = ['u1', 'u2', 'u3', 'cx', 'id']
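
# Note: "trials" is not a stock PassManagerConfig option; it relies on the
# local modification to PassManagerConfig and level_1_pass_manager() noted below.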

config_20_trials = PassManagerConfig(basis_gates=basis_gates,
                                     coupling_map=rochester_cmap,
                                     trials=20, seed_transpiler=0)
config_50_trials = PassManagerConfig(basis_gates=basis_gates,
                                     coupling_map=rochester_cmap,
                                     trials=50, seed_transpiler=0)

pm_level1_20 = preset_passmanagers.level_1_pass_manager(config_20_trials)
pm_level1_50 = preset_passmanagers.level_1_pass_manager(config_50_trials)

smaller_depth = {"20_trials": 0, "50_trials": 0, "equal": 0}


def _run_transpiles(seed):
    qv_50_x_20 = build_qv_model_circuit(50, 20)
    depth_20 = pm_level1_20.run(qv_50_x_20).depth()
    depth_50 = pm_level1_50.run(qv_50_x_20).depth()
    print("QV %s\n\t20 Trials: %s\n\t50 Trials: %s" % (seed, depth_20,
                                                       depth_50))
    return depth_20, depth_50


results = parallel_map(_run_transpiles, list(range(100)))
for depth_20, depth_50 in results:
    if depth_20 == depth_50:
        smaller_depth["equal"] += 1
    elif depth_20 > depth_50:
        smaller_depth["50_trials"] += 1
    else:
        smaller_depth["20_trials"] += 1
print(smaller_depth)

I also had to modify the PassManagerConfig class and level_1_pass_manager() function to make trials configurable.

@mtreinish
Member

I also just reran that script but replaced build_qv_model_circuit(50, 20) with qiskit.circuit.random.random_circuit(53, 10, 3, True, True, seed=seed) and got {'20_trials': 39, '50_trials': 61, 'equal': 0} for which trial count gave the shorter depth across 100 runs. It's definitely not isolated to just QV.

@kdk
Member

kdk commented Apr 7, 2020

Can you try with a passmanager like:

from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import (TrivialLayout, FullAncillaAllocation,
                                      EnlargeWithAncilla, ApplyLayout,
                                      Unroll3qOrMore, StochasticSwap)

pm = PassManager([
  TrivialLayout(coupling_map),
  FullAncillaAllocation(coupling_map),
  EnlargeWithAncilla(),
  ApplyLayout(),
  Unroll3qOrMore(),
  StochasticSwap(coupling_map, trials=N, seed=seed_transpiler)
])

Also, do you have an idea of the size of the differences (maybe a quick histogram)?

@mtreinish
Member

mtreinish commented Apr 7, 2020

@kdk Running with that pass manager with trials set to 20 and 50 instead of the full level 1 yielded: {'20_trials': 50, '50_trials': 48, 'equal': 2}

As for the size of the differences, I'll modify the script to build some graphs (a rough sketch of how the differences could be histogrammed is included below). But looking at bits of the output, there's no consistent trend. Some are really close, within 1 or 2, while other times they vary by much larger amounts; for example, in the most recent test I just found a case with a depth of 710 with 50 trials and a depth of 583 with 20 trials:

QV 41
	20 Trials: 583
	50 Trials: 710
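
A minimal sketch of how those differences could be histogrammed from the results list in the script above (assuming matplotlib is available; it is not used elsewhere in the script):

import matplotlib.pyplot as plt

# Per-circuit depth change when going from 20 to 50 trials.
diffs = [d50 - d20 for d20, d50 in results]
plt.hist(diffs, bins=30)
plt.xlabel("depth(50 trials) - depth(20 trials)")
plt.ylabel("number of circuits")
plt.savefig("trial_depth_diffs.png")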

@nonhermitian
Contributor

It could be a bug, but I am wondering if greedy optimization layer by layer, which is what is currently being done, just does not correlate well with globally optimal swap insertion over the whole circuit.
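
A toy illustration of that concern, using made-up per-layer swap costs rather than anything from StochasticSwap itself: the locally cheapest choice in one layer can force a more expensive follow-up layer, so the greedy layer-by-layer total loses to the global optimum.

# Hypothetical costs: each layer-1 option fixes which layer-2 costs are reachable.
layer1 = {
    "swap A": {"cost": 2, "layer2_costs": {"swap C": 9, "swap D": 8}},
    "swap B": {"cost": 3, "layer2_costs": {"swap C": 4, "swap D": 5}},
}

# Greedy: cheapest layer-1 option first, then the cheapest follow-up it allows.
greedy_l1 = min(layer1, key=lambda k: layer1[k]["cost"])
greedy_total = layer1[greedy_l1]["cost"] + min(layer1[greedy_l1]["layer2_costs"].values())

# Global: minimize over the whole two-layer combination.
best_total = min(opt["cost"] + min(opt["layer2_costs"].values())
                 for opt in layer1.values())

print("greedy total:", greedy_total)  # 2 + 8 = 10
print("global best:", best_total)     # 3 + 4 = 7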

mtreinish added a commit to mtreinish/qiskit-core that referenced this issue Apr 7, 2020
ajavadia pushed a commit that referenced this issue Apr 7, 2020
In #3999 we increased the number of trials, but the global benefit from
doing this isn't clear. With the pending release weighing the increased
run time for more trials against this, this commit temporarily reverts
the number of trials for level 1 and level 2 back to their previous
setting of 20. When we get to the bottom of #4094 and have a fix or
change for that we can increase the number of trials again.

Related #4094
@1ucian0 added the priority: high and type: enhancement labels Apr 20, 2020
faisaldebouni pushed a commit to faisaldebouni/qiskit-terra that referenced this issue Aug 5, 2020
@1ucian0 mentioned this issue Jun 12, 2024