StochasticSwap pass does not get strictly better with increased trials #4094
Comments
It would also be good to get an aggregate view of this across multiple instances to see whether the statement is true on average.
My only thought here, besides a bug in stochastic swap, is that the depth measurements in those benchmarks are not isolated to stochastic swap; it's a full transpile with the preset pass managers. Maybe the stochastic swap output is slightly more efficient on its own, but it moves things around in a way that the later optimization passes in the pipeline can't improve as much. I'm going to try to do some more in-depth debugging on this today and see if I can figure out what's going on.
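(For reference, a minimal sketch of what isolating the routing stage could look like: run only layout plus StochasticSwap in a bare PassManager and compare depths at 20 and 50 trials, so none of the later optimization passes affects the number being compared. The line coupling map, random circuit, and seeds below are illustrative assumptions, not the 53-qubit Rochester benchmark setup from the script further down.)

from qiskit.circuit.random import random_circuit
from qiskit.transpiler import PassManager, CouplingMap
from qiskit.transpiler.passes import (TrivialLayout, FullAncillaAllocation,
                                      EnlargeWithAncilla, ApplyLayout,
                                      StochasticSwap)

# Illustrative device and circuit, not the benchmark configuration.
cmap = CouplingMap.from_line(5)
circ = random_circuit(5, 20, max_operands=2, seed=11)

def routed_depth(trials):
    # Layout + routing only, so the depth reflects just the swap insertion.
    pm = PassManager([TrivialLayout(cmap),
                      FullAncillaAllocation(cmap),
                      EnlargeWithAncilla(),
                      ApplyLayout(),
                      StochasticSwap(cmap, trials=trials, seed=0)])
    return pm.run(circ).depth()

print("20 trials:", routed_depth(20))
print("50 trials:", routed_depth(50))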
I just ran a sweep of 100 50x20 quantum volume model circuits (without a seed set) with the level 1 transpiler and stochastic swap at 20 and 50 trials. It was pretty evenly split: 48 of the circuits had a smaller depth with 20 trials and 52 had a smaller depth with 50 trials. The script I ran is:

from qiskit import QuantumCircuit
import numpy as np
from qiskit.quantum_info.random import random_unitary
from qiskit.transpiler import preset_passmanagers
from qiskit.transpiler import PassManagerConfig
from qiskit.transpiler import CouplingMap
from qiskit.tools.parallel import parallel_map
def build_qv_model_circuit(width, depth, seed=None):
    """
    The model circuits consist of layers of Haar random
    elements of SU(4) applied between corresponding pairs
    of qubits in a random bipartition.
    """
    np.random.seed(seed)
    circuit = QuantumCircuit(width)
    # For each layer
    for _ in range(depth):
        # Generate uniformly random permutation Pj of [0...n-1]
        perm = np.random.permutation(width)
        # For each pair p in Pj, generate Haar random SU(4)
        for k in range(int(np.floor(width / 2))):
            U = random_unitary(4)
            pair = int(perm[2 * k]), int(perm[2 * k + 1])
            circuit.append(U, [pair[0], pair[1]])
    return circuit
rochester_coupling_map = [
[0, 5],
[0, 1],
[1, 2],
[1, 0],
[2, 3],
[2, 1],
[3, 4],
[3, 2],
[4, 6],
[4, 3],
[5, 9],
[5, 0],
[6, 13],
[6, 4],
[7, 16],
[7, 8],
[8, 9],
[8, 7],
[9, 10],
[9, 8],
[9, 5],
[10, 11],
[10, 9],
[11, 17],
[11, 12],
[11, 10],
[12, 13],
[12, 11],
[13, 14],
[13, 12],
[13, 6],
[14, 15],
[14, 13],
[15, 18],
[15, 14],
[16, 19],
[16, 7],
[17, 23],
[17, 11],
[18, 27],
[18, 15],
[19, 20],
[19, 16],
[20, 21],
[20, 19],
[21, 28],
[21, 22],
[21, 20],
[22, 23],
[22, 21],
[23, 24],
[23, 22],
[23, 17],
[24, 25],
[24, 23],
[25, 29],
[25, 26],
[25, 24],
[26, 27],
[26, 25],
[27, 26],
[27, 18],
[28, 32],
[28, 21],
[29, 36],
[29, 25],
[30, 39],
[30, 31],
[31, 32],
[31, 30],
[32, 33],
[32, 31],
[32, 28],
[33, 34],
[33, 32],
[34, 40],
[34, 35],
[34, 33],
[35, 36],
[35, 34],
[36, 37],
[36, 35],
[36, 29],
[37, 38],
[37, 36],
[38, 41],
[38, 37],
[39, 42],
[39, 30],
[40, 46],
[40, 34],
[41, 50],
[41, 38],
[42, 43],
[42, 39],
[43, 44],
[43, 42],
[44, 51],
[44, 45],
[44, 43],
[45, 46],
[45, 44],
[46, 47],
[46, 45],
[46, 40],
[47, 48],
[47, 46],
[48, 52],
[48, 49],
[48, 47],
[49, 50],
[49, 48],
[50, 49],
[50, 41],
[51, 44],
[52, 48]]
rochester_cmap = CouplingMap(rochester_coupling_map)
basis_gates = ['u1', 'u2', 'u3', 'cx', 'id']
config_20_trials = PassManagerConfig(basis_gates=basis_gates,
                                     coupling_map=rochester_cmap,
                                     trials=20, seed_transpiler=0)
config_50_trials = PassManagerConfig(basis_gates=basis_gates,
                                     coupling_map=rochester_cmap,
                                     trials=50, seed_transpiler=0)
pm_level1_20 = preset_passmanagers.level_1_pass_manager(config_20_trials)
pm_level1_50 = preset_passmanagers.level_1_pass_manager(config_50_trials)
smaller_depth = {"20_trials": 0, "50_trials": 0, "equal": 0}
def _run_transpiles(seed):
    qv_50_x_20 = build_qv_model_circuit(50, 20)
    depth_20 = pm_level1_20.run(qv_50_x_20).depth()
    depth_50 = pm_level1_50.run(qv_50_x_20).depth()
    print("QV %s\n\t20 Trials: %s\n\t50 Trials: %s" % (seed, depth_20,
                                                       depth_50))
    return depth_20, depth_50

results = parallel_map(_run_transpiles, list(range(100)))
for depth_20, depth_50 in results:
    if depth_20 == depth_50:
        smaller_depth["equal"] += 1
    elif depth_20 > depth_50:
        smaller_depth["50_trials"] += 1
    else:
        smaller_depth["20_trials"] += 1
print(smaller_depth)

I also had to modify the …
I also just reran that script but replaced …
Can you try with a passmanager like:
Also, do you have an idea of the size of the differences (maybe a quick histogram)?
@kdk Running with that pass manager with trials set to 20 and 50, instead of the full level 1, yielded: As for the size differences, I'll modify the script to build some graphs. But looking at bits of the output, there's no consistent trend. Some are really close, within 1 or 2, while other times they vary by much larger amounts; in the most recent test I just found an example with a depth of 710 with 50 trials and a depth of 583 with 20 trials.
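(A quick, hypothetical way to get that histogram from the script above; this assumes matplotlib is available and reuses the `results` list returned by parallel_map. It is not part of the original script.)

import matplotlib.pyplot as plt

# Depth difference per circuit: positive means 50 trials gave the deeper circuit.
diffs = [depth_50 - depth_20 for depth_20, depth_50 in results]
plt.hist(diffs, bins=20)
plt.xlabel("depth(50 trials) - depth(20 trials)")
plt.ylabel("number of circuits")
plt.title("50x20 QV circuits, level 1 preset pass manager")
plt.savefig("trials_depth_diff.png")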
It could be a bug, but I am wondering if greedy optimization layer by layer, which is what is currently done, just does not correlate well with globally optimizing the swaps over the whole circuit.
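(A toy sketch of that intuition, not Qiskit code, with costs made up purely to exhibit the effect: a per-layer choice that is locally cheapest can leave a "state" that makes every later layer expensive, so the per-layer winners do not combine into the global winner.)

from itertools import product

# cost[state][move] = (cost of taking `move` in `state`, resulting state)
cost = {
    "start": {"a": (1, "bad"), "b": (2, "good")},
    "bad":   {"a": (10, "bad"), "b": (10, "bad")},
    "good":  {"a": (1, "good"), "b": (1, "good")},
}

def total(moves):
    state, c = "start", 0
    for m in moves:
        step_cost, state = cost[state][m]
        c += step_cost
    return c

# Greedy: in each layer pick the locally cheapest move.
greedy = []
state = "start"
for _ in range(2):
    move = min(cost[state], key=lambda m: cost[state][m][0])
    greedy.append(move)
    state = cost[state][move][1]

# Global: enumerate every two-layer choice and keep the cheapest.
best = min(product("ab", repeat=2), key=total)
print("greedy:", greedy, "cost", total(greedy))    # ['a', 'a'], cost 11
print("global:", list(best), "cost", total(best))  # ['b', 'a'], cost 3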
In #3999 we increased the number of trials, but the global benefit from doing so isn't clear. With the release pending, and weighing the increased run time of more trials against that unclear benefit, this commit temporarily reverts the number of trials for level 1 and level 2 back to their previous setting of 20. When we get to the bottom of #4094 and have a fix or change for it, we can increase the number of trials again. Related: #4094
@mtreinish reported a case where increasing the number of trials in the StochasticSwap pass resulted in longer circuits. This should not happen with a fixed seed, because the pass should just pick the best of the trials. Unless there is something going on where individual layers get better with increased trials, but those per-layer choices combine to make the overall circuit worse. I would like to understand what's going on here, and whether we can ensure that increasing the trials gives us better circuits (at the cost of longer transpilation time).
I don't have a particular example to reproduce here; I think the original example was a quantum volume circuit. But really, a variety of examples should be investigated.
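(One pass-agnostic way this kind of non-monotonicity can arise, whether or not it is what actually happens inside StochasticSwap, is when changing the trial count changes the random stream each trial sees, so the 20-trial and 50-trial runs are not drawing from a shared prefix of candidates. A toy sketch with made-up per-trial costs:)

import numpy as np

def best_of(n_trials, seed, nested):
    # nested=True: both trial counts draw from the same stream, the larger
    # count simply draws more, so its minimum can only be as good or better.
    # nested=False: the stream depends on the trial count, so the two runs see
    # unrelated candidates and the larger count can come out worse.
    rng = np.random.RandomState(seed if nested else seed + n_trials)
    return rng.randint(100, 200, size=n_trials).min()

nested_losses = sum(best_of(50, s, True) > best_of(20, s, True)
                    for s in range(1000))
unrelated_losses = sum(best_of(50, s, False) > best_of(20, s, False)
                       for s in range(1000))
print("nested streams, 50 trials worse than 20:   ", nested_losses, "/ 1000")
print("unrelated streams, 50 trials worse than 20:", unrelated_losses, "/ 1000")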