MPS (Matrix Product State) giving wrong result for QAOA #1433
Comments
Is this really just #1434 taken further, because with MPS you can get to even more qubits? At smaller qubit sizes, does the MPS simulator produce a correct result that is in line with the other simulation modes? I would hope it does.
I did some more experiments and wanted to list the results here. Head to head, it seems to me that MPS has more fluctuations and error around 9-10 qubits than the default QASM simulator. Please see the "Final" tab.
What happens when a simulator reaches a timeout? What will a QAOA user see?
@yaelbh If running the circuits on the backend returns a failure, e.g. if it times out, then the user will see a circuit execution error raised from Aqua: https://github.com/Qiskit/qiskit-aqua/blob/858305641429197560da2e31eb89bf362c8e6210/qiskit/aqua/utils/run_circuits.py#L338 (If the backend raises an error internally itself, that should be seen as well.)
Thanks @woodsp-ibm. @amitracal, can you please share the code?
@yaelbh There is a zip with a Jupyter notebook attached at the end of the issue description above, which I believe demonstrates the problem.
Yes, I know, but I think there is another piece of code for the last spreadsheet.
@yaelbh @woodsp-ibm I am attaching another zip with the code and the latest Excel file, which I have also attached in the newly raised RQAOA issue https://github.com/Qiskit/qiskit-aqua/issues/1453.
Based on the notebooks, I created the following code:
For this code, the printed results are identical between the simulators in all runs. Questions and comments that would help me proceed with the investigation:
```python
# create a QUBO
qubo = QuadraticProgram()
qubo.minimize(linear=[1, -2, 3, -6, 5, 4, 4, 5, 5, 5, 6, 6, 0.3, 6, 6, -2, -2, -2, -2, -2],
```
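The full notebook code is not reproduced in the thread. As a rough illustration only (not the original code), a side-by-side comparison of the two simulators could be set up as below; the 20 linear coefficients are taken from the fragment above, while the quadratic terms, optimizer settings and seeds are placeholder assumptions, based on the Aqua 0.8 / Aer 0.7 APIs.

```python
# Sketch only: reconstructed comparison set-up, not the notebook code from this issue.
from qiskit import Aer
from qiskit.providers.aer import QasmSimulator
from qiskit.aqua import QuantumInstance, aqua_globals
from qiskit.aqua.algorithms import QAOA
from qiskit.aqua.components.optimizers import SLSQP
from qiskit.optimization import QuadraticProgram
from qiskit.optimization.algorithms import MinimumEigenOptimizer

linear = [1, -2, 3, -6, 5, 4, 4, 5, 5, 5, 6, 6, 0.3, 6, 6, -2, -2, -2, -2, -2]

qubo = QuadraticProgram()
for i in range(len(linear)):
    qubo.binary_var('x%d' % i)
# The quadratic couplings below are placeholders; the original ones are not shown above.
quadratic = {('x%d' % i, 'x%d' % (i + 1)): 2.0 for i in range(len(linear) - 1)}
qubo.minimize(linear=linear, quadratic=quadratic)

aqua_globals.random_seed = 123

def run_qaoa(backend):
    """Run QAOA on the given backend and return the optimization result."""
    qi = QuantumInstance(backend, shots=1024, seed_simulator=123, seed_transpiler=123)
    qaoa = QAOA(optimizer=SLSQP(), p=1, include_custom=True, quantum_instance=qi)
    return MinimumEigenOptimizer(qaoa).solve(qubo)

print('statevector:', run_qaoa(Aer.get_backend('statevector_simulator')).fval)
print('mps        :', run_qaoa(QasmSimulator(method='matrix_product_state')).fval)
```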
@yaelbh @woodsp-ibm I also had an observation today which I think I should mention: I ran a different QUBO, from an actual business problem, through real hardware, MPS, QASM and RQAOA. MPS actually did very similarly to QASM, although both were wrong with respect to CPLEX, the classical solver, which I consider my north star. Let me attach those results and notebooks as well. I only did this for 15 qubits.
```python
qubo.minimize(linear=[137.02211292926253, 92.010710781040601, 21.047319760897697, 105.14998995419403, 60.25426907360287,
```
Here is the result for this QUBO with 15 qubits -
I'm able to restore the results for 20 qubits, and see the difference between the simulators. I'm checking it now.
Here's an update.
Thank you @yaelbh, please let me know if you need any help.
I understand now what's going on. The bottom line is a numerical difference at the 10th position after the decimal point, which propagates and totally changes the flow of QAOA. Note that this implies the sensitivity of QAOA is something to be improved. This is the instance where we see differences:
Note that I work with the master branches of all the repositories.
Now I need to explain why this disagreement occurs and what its consequences are. Why it occurs:
Consequences: for the statevector simulator, average no. 13 is the maximum over the 42 averages. For the MPS simulator, the maximum is attained elsewhere. This seems to drastically affect the optimizer's subsequent choices. I wonder what can be done to make QAOA less sensitive to 1 shot out of 42*1024 shots. Maybe increase the number of shots? The real story here is not the type of simulator; we learn that a different randomization with the same simulator, or with a real device, can yield a very different QAOA result. I guess this can be seen by running only the statevector simulator several times, each time with a different random seed.
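A minimal sketch of that seed experiment, assuming the `qubo` object built earlier in the thread; the particular seed values are arbitrary.

```python
# Sketch: rerun the same QAOA problem with several random seeds and compare results.
# Assumes the QuadraticProgram `qubo` defined earlier in the thread.
from qiskit import Aer
from qiskit.aqua import QuantumInstance, aqua_globals
from qiskit.aqua.algorithms import QAOA
from qiskit.aqua.components.optimizers import SLSQP
from qiskit.optimization.algorithms import MinimumEigenOptimizer

backend = Aer.get_backend('statevector_simulator')
for seed in (7, 21, 42, 123, 999):
    aqua_globals.random_seed = seed          # changes e.g. the random initial point
    qi = QuantumInstance(backend, seed_simulator=seed, seed_transpiler=seed)
    qaoa = QAOA(optimizer=SLSQP(), p=1, quantum_instance=qi)
    result = MinimumEigenOptimizer(qaoa).solve(qubo)
    print(seed, result.fval, result.x)
```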
@yaelbh I can start working on it on Thursday because of other priorities. It's great that you found the root cause; please let me know what changes you want me to make.
I think it's best to consult with someone from Aqua about the best way to use QAOA (for example, what is the recommended number of shots?). Also, following the discussion in #1463, it may help to stop fixing the simulator seed (i.e., remove the parameter that fixes the seed).
SLSQP is a gradient-based optimizer and by default it uses a finite-difference gradient where eps (the epsilon distance from the current point to the surrounding points) is very small. I can imagine that small perturbations here (i.e. sampling noise) can have quite an impact. Normally in such 'noisy' environments we suggest using an optimizer that is designed to work in the presence of noise, such as SPSA. In this case though, since include_custom is true, is the outcome not supposed to be ideally the same as using statevector?
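A minimal sketch of that suggestion, swapping SLSQP for SPSA; the `qi` quantum instance and `qubo` program are assumed from earlier and default SPSA settings are used.

```python
# Sketch: use a noise-tolerant optimizer (SPSA) instead of the default SLSQP.
# Assumes the QuantumInstance `qi` and QuadraticProgram `qubo` from earlier.
from qiskit.aqua.algorithms import QAOA
from qiskit.aqua.components.optimizers import SPSA
from qiskit.optimization.algorithms import MinimumEigenOptimizer

qaoa = QAOA(optimizer=SPSA(), p=1, include_custom=True, quantum_instance=qi)
print(MinimumEigenOptimizer(qaoa).solve(qubo).fval)
```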
Hmmm, I wonder if the check for the Aer qasm simulator is not returning correctly when the QasmSimulator is given directly like that with the MPS method. To include snapshots, the ExpectationFactory would need to select the AerPauliExpectation. Perhaps you would like to set the expectation manually and see if that works as expected, i.e. pass the expectation explicitly on the QAOA constructor.
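A sketch of setting the expectation manually as suggested above, assuming the Aqua 0.8 operator flow; whether this resolves the MPS discrepancy is exactly what is being asked.

```python
# Sketch: pass AerPauliExpectation explicitly so snapshot-based expectations are
# used even when the QasmSimulator is constructed directly with the MPS method.
from qiskit.providers.aer import QasmSimulator
from qiskit.aqua import QuantumInstance
from qiskit.aqua.algorithms import QAOA
from qiskit.aqua.components.optimizers import SLSQP
from qiskit.aqua.operators import AerPauliExpectation

mps_backend = QasmSimulator(method='matrix_product_state')
qi = QuantumInstance(mps_backend, shots=1024)
qaoa = QAOA(optimizer=SLSQP(), p=1,
            expectation=AerPauliExpectation(),  # select the snapshot expectation explicitly
            quantum_instance=qi)
```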
I did one QUBO with 100 variables from a real example through QAOA MPS; it ran OK for 4 days but provided wrong results. I am attaching it along with the results from CPLEX and MPS QAOA.
Information
Qiskit Aqua version:
'qiskit-terra': '0.16.0',
'qiskit-aer': '0.7.0',
'qiskit-ignis': '0.5.0',
'qiskit-ibmq-provider': '0.11.0',
'qiskit-aqua': '0.8.0',
'qiskit': '0.23.0'
Python version:
3.7.6
Operating system:
Windows 10
What is the current behavior?
CPLEX provides an optimized value of -23.5 while MPS produces values above +50 (see the attached notebook).
Steps to reproduce the problem
Run the attached notebook
What is the expected behavior?
The optimized value should be close to -23.5.
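For reference, a sketch of how the classical benchmark value could be obtained; `qubo` is assumed to be the QuadraticProgram from the attached notebook, CplexOptimizer requires the cplex package, and the exact NumPy solver is only practical at small qubit counts.

```python
# Sketch: obtain the classical reference value (about -23.5) for the attached QUBO.
from qiskit.aqua.algorithms import NumPyMinimumEigensolver
from qiskit.optimization.algorithms import CplexOptimizer, MinimumEigenOptimizer

print('CPLEX:', CplexOptimizer().solve(qubo).fval)
print('exact:', MinimumEigenOptimizer(NumPyMinimumEigensolver()).solve(qubo).fval)
```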
RQAOA_MPS.zip
Suggested solutions