
Reported errors on output don’t vary with num_samples in RB #428

Closed
pdc-quantum opened this issue Oct 17, 2021 · 7 comments · Fixed by #472
@pdc-quantum
Contributor

Information

  • Qiskit Experiments version: 0.2.0
  • Python version: 3.9.7
  • Operating system: Windows10

What is the current behavior?

In randomized benchmarking (RB), increasing the number of circuits at each sequence length (num_samples) doesn’t decrease the calculated errors (CE) on the output, including the fitting function parameters, the error per Clifford (EPC), and the error per gate (EPG). This occurs in standard and interleaved RB for one- and two-qubit experiments.
This is because the original code uses the count means as the ydata argument and the standard deviations of the counts (SD) as the sigma argument for scipy.optimize.curve_fit. The SD doesn’t shrink with num_samples, so the fitter returns a covariance matrix for the parameters that doesn’t vary with num_samples.
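The effect can be reproduced with a minimal scipy sketch on made-up data (this is illustrative only, not the qiskit-experiments code): when the standard error of the mean (SEM) is passed as sigma with absolute_sigma=True, the reported parameter errors shrink as num_samples grows, whereas the raw SD would not.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(x, a, alpha, b):
    # Standard RB decay model: a * alpha**x + b
    return a * alpha ** x + b

rng = np.random.default_rng(0)
lengths = np.arange(1, 201, 20)

def fitted_errors(num_samples):
    """Fit simulated RB data and return the fitter's parameter errors."""
    ideal = 0.5 * 0.98 ** lengths + 0.5
    # num_samples simulated survival probabilities per sequence length
    samples = ideal + rng.normal(0.0, 0.02, size=(num_samples, lengths.size))
    y_mean = samples.mean(axis=0)
    sem = samples.std(axis=0) / np.sqrt(num_samples)  # SEM, not SD
    # absolute_sigma=True so sigma sets the scale of the covariance matrix
    _, pcov = curve_fit(decay, lengths, y_mean, p0=[0.5, 0.98, 0.5],
                        sigma=sem, absolute_sigma=True)
    return np.sqrt(np.diag(pcov))

# Reported errors with 60 samples come out roughly half those with 15 samples.
```

Passing the SD instead of the SEM would leave the covariance matrix essentially unchanged as num_samples grows, which is the behavior reported here.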

Steps to reproduce the problem

  • Run experiments with increasing values of num_samples (see notebook 1 and slide 1 A).

  • Run enough experiments with independently generated random circuit sets while keeping num_samples constant. Compare the standard deviation (SD) of the output distributions with the CE (see notebook 2 and slide 2 A).

What is the expected behavior?

Increasing num_samples should logically decrease the CE, allowing narrower bounds on the output. Notably, the EPG upper bound is an important value to report in device benchmarking.

Suggested solutions

Use the standard errors of the count means (SEM) as the sigma argument for the fitter instead of the SD. The CE will then be reduced by a factor of about 1/sqrt(num_samples).
This can be done by modifying the following line of code:
https://github.com/Qiskit/qiskit-experiments/blob/39e2a1b6d65bd1df47a94a731b425f98d0f7e3e2/qiskit_experiments/library/randomized_benchmarking/rb_analysis.py#L161
which becomes:
y_err=sigma / np.sqrt(data.y.size/np.unique(data.x).size),
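As a sanity check on the divisor, `data.y.size / np.unique(data.x).size` is just num_samples: the total number of data points divided by the number of distinct sequence lengths. A toy example with made-up arrays (not the qiskit-experiments data structures):

```python
import numpy as np

# With num_samples points per sequence length, total points divided by the
# number of distinct lengths recovers num_samples.
x = np.repeat([1, 10, 50, 100], 5)      # 4 lengths, 5 samples each
y = np.zeros_like(x, dtype=float)       # one y value per circuit
num_samples = y.size / np.unique(x).size
print(num_samples)  # 5.0
```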

The correct implementation of the solution can be verified by running notebook 1 while importing the modified code, and observing that the CE are now reduced by a factor of about 1/(square root of num_samples). See slide 1 B.

The legitimacy of the solution can be assessed by running notebook 2 while importing the modified code, and observing that the CE are now in accordance with the experimentally observed standard deviations of the corresponding output distributions. See slide 2 B.

Finally, running the randomized benchmarking tutorial while importing the modified code demonstrates that the solution is valid for all RB modalities. See notebook 3.
Slides.pdf
notebooks.zip

@pdc-quantum pdc-quantum added the bug Something isn't working label Oct 17, 2021
@pdc-quantum pdc-quantum changed the title from “: Reported errors on output don’t vary with num_samples in RB” to “Reported errors on output don’t vary with num_samples in RB” Oct 17, 2021
@pdc-quantum
Contributor Author

pdc-quantum commented Oct 27, 2021

Additional check:
Does CE decrease when increasing num_samples (or nseeds) in ignis as it does in Qiskit experiments?
The answer is no, using code derived from the ignis based RB tutorial, as shown in the attached graph.
Fitter-reported errors and SD-based distribution errors agree well. Both decrease with the square root of num_samples.
The error on EPC is halved when num_samples is increased four times (15 to 60).
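The halving follows directly from SEM scaling: quadrupling num_samples multiplies the error by 1/sqrt(4).

```python
import numpy as np

# Error ratio when num_samples goes from 15 to 60: sqrt(15/60) = 1/2
ratio = np.sqrt(15 / 60)
print(ratio)  # 0.5
```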
[attached graph]

@chriseclectic
Collaborator

chriseclectic commented Oct 27, 2021

I think this line
https://github.com/Qiskit/qiskit-experiments/blob/8f0c736594c21d2d028394cf63218881551922f0/qiskit_experiments/curve_analysis/data_processing.py#L116
might need to be changed to something like

np.sqrt(np.mean((y_means[i] - ys) ** 2) / y_means[i].size) 

edit, fixed typo in y_means[i].size

@pdc-quantum
Contributor Author

pdc-quantum commented Oct 27, 2021

@chriseclectic Thanks for your suggestion. I am currently trying it with the demo notebook, but on my laptop it takes some time. I will post the resulting notebook here ASAP.

@chriseclectic
Collaborator

Making the above change is effectively replacing the sample std dev (old sigma) with the standard error of the mean (sigma/sqrt(n)), so the comment in the code should also be updated accordingly.

@pdc-quantum
Contributor Author

pdc-quantum commented Oct 28, 2021

Hi @chriseclectic, thanks again for your proposal. It finally led to a solution.

After the typo correction it didn’t work directly: y_means[i] has size 1 in this loop. Fortunately, ys has size num_samples, so it worked with the code:
y_sigmas[i] = np.sqrt(np.mean((y_means[i] - ys) ** 2) / ys.size)
I join the three test notebooks named 1c, 2c and 3c obtained with the code correction in data_processing.py.
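A quick check with made-up numbers confirms that this expression really is the SEM, i.e. the population SD divided by sqrt(n):

```python
import numpy as np

# The corrected expression equals SD / sqrt(n), using the population SD
# (ddof=0), which is what np.std computes by default.
ys = np.array([0.91, 0.88, 0.93, 0.90, 0.89])   # counts at one length
mean = ys.mean()
corrected = np.sqrt(np.mean((mean - ys) ** 2) / ys.size)
assert np.isclose(corrected, ys.std() / np.sqrt(ys.size))
```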

Note that the errors on EPG from the distributions and from the fitter agree more closely when num_samples is increased from 15 to 60 (as could be expected), and the error is indeed halved when stepping from 15 to 60.
notebooks_c.zip

[attached graph]

By the way, I checked the two nearly identical values in the table (6.7403e-06 and 6.7408e-06): it's not a typo!

@yaelbh
Collaborator

yaelbh commented Nov 29, 2021

What's the status of this issue?

@yaelbh
Collaborator

yaelbh commented Nov 29, 2021

Ok, I see #472

chriseclectic added a commit to pdc-quantum/qiskit-experiments that referenced this issue Dec 6, 2021
chriseclectic added a commit to pdc-quantum/qiskit-experiments that referenced this issue Dec 6, 2021
chriseclectic added a commit to pdc-quantum/qiskit-experiments that referenced this issue Dec 7, 2021
chriseclectic added a commit that referenced this issue Dec 7, 2021
Reported errors on output don’t vary with num_samples in RB:
#428

Co-authored-by: Christopher J. Wood <[email protected]>
paco-ri pushed a commit to paco-ri/qiskit-experiments that referenced this issue Jul 11, 2022
Reported errors on output don’t vary with num_samples in RB:
qiskit-community#428

Co-authored-by: Christopher J. Wood <[email protected]>