
Create benchmark for calibrator performance #1934

Open · natestemen opened this issue Jul 28, 2023 · 3 comments

@natestemen (Member)

Motivation

When making a change to the calibration module, it is important to know whether that change improves the end-to-end performance of the calibrator. Currently we have no way to track whether a change helps the calibrator find better error-mitigation techniques.

Description

Add a test (which can be run manually) to compare the performance of different iterations of the Calibrator. As a first pass, I propose computing the average error across a series of experiments (different circuit types, with executors using multiple noise models). As we improve the Calibrator, we should then see that error go down. This would also allow a change to slightly worsen the Calibrator in certain areas, as long as it improves performance over the entire test suite.

Implementation

I suggest we add a new file in mitiq/calibration containing a script that performs such an experiment. It should not run as part of CI; rather, the values should be computed and compared manually when opening PRs that modify this module. A rough sketch is included below.
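
For concreteness, here is a minimal sketch of what such a script could look like, assuming the documented Calibrator workflow (a sampling executor returning a MeasurementResult, then `.run()` and `.best_strategy()`). The depolarizing noise model, the noise levels, and the helper names (`make_sampling_executor`, `run_benchmark`) are placeholders rather than a proposed final design:

```python
"""Manual Calibrator benchmark sketch (not intended to run in CI)."""
import cirq
import numpy as np

from mitiq import Calibrator, MeasurementResult


def make_sampling_executor(noise_level: float, shots: int = 1000):
    """Build an executor that samples bitstrings from a depolarizing-noise simulator."""

    def execute(circuit: cirq.Circuit) -> MeasurementResult:
        noisy = circuit.with_noise(cirq.depolarize(p=noise_level))
        result = cirq.DensityMatrixSimulator().run(noisy, repetitions=shots)
        bitstrings = np.column_stack(list(result.measurements.values()))
        return MeasurementResult(bitstrings)

    return execute


def run_benchmark(noise_levels=(0.005, 0.01, 0.02)) -> None:
    """Run the Calibrator under several noise models and report the chosen strategies."""
    for p in noise_levels:
        cal = Calibrator(make_sampling_executor(p), frontend="cirq")
        cal.run()
        # Record which strategy wins at each noise level. A fuller version would
        # also aggregate the calibration errors into one score to track across PRs.
        print(f"noise level {p}: best strategy -> {cal.best_strategy()}")


if __name__ == "__main__":
    run_benchmark()
```

A fuller version would also aggregate the errors from each experiment into the single average-error number described above, so that PRs touching mitiq/calibration can be compared directly.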


cc @andreamari WDYT of this plan?

@andreamari (Member)

> cc @andreamari WDYT of this plan?

Makes sense to me.
Possible things to record (a sketch of collecting these follows the list):

  • root-mean-squared error (or another metric), and possibly runtime, without mitigation;
  • root-mean-squared error (or another metric), and possibly runtime, with the calibrator;
  • names and parameters of the optimal strategies. This is useful for understanding whether the calibrator always suggests the same strategy, or the same technique. Strategies that are never selected as optimal can be improved, or removed from the calibrator settings to speed up the calibration process.
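
For illustration, a rough sketch of collecting the first two items, assuming root-mean-squared error over a fixed set of benchmark circuits and wall-clock time as the metrics. Default ZNE is used here only as a stand-in for the calibrator-selected strategy, and the helper names (`expectation_executor`, `rmse_and_time`) are hypothetical:

```python
"""Sketch: RMSE and wall-clock time, with and without mitigation."""
import time

import cirq
import numpy as np

from mitiq import zne
from mitiq.benchmarks import generate_ghz_circuit


def expectation_executor(circuit: cirq.Circuit, noise_level: float = 0.01) -> float:
    """Return P(|0...0>) from a depolarizing-noise density-matrix simulation."""
    noisy = circuit.with_noise(cirq.depolarize(p=noise_level))
    rho = cirq.DensityMatrixSimulator().simulate(noisy).final_density_matrix
    return rho[0, 0].real


def rmse_and_time(circuits, ideal_values, mitigate: bool):
    """Root-mean-squared error and total runtime, with or without mitigation."""
    start = time.perf_counter()
    values = []
    for circuit in circuits:
        if mitigate:
            # Placeholder: default ZNE stands in for the strategy chosen by the calibrator.
            values.append(zne.execute_with_zne(circuit, expectation_executor))
        else:
            values.append(expectation_executor(circuit))
    elapsed = time.perf_counter() - start
    rmse = float(np.sqrt(np.mean((np.array(values) - np.array(ideal_values)) ** 2)))
    return rmse, elapsed


if __name__ == "__main__":
    circuits = [generate_ghz_circuit(n) for n in (2, 3, 4)]
    ideal = [0.5, 0.5, 0.5]  # for a GHZ state, P(all zeros) = 1/2
    for mitigate in (False, True):
        rmse, seconds = rmse_and_time(circuits, ideal, mitigate)
        label = "with ZNE (stand-in)" if mitigate else "unmitigated"
        print(f"{label}: RMSE = {rmse:.4f}, time = {seconds:.2f} s")
```

For the third item, the script could simply record the output of `calibrator.best_strategy()` for each run, which is enough to spot strategies that never win.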

@Misty-W Misty-W modified the milestone: 0.29.0 Aug 18, 2023
@kozhukalov (Contributor)

At this point the calibrator.best_strategy method returns the technique that gives the smallest errors across all benchmarks under test. So this comes down to agreeing on which circuits/executors/noise models/strategies are chosen for the benchmark, and testing many different combinations could be computationally expensive.

Sounds more like a task for Metriq.

By the way, I have been struggling to find any charts on Metriq.info where users can look up the best mitigation techniques, at least among those supported by Mitiq; I only see paper abstracts. Maybe I looked in the wrong place.

@kozhukalov (Contributor)

I would like to work on this issue.

@natestemen natestemen added this to the 0.32.0 milestone Nov 3, 2023
@natestemen natestemen assigned natestemen and kozhukalov and unassigned natestemen Nov 3, 2023
@natestemen natestemen removed this from the 0.32.0 milestone Dec 15, 2023