Motivation
When making a change to the calibration module, it is important to know whether we are improving the end performance of the calibrator. Currently we do not have a way to track whether a change actually leads to finding better error mitigation techniques.
Description
Add a test (which can be run manually) to compare the performance of different iterations of the `Calibrator`. As a first pass, I propose we compute the average error across a series of experiments (different circuit types, using executors with multiple noise models). Then, as we improve the `Calibrator`, we can hopefully drive that error down. This would also allow us to slightly worsen the `Calibrator` in certain areas, as long as it improves over our entire test suite.
Implementation
I suggest we add a new file in `mitiq/calibration` with a script that performs such an experiment. This should not be run as part of CI; rather, the values should be computed and compared when opening PRs that modify this module.

cc @andreamari WDYT of this plan?
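For concreteness, here is a minimal, hypothetical sketch of such a script (e.g. a file like `mitiq/calibration/benchmark_calibrator.py`; the file name is illustrative). The helpers `benchmark_circuits`, `noisy_executor`, and `mitigated_value` are placeholders, not part of the mitiq API; only `Calibrator` and numpy are assumed, and even the `Calibrator` constructor arguments may need adjusting.

```python
"""Manual benchmark for the Calibrator (not run in CI).

Run this script on `main` and on the PR branch, then compare the printed
RMS errors. All helpers below are placeholders to be filled in.
"""
import itertools

import numpy as np

from mitiq import Calibrator


def benchmark_circuits():
    """Placeholder: yield (circuit, ideal_expectation_value) pairs."""
    raise NotImplementedError


def noisy_executor(noise_model):
    """Placeholder: return an executor callable simulating `noise_model`."""
    raise NotImplementedError


def mitigated_value(calibrator, circuit, execute):
    """Placeholder: apply the calibrator's best strategy to `circuit`."""
    raise NotImplementedError


def rms_error(noise_models):
    """RMS error of calibrated mitigation over all circuit/noise combinations."""
    errors = []
    for (circuit, ideal), noise in itertools.product(
        benchmark_circuits(), noise_models
    ):
        execute = noisy_executor(noise)
        calibrator = Calibrator(execute, frontend="cirq")  # args may differ
        calibrator.run()
        errors.append(mitigated_value(calibrator, circuit, execute) - ideal)
    return float(np.sqrt(np.mean(np.square(errors))))


if __name__ == "__main__":
    # Fill in the noise models before running.
    print(f"RMS error: {rms_error(noise_models=[]):.4f}")
```

Printing a single aggregate number keeps the comparison between `main` and a PR branch straightforward.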
For each benchmark, it would be useful to record (a sketch of such a record follows this list):

- the root-mean-squared error (or another metric), and possibly the runtime, without mitigation;
- the root-mean-squared error (or another metric), and possibly the runtime, with the calibrator;
- the name and parameters of the optimal strategies. This can be useful to understand whether the calibrator always suggests the same strategy, or the same technique. Strategies that are never selected as optimal could be improved, or removed from the calibrator settings to speed up the calibration process.
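As a sketch, the per-benchmark record could look like the following; the field names are purely illustrative and not part of any existing mitiq interface:

```python
from dataclasses import dataclass, field


@dataclass
class BenchmarkRecord:
    """One row of the comparison report (illustrative field names)."""

    circuit_type: str
    noise_model: str
    rmse_unmitigated: float
    rmse_calibrated: float
    runtime_unmitigated_s: float
    runtime_calibrated_s: float
    best_strategy_name: str
    best_strategy_params: dict = field(default_factory=dict)
```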
At the moment, the `calibrator.best_strategy` method returns the technique that gives the smallest error across all benchmarks under test. So this is mostly a matter of agreeing on which circuits/executors/noise models/strategies are chosen for the benchmark, and testing many different combinations could be computationally expensive.
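For reference, the workflow being discussed is roughly the following; the `Calibrator` constructor arguments are assumptions and may differ between mitiq versions:

```python
from mitiq import Calibrator


def execute(circuit):
    """Placeholder executor: run `circuit` on a noisy backend and return a value."""
    raise NotImplementedError


calibrator = Calibrator(execute, frontend="cirq")  # constructor args may differ
calibrator.run()

# A single strategy, chosen by aggregating the error over all benchmark problems.
strategy = calibrator.best_strategy()
```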
Sounds more like a task for Metriq.
By the way, I have been struggling to find any charts on Metriq.info where users can look up the best mitigation techniques, at least among those supported by Mitiq; I only found paper abstracts. Maybe I looked in the wrong place.