Add option to control figure generation in composite experiments #1240

Merged
Commits
29 commits
42155b6
add selective figure generation param
coruscating Jul 28, 2023
68473cd
expose compositeexperiment and update tutorial
coruscating Jul 30, 2023
bbc49a9
fix bug in curve analysis code
coruscating Jul 30, 2023
7f34d93
lint
coruscating Jul 30, 2023
eb4d03d
update docs
coruscating Sep 5, 2023
1a3d02f
Merge remote-tracking branch 'upstream/main' into selective-figure-ge…
coruscating Sep 5, 2023
5fc1017
update test
coruscating Sep 6, 2023
b8defd2
update tests
coruscating Sep 6, 2023
c358938
add selective figure generation param
coruscating Jul 28, 2023
4ac13a1
expose compositeexperiment and update tutorial
coruscating Jul 30, 2023
eb2f8ce
fix bug in curve analysis code
coruscating Jul 30, 2023
fdfcec5
lint
coruscating Jul 30, 2023
31f7778
update docs
coruscating Sep 5, 2023
9f7cb6f
update test
coruscating Sep 6, 2023
b3997cf
update tests
coruscating Sep 6, 2023
f12a3bf
Merge branch 'selective-figure-generation' of github.com:coruscating/…
coruscating Sep 6, 2023
c0cdedd
fixed logic and added compositecurveanalysis test
coruscating Sep 6, 2023
6a2c3e3
update howto
coruscating Sep 6, 2023
0cc6a16
fix test
coruscating Sep 7, 2023
c32423b
update quality criteria for analysis classes
coruscating Sep 7, 2023
23d8525
lint
coruscating Sep 7, 2023
7ba1285
Merge remote-tracking branch 'upstream/main' into selective-figure-ge…
coruscating Sep 7, 2023
f198864
merged main
coruscating Sep 29, 2023
e8eced9
address review comments
coruscating Sep 29, 2023
63a3b1c
Merge branch 'main' into selective-figure-generation
coruscating Sep 29, 2023
fdedbf4
fix composite experiment classes
coruscating Sep 29, 2023
af2d930
Merge branch 'selective-figure-generation' of github.com:coruscating/…
coruscating Sep 29, 2023
39d6ce1
lint
coruscating Sep 29, 2023
2de644c
Merge branch 'main' into selective-figure-generation
coruscating Oct 12, 2023
44 changes: 44 additions & 0 deletions docs/howtos/figure_generation.rst
@@ -0,0 +1,44 @@
Control figure generation
=========================

Problem
-------

You want to change the default behavior where figures are generated with every experiment.

Solution
--------

For a single non-composite experiment, figure generation can be switched off by setting the analysis
option ``plot`` to ``False``:

.. jupyter-input::

experiment.analysis.set_options(plot=False)

For composite experiments, there is a ``generate_figures`` parameter which controls how child figures are
generated. There are three options:

- ``always``: The default behavior, generate figures for each child experiment.
- ``never``: Never generate figures for any child experiment.
- ``selective``: Only generate figures for analysis results where ``quality`` is ``bad``. This is useful
for large composite experiments where you only want to examine qubits with problems.

This parameter should be set upon composite experiment instantiation:

.. jupyter-input::

parallel_exp = ParallelExperiment(
[T1(physical_qubits=(i,), delays=delays) for i in range(2)], generate_figures="selective"
)

Discussion
----------

These options are useful for large composite experiments, where generating all figures incurs a significant
overhead.

See Also
--------

* The `Visualization tutorial <visualization.html>`_ discusses how to customize figures
20 changes: 17 additions & 3 deletions docs/tutorials/getting_started.rst
@@ -238,6 +238,8 @@ supports can be set:

exp.set_run_options(shots=1000,
meas_level=MeasLevel.CLASSIFIED)
print(f"Shots set to {exp.run_options.get('shots')}, "
      f"measurement level set to {exp.run_options.get('meas_level')}")

Consult the documentation of the run method of your
specific backend type for valid options.
@@ -253,6 +255,7 @@ before execution:
exp.set_transpile_options(scheduling_method='asap',
optimization_level=3,
basis_gates=["x", "sx", "rz"])
print(f"Transpile options are {exp.transpile_options}")

Consult the documentation of :func:`qiskit.compiler.transpile` for valid options.

@@ -267,14 +270,15 @@ upon experiment instantiation, but can also be explicitly set via
exp = T1(physical_qubits=(0,), delays=delays)
new_delays=np.arange(1e-6, 600e-6, 50e-6)
exp.set_experiment_options(delays=new_delays)
print(f"Experiment options are {exp.experiment_options}")

Consult the :doc:`API documentation </apidocs/index>` for the options of each experiment
class.

Analysis options
----------------

These options are unique to each analysis class. Unlike the other options, analyis
These options are unique to each analysis class. Unlike the other options, analysis
options are not set directly on the experiment object but through a method of the
associated ``analysis``:

@@ -295,7 +299,7 @@ Running experiments on multiple qubits
======================================

To run experiments across many qubits of the same device, we use **composite
experiments**. A composite experiment is a parent object that contains one or more child
experiments**. A :class:`.CompositeExperiment` is a parent object that contains one or more child
experiments, which may themselves be composite. There are two core types of composite
experiments:

@@ -323,7 +327,7 @@ Note that when the transpile and run options are set for a composite experiment,
child experiments' options are also set recursively to the same values. Let's examine
how the parallel experiment is constructed by visualizing child and parent circuits. The
child experiments can be accessed via the
:meth:`~.ParallelExperiment.component_experiment` method, which indexes from zero:
:meth:`~.CompositeExperiment.component_experiment` method, which indexes from zero:

.. jupyter-execute::

@@ -333,6 +337,16 @@ child experiments can be accessed via the

parallel_exp.component_experiment(1).circuits()[0].draw(output='mpl')

Similarly, the child analyses can be accessed via :meth:`.CompositeAnalysis.component_analysis` or via
the analysis of the child experiment class:

.. jupyter-execute::

parallel_exp.component_experiment(0).analysis.set_options(plot=True)

# This should print out what we set because it's the same option
print(parallel_exp.analysis.component_analysis(0).options.get("plot"))

The circuits of all experiments assume they're acting on virtual qubits starting from
index 0. In the case of a parallel experiment, the child experiment
circuits are composed together and then reassigned virtual qubit indices:
7 changes: 5 additions & 2 deletions docs/tutorials/index.rst
@@ -10,11 +10,14 @@ They're suitable for beginners who want to get started with the package.
The Basics
----------

.. This toctree is hardcoded since Getting Started is already included in the sidebar for more visibility.

.. toctree::
:maxdepth: 2
:maxdepth: 1

intro

getting_started

Exploring Modules
-----------------

7 changes: 3 additions & 4 deletions qiskit_experiments/curve_analysis/base_curve_analysis.py
@@ -154,8 +154,8 @@ def _default_options(cls) -> Options:
the analysis result.
plot_raw_data (bool): Set ``True`` to draw processed data points,
dataset without formatting, on canvas. This is ``False`` by default.
plot (bool): Set ``True`` to create figure for fit result.
This is ``True`` by default.
plot (bool): Set ``True`` to create figure for fit result or ``False`` to
not create a figure. This overrides the behavior of ``generate_figures``.
return_fit_parameters (bool): Set ``True`` to return all fit model parameters
with details of the fit outcome. Default to ``True``.
return_data_points (bool): Set ``True`` to include in the analysis result
@@ -207,7 +207,6 @@ def _default_options(cls) -> Options:

options.plotter = CurvePlotter(MplDrawer())
options.plot_raw_data = False
options.plot = True
options.return_fit_parameters = True
options.return_data_points = False
options.data_processor = None
@@ -338,7 +337,7 @@ def _evaluate_quality(
Returns:
String that represents fit result quality. Usually "good" or "bad".
"""
if fit_data.reduced_chisq < 3.0:
if 0 < fit_data.reduced_chisq < 3.0:
Collaborator (author):

This ``chisq > 0`` condition has been added to the ``quality == "good"`` check because of edge cases where a chi-squared of zero is a sign that something is wrong (in this case, running RB with only one sample per length).

[screenshot: Pasted image 20230728110246]

Contributor:

Thank you for the explanation. This change itself makes sense to me. (I'm not sure we can really determine whether a fit is good or bad from a single value, but I know that's a separate issue.)

return "good"
return "bad"
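The updated criterion can be sketched as a standalone function. This is a hedged simplification: the real method receives a ``CurveFitResult`` and only its ``reduced_chisq`` field is modeled here.

```python
# Hedged sketch of the updated quality check, assuming only the reduced
# chi-squared matters (the real method receives a CurveFitResult object).
def evaluate_quality(reduced_chisq: float) -> str:
    """Return "good" only when 0 < reduced chi-squared < 3."""
    if 0 < reduced_chisq < 3.0:
        return "good"
    return "bad"

# A reduced chi-squared of exactly zero is now "bad": it can signal a
# degenerate fit, e.g. RB run with a single sample per length.
print(evaluate_quality(1.2))  # good
print(evaluate_quality(0.0))  # bad
```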

54 changes: 34 additions & 20 deletions qiskit_experiments/curve_analysis/composite_curve_analysis.py
@@ -279,6 +279,15 @@ def _run_analysis(
experiment_data: ExperimentData,
) -> Tuple[List[AnalysisResultData], List["matplotlib.figure.Figure"]]:

# Flag for plotting can be "always", "never", or "selective"
# the analysis option overrides self._generate_figures if set
if self.options.get("plot", None):
plot = "always"
elif self.options.get("plot", None) is False:
plot = "never"
else:
plot = getattr(self, "_generate_figures", "always")

analysis_results = []

fit_dataset = {}
@@ -294,26 +303,8 @@ def _run_analysis(
models=analysis.models,
)

if self.options.plot and analysis.options.plot_raw_data:
for model in analysis.models:
sub_data = processed_data.get_subset_of(model._name)
self.plotter.set_series_data(
model._name + f"_{analysis.name}",
x=sub_data.x,
y=sub_data.y,
)

# Format data
formatted_data = analysis._format_data(processed_data)
if self.options.plot:
for model in analysis.models:
sub_data = formatted_data.get_subset_of(model._name)
self.plotter.set_series_data(
model._name + f"_{analysis.name}",
x_formatted=sub_data.x,
y_formatted=sub_data.y,
y_formatted_err=sub_data.y_err,
)

# Run fitting
fit_data = analysis._run_curve_fit(
@@ -327,6 +318,29 @@
else:
quality = "bad"

# After the quality is determined, plot can become a boolean flag for whether
# to generate the figure
plot_bool = plot == "always" or (plot == "selective" and quality == "bad")

if plot_bool:
if analysis.options.plot_raw_data:
for model in analysis.models:
sub_data = processed_data.get_subset_of(model._name)
self.plotter.set_series_data(
model._name + f"_{analysis.name}",
x=sub_data.x,
y=sub_data.y,
)
else:
for model in analysis.models:
sub_data = formatted_data.get_subset_of(model._name)
self.plotter.set_series_data(
model._name + f"_{analysis.name}",
x_formatted=sub_data.x,
y_formatted=sub_data.y,
y_formatted_err=sub_data.y_err,
)

if self.options.return_fit_parameters:
overview = AnalysisResultData(
name=PARAMS_ENTRY_PREFIX + analysis.name,
@@ -345,7 +359,7 @@
)

# Draw fit result
if self.options.plot:
if plot_bool:
x_interp = np.linspace(
np.min(formatted_data.x), np.max(formatted_data.x), num=100
)
@@ -395,7 +409,7 @@ def _run_analysis(
analysis_results.extend(primary_results)
self.plotter.set_supplementary_data(primary_results=primary_results)

if self.options.plot:
if plot_bool:
return analysis_results, [self.plotter.figure()]

return analysis_results, []
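The option-resolution logic at the top of ``_run_analysis`` can be sketched in isolation. ``resolve_plot_mode`` and the plain dict below are illustrative stand-ins for the analysis options object, not library API.

```python
# Illustrative stand-in for the resolution at the top of _run_analysis:
# an explicitly set ``plot`` analysis option overrides the experiment-level
# ``generate_figures`` setting ("always" / "never" / "selective").
def resolve_plot_mode(options: dict, generate_figures: str = "always") -> str:
    if options.get("plot", None):           # plot=True forces figures on
        return "always"
    if options.get("plot", None) is False:  # plot=False forces figures off
        return "never"
    return generate_figures                 # otherwise defer to generate_figures

print(resolve_plot_mode({}, "selective"))           # selective
print(resolve_plot_mode({"plot": True}, "never"))   # always
print(resolve_plot_mode({"plot": False}, "always")) # never
```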
55 changes: 34 additions & 21 deletions qiskit_experiments/curve_analysis/curve_analysis.py
@@ -386,6 +386,15 @@ def _run_analysis(
self, experiment_data: ExperimentData
) -> Tuple[List[AnalysisResultData], List["pyplot.Figure"]]:

# Flag for plotting can be "always", "never", or "selective"
# the analysis option overrides self._generate_figures if set
if self.options.get("plot", None):
plot = "always"
elif self.options.get("plot", None) is False:
plot = "never"
else:
plot = getattr(self, "_generate_figures", "always")

# Prepare for fitting
self._initialize(experiment_data)

@@ -397,26 +406,8 @@ def _run_analysis(
models=self._models,
)

if self.options.plot and self.options.plot_raw_data:
for model in self._models:
sub_data = processed_data.get_subset_of(model._name)
self.plotter.set_series_data(
model._name,
x=sub_data.x,
y=sub_data.y,
)

# Format data
formatted_data = self._format_data(processed_data)
if self.options.plot:
for model in self._models:
sub_data = formatted_data.get_subset_of(model._name)
self.plotter.set_series_data(
model._name,
x_formatted=sub_data.x,
y_formatted=sub_data.y,
y_formatted_err=sub_data.y_err,
)

# Run fitting
fit_data = self._run_curve_fit(
@@ -426,10 +417,32 @@

if fit_data.success:
quality = self._evaluate_quality(fit_data)
self.plotter.set_supplementary_data(fit_red_chi=fit_data.reduced_chisq)
else:
quality = "bad"

# After the quality is determined, plot can become a boolean flag for whether
# to generate the figure
plot_bool = plot == "always" or (plot == "selective" and quality == "bad")

if plot_bool:
self.plotter.set_supplementary_data(fit_red_chi=fit_data.reduced_chisq)
for model in self._models:
if self.options.plot_raw_data:
sub_data = processed_data.get_subset_of(model._name)
self.plotter.set_series_data(
model._name,
x=sub_data.x,
y=sub_data.y,
)
else:
sub_data = formatted_data.get_subset_of(model._name)
self.plotter.set_series_data(
model._name,
x_formatted=sub_data.x,
y_formatted=sub_data.y,
y_formatted_err=sub_data.y_err,
)

if self.options.return_fit_parameters:
# Store fit status overview entry regardless of success.
# This is sometimes useful when debugging the fitting code.
@@ -451,7 +464,7 @@
self.plotter.set_supplementary_data(primary_results=primary_results)

# Draw fit curves and report
if self.options.plot:
if plot_bool:
for model in self._models:
sub_data = formatted_data.get_subset_of(model._name)
if sub_data.x.size == 0:
@@ -489,7 +502,7 @@
)

# Finalize plot
if self.options.plot:
if plot_bool:
return analysis_results, [self.plotter.figure()]

return analysis_results, []
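Combining the resolved mode with the fit quality gives the final figure decision. This truth-table sketch (function name is illustrative) mirrors the ``plot_bool`` expression used in both analysis classes above.

```python
# Sketch of the final figure-generation decision: figures are produced when
# the mode is "always", or when it is "selective" and the fit quality is "bad".
def should_generate_figure(mode: str, quality: str) -> bool:
    return mode == "always" or (mode == "selective" and quality == "bad")

for mode in ("always", "never", "selective"):
    for quality in ("good", "bad"):
        print(f"{mode:9s} {quality:4s} -> {should_generate_figure(mode, quality)}")
```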
4 changes: 2 additions & 2 deletions qiskit_experiments/curve_analysis/standard_analysis/decay.py
@@ -98,13 +98,13 @@ def _evaluate_quality(self, fit_data: curve.CurveFitResult) -> Union[str, None]:
"""Algorithmic criteria for whether the fit is good or bad.

A good fit has:
- a reduced chi-squared lower than three
- a reduced chi-squared lower than three and greater than zero
- tau error is less than its value
"""
tau = fit_data.ufloat_params["tau"]

criteria = [
fit_data.reduced_chisq < 3,
0 < fit_data.reduced_chisq < 3,
curve.utils.is_error_not_significant(tau),
]

@@ -185,14 +185,14 @@ def _evaluate_quality(self, fit_data: curve.CurveFitResult) -> Union[str, None]:
"""Algorithmic criteria for whether the fit is good or bad.

A good fit has:
- a reduced chi-squared lower than three,
- a reduced chi-squared lower than three and greater than zero,
- a measured angle error that is smaller than the allowed maximum good angle error.
This quantity is set in the analysis options.
"""
fit_d_theta = fit_data.ufloat_params["d_theta"]

criteria = [
fit_data.reduced_chisq < 3,
0 < fit_data.reduced_chisq < 3,
abs(fit_d_theta.nominal_value) < abs(self.options.max_good_angle_error),
]

@@ -126,7 +126,7 @@ def _evaluate_quality(self, fit_data: curve.CurveFitResult) -> Union[str, None]:
"""Algorithmic criteria for whether the fit is good or bad.

A good fit has:
- a reduced chi-squared less than 3,
- a reduced chi-squared less than 3 and greater than zero,
- a peak within the scanned frequency range,
- a standard deviation that is not larger than the scanned frequency range,
- a standard deviation that is wider than the smallest frequency increment,
@@ -149,7 +149,7 @@ def _evaluate_quality(self, fit_data: curve.CurveFitResult) -> Union[str, None]:
fit_data.x_range[0] <= fit_freq.n <= fit_data.x_range[1],
1.5 * freq_increment < fit_sigma.n,
fit_width_ratio < 0.25,
fit_data.reduced_chisq < 3,
0 < fit_data.reduced_chisq < 3,
curve.utils.is_error_not_significant(fit_sigma),
snr > 2,
]