PR for correcting new suggestions [Draft] #1414

Open
wants to merge 69 commits into base: main

Changes from 19 commits (69 commits total)

Commits
73b7e86
Restored lost codes
Musa-Sina-Ertugrul Feb 18, 2024
7bbaf2b
bootstrapping not matched exp_data
Musa-Sina-Ertugrul Feb 18, 2024
9903513
Added and adjusted tests for new workflow #1268
Musa-Sina-Ertugrul Feb 20, 2024
e2965c2
new workflow named properly #1268
Musa-Sina-Ertugrul Feb 29, 2024
9c4d859
Solved service problem #1268
Musa-Sina-Ertugrul Feb 29, 2024
b9d4822
Deprecated _add_cononical_dict_data #1268
Musa-Sina-Ertugrul Mar 2, 2024
a1c5d4f
Replaced deprecated method #1268
Musa-Sina-Ertugrul Mar 2, 2024
ac160b5
Updated new workflow tests #1268
Musa-Sina-Ertugrul Mar 2, 2024
4f69eb0
Reformatting #1268
Musa-Sina-Ertugrul Mar 5, 2024
de572f2
Merge branch 'Qiskit-Extensions:main' into test
Musa-Sina-Ertugrul Mar 12, 2024
a9af57d
Commit before pull
Musa-Sina-Ertugrul Mar 12, 2024
2b2413e
pylint check
Musa-Sina-Ertugrul Mar 12, 2024
d963884
Tests corrected for bootstrapping #1268
Musa-Sina-Ertugrul Mar 14, 2024
7839c58
FakeProvider error solved #1268
Musa-Sina-Ertugrul Mar 14, 2024
9be5d2a
Adjusting bootstrapping tests #1268
Musa-Sina-Ertugrul Mar 16, 2024
debed9c
Finished tests #1268
Musa-Sina-Ertugrul Apr 15, 2024
b33387b
Algorithm correction #1268
Musa-Sina-Ertugrul Apr 15, 2024
8edaa7b
Separated init children data, create children #1268
Musa-Sina-Ertugrul Apr 15, 2024
c005aca
Reformatting and linting
Musa-Sina-Ertugrul Apr 17, 2024
e4c20a1
Merge branch 'main' into test
nkanazawa1989 Apr 19, 2024
d9dd408
Commit before merge #1268
Musa-Sina-Ertugrul Apr 22, 2024
af26522
Refactoring _run_analysis in composite_analysis #1268
Musa-Sina-Ertugrul Oct 14, 2023
e0b1f93
Updated according to @nkanazawa1989 's suggestion #1268
Musa-Sina-Ertugrul Oct 17, 2023
df4eb66
Updated _add_data #1268
Musa-Sina-Ertugrul Oct 24, 2023
c6b0565
Updated add_data for early initialization #1268
Musa-Sina-Ertugrul Oct 24, 2023
89228cd
commit before pull
Musa-Sina-Ertugrul Oct 29, 2023
634b0b2
Updated add_data method #1268
Musa-Sina-Ertugrul Oct 29, 2023
9b55372
Updated add_data method #1268
Musa-Sina-Ertugrul Nov 4, 2023
7079f67
Passed test new start
Musa-Sina-Ertugrul Nov 15, 2023
7418966
Updated add_data tests passed #1268
Musa-Sina-Ertugrul Nov 15, 2023
4c3dad6
Updated add_data tests passed #1268
Musa-Sina-Ertugrul Nov 15, 2023
b217a64
Updated add_data and deprecated _add_data #1268
Musa-Sina-Ertugrul Dec 10, 2023
52af226
Updated add_data and _add_result_data, deprecated _add_data #1268
Musa-Sina-Ertugrul Dec 10, 2023
aaffc97
Updated add_data and _add_result_data, deprecated _add_data #1268
Musa-Sina-Ertugrul Dec 10, 2023
742b3aa
Updated add_data #1268
Musa-Sina-Ertugrul Dec 17, 2023
a8aa908
Updated add_data, _run_analysis, composite_test #1268
Musa-Sina-Ertugrul Dec 18, 2023
b066a4d
commit before second approach
Musa-Sina-Ertugrul Dec 19, 2023
35708e7
Tests passed, finished second approach add_data #1268
Musa-Sina-Ertugrul Dec 19, 2023
47e01f4
Updated add_data #1268
Musa-Sina-Ertugrul Dec 20, 2023
1815edc
Tests passed second approach, Updated add_data #1268
Musa-Sina-Ertugrul Dec 20, 2023
f9ab8a5
Test passed, recursive approach started #1268
Musa-Sina-Ertugrul Dec 20, 2023
fc3efea
Tests passed, updated recursive approach, updated add_data #1268
Musa-Sina-Ertugrul Dec 21, 2023
5713ab1
Update qiskit_experiments/framework/experiment_data.py
Musa-Sina-Ertugrul Dec 28, 2023
a1f2864
Started on new suggestions, suggestion 1 finished #1268
Musa-Sina-Ertugrul Dec 28, 2023
325dfb8
Waiting for response, non-bootstrapped exp_data appears suddenly after runn…
Musa-Sina-Ertugrul Dec 28, 2023
caf1667
Fix marginalize problems
nkanazawa1989 Jan 5, 2024
c244df3
Bump actions/checkout from 3 to 4 (#1378)
dependabot[bot] Jan 26, 2024
f5c3ded
Bump actions/setup-python from 4 to 5 (#1377)
dependabot[bot] Jan 27, 2024
f18d6d0
Solving merge problems #1268
Musa-Sina-Ertugrul Apr 25, 2024
99856b2
Solved merge problems, solved artifact problem #1268
Musa-Sina-Ertugrul Apr 25, 2024
8e4847e
Updating bootstrapping test #1268
Musa-Sina-Ertugrul Apr 25, 2024
d0e597c
Reformatting and linting #1268
Musa-Sina-Ertugrul Apr 25, 2024
bb9aba5
Writing releasenote #1268
Musa-Sina-Ertugrul Apr 25, 2024
82219f8
Solved merge conflict #1268
Musa-Sina-Ertugrul Apr 26, 2024
23f6f1d
Updated bootstrapping doc #1268
Musa-Sina-Ertugrul May 1, 2024
a5959a1
Added raising #1268
Musa-Sina-Ertugrul May 1, 2024
c92ce3a
Updated for multithreading #1268
Musa-Sina-Ertugrul May 1, 2024
af18d0c
Reformatting
Musa-Sina-Ertugrul May 1, 2024
8452119
Reverting unnecessary changes
Musa-Sina-Ertugrul May 10, 2024
b2a997a
Reverting unnecessary changes
Musa-Sina-Ertugrul May 10, 2024
9af62f3
Removing unnecessary changes
Musa-Sina-Ertugrul May 10, 2024
f1b7469
Fixed typos
Musa-Sina-Ertugrul May 10, 2024
2af640b
Fixing typos
Musa-Sina-Ertugrul May 10, 2024
b6909d0
linting
Musa-Sina-Ertugrul May 10, 2024
7b2a6d3
Updated docstring
Musa-Sina-Ertugrul May 14, 2024
4d105e7
Reformatting
Musa-Sina-Ertugrul May 15, 2024
4e6b7a2
Updating exceptions
Musa-Sina-Ertugrul May 17, 2024
196cd7f
Updating exceptions
Musa-Sina-Ertugrul May 17, 2024
0774a1d
Updated exceptions
Musa-Sina-Ertugrul May 17, 2024
6 changes: 3 additions & 3 deletions docs/_ext/custom_styles/utils.py
@@ -25,8 +25,8 @@
from qiskit_experiments.framework import BaseExperiment


_parameter_regex = re.compile(r'(.+?)\(\s*(.*[^\s]+)\s*\):(.*[^\s]+)')
_rest_role_regex = re.compile(r':(.+?) (.+?):\s*(.*[^\s]+)')
_parameter_regex = re.compile(r"(.+?)\(\s*(.*[^\s]+)\s*\):(.*[^\s]+)")
_rest_role_regex = re.compile(r":(.+?) (.+?):\s*(.*[^\s]+)")


def _trim_empty_lines(docstring_lines: List[str]) -> List[str]:
@@ -80,7 +80,7 @@ def _generate_analysis_ref(
raise Exception(f"Option docstring for analysis_ref is missing.")

analysis_ref_lines = []
for line in lines[analysis_ref_start + 1:]:
for line in lines[analysis_ref_start + 1 :]:
# add lines until hitting to next section
if line.startswith("# section:"):
break
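
For reference, a short illustration of what these two (re-quoted but otherwise unchanged) patterns match; the example docstring lines below are assumptions for illustration, not strings from the repository:

import re

# Patterns copied from docs/_ext/custom_styles/utils.py (only the quote style changed in this PR).
_parameter_regex = re.compile(r"(.+?)\(\s*(.*[^\s]+)\s*\):(.*[^\s]+)")
_rest_role_regex = re.compile(r":(.+?) (.+?):\s*(.*[^\s]+)")

# Google-style option line: "name (type): description"
match = _parameter_regex.match("shots (int): Number of shots to run.")
print(match.group(1).strip(), match.group(2), match.group(3).strip())
# shots int Number of shots to run.

# reST field line: ":role name: description"
match = _rest_role_regex.match(":param shots: Number of shots to run.")
print(match.group(1), match.group(2), match.group(3))
# param shots Number of shots to run.
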
241 changes: 14 additions & 227 deletions qiskit_experiments/framework/composite/composite_analysis.py
@@ -13,15 +13,10 @@
Composite Experiment Analysis class.
"""

from typing import List, Dict, Union, Optional, Tuple
from typing import List, Union, Optional, Tuple
import warnings
import numpy as np
from qiskit.result import marginal_distribution
from qiskit.result.postprocess import format_counts_memory
from qiskit_experiments.framework import BaseAnalysis, ExperimentData
from qiskit_experiments.framework.analysis_result_data import AnalysisResultData
from qiskit_experiments.framework.base_analysis import _requires_copy
from qiskit_experiments.exceptions import AnalysisError


class CompositeAnalysis(BaseAnalysis):
@@ -120,242 +115,36 @@ def copy(self):
ret._analyses = [analysis.copy() for analysis in ret._analyses]
return ret

def run(
self,
experiment_data: ExperimentData,
replace_results: bool = False,
**options,
) -> ExperimentData:
# Make a new copy of experiment data if not updating results
if not replace_results and _requires_copy(experiment_data):
experiment_data = experiment_data.copy()

if not self._flatten_results:
# Initialize child components if they are not initialized
# This only needs to be done if results are not being flattened
self._add_child_data(experiment_data)

# Run analysis with replace_results = True since we have already
# created the copy if it was required
return super().run(experiment_data, replace_results=True, **options)

def _run_analysis(self, experiment_data: ExperimentData):
# Return list of experiment data containers for each component experiment
# containing the marginalized data from the composite experiment
component_expdata = self._component_experiment_data(experiment_data)
child_data = experiment_data.child_data()

# Run the component analysis on each component data
for i, sub_expdata in enumerate(component_expdata):
if len(self._analyses) != len(child_data):
# Child data is automatically created when composite result data is added.
# Validate that child data size matches with number of analysis entries.
experiment_data.create_child_data()

for sub_analysis, sub_data in zip(self._analyses, child_data):
# Since copy for replace result is handled at the parent level
# we always run with replace result on component analysis
self._analyses[i].run(sub_expdata, replace_results=True)
sub_analysis.run(sub_data, replace_results=True)

# Analysis is running in parallel so we add loop to wait
# for all component analysis to finish before returning
# the parent experiment analysis results
for sub_expdata in component_expdata:
sub_expdata.block_for_results()
for sub_data in child_data:
sub_data.block_for_results()

# Optionally flatten results from all component experiments
# for adding to the main experiment data container
if self._flatten_results:
analysis_results, figures = self._combine_results(component_expdata)
analysis_results, figures = self._combine_results(child_data)

for res in analysis_results:
# Override experiment ID because entries are flattened
res.experiment_id = experiment_data.experiment_id
return analysis_results, figures
return [], []

def _component_experiment_data(self, experiment_data: ExperimentData) -> List[ExperimentData]:
"""Return a list of marginalized experiment data for component experiments.

Args:
experiment_data: a composite experiment data container.

Returns:
The list of analysis-ready marginalized experiment data for each
component experiment.

Raises:
AnalysisError: If the component experiment data cannot be extracted.
"""
if not self._flatten_results:
# Retrieve child data for component experiments for updating
component_index = experiment_data.metadata.get("component_child_index", [])
if not component_index:
raise AnalysisError("Unable to extract component child experiment data")
component_expdata = [experiment_data.child_data(i) for i in component_index]
else:
# Initialize temporary ExperimentData containers for
# each component experiment to analysis on. These will
# not be saved but results and figures will be collected
# from them
component_expdata = self._initialize_component_experiment_data(experiment_data)

# Compute marginalize data for each component experiment
marginalized_data = self._marginalized_component_data(experiment_data.data())

# Add the marginalized component data and component job metadata
# to each component child experiment. Note that this will clear
# any currently stored data in the experiment. Since copying of
# child data is handled by the `replace_results` kwarg of the
# parent container it is safe to always clear and replace the
# results of child containers in this step
for sub_expdata, sub_data in zip(component_expdata, marginalized_data):
# Clear any previously stored data and add marginalized data
sub_expdata._result_data.clear()
sub_expdata.add_data(sub_data)

return component_expdata

def _marginalized_component_data(self, composite_data: List[Dict]) -> List[List[Dict]]:
"""Return marginalized data for component experiments.

Args:
composite_data: a list of composite experiment circuit data.

Returns:
A List of lists of marginalized circuit data for each component
experiment in the composite experiment.
"""
# Marginalize data
marginalized_data = {}
for datum in composite_data:
metadata = datum.get("metadata", {})

# Add marginalized data to sub experiments
if "composite_clbits" in metadata:
composite_clbits = metadata["composite_clbits"]
else:
composite_clbits = None

# Pre-process the memory if any to avoid redundant calls to format_counts_memory
f_memory = self._format_memory(datum, composite_clbits)

for i, index in enumerate(metadata["composite_index"]):
if index not in marginalized_data:
# Initialize data list for marginalized
marginalized_data[index] = []
sub_data = {
k: v for k, v in datum.items() if k not in ("metadata", "counts", "memory")
}
sub_data["metadata"] = metadata["composite_metadata"][i]
if "counts" in datum:
if composite_clbits is not None:
sub_data["counts"] = marginal_distribution(
counts=datum["counts"],
indices=composite_clbits[i],
)
else:
sub_data["counts"] = datum["counts"]
if "memory" in datum:
if composite_clbits is not None:
# level 2
if f_memory is not None:
idx = slice(
-1 - composite_clbits[i][-1], -composite_clbits[i][0] or None
)
sub_data["memory"] = [shot[idx] for shot in f_memory]
# level 1
else:
mem = np.array(datum["memory"])

# Averaged level 1 data
if len(mem.shape) == 2:
sub_data["memory"] = mem[composite_clbits[i]].tolist()
# Single-shot level 1 data
if len(mem.shape) == 3:
sub_data["memory"] = mem[:, composite_clbits[i]].tolist()
else:
sub_data["memory"] = datum["memory"]
marginalized_data[index].append(sub_data)

# Sort by index
return [marginalized_data[i] for i in sorted(marginalized_data.keys())]

@staticmethod
def _format_memory(datum: Dict, composite_clbits: List):
"""A helper method to convert level 2 memory (if it exists) to bit-string format."""
f_memory = None
if (
"memory" in datum
and composite_clbits is not None
and isinstance(datum["memory"][0], str)
):
num_cbits = 1 + max(cbit for cbit_list in composite_clbits for cbit in cbit_list)
header = {"memory_slots": num_cbits}
f_memory = list(format_counts_memory(shot, header) for shot in datum["memory"])

return f_memory

def _add_child_data(self, experiment_data: ExperimentData):
"""Save empty component experiment data as child data.

This will initialize empty ExperimentData objects for each component
experiment and add them as child data to the main composite experiment
ExperimentData container container for saving.

Args:
experiment_data: a composite experiment experiment data container.
"""
component_index = experiment_data.metadata.get("component_child_index", [])
if component_index:
# Child components are already initialized
return

# Initialize the component experiment data containers and add them
# as child data to the current experiment data
child_components = self._initialize_component_experiment_data(experiment_data)
start_index = len(experiment_data.child_data())
for i, subdata in enumerate(child_components):
experiment_data.add_child_data(subdata)
component_index.append(start_index + i)

# Store the indices of the added child data in metadata
experiment_data.metadata["component_child_index"] = component_index

def _initialize_component_experiment_data(
self, experiment_data: ExperimentData
) -> List[ExperimentData]:
"""Initialize empty experiment data containers for component experiments.

Args:
experiment_data: a composite experiment experiment data container.

Returns:
The list of experiment data containers for each component experiment
containing the component metadata, and tags, share level, and
auto save settings of the composite experiment.
"""
# Extract component experiment types and metadata so they can be
# added to the component experiment data containers
metadata = experiment_data.metadata
num_components = len(self._analyses)
experiment_types = metadata.get("component_types", [None] * num_components)
component_metadata = metadata.get("component_metadata", [{}] * num_components)

# Create component experiments and set the backend and
# metadata for the components
component_expdata = []
for i, _ in enumerate(self._analyses):
subdata = ExperimentData(backend=experiment_data.backend)
subdata.experiment_type = experiment_types[i]
subdata.metadata.update(component_metadata[i])

if self._flatten_results:
# Explicitly set auto_save to false so the temporary
# data can't accidentally be saved
subdata.auto_save = False
else:
# Copy tags, share_level and auto_save from the parent
# experiment data if results are not being flattened.
subdata.tags = experiment_data.tags
subdata.share_level = experiment_data.share_level
subdata.auto_save = experiment_data.auto_save

component_expdata.append(subdata)

return component_expdata

def _set_flatten_results(self):
"""Recursively set flatten_results to True for all composite components."""
self._flatten_results = True
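
For context, the counts marginalization removed above delegated to qiskit.result.marginal_distribution. A minimal sketch of that call, using made-up composite counts and clbit indices:

from qiskit.result import marginal_distribution

# Hypothetical composite counts over four clbits; component 0 is assumed to own clbits [0, 1].
composite_counts = {"0010": 70, "1101": 30}
component_counts = marginal_distribution(counts=composite_counts, indices=[0, 1])
print(component_counts)  # keeps only clbits 0 and 1 of each key, roughly {"10": 70, "01": 30}
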
@@ -406,7 +195,5 @@ def _combine_results(
for _, series in analysis_table.iterrows():
data = AnalysisResultData.from_table_element(**series.to_dict())
analysis_results.append(data)
for artifact in sub_expdata.artifacts():
analysis_results.append(artifact)

return analysis_results, figures
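
Taken together, the refactored _run_analysis expects the parent ExperimentData to hold (or create through the new create_child_data call) one child container per component analysis, with marginalization happening when data is added rather than inside CompositeAnalysis. A minimal sketch of the intended calling pattern under that assumption; DummyAnalysis is a placeholder, and a real parent container would be populated by running a composite experiment:

from qiskit_experiments.framework import BaseAnalysis, CompositeAnalysis, ExperimentData


class DummyAnalysis(BaseAnalysis):
    """Placeholder component analysis that produces no results."""

    def _run_analysis(self, experiment_data):
        return [], []


composite = CompositeAnalysis([DummyAnalysis(), DummyAnalysis()], flatten_results=False)

# In practice this container is filled by a ParallelExperiment or BatchExperiment run;
# add_data() is then expected to bootstrap the child containers from the composite metadata.
parent_data = ExperimentData()

composite.run(parent_data, replace_results=True).block_for_results()
for child in parent_data.child_data():  # one child container per component analysis
    print(child.analysis_results())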