T2 experiment with Hahn echoes (qiskit-community#365)
* Add template for T2 exp, still need to check results

* Made the code cleaner

* cleared RB documentation

* Fixed errors; the circuit is now generated for 1 echo (still need to work on options)

Changed rotation to pi/2
This code supports only n_echoes=1 for further testing

* Changed the Echo pulse to pi instead of pi/2

* Changed the class to support only a single qubit

Changed the class to support only a single qubit and added checks to verify parameters

* added options instead of fields

* Pylint and Black

* used base class attribute for the qubit

Used the base class attribute to store the qubit instead of passing it through options

* Updated code to be as in other experiments

* Added the Analysis class

* Base analysis class draft + fixes

* Update T2Hahn.py

* Update t2hahn.py

* Update t2hahn.py

* Added Backend template from t2ramsey

* Update t2hahn_backend.py

* fixed doc string and input check function

* the circuit is now working on qubit '0' exclusively

* Update t2hahn.py

* Added operation for 'RX' gate

* Removed duplicate length from verify parameters

* changed documentation per suggestions

* changed documentation

* Removed * from T_2

* changed basis gate 'h', 'p' to 'ry', 'rx'

* Update t2hahn_backend.py

* Added Ry, Rx and measure gates

* Changed every t2ramsey to t2hahn (WIP)

* changed "T2Hahn" to "T2"

* added tests (still not working)

* Delay now applied once

With the model used for this backend, applying the delay twice gives the following probability of measuring '0':

P('0') = (1 - P_err)^2 + 0.5 * P_err * (2 - P_err)

We can see the fit isn't the one we expected: since P_err is approximated by e^(-t/tau), the composed probability is no longer a single exponential in the total delay. Hence we apply the delay noise only once, so we get the expected fit.
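
A quick numeric check of this argument (a sketch only; it takes P_err(t) = e^(-t/tau) literally, as stated above, rather than the backend's full model):

    import numpy as np

    tau = 100e-6   # assumed decay constant, seconds
    t = 50e-6      # duration of a single delay, seconds

    p_err = np.exp(-t / tau)  # error probability of one delay, per the message above

    # Two delay applications composed, as in the formula above
    p0_two_delays = (1 - p_err) ** 2 + 0.5 * p_err * (2 - p_err)

    # A single delay application covering the full 2*t duration
    p_err_full = np.exp(-2 * t / tau)
    p0_one_delay = (1 - p_err_full) + 0.5 * p_err_full

    # The two values differ, and the composed expression mixes e^(-t/tau) and
    # e^(-2t/tau) terms, so it is not a single exponential in the total delay.
    print(p0_two_delays, p0_one_delay)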

* added tests for T2 Hahn echo

* Applied black

* Update t2hahn_backend.py

* Update t2hahn_backend.py

* Changed the gate op to be a function and added time evolution of the state

* Added output types

* Added measurement on "ZY" plane in Z basis

* Added angle parameter for rotation and fixed bugs

Changed the last Ry gate angle to pi/2 (instead of -pi/2, because the number of echoes is odd)
Added an angle parameter to the rotation gates for precision.
Added initialization error
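
For reference, a sketch of the single-echo circuit these messages describe (the gate angles follow the messages above; the exact gate set and delay handling in the experiment may differ):

    import numpy as np
    from qiskit import QuantumCircuit

    def hahn_echo_circuit(delay_sec: float) -> QuantumCircuit:
        """Single-echo Hahn sequence: Rx(pi/2) - delay - Rx(pi) - delay - Rx(pi/2)."""
        circ = QuantumCircuit(1, 1)
        circ.rx(np.pi / 2, 0)               # bring the qubit to the XY plane
        circ.delay(delay_sec, 0, unit="s")  # first half of the total delay
        circ.rx(np.pi, 0)                   # the echo pulse
        circ.delay(delay_sec, 0, unit="s")  # second half of the total delay
        circ.rx(np.pi / 2, 0)               # final rotation (pi/2 rather than -pi/2, since one echo is odd)
        circ.measure(0, 0)
        return circ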

* changed "==" to "np.isclose()"

* Pass pylint

* Pylint + black

* cleaned code

* incomplete change to backend

* deleted unnecessary fit parameters and changed input for backend

* changed to use only Rx, without Ry

* Experiment working, up to a tau value that is not correct

* Update comment Y90 to X90

Co-authored-by: Yael Ben-Haim <[email protected]>

* Deleted extra space from measurement function in the backend

Co-authored-by: Yael Ben-Haim <[email protected]>

* changed the backend per Yael's review comments

* Tests are working

Tests are working.

Need to add:
 * Parallel Experiment

Could be a problem:
The metadata right now is for the cumulative delay time.

* Passed Black and Lint

* Update t2hahn_backend.py

* deleted the abs as the projection is a real number

* Black and pylint

* Added comments

* Added tutorial

* Added tutorial and fixed a bug for echoes

* Added Parallel experiment to the test and backend
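
A rough sketch of the parallel run added to the test (ParallelExperiment comes from the framework; the T2Hahn constructor arguments here are an assumption based on the messages above):

    from qiskit_experiments.framework import ParallelExperiment
    from qiskit_experiments.library.characterization import T2Hahn

    delays = [t * 1e-6 for t in range(0, 51, 10)]  # example delay times, seconds

    # One single-qubit T2 Hahn experiment per qubit, composed in parallel
    par_exp = ParallelExperiment([T2Hahn(0, delays), T2Hahn(1, delays)])
    # par_data = par_exp.run(backend)  # backend: e.g. the Hahn-echo test backend in this PR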

* Changed so that the analysis class is passed to the constructor

* Added release notes

* Changed the code for recent changes (added config, removed unit, etc.)

Added the Hahn analysis class to the lists under __init__ in both "characterization" and "characterization/analysis".
Removed the support for units.
Added experiment.config test and serialization.

* Passed black and pylint

* passed pylint and black

* fixed bug

* fixed docs

fixed docs spaces and blank lines

* Update t2hahn.py

* Raise an error when a delay is applied and the qubit isn't in the XY plane or theta isn't pi or 0
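
A sketch of the kind of check this describes (function and argument names are illustrative, not the backend's actual code):

    import numpy as np

    def validate_delay(theta: float) -> None:
        """Assumed check: a delay is only allowed when the qubit is on the XY
        plane (theta = pi/2) or along the Z axis (theta = 0 or pi)."""
        on_xy_plane = np.isclose(theta, np.pi / 2)
        along_z = np.isclose(theta, 0.0) or np.isclose(theta, np.pi)
        if not (on_xy_plane or along_z):
            raise ValueError(
                "Delay applied while the qubit is neither on the XY plane nor along the Z axis."
            )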

* Update test_t2hahn.py

* edited comment and changed 'plain' to 'plane'

* added tests for number of echoes

* Fixed Black

* rerun pylint

* Added to tutorial both experiments

* updated code + tutorial per instructions

There is one thing missing: the Hahn echo analysis doesn't work for num_echoes=0.

* fixed a bug for the 0-echoes case.

* fixed issue

added explanation about the frequency.
added experiment with 0 echoes.

* Excluded the quality check for num_echoes=0 in the test

Excluded the quality check for num_echoes=0 since it is expected to be bad. The case is kept to make sure the backend is still compatible with 0 echoes.

* Changed tutorial text and a bit of code

* Added bounds and removed p0 from analysis, finished tutorial

* Edited tutorial as reviewed

* Update t2hahn_characterization.ipynb

* fixed test to use "self.json_equiv" and fixed lint

* added cosmetic fixes to the tutorial

* Added text and changed functions

Added a link to the Ramsey experiment.
Added the term detuning frequency.
Changed the number of echoes in the comparison to '1 vs. 0' instead of '4 vs. 0', since there is no point in doing 4 echoes on a mock backend without T1 noise.

* updated feature text.

* passed black version 22.1.0

* fixed angles to be absolute values rather than relative to the previous ones.

* updated text in the tutorial.

* changed doc string year to 2022

* updated tutorial text

* fixed text

Co-authored-by: Yael Ben-Haim <[email protected]>
ItamarGoldman and yaelbh authored Jan 31, 2022
1 parent cc3767c commit 0044db6
Showing 27 changed files with 1,349 additions and 83 deletions.
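
A minimal usage sketch of the experiment this commit adds (argument and option names are taken from the commit messages above and are assumptions, not necessarily the final API):

    from qiskit_experiments.library.characterization import T2Hahn

    delays = [t * 1e-6 for t in range(0, 51, 5)]  # total delay times, seconds (assumed unit)

    # Single-qubit T2 Hahn echo experiment with one echo pulse
    exp = T2Hahn(qubit=0, delays=delays, num_echoes=1)

    # exp_data = exp.run(backend)             # e.g. the Hahn-echo test backend added here
    # exp_data.block_for_results()
    # print(exp_data.analysis_results("T2"))  # the result is named "T2", per the messages above
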
6 changes: 2 additions & 4 deletions docs/_ext/autodoc_analysis.py
@@ -26,14 +26,12 @@ class AnalysisDocumenter(ClassDocumenter):
"""Sphinx extension for the custom documentation of the standard analysis class."""

objtype = "analysis"
directivetype = 'class'
directivetype = "class"
priority = 10 + ClassDocumenter.priority
option_spec = dict(ClassDocumenter.option_spec)

@classmethod
def can_document_member(
cls, member: Any, membername: str, isattr: bool, parent: Any
) -> bool:
def can_document_member(cls, member: Any, membername: str, isattr: bool, parent: Any) -> bool:
return isinstance(member, BaseAnalysis)

def add_content(self, more_content: Any, no_docstring: bool = False) -> None:
3 changes: 2 additions & 1 deletion docs/_ext/custom_styles/utils.py
@@ -161,7 +161,7 @@ def _generate_analysis_ref(
raise Exception(f"Option docstring for analysis_ref is missing.")

analysis_ref_lines = []
for line in lines[analysis_ref_start + 1:]:
for line in lines[analysis_ref_start + 1 :]:
# add lines until hitting to next section
if line.startswith("# section:"):
break
@@ -202,6 +202,7 @@ def _format_default_options(defaults: Dict[str, Any], indent: str = "") -> List[

def _check_no_indent(method: Callable) -> Callable:
"""Check indent of lines and return if this block is correctly indented."""

def wraps(self, lines: List[str], *args, **kwargs):
if all(l.startswith(" ") for l in lines):
text_block = "\n".join(lines)
95 changes: 48 additions & 47 deletions docs/conf.py
@@ -25,35 +25,39 @@
#
import os
import sys
sys.path.insert(0, os.path.abspath('.'))

sys.path.insert(0, os.path.abspath("."))
sys.path.append(os.path.abspath("./_ext"))

"""
Sphinx documentation builder
"""

import os

# Set env flag so that we can doc functions that may otherwise not be loaded
# see for example interactive visualizations in qiskit.visualization.
os.environ['QISKIT_DOCS'] = 'TRUE'
os.environ["QISKIT_DOCS"] = "TRUE"

# -- Project information -----------------------------------------------------
project = 'Qiskit Experiments'
copyright = '2021, Qiskit Development Team' # pylint: disable=redefined-builtin
author = 'Qiskit Development Team'
project = "Qiskit Experiments"
copyright = "2021, Qiskit Development Team" # pylint: disable=redefined-builtin
author = "Qiskit Development Team"

# The short X.Y version
version = '0.3'
version = "0.3"
# The full version, including alpha/beta/rc tags
release = '0.3.0'
release = "0.3.0"

rst_prolog = """
.. raw:: html
<br><br><br>
.. |version| replace:: {0}
""".format(release)
""".format(
release
)

nbsphinx_prolog = """
{% set docname = env.doc2path(env.docname, base=None) %}
@@ -81,32 +85,31 @@
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.napoleon',
'sphinx.ext.autodoc',
'sphinx.ext.autosummary',
'sphinx.ext.mathjax',
'sphinx.ext.viewcode',
'sphinx.ext.extlinks',
'jupyter_sphinx',
'sphinx_autodoc_typehints',
'reno.sphinxext',
'sphinx_panels',
'sphinx.ext.intersphinx',
'nbsphinx',
'autoref',
'autodoc_experiment',
'autodoc_analysis',
"sphinx.ext.napoleon",
"sphinx.ext.autodoc",
"sphinx.ext.autosummary",
"sphinx.ext.mathjax",
"sphinx.ext.viewcode",
"sphinx.ext.extlinks",
"jupyter_sphinx",
"sphinx_autodoc_typehints",
"reno.sphinxext",
"sphinx_panels",
"sphinx.ext.intersphinx",
"nbsphinx",
"autoref",
"autodoc_experiment",
"autodoc_analysis",
]
html_static_path = ['_static']
templates_path = ['_templates']
html_css_files = ['style.css', 'custom.css', 'gallery.css']
html_static_path = ["_static"]
templates_path = ["_templates"]
html_css_files = ["style.css", "custom.css", "gallery.css"]

nbsphinx_timeout = 360
nbsphinx_execute = os.getenv('QISKIT_DOCS_BUILD_TUTORIALS', 'never')
nbsphinx_widgets_path = ''
exclude_patterns = ['_build', '**.ipynb_checkpoints']
nbsphinx_thumbnails = {
}
nbsphinx_execute = os.getenv("QISKIT_DOCS_BUILD_TUTORIALS", "never")
nbsphinx_widgets_path = ""
exclude_patterns = ["_build", "**.ipynb_checkpoints"]
nbsphinx_thumbnails = {}


# -----------------------------------------------------------------------------
@@ -120,7 +123,7 @@
# -----------------------------------------------------------------------------

autodoc_default_options = {
'inherited-members': None,
"inherited-members": None,
}


@@ -131,9 +134,7 @@
# A dictionary mapping 'figure', 'table', 'code-block' and 'section' to
# strings that are used for format of figure numbers. As a special character,
# %s will be replaced to figure number.
numfig_format = {
'table': 'Table %s'
}
numfig_format = {"table": "Table %s"}
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
@@ -144,10 +145,10 @@
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', '**.ipynb_checkpoints']
exclude_patterns = ["_build", "**.ipynb_checkpoints"]

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'colorful'
pygments_style = "colorful"

# A boolean that decides whether module names are prepended to all object names
# (for object types where a “module” of some kind is defined), e.g. for
@@ -158,7 +159,7 @@
# (e.g., if this is set to ['foo.'], then foo.bar is shown under B, not F).
# This can be handy if you document a project that consists of a single
# package. Works only for the HTML builder currently.
modindex_common_prefix = ['qiskit_experiments.']
modindex_common_prefix = ["qiskit_experiments."]

# -- Configuration for extlinks extension ------------------------------------
# Refer to https://www.sphinx-doc.org/en/master/usage/extensions/extlinks.html
@@ -169,20 +170,20 @@
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'qiskit_sphinx_theme' # use the theme in subdir 'theme'
html_theme = "qiskit_sphinx_theme" # use the theme in subdir 'theme'

#html_sidebars = {'**': ['globaltoc.html']}
html_last_updated_fmt = '%Y/%m/%d'
# html_sidebars = {'**': ['globaltoc.html']}
html_last_updated_fmt = "%Y/%m/%d"

html_theme_options = {
'logo_only': True,
'display_version': True,
'prev_next_buttons_location': 'bottom',
'style_external_links': True,
"logo_only": True,
"display_version": True,
"prev_next_buttons_location": "bottom",
"style_external_links": True,
}

autoclass_content = 'both'
intersphinx_mapping = {'matplotlib': ('https://matplotlib.org/stable/', None)}
autoclass_content = "both"
intersphinx_mapping = {"matplotlib": ("https://matplotlib.org/stable/", None)}
# Current scipy hosted docs are missing the object.inv file so leaving this
# commented out until the missing file is added back.
# 'scipy': ('https://docs.scipy.org/doc/scipy/reference/', None)}
456 changes: 456 additions & 0 deletions docs/tutorials/t2hahn_characterization.ipynb

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion qiskit_experiments/curve_analysis/curve_fit.py
@@ -140,7 +140,7 @@ def fit_func(x, *params):
yfits = fit_func(xdata, *popt)
residues = (yfits - ydata) ** 2
if sigma is not None:
residues = residues / (sigma ** 2)
residues = residues / (sigma**2)
reduced_chisq = np.sum(residues) / dof

# Compute data range for fit
2 changes: 1 addition & 1 deletion qiskit_experiments/curve_analysis/data_processing.py
@@ -151,7 +151,7 @@ def mean_xy_data(

# Compute sample mean and sum of variance with weights based on shots
y_means[i] = np.sum(weights * ys)
y_sigmas[i] = np.sqrt(np.sum(weights ** 2 * ss ** 2))
y_sigmas[i] = np.sqrt(np.sum(weights**2 * ss**2))
y_shots[i] = np.sum(ns)

return x_means, y_means, y_sigmas, y_shots
14 changes: 7 additions & 7 deletions qiskit_experiments/curve_analysis/fit_function.py
@@ -74,7 +74,7 @@ def gaussian(
.. math::
y = {\rm amp} \cdot \exp \left( - (x - x0)^2 / 2 \sigma^2 \right) + {\rm baseline}
"""
return amp * np.exp(-((x - x0) ** 2) / (2 * sigma ** 2)) + baseline
return amp * np.exp(-((x - x0) ** 2) / (2 * sigma**2)) + baseline


def cos_decay(
@@ -123,9 +123,9 @@ def bloch_oscillation_x(
where :math:`\omega = \sqrt{p_x^2 + p_y^2 + p_z^2}`. The `p_i` stands for the
measured probability in :math:`i \in \left\{ X, Y, Z \right\}` basis.
"""
w = np.sqrt(px ** 2 + py ** 2 + pz ** 2)
w = np.sqrt(px**2 + py**2 + pz**2)

return (-pz * px + pz * px * np.cos(w * x) + w * py * np.sin(w * x)) / (w ** 2) + baseline
return (-pz * px + pz * px * np.cos(w * x) + w * py * np.sin(w * x)) / (w**2) + baseline


def bloch_oscillation_y(
@@ -140,9 +140,9 @@ def bloch_oscillation_y(
where :math:`\omega = \sqrt{p_x^2 + p_y^2 + p_z^2}`. The `p_i` stands for the
measured probability in :math:`i \in \left\{ X, Y, Z \right\}` basis.
"""
w = np.sqrt(px ** 2 + py ** 2 + pz ** 2)
w = np.sqrt(px**2 + py**2 + pz**2)

return (pz * py - pz * py * np.cos(w * x) - w * px * np.sin(w * x)) / (w ** 2) + baseline
return (pz * py - pz * py * np.cos(w * x) - w * px * np.sin(w * x)) / (w**2) + baseline


def bloch_oscillation_z(
@@ -157,6 +157,6 @@ def bloch_oscillation_z(
where :math:`\omega = \sqrt{p_x^2 + p_y^2 + p_z^2}`. The `p_i` stands for the
measured probability in :math:`i \in \left\{ X, Y, Z \right\}` basis.
"""
w = np.sqrt(px ** 2 + py ** 2 + pz ** 2)
w = np.sqrt(px**2 + py**2 + pz**2)

return (pz ** 2 + (px ** 2 + py ** 2) * np.cos(w * x)) / (w ** 2) + baseline
return (pz**2 + (px**2 + py**2) * np.cos(w * x)) / (w**2) + baseline
2 changes: 1 addition & 1 deletion qiskit_experiments/database_service/db_analysis_result.py
@@ -169,7 +169,7 @@ def save(self) -> None:
if db_value is not None:
result_data["value"] = db_value
if isinstance(value.stderr, (int, float)):
result_data["variance"] = self._display_format(value.stderr ** 2)
result_data["variance"] = self._display_format(value.stderr**2)
if isinstance(value.unit, str):
result_data["unit"] = value.unit
else:
2 changes: 1 addition & 1 deletion qiskit_experiments/library/calibration/fine_drag_cal.py
@@ -138,7 +138,7 @@ def update_calibrations(self, experiment_data: ExperimentData):
d_theta = BaseUpdater.get_value(experiment_data, "d_theta", result_index)

# See the documentation in fine_drag.py for the derivation of this rule.
d_beta = -np.sqrt(np.pi) * d_theta * sigmas[0] / target_angle ** 2
d_beta = -np.sqrt(np.pi) * d_theta * sigmas[0] / target_angle**2
old_beta = experiment_data.metadata["cal_param_value"]
new_beta = old_beta + d_beta

4 changes: 4 additions & 0 deletions qiskit_experiments/library/characterization/__init__.py
@@ -25,6 +25,7 @@
T1
T2Ramsey
T2Hahn
QubitSpectroscopy
CrossResonanceHamiltonian
EchoedCrossResonanceHamiltonian
@@ -52,6 +53,7 @@
T1Analysis
T2RamseyAnalysis
T2HahnAnalysis
CrossResonanceHamiltonianAnalysis
DragCalAnalysis
FineHalfAngleAnalysis
@@ -69,6 +71,7 @@
RamseyXYAnalysis,
T2RamseyAnalysis,
T1Analysis,
T2HahnAnalysis,
CrossResonanceHamiltonianAnalysis,
ReadoutAngleAnalysis,
)
@@ -77,6 +80,7 @@
from .qubit_spectroscopy import QubitSpectroscopy
from .ef_spectroscopy import EFSpectroscopy
from .t2ramsey import T2Ramsey
from .t2hahn import T2Hahn
from .cr_hamiltonian import CrossResonanceHamiltonian, EchoedCrossResonanceHamiltonian
from .rabi import Rabi, EFRabi
from .half_angle import HalfAngle
@@ -19,6 +19,7 @@
from .fine_frequency_analysis import FineFrequencyAnalysis
from .ramsey_xy_analysis import RamseyXYAnalysis
from .t2ramsey_analysis import T2RamseyAnalysis
from .t2hahn_analysis import T2HahnAnalysis
from .t1_analysis import T1Analysis
from .cr_hamiltonian_analysis import CrossResonanceHamiltonianAnalysis
from .readout_angle_analysis import ReadoutAngleAnalysis
@@ -336,7 +336,7 @@ def _extra_database_entry(self, fit_data: curve.FitData) -> List[AnalysisResultD
else:
coef_val = 0.5 * (p0_val.value + p1_val.value) / (2 * np.pi)

coef_err = 0.5 * np.sqrt(p0_val.stderr ** 2 + p1_val.stderr ** 2) / (2 * np.pi)
coef_err = 0.5 * np.sqrt(p0_val.stderr**2 + p1_val.stderr**2) / (2 * np.pi)

extra_entries.append(
AnalysisResultData(
