diff --git a/docs/discussions/dyson_magnus.rst b/docs/discussions/dyson_magnus.rst
index 8d1548429..a522357c1 100644
--- a/docs/discussions/dyson_magnus.rst
+++ b/docs/discussions/dyson_magnus.rst
@@ -3,22 +3,20 @@
Time-dependent perturbation theory and multi-variable series expansions review
==============================================================================
-The :mod:`.perturbation` module contains functionality for
-numerically computing perturbation theory expansions used in the study of the
-dynamics of quantum systems. Following :footcite:`puzzuoli_algorithms_2023`,
-this discussion reviews key concepts required
-to understand and utilize the module, including the Dyson series and Magnus expansion,
-their generalization to a multi-variable setting, and the notation used to represent
-multi-variable power series in terms of multisets.
+The :mod:`.perturbation` module contains functionality for numerically computing perturbation theory
+expansions used in the study of the dynamics of quantum systems. Following
+:footcite:`puzzuoli_algorithms_2023`, this discussion reviews key concepts required to understand
+and utilize the module, including the Dyson series and Magnus expansion, their generalization to a
+multi-variable setting, and the notation used to represent multi-variable power series in terms of
+multisets.
The Dyson series and Magnus expansion
-------------------------------------
The Dyson series :footcite:`dyson_radiation_1949` and Magnus expansion
-:footcite:`magnus_exponential_1954,blanes_magnus_2009`
-are time-dependent perturbation theory expansions for solutions of linear matrix differential
-equations (LMDEs). For an LMDE
+:footcite:`magnus_exponential_1954,blanes_magnus_2009` are time-dependent perturbation theory
+expansions for solutions of linear matrix differential equations (LMDEs). For an LMDE
.. math::
@@ -38,9 +36,8 @@ The Magnus expansion alternatively seeks to construct a time-averaged generator,
U(t) = \exp(\Omega(t)).
-The Magnus expansion provides
-a series expansion, which, under certain conditions :footcite:`blanes_magnus_2009`,
-converges to :math:`\Omega(t)`:
+The Magnus expansion provides a series expansion, which, under certain conditions
+:footcite:`blanes_magnus_2009`, converges to :math:`\Omega(t)`:
.. math::
@@ -53,16 +50,15 @@ where explicit expressions for the :math:`\Omega_k(t)` are given in the literatu
Generalizing to the multi-variable case
---------------------------------------
-In applications, these expansions are often used in a *multi-variable* setting, in which
-the generator :math:`G(t)` depends on several variables :math:`c_0, \dots, c_{r-1}`,
-and the Dyson series or Magnus expansion are used to understand how the evolution changes
-under perturbations to several of these parameters simultaneously. For working with
-these expansions algorithmically it is necessary to formalize
-the expression of these expansions in the multi-variable setting.
+In applications, these expansions are often used in a *multi-variable* setting, in which the
+generator :math:`G(t)` depends on several variables :math:`c_0, \dots, c_{r-1}`, and the Dyson
+series or Magnus expansion are used to understand how the evolution changes under perturbations to
+several of these parameters simultaneously. For working with these expansions algorithmically it is
+necessary to formalize the expression of these expansions in the multi-variable setting.
-Mathematically, we explicitly write the generator as a function of these variables
-:math:`G(t, c_0, \dots, c_{r-1})`, and expand :math:`G` in a
-multi-variable power series in the variables :math:`c_i`:
+Mathematically, we explicitly write the generator as a function of these variables :math:`G(t, c_0,
+\dots, c_{r-1})`, and expand :math:`G` in a multi-variable power series in the variables
+:math:`c_i`:
.. math::
@@ -71,21 +67,21 @@ multi-variable power series in the variables :math:`c_i`:
\sum_{k=1}^\infty \sum_{0 \leq i_1 \leq \dots \leq i_k \leq r-1}
c_{i_1} \dots c_{i_k} G_{i_1, \dots, i_k}(t).
-For physical applications we take the existence of such a power series for granted:
-up to constant factors, the coefficients :math:`G_{i_1, \dots, i_k}(t)` are the partial
-derivatives of :math:`G` with respect to the variables :math:`c_i`. Commonly, :math:`G`
-depends linearly on the variables, e.g. when representing couplings between quantum systems.
+For physical applications we take the existence of such a power series for granted: up to constant
+factors, the coefficients :math:`G_{i_1, \dots, i_k}(t)` are the partial derivatives of :math:`G`
+with respect to the variables :math:`c_i`. Commonly, :math:`G` depends linearly on the variables,
+e.g. when representing couplings between quantum systems.
-Before defining the multi-variable Dyson series and Magnus expansions, we transform
-the generator into the *toggling frame* of :math:`G_\emptyset(t)`
+Before defining the multi-variable Dyson series and Magnus expansions, we transform the generator
+into the *toggling frame* of :math:`G_\emptyset(t)`
:footcite:`evans_timedependent_1967,haeberlen_1968`. Denoting
.. math::
V(t) = \mathcal{T}\exp\left(\int_{t_0}^t ds G_\emptyset(s)\right),
-the generator :math:`G` in the toggling frame of :math:`G_\emptyset(t)`,
-the unperturbed generator, is given by:
+the generator :math:`G` in the toggling frame of :math:`G_\emptyset(t)`, the unperturbed generator,
+is given by:
.. math::
@@ -94,8 +90,8 @@ the unperturbed generator, is given by:
c_{i_1} \dots c_{i_k} \tilde{G}_{i_1, \dots, i_k}(t)\textnormal{, with} \\
\tilde{G}_{i_1, \dots, i_k}(t) &= V^{-1}(t) G_{i_1, \dots, i_k}(t)V(t)
-Denoting :math:`U(t, c_0, \dots, c_{r-1})` as the solution of the LMDE with
-generator :math:`\tilde{G}`, note that
+Denoting :math:`U(t, c_0, \dots, c_{r-1})` as the solution of the LMDE with generator
+:math:`\tilde{G}`, note that
.. math::
@@ -104,8 +100,8 @@ generator :math:`\tilde{G}`, note that
and hence solutions for :math:`G` and :math:`\tilde{G}` are simply related by :math:`V(t)`.
-Using this, :footcite:`puzzuoli_algorithms_2023` defines the multi-variable Dyson series
-for the generator :math:`\tilde{G}(t, c_0, \dots, c_{r-1})` as:
+Using this, :footcite:`puzzuoli_algorithms_2023` defines the multi-variable Dyson series for the
+generator :math:`\tilde{G}(t, c_0, \dots, c_{r-1})` as:
.. math::
@@ -113,9 +109,9 @@ for the generator :math:`\tilde{G}(t, c_0, \dots, c_{r-1})` as:
\sum_{k=1}^\infty \sum_{0 \leq i_1 \leq \dots \leq i_k \leq r-1}
c_{i_1} \dots c_{i_k} \mathcal{D}_{i_1, \dots, i_k}(t),
-where the :math:`\mathcal{D}_{i_1, \dots, i_k}(t)` are defined implicitly by the above
-equation, and are called the *multi-variable Dyson series terms*. Similarly the
-multi-variable Magnus expansion for :math:`\tilde{G}` is given as:
+where the :math:`\mathcal{D}_{i_1, \dots, i_k}(t)` are defined implicitly by the above equation, and
+are called the *multi-variable Dyson series terms*. Similarly, the multi-variable Magnus expansion
+for :math:`\tilde{G}` is given as:
.. math::
@@ -130,41 +126,40 @@ with the :math:`\mathcal{O}_{i_1, \dots, i_k}(t)` again defined implicitly, and
Computing multi-variable Dyson series and Magnus expansion terms
----------------------------------------------------------------
-Given a power series decomposition of the generator as above,
-the function :func:`.solve_lmde_perturbation` computes,
-in the toggling frame of the unperturbed generator, either multi-variable
-Dyson series or Magnus expansion terms via the algorithms in
-:footcite:`puzzuoli_algorithms_2023`. It can also be used to compute Dyson-like terms via
-the algorithm in :footcite:`haas_engineering_2019`. In the presentation here and elsewhere,
-the expansions are phrased as infinite series, but of course in practice truncated
-versions must be specified and computed.
+Given a power series decomposition of the generator as above, the function
+:func:`.solve_lmde_perturbation` computes, in the toggling frame of the unperturbed generator,
+either multi-variable Dyson series or Magnus expansion terms via the algorithms in
+:footcite:`puzzuoli_algorithms_2023`. It can also be used to compute Dyson-like terms via the
+algorithm in :footcite:`haas_engineering_2019`. In the presentation here and elsewhere, the
+expansions are phrased as infinite series, but of course in practice truncated versions must be
+specified and computed.
-Utilizing this function, and working with the other objects in the module, requires
-understanding the notation and data structures used to represent power series.
+Utilizing this function, and working with the other objects in the module, requires understanding
+the notation and data structures used to represent power series.
.. _multiset power series:
Multiset power series notation
------------------------------
-Following :footcite:`puzzuoli_algorithms_2023`, the :mod:`.perturbation`
-module utilizes a *multiset* notation to more compactly represent and work with power series.
+Following :footcite:`puzzuoli_algorithms_2023`, the :mod:`.perturbation` module utilizes a
+*multiset* notation to more compactly represent and work with power series.
Consider the power series expansion above for the generator :math:`G(t, c_0, \dots, c_{r-1})`.
-Structurally, each term in the power series is labelled by the number of times each
-variable :math:`c_0, \dots, c_{r-1}` appears in the product :math:`c_{i_1} \dots c_{i_k}`.
-Equivalently, each term may be indexed by the number of times each variable label
-:math:`0, \dots, r-1` appears. The data structure used to represent these labels in this
-module is that of a *multiset*, i.e. a "set with repeated entries". Denoting multisets
-with round brackets, e.g. :math:`I = (i_1, \dots, i_k)`, we define
+Structurally, each term in the power series is labelled by the number of times each variable
+:math:`c_0, \dots, c_{r-1}` appears in the product :math:`c_{i_1} \dots c_{i_k}`. Equivalently, each
+term may be indexed by the number of times each variable label :math:`0, \dots, r-1` appears. The
+data structure used to represent these labels in this module is that of a *multiset*, i.e. a "set
+with repeated entries". Denoting multisets with round brackets, e.g. :math:`I = (i_1, \dots, i_k)`,
+we define
.. math::
c_I = c_{i_1} \times \dots \times c_{i_k}.
-and similarly denote :math:`G_I = G_{i_1, \dots, i_k}`. This notation is chosen due to
-the simple relationship between algebraic operations and multiset operations. E.g.,
-for two multisets :math:`I, J`, it holds that:
+and similarly denote :math:`G_I = G_{i_1, \dots, i_k}`. This notation is chosen due to the simple
+relationship between algebraic operations and multiset operations. E.g., for two multisets :math:`I,
+J`, it holds that:
.. math::
@@ -179,9 +174,8 @@ Some example usages of this notation are:
- :math:`c_{(1, 1)} = c_1^2`, and
- :math:`c_{(1, 2, 2, 3)} = c_1 c_2^2 c_3`.
-Finally, we denote the set of multisets of size $k$ with elements in :math:`\{0, \dots, r-1\}`
-as :math:`\mathcal{I}_k(r)`. Combining everything, the power series for :math:`G` may be
-rewritten as:
+Finally, we denote the set of multisets of size :math:`k` with elements in :math:`\{0, \dots, r-1\}`
+as :math:`\mathcal{I}_k(r)`. Combining everything, the power series for :math:`G` may be rewritten as:
.. math::
@@ -202,13 +196,10 @@ and the multi-variable Magnus expansion as:
\Omega(t, c_0, \dots, c_{r-1}) =
\sum_{k=1}^\infty \sum_{I \in \mathcal{I}_k(r)} c_I \mathcal{O}_I(t).
-In the module, multisets are represented using the ``Multiset`` object in the
-`multiset package `_. Arguments to functions
-which must specify a multiset or a list of multisets accept either ``Multiset`` instances
-directly, or a valid argument to the constructor to ``Multiset``, with the restriction that
-the multiset entries must be non-negative integers.
-
-
-
+In the module, multisets are represented using the ``Multiset`` object in the `multiset package
+`_. Arguments to functions which must specify a multiset or a
+list of multisets accept either ``Multiset`` instances directly, or a valid argument to the
+constructor to ``Multiset``, with the restriction that the multiset entries must be non-negative
+integers.
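As a quick illustration of this notation, the identity relating monomial multiplication to multiset sum, :math:`c_I c_J = c_{I + J}`, can be checked numerically. The following is a hypothetical sketch that uses ``collections.Counter`` as a stand-in multiset, rather than the ``Multiset`` class the module actually uses:

```python
from collections import Counter
from math import prod

# Counter acts as a stand-in multiset: keys are variable labels,
# values are multiplicities.
I = Counter([1, 2])      # represents c_1 * c_2
J = Counter([2, 3])      # represents c_2 * c_3

def monomial(multiset, c):
    """Evaluate c_I as the product of c_i over the multiset I."""
    return prod(c[i] for i in multiset.elements())

c = [0.5, 2.0, 3.0, 4.0]

# Multiset sum mirrors monomial multiplication: c_I * c_J == c_{I + J}
assert I + J == Counter([1, 2, 2, 3])
assert monomial(I, c) * monomial(J, c) == monomial(I + J, c)
```

In the module itself, ``Multiset`` instances (or valid arguments to the ``Multiset`` constructor) play the role that ``Counter`` plays in this sketch.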
.. footbibliography::
diff --git a/docs/tutorials/optimizing_pulse_sequence.rst b/docs/tutorials/optimizing_pulse_sequence.rst
index 90c571fd9..1d3fd629e 100644
--- a/docs/tutorials/optimizing_pulse_sequence.rst
+++ b/docs/tutorials/optimizing_pulse_sequence.rst
@@ -3,39 +3,29 @@
Gradient optimization of a pulse sequence
=========================================
-Here, we walk through an example of optimizing a single-qubit gate using
-``qiskit_dynamics``. This tutorial requires JAX - see the user guide
-on :ref:`How-to use JAX with qiskit-dynamics `.
+Here, we walk through an example of optimizing a single-qubit gate using Qiskit Dynamics. This
+tutorial requires JAX - see the user guide on :ref:`How-to use JAX with qiskit-dynamics `.
-We will optimize an :math:`X`-gate on a model of a qubit system using
-the following steps:
+We will optimize an :math:`X`-gate on a model of a qubit system using the following steps:
-1. Configure ``qiskit-dynamics`` to work with the JAX backend.
+1. Configure JAX.
2. Setup a :class:`.Solver` instance with the model of the system.
3. Define a pulse sequence parameterization to optimize over.
4. Define a gate fidelity function.
5. Define an objective function for optimization.
6. Use JAX to differentiate the objective, then do the gradient optimization.
-7. Repeat the :math:`X`-gate optimization, alternatively using pulse schedules to specify the control sequence.
+7. Repeat the :math:`X`-gate optimization, alternatively using pulse schedules to specify the
+ control sequence.
-1. Configure to use JAX
------------------------
+1. Configure JAX
+----------------
-First, set JAX to operate in 64-bit mode, and set JAX as the default
-backend using :class:`.Array` for performing array operations.
-This is necessary to enable automatic differentiation of the Qiskit Dynamics code
-in this tutorial. See the user guide entry on using JAX
-for a more detailed explanation of why this step is necessary.
+First, set JAX to operate in 64-bit mode and to run on CPU.
.. jupyter-execute::
- #################################################################################
- # Remove this
- #################################################################################
- import warnings
- warnings.filterwarnings("ignore")
-
import jax
jax.config.update("jax_enable_x64", True)
@@ -48,8 +38,7 @@ for a more detailed explanation of why this step is necessary.
2. Setup the solver
-------------------
-Here we will setup a :class:`.Solver` with a simple model of a qubit. The
-Hamiltonian is:
+Here we will set up a :class:`.Solver` with a simple model of a qubit. The Hamiltonian is:
.. math:: H(t) = 2 \pi \nu \frac{Z}{2} + 2 \pi r s(t) \frac{X}{2}
@@ -62,9 +51,6 @@ In the above:
We will setup the problem to be in the rotating frame of the drift term.
-Also note: The :class:`.Solver` is initialized *without* signals, as we will
-update these and optimize over this later.
-
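The operators appearing in this Hamiltonian can be written down directly. The following is a minimal sketch with hypothetical values for :math:`\nu` and :math:`r` (the tutorial fixes its own values in the code below):

```python
import numpy as np

nu = 5.0   # hypothetical qubit frequency
r = 0.02   # hypothetical drive strength

Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

static_hamiltonian = 2 * np.pi * nu * Z / 2   # drift term
drive_hamiltonian = 2 * np.pi * r * X / 2     # multiplied by s(t) during simulation
```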
.. jupyter-execute::
import numpy as np
@@ -91,12 +77,11 @@ We will optimize over signals that are:
- On resonance with piecewise constant envelopes,
- Envelopes bounded between :math:`[-1, 1]`,
-- Envelopes are smooth, in the sense that the change between adjacent
- samples is small, and
+- Envelopes are smooth, in the sense that the change between adjacent samples is small, and
- Envelope starts and ends at :math:`0`.
-In setting up our parameterization, we need t keep in mind that we will
-use the BFGS optimization routine, and hence:
+In setting up our parameterization, we need to keep in mind that we will use the BFGS optimization
+routine, and hence:
- Optimization parameters must be *unconstrained*.
- Parameterization must be JAX-differentiable.
@@ -104,18 +89,16 @@ use the BFGS optimization routine, and hence:
We implement a parameterization as follows:
- Input: Array ``x`` of real values.
-- “Normalize” ``x`` by applying a JAX-differentiable function from
- :math:`\mathbb{R} \rightarrow [-1, 1]`.
+- “Normalize” ``x`` by applying a JAX-differentiable function from :math:`\mathbb{R} \rightarrow
+ [-1, 1]`.
- Pad the normalized ``x`` with a :math:`0.` to start.
- “Smoothen” the above via convolution.
-- Construct the signal using the above as the samples for a
- piecewise-constant envelope, with carrier frequency on resonance.
+- Construct the signal using the above as the samples for a piecewise-constant envelope, with
+ carrier frequency on resonance.
-We remark that there are many other parameterizations that may achieve
-the same ends, and may have more efficient strategies for achieving a
-value of :math:`0` at the beginning and end of the pulse. This is only
-meant to demonstrate the need for such an approach, and one simple
-example of one.
+We remark that there are many other parameterizations that may achieve the same ends, and may have
+more efficient strategies for achieving a value of :math:`0` at the beginning and end of the pulse.
+This is only meant to demonstrate the need for such an approach, and to give one simple example of it.
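A minimal sketch of such a parameterization follows, with a hypothetical helper name and window size; the tutorial's implementation below differs in details, and would use ``jax.numpy`` in place of ``numpy`` to remain JAX-differentiable:

```python
import numpy as np

def envelope_samples(x, window_size=3):
    """Map unconstrained reals to a bounded, smoothed envelope starting at 0."""
    normalized = np.tanh(x)                       # R -> [-1, 1], differentiable
    padded = np.concatenate([[0.0], normalized])  # prepend 0 so the envelope starts at 0
    window = np.ones(window_size) / window_size   # short averaging window
    return np.convolve(padded, window, mode="same")

# Even extreme parameter values yield samples bounded in [-1, 1]
samples = envelope_samples(np.array([1e8, -1e8, 0.5]))
```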
.. jupyter-execute::
@@ -149,8 +132,7 @@ example of one.
return output_signal
-Observe, for example, the signal generated when all parameters are
-:math:`10^8`:
+Observe, for example, the signal generated when all parameters are :math:`10^8`:
.. jupyter-execute::
@@ -161,8 +143,8 @@ Observe, for example, the signal generated when all parameters are
4. Define gate fidelity
-----------------------
-We will optimize an :math:`X` gate, and define the fidelity of the unitary :math:`U`
-implemented by the pulse via the standard fidelity measure:
+We will optimize an :math:`X` gate, and define the fidelity of the unitary :math:`U` implemented by
+the pulse via the standard fidelity measure:
.. math:: f(U) = \frac{|\text{Tr}(XU)|^2}{4}
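This measure is simple to write down directly. A standalone sketch (assuming ``numpy``; in the tutorial the computation happens inside the objective defined next):

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

def gate_fidelity(U):
    """Standard fidelity |Tr(X U)|^2 / 4 against the target X gate."""
    return np.abs(np.trace(X @ U)) ** 2 / 4

# A perfect X gate has fidelity 1; global phase does not matter
assert np.isclose(gate_fidelity(X), 1.0)
assert np.isclose(gate_fidelity(1j * X), 1.0)
assert np.isclose(gate_fidelity(np.eye(2)), 0.0)
```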
@@ -179,9 +161,8 @@ implemented by the pulse via the standard fidelity measure:
The function we want to optimize consists of:
- Taking a list of input samples and applying the signal mapping.
-- Simulating the Schrodinger equation over the length of the pulse
- sequence.
-- Computing and return the infidelity (we minimize :math:`1-f(U)`).
+- Simulating the Schrodinger equation over the length of the pulse sequence.
+- Computing and returning the infidelity (we minimize :math:`1 - f(U)`).
.. jupyter-execute::
@@ -210,15 +191,13 @@ The function we want to optimize consists of:
Finally, we gradient optimize the objective:
-- Use ``jax.value_and_grad`` to transform the objective into a function
- that computes both the objective and the gradient.
-- Use ``jax.jit`` to just-in-time compile the function into optimized
- `XLA `__ code. For the initial cost of
- performing the compilation, this speeds up each call of the function,
- speeding up the optimization.
-- Call ``scipy.optimize.minimize`` with the above, with
- ``method='BFGS'`` and ``jac=True`` to indicate that the passed
- objective also computes the gradient.
+- Use ``jax.value_and_grad`` to transform the objective into a function that computes both the
+ objective and the gradient.
+- Use ``jax.jit`` to just-in-time compile the function into optimized `XLA
+ `__ code. For the initial cost of performing the compilation,
+ this speeds up each call of the function, speeding up the optimization.
+- Call ``scipy.optimize.minimize`` with the above, with ``method='BFGS'`` and ``jac=True`` to
+ indicate that the passed objective also computes the gradient.
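The pattern sketched in the steps above looks as follows on a toy stand-in objective (a quadratic bowl rather than the tutorial's infidelity; this is an illustrative sketch, not the tutorial's code):

```python
import jax
import jax.numpy as jnp
import numpy as np
from scipy.optimize import minimize

jax.config.update("jax_enable_x64", True)

# Stand-in objective: minimized at x = (1, 1, 1, 1)
def objective(x):
    return jnp.sum((x - 1.0) ** 2)

# value_and_grad returns (f(x), grad f(x)); jit compiles it to XLA
obj_and_grad = jax.jit(jax.value_and_grad(objective))

def scipy_objective(x):
    value, grad = obj_and_grad(x)
    # scipy expects plain floats/arrays; jac=True tells minimize that
    # the objective returns a (value, gradient) pair
    return float(value), np.asarray(grad)

opt_results = minimize(scipy_objective, np.zeros(4), method="BFGS", jac=True)
```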
.. jupyter-execute::
@@ -235,11 +214,11 @@ Finally, we gradient optimize the objective:
print('Function value: ' + str(opt_results.fun))
-The gate is optimized to an :math:`X` gate, with deviation within the
-numerical accuracy of the solver.
+The gate is optimized to an :math:`X` gate, with deviation within the numerical accuracy of the
+solver.
-We can draw the optimized signal, which is retrieved by applying the
-``signal_mapping`` to the optimized parameters.
+We can draw the optimized signal, which is retrieved by applying the ``signal_mapping`` to the
+optimized parameters.
.. jupyter-execute::
@@ -254,17 +233,16 @@ We can draw the optimized signal, which is retrieved by applying the
)
-Summing the signal samples yields approximately :math:`\pm 50`, which is
-equivalent to what one would expect based on a rotating wave
-approximation analysis.
+Summing the signal samples yields approximately :math:`\pm 50`, which is equivalent to what one
+would expect based on a rotating wave approximation analysis.
.. jupyter-execute::
opt_signal.samples.sum()
-7. Repeat the :math:`X`-gate optimization, alternatively using pulse schedules to specify the control sequence.
-----------------------------------------------------------------------------------------------------------------
+7. Repeat the :math:`X`-gate optimization, alternatively using pulse schedules to specify the control sequence
+---------------------------------------------------------------------------------------------------------------
Here, we perform the optimization again, however now we specify the parameterized control sequence
to optimize as a pulse schedule.
diff --git a/docs/tutorials/qiskit_pulse.rst b/docs/tutorials/qiskit_pulse.rst
index 2725c5a88..6b946265f 100644
--- a/docs/tutorials/qiskit_pulse.rst
+++ b/docs/tutorials/qiskit_pulse.rst
@@ -1,9 +1,8 @@
Simulating Qiskit Pulse Schedules with Qiskit Dynamics
======================================================
-This tutorial shows how to use Qiskit Dynamics to simulate a Pulse schedule
-with a simple model of a qubit. The
-qubit is modeled by the drift hamiltonian
+This tutorial shows how to use Qiskit Dynamics to simulate a Pulse schedule with a simple model of a
+qubit. The qubit is modeled by the drift Hamiltonian
.. math::
@@ -16,18 +15,17 @@ to which we apply the drive
H_\text{drive}(t) = \frac{r\,\Omega(t)}{2} X
-Here, :math:`\Omega(t)` is the drive signal which we will create using
-Qiskit pulse. The factor :math:`r` is the strength with which the drive
-signal drives the qubit. We begin by creating a pulse schedule with a
-``sx`` gate followed by a phase shift on the drive so that the following
-pulse creates a ``sy`` rotation. Therefore, if the qubit begins in the
-ground state we expect that this second pulse will not have any effect
-on the qubit. This situation is simulated with the following steps:
+Here, :math:`\Omega(t)` is the drive signal that we will create using Qiskit Pulse. The factor
+:math:`r` is the strength with which the drive signal drives the qubit. We begin by creating a pulse
+schedule with a ``sx`` gate followed by a phase shift on the drive so that the following pulse
+creates a ``sy`` rotation. Therefore, if the qubit begins in the ground state we expect that this
+second pulse will not have any effect on the qubit. This situation is simulated with the following
+steps:
-1. Create the pulse schedule
-2. Converting pulse schedules to a :class:`.Signal`
-3. Create the system model, configured to simulate pulse schedules
-4. Simulate the pulse schedule using the model
+1. Create the pulse schedule.
+2. Convert the pulse schedule to a :class:`.Signal`.
+3. Create the system model, configured to simulate pulse schedules.
+4. Simulate the pulse schedule using the model.
1. Create the pulse schedule
----------------------------
@@ -66,14 +64,12 @@ First, we use the pulse module in Qiskit to create a pulse schedule.
2. Convert the pulse schedule to a :class:`.Signal`
---------------------------------------------------
-Qiskit Dynamics has functionality for converting pulse schedule to instances
-of :class:`.Signal`. This is done using the pulse instruction to signal
-converter :class:`.InstructionToSignals`. This converter needs to know the
-sample rate of the arbitrary waveform generators creating the signals,
-i.e. ``dt``, as well as the carrier frequency of the signals,
-i.e. ``w``. The plot below shows the envelopes and the signals resulting
-from this conversion. The dashed line shows the time at which the
-virtual ``Z`` gate is applied.
+Qiskit Dynamics has functionality for converting pulse schedules to instances of :class:`.Signal`.
+This is done using the pulse instruction to signal converter :class:`.InstructionToSignals`. This
+converter needs to know the sample rate of the arbitrary waveform generators creating the signals,
+i.e. ``dt``, as well as the carrier frequency of the signals, i.e. ``w``. The plot below shows the
+envelopes and the signals resulting from this conversion. The dashed line shows the time at which
+the virtual ``Z`` gate is applied.
.. jupyter-execute::
@@ -98,11 +94,10 @@ virtual ``Z`` gate is applied.
3. Create the system model
--------------------------
-We now setup a :class:`.Solver` instance with the desired Hamiltonian information,
-and configure it to simulate pulse schedules. This requires specifying
-which channels act on which operators, channel carrier frequencies, and sample width ``dt``.
-Additionally, we setup this solver in the rotating frame and perform the
-rotating wave approximation.
+We now set up a :class:`.Solver` instance with the desired Hamiltonian information, and configure it
+to simulate pulse schedules. This requires specifying which channels act on which operators, channel
+carrier frequencies, and sample width ``dt``. Additionally, we set up this solver in the rotating
+frame and perform the rotating wave approximation.
.. jupyter-execute::
@@ -130,11 +125,11 @@ rotating wave approximation.
4. Simulate the pulse schedule using the model
----------------------------------------------
-In the last step we perform the simulation and plot the results. Note that, as we have
-configured ``hamiltonian_solver`` to simulate pulse schedules, we pass the schedule ``xp``
-directly to the ``signals`` argument of the ``solve`` method. Equivalently, ``signals``
-generated by ``converter.get_signals`` above can also be passed to the ``signals`` argument
-and in this case should produce identical behavior.
+In the last step we perform the simulation and plot the results. Note that, as we have configured
+``hamiltonian_solver`` to simulate pulse schedules, we pass the schedule ``xp`` directly to the
+``signals`` argument of the ``solve`` method. Equivalently, ``signals`` generated by
+``converter.get_signals`` above can also be passed to the ``signals`` argument and in this case
+should produce identical behavior.
.. jupyter-execute::
@@ -162,13 +157,11 @@ and in this case should produce identical behavior.
plt.xlim([0, 2*T])
plt.vlines(T, 0, 1.05, "k", linestyle="dashed")
-The plot below shows the population of the qubit as it evolves during
-the pulses. The vertical dashed line shows the time of the virtual Z
-rotation which was induced by the ``shift_phase`` instruction in the
-pulse schedule. As expected, the first pulse moves the qubit to an
-eigenstate of the ``Y`` operator. Therefore, the second pulse, which
-drives around the ``Y``-axis due to the phase shift, has hardley any
-influence on the populations of the qubit.
+The plot below shows the population of the qubit as it evolves during the pulses. The vertical
+dashed line shows the time of the virtual Z rotation which was induced by the ``shift_phase``
+instruction in the pulse schedule. As expected, the first pulse moves the qubit to an eigenstate of
+the ``Y`` operator. Therefore, the second pulse, which drives around the ``Y``-axis due to the phase
+shift, has hardly any influence on the populations of the qubit.
.. jupyter-execute::
diff --git a/docs/userguide/how_to_use_pulse_schedule_for_jax_jit.rst b/docs/userguide/how_to_use_pulse_schedule_for_jax_jit.rst
index 64c13eb8b..1c6223a44 100644
--- a/docs/userguide/how_to_use_pulse_schedule_for_jax_jit.rst
+++ b/docs/userguide/how_to_use_pulse_schedule_for_jax_jit.rst
@@ -1,42 +1,37 @@
.. _how-to use pulse schedules for jax-jit:
-How-to use pulse schedules generated by ``qiskit-pulse`` with JAX transformations
-=================================================================================
+How-to use pulse schedules generated by Qiskit Pulse with JAX transformations
+=============================================================================
-``Qiskit-pulse`` enables specification of time-dependence in quantum systems as pulse schedules,
-built from sequences of a variety of instructions, including the specification of shaped pulses (see
-the detailed API information about `Qiskit pulse API Reference
-`__). As of ``qiskit`` 0.40.0, JAX support
-was added for the :class:`~qiskit.pulse.library.ScalableSymbolicPulse` class. This user guide entry
-demonstrates the technical elements of utilizing this class within JAX-transformable functions.
+Qiskit Pulse enables specification of time-dependence in quantum systems as pulse schedules, built
+from sequences of a variety of instructions, including the specification of shaped pulses (see the
+detailed API information about `Qiskit pulse API Reference
+`__). As of Qiskit 0.40.0, JAX support was added for
+the :class:`~qiskit.pulse.library.ScalableSymbolicPulse` class. This user guide entry demonstrates
+the technical elements of utilizing this class within JAX-transformable functions.
.. note::
At present, only the :class:`~qiskit.pulse.library.ScalableSymbolicPulse` class is supported by
JAX, as the validation present in other pulse types, such as
:class:`~qiskit.pulse.library.Gaussian`, is not JAX-compatible.
-This guide addresses the following topics. See the :ref:`userguide on using JAX `
-for a more detailed explanation of how to work with JAX in Qiskit Dynamics.
+This guide addresses the following topics. See the :ref:`userguide on using JAX ` for a more detailed explanation of how to work with JAX in Qiskit
+Dynamics.
-1. Configure to use JAX.
+1. Configure JAX.
2. How to define a Gaussian pulse using :class:`~qiskit.pulse.library.ScalableSymbolicPulse`.
3. JAX transforming Pulse to Signal conversion involving
:class:`~qiskit.pulse.library.ScalableSymbolicPulse`.
-1. Configure to use JAX
------------------------
+1. Configure JAX
+----------------
-First, configure Dynamics to use JAX.
+First, configure JAX to run on CPU in 64-bit mode.
.. jupyter-execute::
- #################################################################################
- # Remove this
- #################################################################################
- import warnings
- warnings.filterwarnings("ignore")
-
# configure jax to use 64 bit mode
import jax
jax.config.update("jax_enable_x64", True)
diff --git a/docs/userguide/perturbative_solvers.rst b/docs/userguide/perturbative_solvers.rst
index b6d9614ad..10d5ef489 100644
--- a/docs/userguide/perturbative_solvers.rst
+++ b/docs/userguide/perturbative_solvers.rst
@@ -3,29 +3,28 @@ How-to use Dyson and Magnus based solvers
.. warning::
- This is an advanced topic --- utilizing perturbation-theory based solvers
- requires detailed knowledge of the structure of the differential equations
- involved, as well as manual tuning of the solver parameters.
- See the :class:`.DysonSolver` and :class:`.MagnusSolver` documentation for API details.
- Also, see :footcite:`puzzuoli_algorithms_2023` for a detailed explanation of the solvers,
- which varies and builds on the core idea introduced in :footcite:`shillito_fast_2020`.
+ This is an advanced topic --- utilizing perturbation-theory based solvers requires detailed
+ knowledge of the structure of the differential equations involved, as well as manual tuning of
+ the solver parameters. See the :class:`.DysonSolver` and :class:`.MagnusSolver` documentation
+ for API details. Also, see :footcite:`puzzuoli_algorithms_2023` for a detailed explanation of
+ the solvers, which varies and builds on the core idea introduced in
+ :footcite:`shillito_fast_2020`.
.. note::
- The circumstances under which perturbative solvers outperform
- traditional solvers, and which parameter sets to use, is nuanced.
- Perturbative solvers executed with JAX are setup to use more parallelization within a
- single solver run than typical solvers, and thus it is circumstance-specific whether
- the trade-off between speed of a single run and resource consumption is advantageous.
- Due to the parallelized nature, the comparison of execution times demonstrated in this
- userguide are highly hardware-dependent.
+ The circumstances under which perturbative solvers outperform traditional solvers, and which
+ parameter sets to use, are nuanced. Perturbative solvers executed with JAX are set up to use
+ more parallelization within a single solver run than typical solvers, and thus it is
+ circumstance-specific whether the trade-off between speed of a single run and resource
+ consumption is advantageous. Due to the parallelized nature, the comparisons of execution times
+ demonstrated in this userguide are highly hardware-dependent.
-In this tutorial we walk through how to use perturbation-theory based solvers. For
-information on how these solvers work, see the :class:`.DysonSolver` and :class:`.MagnusSolver`
-class documentation, as well as the perturbative expansion background information provided in
-:ref:`Time-dependent perturbation theory and multi-variable
-series expansions review `.
+In this tutorial we walk through how to use perturbation-theory based solvers. For information on
+how these solvers work, see the :class:`.DysonSolver` and :class:`.MagnusSolver` class
+documentation, as well as the perturbative expansion background information provided in
+:ref:`Time-dependent perturbation theory and multi-variable series expansions review `.
We use a simple transmon model:
@@ -33,36 +32,28 @@ We use a simple transmon model:
where:
-- :math:`N`, :math:`a`, and :math:`a^\dagger` are, respectively, the
- number, annihilation, and creation operators.
-- :math:`\nu` is the qubit frequency and :math:`r` is the drive
- strength.
-- :math:`s(t)` is the drive signal, which we will take to be on
- resonance with envelope :math:`f(t) = A \frac{4t (T - t)}{T^2}`
- for a given amplitude :math:`A` and total time :math:`T`.
+- :math:`N`, :math:`a`, and :math:`a^\dagger` are, respectively, the number, annihilation, and
+ creation operators.
+- :math:`\nu` is the qubit frequency and :math:`r` is the drive strength.
+- :math:`s(t)` is the drive signal, which we will take to be on resonance with envelope
+ :math:`f(t) = A \frac{4t (T - t)}{T^2}` for a given amplitude :math:`A` and total time :math:`T`.
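The parabolic envelope above is simple to sketch as a plain function (a minimal illustration with a hypothetical total time ``T = 100``; the actual value used in the simulation is set later): it vanishes at the endpoints and peaks at the amplitude ``A`` at the midpoint.

```python
import numpy as np

# parabolic pulse envelope f(t) = A * 4 t (T - t) / T**2 from the model description
def envelope(t, A=1.0, T=100.0):
    return A * 4 * t * (T - t) / T**2

# zero at the endpoints, equal to the amplitude A at the midpoint t = T / 2
print(envelope(0.0), envelope(50.0), envelope(100.0))  # 0.0 1.0 0.0
```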
We will walk through the following steps:
-1. Configure ``qiskit-dynamics`` to work with JAX.
+1. Configure JAX.
2. Construct the model.
3. How-to construct and simulate using the Dyson-based perturbative solver.
4. Simulate using a traditional ODE solver, comparing speed.
5. How-to construct and simulate using the Magnus-based perturbative solver.
-1. Configure to use JAX
------------------------
+1. Configure JAX
+----------------
-These simulations will be done with JAX array backend to enable
-compilation. See the :ref:`userguide on using JAX ` for a more detailed
-explanation of how to work with JAX in Qiskit Dynamics.
+First, configure JAX to run on CPU in 64-bit mode. See the :ref:`userguide on using JAX ` for a
+more detailed explanation of how to work with JAX in Qiskit Dynamics.
.. jupyter-execute::
-
- #################################################################################
- # Remove this
- #################################################################################
- import warnings
- warnings.filterwarnings("ignore")
# configure jax to use 64 bit mode
import jax
@@ -75,12 +66,11 @@ explanation of how to work with JAX in Qiskit Dynamics.
2. Construct the model
----------------------
-First, we construct the model described in the introduction. We use a relatively
-high dimension for the oscillator system state space to accentuate the speed
-difference between the perturbative solvers and the traditional ODE solver. The higher
-dimensionality introduces higher frequencies into the model, which will
-slow down both the ODE solver and the initial construction of the perturbative solver. However
-after the initial construction, the higher frequencies in the model have no impact
+First, we construct the model described in the introduction. We use a relatively high dimension for
+the oscillator system state space to accentuate the speed difference between the perturbative
+solvers and the traditional ODE solver. The higher dimensionality introduces higher frequencies into
+the model, which will slow down both the ODE solver and the initial construction of the perturbative
+solver. However, after the initial construction, the higher frequencies in the model have no impact
on the perturbative solver speed.
.. jupyter-execute::
@@ -112,23 +102,23 @@ on the perturbative solver speed.
3. How-to construct and simulate using the Dyson-based perturbative solver
--------------------------------------------------------------------------
-Setting up a :class:`.DysonSolver` requires more setup than the standard
-:class:`.Solver`, as the user must specify several configuration parameters,
-along with the structure of the differential equation:
+Setting up a :class:`.DysonSolver` requires more setup than the standard :class:`.Solver`, as the
+user must specify several configuration parameters, along with the structure of the differential
+equation:
-- The :class:`.DysonSolver` requires direct specification of the LMDE to the
- solver. If we are simulating the Schrodinger equation, we need to
- multiply the Hamiltonian terms by ``-1j`` when describing the LMDE operators.
-- The :class:`.DysonSolver` is a fixed step solver, with the step size
- being fixed at instantiation. This step size must be chosen in conjunction
- with the ``expansion_order`` to ensure that a suitable accuracy is attained.
-- Over each fixed time-step the :class:`.DysonSolver` solves by computing a
- truncated perturbative expansion.
+- The :class:`.DysonSolver` requires direct specification of the LMDE to the solver. If we are
+ simulating the Schrodinger equation, we need to multiply the Hamiltonian terms by ``-1j`` when
+ describing the LMDE operators.
+- The :class:`.DysonSolver` is a fixed-step solver, with the step size set at instantiation.
+ This step size must be chosen in conjunction with the ``expansion_order`` to ensure that a
+ suitable accuracy is attained.
+- Over each fixed time-step the :class:`.DysonSolver` solves by computing a truncated perturbative
+ expansion.
- - To compute the truncated perturbative expansion, the signal envelopes are
- approximated as a linear combination of Chebyshev polynomials.
- - The order of the Chebyshev approximations, along with central carrier frequencies
- for defining the “envelope” of each :class:`.Signal`, must be provided at instantiation.
+ - To compute the truncated perturbative expansion, the signal envelopes are approximated as a
+ linear combination of Chebyshev polynomials.
+ - The order of the Chebyshev approximations, along with central carrier frequencies for defining
+ the “envelope” of each :class:`.Signal`, must be provided at instantiation.
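To make the Chebyshev bullet concrete, here is a small sketch, not of :class:`.DysonSolver` internals, but of the general idea: over a single short fixed step, a smooth envelope is well captured by a low-degree Chebyshev fit. The envelope and step values below are illustrative assumptions.

```python
import numpy as np

# illustrative envelope (not solver code): parabolic pulse shape
def envelope(t, A=1.0, T=100.0):
    return A * 4 * t * (T - t) / T**2

# a single fixed step [t0, t0 + dt], as used by a fixed-step perturbative solver
t0, dt = 30.0, 0.1
ts = np.linspace(t0, t0 + dt, 101)

# map the step onto Chebyshev's natural domain [-1, 1] and fit a degree-1 polynomial
x = 2 * (ts - t0) / dt - 1
coeffs = np.polynomial.chebyshev.chebfit(x, envelope(ts), deg=1)

# maximum error of the linear Chebyshev approximation over the step
err = np.max(np.abs(np.polynomial.chebyshev.chebval(x, coeffs) - envelope(ts)))
print(err)  # tiny: the envelope is nearly linear over such a short step
```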
See the :class:`.DysonSolver` API docs for more details.
@@ -153,22 +143,21 @@ For our example Hamiltonian we configure the :class:`.DysonSolver` as follows:
rtol=1e-12
)
-The above parameters are chosen so that the :class:`.DysonSolver` is fast and produces
-high accuracy solutions (measured and confirmed after the fact). The relatively large
-step size ``dt = 0.1`` is chosen for speed: the larger the step size, the fewer steps required.
-To ensure high accuracy given the large step size, we choose a high expansion order,
-and utilize a linear envelope approximation scheme by setting the ``chebyshev_order`` to ``1``
-for the single drive signal.
+The above parameters are chosen so that the :class:`.DysonSolver` is fast and produces high accuracy
+solutions (measured and confirmed after the fact). The relatively large step size ``dt = 0.1`` is
+chosen for speed: the larger the step size, the fewer steps required. To ensure high accuracy given
+the large step size, we choose a high expansion order, and utilize a linear envelope approximation
+scheme by setting the ``chebyshev_order`` to ``1`` for the single drive signal.
-Similar to the :class:`.Solver` interface, the :meth:`.DysonSolver.solve` method can be
-called to simulate the system for a given list of signals, initial state, start time,
-and number of time steps of length ``dt``.
+Similar to the :class:`.Solver` interface, the :meth:`.DysonSolver.solve` method can be called to
+simulate the system for a given list of signals, initial state, start time, and number of time steps
+of length ``dt``.
-To properly compare the speed of :class:`.DysonSolver` to a traditional ODE solver,
-we write JAX-compilable functions wrapping each that, given an amplitude value,
-returns the final unitary over the interval ``[0, (T // dt) * dt]`` for an on-resonance
-drive with envelope shape given by ``envelope_func`` above. Running compiled versions of
-these functions gives a sense of the speeds attainable by these solvers.
+To properly compare the speed of :class:`.DysonSolver` to a traditional ODE solver, we write
+JAX-compilable functions wrapping each that, given an amplitude value, returns the final unitary
+over the interval ``[0, (T // dt) * dt]`` for an on-resonance drive with envelope shape given by
+``envelope_func`` above. Running compiled versions of these functions gives a sense of the speeds
+attainable by these solvers.
.. jupyter-execute::
@@ -196,8 +185,8 @@ First run includes compile time.
%time yf_dyson = dyson_sim(1.).block_until_ready()
-Once JIT compilation has been performance we can benchmark the performance of the
-JIT-compiled solver:
+Once JIT compilation has been performed, we can benchmark the performance of the JIT-compiled
+solver:
.. jupyter-execute::
@@ -207,8 +196,8 @@ JIT-compiled solver:
4. Comparison to traditional ODE solver
---------------------------------------
-We now construct the same simulation using a standard solver to compare
-accuracy and simulation speed.
+We now construct the same simulation using a standard solver to compare accuracy and simulation
+speed.
.. jupyter-execute::
@@ -262,17 +251,16 @@ Confirm similar accuracy solution.
np.linalg.norm(yf_low_tol - yf_ode)
-Here we see that, once compiled, the Dyson-based solver has a
-significant speed advantage over the traditional solver, at the expense
-of the initial compilation time and the technical aspect of using the solver.
+Here we see that, once compiled, the Dyson-based solver has a significant speed advantage over the
+traditional solver, at the expense of the initial compilation time and the technical aspect of using
+the solver.
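The compile-once, run-many pattern behind this comparison can be sketched in isolation (a schematic only: ``final_value`` is a stand-in computation, not the actual solver call): the first invocation of a ``jax.jit``-wrapped function pays the tracing and compilation cost, and later calls reuse the compiled executable.

```python
import jax

jax.config.update("jax_enable_x64", True)
import jax.numpy as jnp

# stand-in for a parameterized simulation: drive amplitude in, final value out
def final_value(amp):
    return jnp.exp(-1j * jnp.pi * amp)

fast = jax.jit(final_value)
first = fast(1.0)   # first call traces and compiles the function
second = fast(1.0)  # subsequent calls reuse the compiled executable
print(first, second)
```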
5. How-to construct and simulate using the Magnus-based perturbation solver
---------------------------------------------------------------------------
-Next, we repeat our example using the Magnus-based perturbative solver.
-Setup of the :class:`.MagnusSolver` is similar to the :class:`.DysonSolver`,
-but it uses the Magnus expansion and matrix exponentiation to simulate over
-each fixed time step.
+Next, we repeat our example using the Magnus-based perturbative solver. Setup of the
+:class:`.MagnusSolver` is similar to the :class:`.DysonSolver`, but it uses the Magnus expansion and
+matrix exponentiation to simulate over each fixed time step.
.. jupyter-execute::
@@ -327,7 +315,7 @@ Second run demonstrates speed of the simulation.
np.linalg.norm(yf_magnus - yf_low_tol)
-Observe comparable accuracy at a lower order in the expansion, albeit
-with a modest speed up as compared to the Dyson-based solver.
+Observe comparable accuracy at a lower order in the expansion, albeit with a modest speedup
+compared to the Dyson-based solver.
.. footbibliography::