
Add exactn cpp binding #1014

Merged: 95 commits, Dec 11, 2024
Commits
e7bbd43
initial commit
multiphaseCFD Nov 4, 2024
e41be74
Auto update version from '0.39.0-dev51' to '0.39.0-dev53'
ringo-but-quantum Nov 5, 2024
ae9110f
add getMethod public interface
multiphaseCFD Nov 5, 2024
e45b6af
move quantum state creation to tensornetbase class and ensure device …
multiphaseCFD Nov 5, 2024
7df0aea
move getData / getDataVector API from MPS class to TNCudaBase
multiphaseCFD Nov 5, 2024
3fe2b8d
move tensor data from MPSTNCuda to TNCudaBase
multiphaseCFD Nov 5, 2024
4dbcdcf
move basistate set to the base
multiphaseCFD Nov 5, 2024
bd4ba9c
fix typo
multiphaseCFD Nov 5, 2024
8ed779c
method as static data and tidy up
multiphaseCFD Nov 5, 2024
e509ea5
Merge branch 'master' into add_exatn_cpp
multiphaseCFD Nov 6, 2024
cd2c2c6
Auto update version from '0.40.0-dev0' to '0.40.0-dev1'
ringo-but-quantum Nov 6, 2024
594e530
fix exatn initialization
multiphaseCFD Nov 6, 2024
548ebaa
move more methods and data to tncudabase class
multiphaseCFD Nov 6, 2024
46f2e80
tidy up code
multiphaseCFD Nov 6, 2024
aa4e284
rename updateMPSSiteData method
multiphaseCFD Nov 6, 2024
ee4e3f3
skip ci
multiphaseCFD Nov 6, 2024
2825e82
Merge branch 'master' into add_exatn_cpp
multiphaseCFD Nov 8, 2024
57d4536
Auto update version from '0.40.0-dev4' to '0.40.0-dev5'
ringo-but-quantum Nov 8, 2024
061d708
base binding for exatn
LuisAlfredoNu Nov 18, 2024
570b9bd
Merge branch 'master' into add_exatn_cpp
multiphaseCFD Nov 18, 2024
8b75653
Auto update version from '0.40.0-dev11' to '0.40.0-dev12'
ringo-but-quantum Nov 18, 2024
fc07ea3
add changelog
LuisAlfredoNu Nov 28, 2024
c32ca00
Add C++ unit tests for Exact Tensor Network backends (#998)
LuisAlfredoNu Nov 28, 2024
4e6b5d3
Auto update version from '0.40.0-dev12' to '0.40.0-dev22'
ringo-but-quantum Nov 28, 2024
f9488f2
Merge branch 'add_exatn_cpp' into add_exatn_cpp_binding
LuisAlfredoNu Nov 28, 2024
90147a1
Merge branch 'master' into add_exatn_cpp
multiphaseCFD Nov 28, 2024
daf97c8
Auto update version from '0.40.0-dev21' to '0.40.0-dev22'
ringo-but-quantum Nov 28, 2024
2e6e272
fix codefactor complains
multiphaseCFD Nov 28, 2024
85ede43
update testHelpersTNCuda
multiphaseCFD Nov 28, 2024
ca23421
Merge branch 'master' into add_exatn_cpp
multiphaseCFD Nov 28, 2024
263cebb
Auto update version from '0.40.0-dev23' to '0.40.0-dev24'
ringo-but-quantum Nov 28, 2024
11ce837
first verion of bindings
LuisAlfredoNu Nov 28, 2024
c6dba26
apply format
LuisAlfredoNu Nov 28, 2024
232b6d5
add changelog entry
multiphaseCFD Nov 28, 2024
c2f1ea5
Update pennylane_lightning/core/src/simulators/lightning_tensor/tncud…
multiphaseCFD Nov 28, 2024
5f7eda8
Update pennylane_lightning/core/src/simulators/lightning_tensor/tncud…
multiphaseCFD Nov 28, 2024
6f862a7
Update pennylane_lightning/core/src/simulators/lightning_tensor/tncud…
multiphaseCFD Nov 28, 2024
3ec8a24
make format
multiphaseCFD Dec 2, 2024
b000f99
python test almost ready
LuisAlfredoNu Dec 2, 2024
04b5309
remove the setBasisState comment
multiphaseCFD Dec 2, 2024
34393ed
quick fix
multiphaseCFD Dec 2, 2024
26d20f1
default destructor
multiphaseCFD Dec 2, 2024
07648f9
Adding exatn to the unit test
LuisAlfredoNu Dec 2, 2024
b7a4ee4
apply format
LuisAlfredoNu Dec 2, 2024
b25dc6e
trigger CIs
LuisAlfredoNu Dec 3, 2024
ad20b6b
fix serialize testing issue
LuisAlfredoNu Dec 3, 2024
0312341
fix serialize testing issue
LuisAlfredoNu Dec 3, 2024
9c4f52f
apply format
LuisAlfredoNu Dec 3, 2024
51dfa90
fix wheels isue
LuisAlfredoNu Dec 3, 2024
5b9b05c
apply Joseph's suggestions
multiphaseCFD Dec 3, 2024
1407b9c
Auto update version from '0.40.0-dev24' to '0.40.0-dev25'
ringo-but-quantum Dec 3, 2024
eae05cf
Merge branch 'master' into add_exatn_cpp
multiphaseCFD Dec 3, 2024
d7b8897
Auto update version from '0.40.0-dev24' to '0.40.0-dev25'
ringo-but-quantum Dec 3, 2024
dd8b063
add TODOs
multiphaseCFD Dec 3, 2024
a1482b9
exatn->exacttn
multiphaseCFD Dec 3, 2024
879d249
fix bindings
LuisAlfredoNu Dec 3, 2024
0e34cc1
bug fix
multiphaseCFD Dec 3, 2024
3e2cd56
update docstring
multiphaseCFD Dec 3, 2024
ca98aec
fix naming issues Tensornet->TNCuda, TNCudaBase->TNCuda
multiphaseCFD Dec 3, 2024
31e3a13
const, constexpr
multiphaseCFD Dec 3, 2024
50e4a62
fix changelog
multiphaseCFD Dec 3, 2024
005da34
Remove lightning.tensor_mps or exatn
LuisAlfredoNu Dec 3, 2024
c549cd3
repalce exatn by exact
LuisAlfredoNu Dec 3, 2024
659d7ee
expand serialize test
LuisAlfredoNu Dec 3, 2024
c3987e1
apply format
LuisAlfredoNu Dec 3, 2024
b465401
Merge branch 'master' into add_exatn_cpp
multiphaseCFD Dec 4, 2024
178f110
Auto update version from '0.40.0-dev26' to '0.40.0-dev27'
ringo-but-quantum Dec 4, 2024
1c16994
Merge branch 'add_exatn_cpp' into add_exatn_cpp_binding
LuisAlfredoNu Dec 4, 2024
a3a2604
adapt ExactTNCuda
LuisAlfredoNu Dec 4, 2024
c50b22d
apply format
LuisAlfredoNu Dec 4, 2024
3bea1f8
Joseph, Ali and Shuli suggestions
LuisAlfredoNu Dec 5, 2024
6e0efb9
Merge branch 'master' into add_exatn_cpp_binding
LuisAlfredoNu Dec 5, 2024
1a68216
appply format
LuisAlfredoNu Dec 5, 2024
0b30b4a
Auto update version from '0.40.0-dev31' to '0.40.0-dev32'
ringo-but-quantum Dec 5, 2024
c319efa
solve pylint issues
LuisAlfredoNu Dec 5, 2024
5ade2c4
Merge branch 'add_exatn_cpp_binding' of github.com:PennyLaneAI/pennyl…
LuisAlfredoNu Dec 5, 2024
b8d7c2c
solve pylint issues
LuisAlfredoNu Dec 5, 2024
f1e4dea
solve pylint issues
LuisAlfredoNu Dec 5, 2024
6d21410
apply format
LuisAlfredoNu Dec 5, 2024
1a6d843
Shuli comments
LuisAlfredoNu Dec 6, 2024
7644563
apply format
LuisAlfredoNu Dec 6, 2024
73ce36e
Joseph comments
LuisAlfredoNu Dec 6, 2024
8b18ad5
amintor commenst
LuisAlfredoNu Dec 6, 2024
c8da1e1
apply format
LuisAlfredoNu Dec 6, 2024
c4657bf
Auto update version from '0.40.0-dev32' to '0.40.0-dev33'
ringo-but-quantum Dec 6, 2024
5be1e9c
Merge branch 'master' into add_exatn_cpp_binding
LuisAlfredoNu Dec 6, 2024
9022c84
Auto update version from '0.40.0-dev32' to '0.40.0-dev33'
ringo-but-quantum Dec 6, 2024
39a1b4d
ali comments
LuisAlfredoNu Dec 6, 2024
7437358
Update CHANGELOG
LuisAlfredoNu Dec 6, 2024
5c91632
solve pylint
LuisAlfredoNu Dec 6, 2024
96f8a26
apply format
LuisAlfredoNu Dec 6, 2024
4c59a3d
fix CIs
LuisAlfredoNu Dec 6, 2024
cbafb94
trigger CIs
LuisAlfredoNu Dec 7, 2024
b719a3a
Merge branch 'master' into add_exatn_cpp_binding
multiphaseCFD Dec 11, 2024
b387b7d
Auto update version from '0.40.0-dev33' to '0.40.0-dev34'
ringo-but-quantum Dec 11, 2024
3 changes: 3 additions & 0 deletions .github/CHANGELOG.md
@@ -29,6 +29,9 @@

### Improvements

* Add Exact Tensor Network cpp binding.
[(#1014)](https://github.com/PennyLaneAI/pennylane-lightning/pull/1014/)

* Catalyst device interfaces support dynamic shots, and no longer parses the device init op's attribute dictionary for a static shots literal.
[(#1017)](https://github.com/PennyLaneAI/pennylane-lightning/pull/1017)

2 changes: 1 addition & 1 deletion .github/workflows/tests_gpu_python.yml
@@ -183,7 +183,7 @@ jobs:
run: |
rm -rf build
PL_BACKEND=lightning_qubit python scripts/configure_pyproject_toml.py || true
PL_BACKEND=lightning_qubit SKIP_COMPILATION=True python -m pip install . -vv
PL_BACKEND=lightning_qubit python -m pip install . -vv
rm -rf build
PL_BACKEND=${{ matrix.pl_backend }} python scripts/configure_pyproject_toml.py || true
120 changes: 83 additions & 37 deletions pennylane_lightning/core/_serialize.py
@@ -54,12 +54,18 @@
use_csingle (bool): whether to use np.complex64 instead of np.complex128
use_mpi (bool, optional): If using MPI to accelerate calculation. Defaults to False.
split_obs (Union[bool, int], optional): If splitting the observables in a list. Defaults to False.
tensor_backend (str): If using `lightning.tensor` and select the TensorNetwork backend, mps or exact. Default to ''

"""

# pylint: disable=import-outside-toplevel, too-many-instance-attributes, c-extension-no-member, too-many-branches, too-many-statements
# pylint: disable=import-outside-toplevel, too-many-instance-attributes, c-extension-no-member, too-many-branches, too-many-statements too-many-positional-arguments too-many-arguments
def __init__(
self, device_name, use_csingle: bool = False, use_mpi: bool = False, split_obs: bool = False
self,
device_name,
use_csingle: bool = False,
use_mpi: bool = False,
split_obs: bool = False,
tensor_backend: str = str(),
):
self.use_csingle = use_csingle
self.device_name = device_name
@@ -95,43 +101,14 @@
else:
raise DeviceError(f'The device name "{device_name}" is not a valid option.')

if device_name == "lightning.tensor":
self.tensornetwork_c64 = lightning_ops.TensorNetC64
self.tensornetwork_c128 = lightning_ops.TensorNetC128
else:
self.statevector_c64 = lightning_ops.StateVectorC64
self.statevector_c128 = lightning_ops.StateVectorC128

self.named_obs_c64 = lightning_ops.observables.NamedObsC64
self.named_obs_c128 = lightning_ops.observables.NamedObsC128
self.hermitian_obs_c64 = lightning_ops.observables.HermitianObsC64
self.hermitian_obs_c128 = lightning_ops.observables.HermitianObsC128
self.tensor_prod_obs_c64 = lightning_ops.observables.TensorProdObsC64
self.tensor_prod_obs_c128 = lightning_ops.observables.TensorProdObsC128
self.hamiltonian_c64 = lightning_ops.observables.HamiltonianC64
self.hamiltonian_c128 = lightning_ops.observables.HamiltonianC128

if device_name != "lightning.tensor":
self.sparse_hamiltonian_c64 = lightning_ops.observables.SparseHamiltonianC64
self.sparse_hamiltonian_c128 = lightning_ops.observables.SparseHamiltonianC128

self._use_mpi = use_mpi

if self._use_mpi:
self.statevector_mpi_c64 = lightning_ops.StateVectorMPIC64
self.statevector_mpi_c128 = lightning_ops.StateVectorMPIC128
self.named_obs_mpi_c64 = lightning_ops.observablesMPI.NamedObsMPIC64
self.named_obs_mpi_c128 = lightning_ops.observablesMPI.NamedObsMPIC128
self.hermitian_obs_mpi_c64 = lightning_ops.observablesMPI.HermitianObsMPIC64
self.hermitian_obs_mpi_c128 = lightning_ops.observablesMPI.HermitianObsMPIC128
self.tensor_prod_obs_mpi_c64 = lightning_ops.observablesMPI.TensorProdObsMPIC64
self.tensor_prod_obs_mpi_c128 = lightning_ops.observablesMPI.TensorProdObsMPIC128
self.hamiltonian_mpi_c64 = lightning_ops.observablesMPI.HamiltonianMPIC64
self.hamiltonian_mpi_c128 = lightning_ops.observablesMPI.HamiltonianMPIC128
self.sparse_hamiltonian_mpi_c64 = lightning_ops.observablesMPI.SparseHamiltonianMPIC64
self.sparse_hamiltonian_mpi_c128 = lightning_ops.observablesMPI.SparseHamiltonianMPIC128

self._mpi_manager = lightning_ops.MPIManager
if device_name in ["lightning.qubit", "lightning.kokkos", "lightning.gpu"]:
assert tensor_backend == str()
self._set_lightning_state_bindings(lightning_ops)

else:
self._tensor_backend = tensor_backend
self._set_lightning_tensor_bindings(tensor_backend, lightning_ops)

@property
def ctype(self):
Expand Down Expand Up @@ -193,6 +170,75 @@
)
return self.sparse_hamiltonian_c64 if self.use_csingle else self.sparse_hamiltonian_c128

def _set_lightning_state_bindings(self, lightning_ops):
"""Define the variables needed to access the modules from the C++ bindings for state vector."""

self.statevector_c64 = lightning_ops.StateVectorC64
self.statevector_c128 = lightning_ops.StateVectorC128

self.named_obs_c64 = lightning_ops.observables.NamedObsC64
self.named_obs_c128 = lightning_ops.observables.NamedObsC128
self.hermitian_obs_c64 = lightning_ops.observables.HermitianObsC64
self.hermitian_obs_c128 = lightning_ops.observables.HermitianObsC128
self.tensor_prod_obs_c64 = lightning_ops.observables.TensorProdObsC64
self.tensor_prod_obs_c128 = lightning_ops.observables.TensorProdObsC128
self.hamiltonian_c64 = lightning_ops.observables.HamiltonianC64
self.hamiltonian_c128 = lightning_ops.observables.HamiltonianC128

self.sparse_hamiltonian_c64 = lightning_ops.observables.SparseHamiltonianC64
self.sparse_hamiltonian_c128 = lightning_ops.observables.SparseHamiltonianC128

if self._use_mpi:
self.statevector_mpi_c64 = lightning_ops.StateVectorMPIC64
self.statevector_mpi_c128 = lightning_ops.StateVectorMPIC128

self.named_obs_mpi_c64 = lightning_ops.observablesMPI.NamedObsMPIC64
self.named_obs_mpi_c128 = lightning_ops.observablesMPI.NamedObsMPIC128
self.hermitian_obs_mpi_c64 = lightning_ops.observablesMPI.HermitianObsMPIC64
self.hermitian_obs_mpi_c128 = lightning_ops.observablesMPI.HermitianObsMPIC128
self.tensor_prod_obs_mpi_c64 = lightning_ops.observablesMPI.TensorProdObsMPIC64
self.tensor_prod_obs_mpi_c128 = lightning_ops.observablesMPI.TensorProdObsMPIC128
self.hamiltonian_mpi_c64 = lightning_ops.observablesMPI.HamiltonianMPIC64
self.hamiltonian_mpi_c128 = lightning_ops.observablesMPI.HamiltonianMPIC128

self.sparse_hamiltonian_mpi_c64 = lightning_ops.observablesMPI.SparseHamiltonianMPIC64
self.sparse_hamiltonian_mpi_c128 = lightning_ops.observablesMPI.SparseHamiltonianMPIC128

self._mpi_manager = lightning_ops.MPIManager

def _set_lightning_tensor_bindings(self, tensor_backend, lightning_ops):
"""Define the variables needed to access the modules from the C++ bindings for tensor network."""
if tensor_backend == "mps":
self.tensornetwork_c64 = lightning_ops.mpsTensorNetC64
self.tensornetwork_c128 = lightning_ops.mpsTensorNetC128

self.named_obs_c64 = lightning_ops.observables.mpsNamedObsC64
self.named_obs_c128 = lightning_ops.observables.mpsNamedObsC128
self.hermitian_obs_c64 = lightning_ops.observables.mpsHermitianObsC64
self.hermitian_obs_c128 = lightning_ops.observables.mpsHermitianObsC128
self.tensor_prod_obs_c64 = lightning_ops.observables.mpsTensorProdObsC64
self.tensor_prod_obs_c128 = lightning_ops.observables.mpsTensorProdObsC128
self.hamiltonian_c64 = lightning_ops.observables.mpsHamiltonianC64
self.hamiltonian_c128 = lightning_ops.observables.mpsHamiltonianC128

elif tensor_backend == "tn":
self.tensornetwork_c64 = lightning_ops.exactTensorNetC64
self.tensornetwork_c128 = lightning_ops.exactTensorNetC128

self.named_obs_c64 = lightning_ops.observables.exactNamedObsC64
self.named_obs_c128 = lightning_ops.observables.exactNamedObsC128
self.hermitian_obs_c64 = lightning_ops.observables.exactHermitianObsC64
self.hermitian_obs_c128 = lightning_ops.observables.exactHermitianObsC128
self.tensor_prod_obs_c64 = lightning_ops.observables.exactTensorProdObsC64
self.tensor_prod_obs_c128 = lightning_ops.observables.exactTensorProdObsC128
self.hamiltonian_c64 = lightning_ops.observables.exactHamiltonianC64
self.hamiltonian_c128 = lightning_ops.observables.exactHamiltonianC128

else:
raise ValueError(

)

def _named_obs(self, observable, wires_map: dict = None):
"""Serializes a Named observable"""
wires = [wires_map[w] for w in observable.wires] if wires_map else observable.wires.tolist()
2 changes: 1 addition & 1 deletion pennylane_lightning/core/_version.py
@@ -16,4 +16,4 @@
Version number (major.minor.patch[-label])
"""

__version__ = "0.40.0-dev32"
__version__ = "0.40.0-dev33"
2 changes: 1 addition & 1 deletion pennylane_lightning/core/src/bindings/Bindings.cpp
@@ -85,6 +85,6 @@ PYBIND11_MODULE(
// Register bindings for backend-specific info:
registerBackendSpecificInfo(m);

registerLightningTensorClassBindings<TensorNetBackends>(m);
registerLightningTensorClassBindings<TensorNetworkBackends>(m);
}
#endif
141 changes: 136 additions & 5 deletions pennylane_lightning/core/src/bindings/Bindings.hpp
@@ -749,7 +749,7 @@ void registerLightningTensorBackendAgnosticMeasurements(PyClass &pyclass) {
"Variance of an observable object.")
.def("generate_samples", [](MeasurementsT &M,
const std::vector<std::size_t> &wires,
const std::size_t num_shots) {
std::size_t num_shots) {
constexpr auto sz = sizeof(std::size_t);
const std::size_t num_wires = wires.size();
const std::size_t ndim = 2;
@@ -769,23 +769,154 @@ void registerLightningTensorBackendAgnosticMeasurements(PyClass &pyclass) {
});
}

/**
* @brief Register observable classes for TensorNetwork.
*
* @tparam LightningBackendT
* @param m Pybind module
* @param name backend name of TN (mps, tn)
*/
template <class LightningBackendT>
void registerBackendAgnosticObservablesTensor(py::module_ &m,
const std::string &name) {
using PrecisionT =
typename LightningBackendT::PrecisionT; // LightningBackendT's
// precision.
using ComplexT =
typename LightningBackendT::ComplexT; // LightningBackendT's
// complex type.
using ParamT = PrecisionT; // Parameter's data precision

const std::string bitsize =
std::to_string(sizeof(std::complex<PrecisionT>) * 8);

using np_arr_c = py::array_t<std::complex<ParamT>, py::array::c_style>;
using np_arr_r = py::array_t<ParamT, py::array::c_style>;

using ObservableT = ObservableTNCuda<LightningBackendT>;
using NamedObsT = NamedObsTNCuda<LightningBackendT>;
using HermitianObsT = HermitianObsTNCuda<LightningBackendT>;
using TensorProdObsT = TensorProdObsTNCuda<LightningBackendT>;
using HamiltonianT = HamiltonianTNCuda<LightningBackendT>;

std::string class_name;

class_name = std::string(name) + "ObservableC" + bitsize;
py::class_<ObservableT, std::shared_ptr<ObservableT>>(m, class_name.c_str(),
py::module_local());

class_name = std::string(name) + "NamedObsC" + bitsize;
py::class_<NamedObsT, std::shared_ptr<NamedObsT>, ObservableT>(
m, class_name.c_str(), py::module_local())
.def(py::init(
[](const std::string &name, const std::vector<std::size_t> &wires) {
return NamedObsT(name, wires);
}))
.def("__repr__", &NamedObsT::getObsName)
.def("get_wires", &NamedObsT::getWires, "Get wires of observables")
.def(
"__eq__",
[](const NamedObsT &self, py::handle other) -> bool {
if (!py::isinstance<NamedObsT>(other)) {
return false;
}
auto &&other_cast = other.cast<NamedObsT>();
return self == other_cast;
},
"Compare two observables");

class_name = std::string(name) + "HermitianObsC" + bitsize;
py::class_<HermitianObsT, std::shared_ptr<HermitianObsT>, ObservableT>(
m, class_name.c_str(), py::module_local())
.def(py::init([](const np_arr_c &matrix,
const std::vector<std::size_t> &wires) {
auto const &buffer = matrix.request();
const auto ptr = static_cast<ComplexT *>(buffer.ptr);
return HermitianObsT(std::vector<ComplexT>(ptr, ptr + buffer.size),
wires);
}))
.def("__repr__", &HermitianObsT::getObsName)
.def("get_wires", &HermitianObsT::getWires, "Get wires of observables")
.def("get_matrix", &HermitianObsT::getMatrix,
"Get matrix representation of Hermitian operator")
.def(
"__eq__",
[](const HermitianObsT &self, py::handle other) -> bool {
if (!py::isinstance<HermitianObsT>(other)) {
return false;
}
auto &&other_cast = other.cast<HermitianObsT>();
return self == other_cast;
},
"Compare two observables");

class_name = std::string(name) + "TensorProdObsC" + bitsize;
py::class_<TensorProdObsT, std::shared_ptr<TensorProdObsT>, ObservableT>(
m, class_name.c_str(), py::module_local())
.def(py::init([](const std::vector<std::shared_ptr<ObservableT>> &obs) {
return TensorProdObsT(obs);
}))
.def("__repr__", &TensorProdObsT::getObsName)
.def("get_wires", &TensorProdObsT::getWires, "Get wires of observables")
.def("get_ops", &TensorProdObsT::getObs, "Get operations list")
.def(
"__eq__",
[](const TensorProdObsT &self, py::handle other) -> bool {
if (!py::isinstance<TensorProdObsT>(other)) {
return false;
}
auto &&other_cast = other.cast<TensorProdObsT>();
return self == other_cast;
},
"Compare two observables");

class_name = std::string(name) + "HamiltonianC" + bitsize;
using ObsPtr = std::shared_ptr<ObservableT>;
py::class_<HamiltonianT, std::shared_ptr<HamiltonianT>, ObservableT>(
m, class_name.c_str(), py::module_local())
.def(py::init(
[](const np_arr_r &coeffs, const std::vector<ObsPtr> &obs) {
auto const &buffer = coeffs.request();
const auto ptr = static_cast<ParamT *>(buffer.ptr);
return HamiltonianT{std::vector<ParamT>(ptr, ptr + buffer.size),
obs};
}))
.def("__repr__", &HamiltonianT::getObsName)
.def("get_wires", &HamiltonianT::getWires, "Get wires of observables")
.def("get_ops", &HamiltonianT::getObs,
"Get operations contained by Hamiltonian")
.def("get_coeffs", &HamiltonianT::getCoeffs,
"Get Hamiltonian coefficients")
.def(
"__eq__",
[](const HamiltonianT &self, py::handle other) -> bool {
if (!py::isinstance<HamiltonianT>(other)) {
return false;
}
auto &&other_cast = other.cast<HamiltonianT>();
return self == other_cast;
},
"Compare two observables");
}

/**
* @brief Templated class to build lightning.tensor class bindings.
*
* @tparam TensorNetT Tensor network type
* @tparam TensorNetT Tensor network type.
* @param m Pybind11 module.
*/
template <class TensorNetT> void lightningTensorClassBindings(py::module_ &m) {
using PrecisionT =
typename TensorNetT::PrecisionT; // TensorNet's precision.
// Enable module name to be based on size of complex datatype
auto name = TensorNetT::method; // TensorNet's backend name [mps, exact].
const std::string bitsize =
std::to_string(sizeof(std::complex<PrecisionT>) * 8);

//***********************************************************************//
// TensorNet
//***********************************************************************//
std::string class_name = "TensorNetC" + bitsize;
std::string class_name = std::string(name) + "TensorNetC" + bitsize;
auto pyclass =
py::class_<TensorNetT>(m, class_name.c_str(), py::module_local());

@@ -797,12 +928,12 @@ template <class TensorNetT> void lightningTensorClassBindings(py::module_ &m) {
/* Observables submodule */
py::module_ obs_submodule =
m.def_submodule("observables", "Submodule for observables classes.");
registerBackendAgnosticObservables<TensorNetT>(obs_submodule);
registerBackendAgnosticObservablesTensor<TensorNetT>(obs_submodule, name);

//***********************************************************************//
// Measurements
//***********************************************************************//
class_name = "MeasurementsC" + bitsize;
class_name = std::string(name) + "MeasurementsC" + bitsize;
auto pyclass_measurements = py::class_<MeasurementsTNCuda<TensorNetT>>(
m, class_name.c_str(), py::module_local());

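The binding class names registered in `Bindings.hpp` are assembled from three parts: the backend's `method` string, the class kind, and the bit width of the complex type (`sizeof(std::complex<PrecisionT>) * 8`). A short Python sketch of that naming rule, useful for predicting which names the module will expose; the helper is illustrative, not part of the PR:

```python
def binding_class_name(method, kind, precision_bytes):
    """Reproduce the C++ rule:
    std::string(name) + kind + "C" + to_string(sizeof(std::complex<PrecisionT>) * 8).
    A std::complex holds two PrecisionT values, hence 2 * precision_bytes."""
    bitsize = 2 * precision_bytes * 8
    return f"{method}{kind}C{bitsize}"
```

With `method = "exact"` (the value this PR sets in `ExactTNCuda`) and double precision, this yields `exactTensorNetC128`, matching the prefixed names built in `lightningTensorClassBindings` and its observables submodule.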
@@ -51,7 +51,7 @@ class ExactTNCuda final : public TNCuda<Precision, ExactTNCuda<Precision>> {
using BaseType = TNCuda<Precision, ExactTNCuda>;

public:
constexpr static auto method = "exacttn";
constexpr static auto method = "exact";

using CFP_t = decltype(cuUtil::getCudaType(Precision{}));
using ComplexT = std::complex<Precision>;