Adding decomposition MPSPrep #6896
base: master
Conversation
Hello. You may have forgotten to update the changelog!
Codecov Report: All modified and coverable lines are covered by tests ✅

Coverage Diff (master vs. #6896):
  Coverage   99.59% -> 99.59% (-0.01%)
  Files      478 -> 479 (+1)
  Lines      45306 -> 45362 (+56)
  Hits       45124 -> 45179 (+55)
  Misses     182 -> 183 (+1)

View full report in Codecov by Sentry.
We worked hard to remove global random number generation from PennyLane, so I'd rather not have it go back in, especially since it interferes with the user's choice of seed.
Global random generation has been removed. No longer blocking the PR.
Hey @KetpuntoG
just a few fly-by questions from me
unitary[:, :k] = columns
...
# Complete the remaining columns using Gram-Schmidt
rng = np.random.default_rng(42)
Does this need to be seeded? Won't this ensure that each and every call will use the same correlated RNG outputs for the same input? If None, it will pull entropy from the operating system's default source, which is likely a better source of randomness. Or, allowing the provision of a seed as a function input would be the best option, as this ensures control stays at the caller's level. What do you think?
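The caller-controlled seed pattern could look like the following minimal sketch (the function name `random_extension` and its `seed` parameter are hypothetical, not the PR's API; `np.random.default_rng` accepts `None`, an int, or an existing `Generator`):

```python
import numpy as np

def random_extension(n, k, seed=None):
    """Draw the n x (n - k) random block used to extend a set of columns.

    `seed` may be None (pulls OS entropy), an int, or an existing
    np.random.Generator, so control stays at the caller's level.
    NOTE: hypothetical helper for illustration, not the PR's code.
    """
    rng = np.random.default_rng(seed)
    return rng.standard_normal((n, n - k))
```

Passing the same seed reproduces the same block, while `seed=None` gives fresh entropy on every call.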
I had to do it, or qml.assert_valid complained that the decomposition did not match the operator in the queue (two different unitaries were generated).
Assert valid in the tests? But we should be able to specify the seed to ensure the tests generate the same output? I'd expect numerical differences in the outcomes with generators that are seeded differently.
Actually, these matrix values are not used for anything; we generate them just so we can put the matrix into QubitUnitary. Since it is also a private function, I think it is OK to set the seed in this case :) The user does not need to know about it.
Thanks @KetpuntoG! Just leaving the first set of comments and will take a deeper look once these are addressed.
ortogonal_projection_matrix = qml.math.eye(d) - qml.math.dot(unitary, qml.math.conj(unitary.T))
new_columns = qml.math.dot(ortogonal_projection_matrix, new_columns)
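The quoted hunk is the standard orthogonal-complement projection. A plain-NumPy sketch of the same step, with a made-up helper name and the spelling normalized:

```python
import numpy as np

def project_out(unitary_cols, new_columns):
    """Project candidate columns onto the orthogonal complement of the
    (assumed orthonormal) columns already stored in `unitary_cols`.

    NOTE: hypothetical helper for illustration, not the PR's code.
    """
    d = unitary_cols.shape[0]
    # P = I - U U^dagger annihilates anything in the span of U's columns
    projector = np.eye(d) - unitary_cols @ unitary_cols.conj().T
    return projector @ new_columns
```

After this projection the new columns are orthogonal to the existing ones and only need normalization.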
Could we directly use QR decomposition here? 🤔 I was trying something like this:
n, k = 8, 3  # number of rows, fixed given number of columns
vectors = random_orthonormal_vectors(n, k)  # this function gives me random orthonormal vectors
randext = np.random.randn(n, n - k)
Q, _ = np.linalg.qr(np.hstack([vectors, randext]))
assert np.allclose(Q @ Q.T, np.eye(n)) and np.allclose(Q @ Q.T, Q.T @ Q)
To fix this sign flip, maybe this would work:
Q, R = np.linalg.qr(np.hstack([vectors, randext]))
Q *= np.sign(np.diag(R))
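Putting the QR idea and the sign fix together, a runnable sketch (a sketch only: `complete_unitary` is a made-up name, and it assumes `vectors` already holds k orthonormal real columns):

```python
import numpy as np

def complete_unitary(vectors, seed=None):
    """Complete k orthonormal columns to a full n x n orthogonal matrix.

    QR of [vectors | random block] reproduces the first k columns up to
    sign; multiplying each column of Q by the sign of R's diagonal fixes
    that sign ambiguity, so Q[:, :k] matches `vectors`.
    NOTE: hypothetical helper for illustration, not the PR's code.
    """
    rng = np.random.default_rng(seed)
    n, k = vectors.shape
    randext = rng.standard_normal((n, n - k))
    Q, R = np.linalg.qr(np.hstack([vectors, randext]))
    Q = Q * np.sign(np.diag(R))  # flip columns where R's diagonal is negative
    return Q
```

The sign correction matters here because without it the first k columns of Q may come back negated, which is exactly the mismatch qml.assert_valid would flag.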
Awesome 🤩
for column in Ai:
...
vector = qml.math.zeros(2**n_wires, like=mps[0])
I thought the like argument should be given a string: "jax", "numpy", etc. Might be nice to check once.
After an in-person discussion, we keep it as it is right now (jit complains about the change).
After testing, something like this should work:
interface, dtype = qml.math.get_interface(mps[0]), mps[0].dtype
vector = qml.math.zeros(2**n_wires, like=interface, dtype=dtype)
work_wires (Sequence[int]): list of extra qubits needed in the decomposition. The bond dimension of the mps
    is defined as ``2^len(work_wires)``. Default is ``None``
Should we define this as the maximum permissible bond dimension of the provided MPS?
self.hyperparameters["input_wires"] = qml.wires.Wires(wires)
...
if work_wires:
    self.hyperparameters["work_wires"] = qml.wires.Wires(work_wires)
Do we require a check that sufficient work wires have been provided?
if i == 0:
    Ai = Ai.reshape((1, *Ai.shape))
elif i == len(mps) - 1:
    Ai = Ai.reshape((*Ai.shape, 1))
Could we do this rank-2 to rank-3 promotion outside the loop itself to prevent checking these conditions for every iteration?
Conditional not needed, agree.
Co-authored-by: Utkarsh <[email protected]>
@KetpuntoG Nice work, looks good.
Args:
    mps (List[Array]): list of arrays of rank-3 and rank-2 tensors representing an MPS state as a
        product of site matrices. See the usage details section for more information.
    wires (Sequence[int]): wires that the template acts on
    work_wires (Sequence[int]): list of extra qubits needed in the decomposition. The bond dimension of the mps
        is defined as ``2^len(work_wires)``. Default is ``None``
Suggested change:
-   is defined as ``2^len(work_wires)``. Default is ``None``
+   is defined as ``2^len(work_wires)``. Default is ``None``.
Args:
    mps (List[Array]): list of arrays of rank-3 and rank-2 tensors representing an MPS state as a
        product of site matrices. See the usage details section for more information.
    wires (Sequence[int]): wires that the template acts on
    work_wires (Sequence[int]): list of extra qubits needed in the decomposition. The bond dimension of the mps
Optional?
This operator is natively supported on the ``lightning.tensor`` device, designed to run MPS structures efficiently. For other devices, implementing this operation uses a gate-based decomposition which requires auxiliary qubits (via ``work_wires``) to prepare the state vector represented by the MPS in a quantum circuit.
Just curious, is there a reference for this implementation using a gate-based decomposition?
Ah, I see it is in the compute_decomposition function. Not sure where it's preferred.
hyperparameters = (
    ("wires", self.hyperparameters["input_wires"]),
    ("work_wires", self.hyperparameters["work_wires"]),
)
Can we make this a dictionary so we don't have to do hyperparams_dict = dict(metadata)?
filtered_hyperparameters = {
    key: value for key, value in self.hyperparameters.items() if key != "input_wires"
}
return self.compute_decomposition(
    self.parameters, wires=self.hyperparameters["input_wires"], **filtered_hyperparameters
)
Can we store self.hyperparameters["input_wires"] in a variable, then simply del self.hyperparameters["input_wires"], and do **self.hyperparameters?
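One caveat with `del self.hyperparameters[...]`: it mutates the operator's state in place. A copy-and-pop variant avoids that; here is a sketch with a made-up helper name (not the PR's code):

```python
def split_decomposition_kwargs(hyperparameters):
    """Pop "input_wires" from a *copy* of the hyperparameters dict,
    leaving the original untouched.

    NOTE: hypothetical helper for illustration, not the PR's code.
    """
    kwargs = dict(hyperparameters)       # shallow copy
    wires = kwargs.pop("input_wires")    # remove only from the copy
    return wires, kwargs
```

The operator can then call `compute_decomposition(self.parameters, wires=wires, **kwargs)` without its own hyperparameters dict losing a key.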
Args:
    mps (List[Array]): list of arrays of rank-3 and rank-2 tensors representing an MPS state as a
        product of site matrices. See the usage details section for more information.
Suggested change:
-   product of site matrices. See the usage details section for more information.
+   product of site matrices.
""" | ||
|
||
if work_wires is None: | ||
raise AssertionError( |
Suggested change:
-   raise AssertionError(
+   raise ValueError(

IMO, AssertionError is more for things like:

try:
    assert work_wires is None
except AssertionError:
    ...
Also, what if not enough work_wires are provided? Be cautious here.
Before submitting

Please complete the following checklist when submitting a PR:

- All new features must include a unit test. If you've fixed a bug or added code that should be tested, add a test to the test directory!
- All new functions and code must be clearly commented and documented. If you do make documentation changes, make sure that the docs build and render correctly by running make docs.
- Ensure that the test suite passes, by running make test.
- Add a new entry to the doc/releases/changelog-dev.md file, summarizing the change, and including a link back to the PR.
- The PennyLane source code conforms to PEP8 standards. We check all of our code against Pylint. To lint modified files, simply pip install pylint, and then run pylint pennylane/path/to/file.py.

When all the above are checked, delete everything above the dashed line and fill in the pull request template.
Context:
[SC-83733]
[SC-81348]
MPSPrep currently only works on "lightning.tensor". With this decomposition, it will work on any device.
Description of the Change:
Benefits:
Possible Drawbacks:
Related GitHub Issues: