Autodifferentiation support for PytorchBackend
#1276
Conversation
Labels: pytorch, backend
The only differentiation method that was not working for the pytorch backend was the parameter shift rule.
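For context, the parameter shift rule computes gradients by re-evaluating the circuit at shifted parameter values rather than backpropagating through the simulation. A minimal sketch of the rule for standard rotation gates (the function name and signature below are illustrative, not Qibo's API):

```python
import math

def parameter_shift_gradient(expectation, theta, shift=math.pi / 2):
    """Central parameter-shift rule for a gate generated by an operator
    with eigenvalues +-1/2 (e.g. RX, RY, RZ).

    `expectation` is a hypothetical callable mapping a parameter value to
    the measured expectation value of an observable."""
    forward = expectation(theta + shift)
    backward = expectation(theta - shift)
    # For these gates the derivative is exact:
    # dE/dtheta = (E(theta + pi/2) - E(theta - pi/2)) / 2
    return (forward - backward) / 2
```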
Codecov Report: all modified and coverable lines are covered by tests ✅

```
@@           Coverage Diff            @@
##           master     #1276   +/-   ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files           76        76
  Lines        10833     10923    +90
=========================================
+ Hits         10833     10923    +90
```
There's also all the classes inside
Ok, I forgot these tests, I will try to fix them tomorrow.
I have encountered a problem while coding the SGD training procedure of the VQE for the pytorch backend.
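To illustrate the kind of training loop meant here (the toy expectation and names below are illustrative, not Qibo code), the circuit parameters must stay attached to the autograd graph all the way to the loss:

```python
import torch

# Leaf tensor holding the variational parameters
params = torch.tensor([0.1, 0.2], requires_grad=True)
optimizer = torch.optim.SGD([params], lr=0.05)

def expectation(p):
    # stand-in for "build circuit, execute, measure Hamiltonian"
    return torch.cos(p[0]) * torch.sin(p[1])

for _ in range(100):
    optimizer.zero_grad()
    loss = expectation(params)
    loss.backward()  # fails if the graph is cut somewhere inside the backend
    optimizer.step()
```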
Do you mean this cast? Because this is overloaded by the
That cast is fine. The problem is in the two lines before:

```python
cos = self.np.cos(theta / 2.0) + 0j
isin = -1j * self.np.sin(theta / 2.0)
```

When `theta` is a torch tensor with `requires_grad=True`, you can't compute cos/sin using numpy.
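To make the failure mode concrete, here is a small standalone reproduction (assuming plain numpy and torch, outside of Qibo):

```python
import numpy as np
import torch

theta = torch.tensor(0.3, requires_grad=True)

# numpy has to convert the tensor to an array first, and that conversion
# typically raises a RuntimeError asking to detach() the tensor, so the
# autograd graph cannot survive the call:
# np.cos(theta / 2.0)

# torch's own op keeps the graph intact instead:
cos = torch.cos(theta / 2.0)
cos.backward()
print(theta.grad)  # -sin(0.15) / 2, roughly -0.0747
```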
But then why isn't
I can try to fix things this way. With Renato we just wanted to have the matrices defined with numpy and then cast to the correct backend.
This solution should be fine.
@renatomello Fortunately I found a way to let gradients pass...

```python
def RX(self, theta):
    theta = self._cast_parameter(theta)
    cos = self.np.cos(theta / 2.0) + 0j
    isin = -1j * self.np.sin(theta / 2.0)
    return self._cast([[cos, isin], [isin, cos]], dtype=self.dtype)
```

The `_cast` function is practically a `torch.as_tensor`, and here we lose track of the gradients.

```python
def RX(self, theta):
    theta = self._cast_parameter(theta)
    cos = self.np.cos(theta / 2.0) + 0j
    isin = -1j * self.np.sin(theta / 2.0)
    return self.np.stack([cos, isin, isin, cos]).reshape(2, 2)
```

This way we don't even need to use the `_cast` function.
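A quick way to check that the second version keeps the graph alive (torch stands in for `self.np` here, and the scalar loss is just an arbitrary example, not Qibo's):

```python
import torch

def RX(theta):
    # same construction as the second snippet above, with torch in place of self.np
    cos = torch.cos(theta / 2.0) + 0j
    isin = -1j * torch.sin(theta / 2.0)
    return torch.stack([cos, isin, isin, cos]).reshape(2, 2)

theta = torch.tensor(0.7, requires_grad=True)
matrix = RX(theta)
# any real scalar built from the matrix still backpropagates to theta
loss = matrix.real.trace()
loss.backward()
print(theta.grad)  # d/dtheta of 2*cos(theta/2) = -sin(0.35), roughly -0.343
```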
Thanks @Simone-Bordoni, I went through all the source files and I have just some minor suggestions. I'll move on to the tests next.
@renatomello @BrunoLiegiBastonLiegi If you agree on not duplicating the matrices and you have reviewed the code, please approve the PR so that we can merge tomorrow.
Complete pytorch backend, close issue #1266
Checklist: