Add Tensorflow backend #596
Conversation
@stavros11 could you please fix conflicts?
@stavros11 thanks, I have 2 comments:
- In this branch qibo does not show information about the backend on first import, only after typing `qibo.set_backend("tensorflow")`.
- I cannot switch from CPU to GPU: the code prints "Using tensorflow backend on /CPU:0" but performance looks like GPU.
Thank you for reviewing, done.
This is normal after the refactoring, because no backend is initialized during import. If you do an operation that requires a backend, such as executing a circuit or creating a Hamiltonian, then the backend is initialized and its information is printed.
Indeed, the device switcher and default device identification were not implemented here. They should work properly after the latest push.
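The lazy initialization described in this thread can be sketched as follows. All names here (`NumpyBackend`, `set_backend`, `get_backend`, the printed message) are illustrative stand-ins, not Qibo's actual implementation; the point is only that nothing is constructed, and nothing is printed, at import time:

```python
# Sketch of lazy backend initialization: no backend object exists after
# import; the backend (and its informational message) only appears on
# first use, e.g. the first circuit execution.

_BACKEND = None          # nothing is created at import time
_BACKEND_NAME = "numpy"  # default choice, resolved lazily


class NumpyBackend:
    name = "numpy"

    def __init__(self):
        import numpy
        self.np = numpy
        print(f"Using {self.name} backend")  # info printed only now


def set_backend(name):
    """Record the requested backend; construction still happens lazily."""
    global _BACKEND, _BACKEND_NAME
    _BACKEND_NAME = name
    _BACKEND = None  # force re-initialization on next use


def get_backend():
    """Construct the backend on first use."""
    global _BACKEND
    if _BACKEND is None:
        # a real dispatcher would look up _BACKEND_NAME here
        _BACKEND = NumpyBackend()
    return _BACKEND
```

Importing such a module prints nothing; the message only appears once `get_backend()` is first called, which matches the behavior reported above.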
This is an attempt to add the tensorflow backend on top of numpy with as little additional code as possible. It is based on `tensorflow.experimental.numpy`, and the idea is to use `self.np` (instead of `np`) in every numpy call used in the numpy backend, and then replace this with `self.np = tnp` for the tensorflow backend. It seems to work fairly well, and it includes GPU support, since the tnp ND array is an alias for `tf.Tensor` and not a numpy array. I have not tested backpropagation, but at least according to their docs it is expected to work.

One potential disadvantage of this approach is that it requires keeping a reference to the numpy module in the numpy backend, by doing `self.np = np`. This breaks pickle serialization of the backend object, because Python's `pickle` cannot pickle modules. Note that `dill` does not have this limitation (see the related Stack Overflow discussion); however, I am not sure how efficient it is, mostly in terms of the pkl file size. Moreover, this breaks parallel.py, because multiprocessing uses pickle.
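A minimal sketch of the `self.np` swapping trick and of the pickle limitation it causes. The class and method names are illustrative, not Qibo's actual code, and the `TensorflowBackend` subclass is only outlined in a comment since it requires TensorFlow:

```python
import pickle
import numpy as np


class NumpyBackend:
    """Backend that routes every array call through self.np."""

    def __init__(self):
        self.np = np  # reference to the numpy *module*

    def zero_state(self, nqubits):
        state = self.np.zeros(2 ** nqubits, dtype=complex)
        state[0] = 1
        return state


# A tensorflow backend would reuse every method and only swap the module:
#
# class TensorflowBackend(NumpyBackend):
#     def __init__(self):
#         import tensorflow.experimental.numpy as tnp
#         self.np = tnp  # tnp ND arrays are aliases for tf.Tensor

backend = NumpyBackend()
print(backend.zero_state(2))  # [1.+0.j 0.+0.j 0.+0.j 0.+0.j]

# The module reference breaks pickle, as described above:
try:
    pickle.dumps(backend)
except TypeError as err:
    print(err)  # pickle refuses to serialize module objects
```

A common workaround is to drop the module in `__getstate__` and restore it in `__setstate__`; `dill`, as noted above, avoids the problem entirely at the possible cost of larger serialized files.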