Add support for tensorflow together with ODL #972

Merged: 70 commits merged into odlgroup:master from the tensorflow_support branch on Aug 29, 2017

Conversation

adler-j (Member) commented Apr 13, 2017

TODO: summary

kohr-h (Member) commented Apr 13, 2017

Phew.. My review here against your review on #861? :-P

adler-j (Member Author) commented Apr 16, 2017

It's on its way :) But wait until I rebase from master; this is actually quite a small PR.

kohr-h (Member) commented May 18, 2017

Is this ready for review or should I wait?

adler-j (Member Author) commented May 19, 2017

Still haven't merged from master, but you can easily get started if you want to.

kohr-h (Member) left a comment

I guess since this code is battle-tested, there's not much to say about the implementation itself. I just wonder whether temporaries would make sense to speed things up, but that might quickly become infeasible. Dunno.

In general I find the wrapping code hard to digest, not because of complexity (it's really not that elaborate) but more because of all the inline function definitions that take up quite some space. If there's a way to make this a bit less nested, I'd appreciate it.

Anyway, docs could use some cleanup. Other than that it looks good (nice detective work on the ninja-style workarounds).



What is vectorization?
======================
kohr-h (Member):

Has this stuff moved here from some other file?

In any case, can you convert it to the "one sentence - one line" RST style?

============

Python functions are in most cases used as input to a discretization process. For example, we may
want to discretize a two-dimensional Gaussian function:
kohr-h (Member):

Gaussian function ::
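
For context, the discretization the guide describes might look roughly like the sketch below; this is only an illustration using `odl.uniform_discr` and a vectorized callable, not the actual text of the documentation hunk.

```python
import numpy as np
import odl

# Discretize the square [-1, 1] x [-1, 1] with 100 x 100 sample points.
space = odl.uniform_discr([-1, -1], [1, 1], [100, 100])

# Vectorized two-dimensional Gaussian: x is a pair of coordinate arrays.
gaussian = space.element(lambda x: np.exp(-(x[0] ** 2 + x[1] ** 2) / 2))
```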

@@ -0,0 +1,38 @@
"""Example of how to convert an ODL operator to a tensorflow layer."""

import tensorflow as tf
kohr-h (Member):

Are we sure that odl.contrib.tensorflow is never confused with tensorflow (e.g. odl.contrib.tensorflow.tensorflow)?

If tensorflow weren't already so long I'd suggest using a bindings suffix. Maybe a shorter package name like tf_bindings would cut it?

adler-j (Member Author):

I'm quite sure that is a non-problem. With that said, I'll change this whole thing to a submodule and split it; that allows better long-term extensions.


# Create tensorflow layer from odl operator
odl_op_layer = odl.contrib.tensorflow.as_tensorflow_layer(
odl_op, 'MatrixOperator')
kohr-h (Member):

indentation
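
Presumably the request is to indent the continuation line of the call, along these lines (a sketch of the suggested formatting only):

```python
# Create tensorflow layer from odl operator
odl_op_layer = odl.contrib.tensorflow.as_tensorflow_layer(
    odl_op, 'MatrixOperator')
```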

print(odl_op(x))

# Evaluate the adjoint of the derivative, called gradient in tensorflow
print(tf.gradients(y_tf, [x_tf], z_tf)[0].eval().ravel())
kohr-h (Member):

Perhaps use keyword arguments to make the code more self-explanatory. Now it's a bit of guesswork as to what variable stands for what.
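
A keyword-argument version of the call above might read as follows, using the TF 1.x signature `tf.gradients(ys, xs, grad_ys=...)`; this is a sketch of the suggestion, not code from the PR:

```python
# Evaluate the adjoint of the derivative, called gradient in tensorflow.
# grad_ys is the vector that the adjoint is applied to.
adjoint_value = tf.gradients(ys=y_tf, xs=[x_tf], grad_ys=z_tf)[0]
print(adjoint_value.eval().ravel())
```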

self.name = name

def _lincomb(self, a, x1, b, x2, out):
with tf.name_scope('{}_lincomb'.format(self.name)):
kohr-h (Member):

Will this clash if you use the same name twice?

adler-j (Member Author):

No, it becomes _lincomb_2 etc
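
For reference, TensorFlow 1.x graph mode de-duplicates name scopes by appending a numeric suffix, roughly like this sketch (scope names here are only illustrative):

```python
import tensorflow as tf

with tf.name_scope('space_lincomb') as scope_a:
    pass
with tf.name_scope('space_lincomb') as scope_b:
    pass

print(scope_a)  # 'space_lincomb/'
print(scope_b)  # 'space_lincomb_1/'
```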

return isinstance(other, TensorflowSpace) and other.shape == self.shape

def __repr__(self):
return 'TensorflowSpace({})'.format(self.shape)
kohr-h (Member):

Consider subclassers
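
A subclass-friendly variant could report the runtime class name instead of hard-coding `TensorflowSpace`; a minimal sketch of the idea:

```python
def __repr__(self):
    # Subclasses automatically get a correct repr this way.
    return '{}({})'.format(self.__class__.__name__, self.shape)
```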

self._data = value

def __repr__(self):
return '{}.element({})'.format(self.space, self.data)
kohr-h (Member):

{!r}
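
That is, use the `!r` conversion flag so the parts are shown via `repr`; roughly (whether one or both fields need it is a style call):

```python
def __repr__(self):
    return '{!r}.element({!r})'.format(self.space, self.data)
```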


"""Wrap ODL operator so that it acts on TensorflowSpace elements."""

def __init__(self, domain, range, func, adjoint=None, linear=False):
kohr-h (Member):

Better use the same name for parameter and stored attribute.
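
A sketch of the idea, so readers don't have to map between two names; the attribute names here are only illustrative:

```python
def __init__(self, domain, range, func, adjoint=None, linear=False):
    # Parameter `func` is stored as `self.func`, not under a different name.
    self.func = func
    self.adjoint = adjoint
```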

return tensorflow_layer


class TensorflowSpace(LinearSpace):
kohr-h (Member):

A bit more docs for the fellas here and below wouldn't hurt.
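
For instance, a short class docstring could be added; the wording below is just a suggestion, not text from the PR:

```python
class TensorflowSpace(LinearSpace):

    """A simple `LinearSpace` of TensorFlow tensors with a fixed shape.

    Exists mainly so that ODL operators and solvers can act on TensorFlow
    tensors; it is not intended as a full-featured tensor space.
    """
```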

kohr-h (Member) commented Jun 20, 2017

And, to repeat one of the review comments, a top-level README is quite important for this stuff.

adler-j (Member Author) commented Jun 21, 2017

Thanks for the review! I'll try to get this done and merged rather soon, but I'm waiting for the FOM pull request (and more free time for me) since I'll move some stuff in there.

@kohr-h kohr-h changed the title Add support for tensorflow togeather with ODL Add support for tensorflow together with ODL Jun 21, 2017
@adler-j adler-j mentioned this pull request Jul 3, 2017
20 tasks
@adler-j adler-j force-pushed the tensorflow_support branch from 77d83d0 to b563410 Compare August 1, 2017 12:08
adler-j (Member Author) commented Aug 22, 2017

Fixed the comments, added some tests etc. IMO this can go in now.

kohr-h (Member) left a comment

Some final comments, merge when done.

Clearly a very cool addition!


## Example usage

The [examples](examples) folder contains example on how to use the above functionality.
kohr-h (Member):

examples

* [tensorflow_layer_matrix.py](examples/tensorflow_layer_matrix.py) shows how an ODL `MatrixOperator` can be converted to a tensorflow layer.
* [tensorflow_layer_productspace.py](examples/tensorflow_layer_productspace.py) shows how an ODL operator acting on `ProductSpace`s can be converted to a tensorflow layer.
* [tensorflow_layer_ray_transform.py](examples/tensorflow_layer_ray_transform.py) shows how a `RayTransform` can be converted to a tensorflow layer.
* [tensorflow_operator_matrix.py](examples/tensorflow_operator_matrix.py) shows how `tf.matmul` can be used as a ODL operator.
kohr-h (Member):

an ODL


step = learning_rate * np.sqrt(1 - beta2) / (1 - beta1)

x.lincomb(1, x, -step, m / (np.sqrt(v) + eps))
kohr-h (Member):

Did you check this with the paper? Regarding your earlier answer to my review comment, yes, I read the paper, and I couldn't match the expressions there with the implementation here.

adler-j (Member Author):

I'll review this once more then I guess!
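
For reference, the textbook update rule from Kingma & Ba, "Adam: A Method for Stochastic Optimization" (2015), against which the lincomb above would be checked; variable names follow the paper, not the ODL solver:

```python
import numpy as np

def adam_update(x, grad, m, v, t, learning_rate=1e-3,
                beta1=0.9, beta2=0.999, eps=1e-8):
    """One plain Adam step as written in the paper (t is 1-based)."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)   # bias-corrected second moment
    x = x - learning_rate * m_hat / (np.sqrt(v_hat) + eps)
    return x, m, v
```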

@@ -62,6 +63,10 @@ def __init__(self, geometry, reco_space, proj_space):

self.create_ids()

# Create a mutually exclusive lock so that two callers can't use the
# same shared resource at the same time.
self.mutex = Lock()
kohr-h (Member):

bump (prefer this a bit more hidden, _mutex)

adler-j (Member Author):

Agree, will fix that.
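
That is, a renamed private lock used as a context manager, roughly like the sketch below; the class and method names here are only illustrative, not the ones in the PR:

```python
from threading import Lock

class SharedBackend(object):
    """Illustration only: guard a shared, non-thread-safe resource."""

    def __init__(self):
        # Mutually exclusive lock so two callers can't use the shared
        # resource at once; the leading underscore keeps it non-public.
        self._mutex = Lock()

    def call(self):
        with self._mutex:
            pass  # exclusive access to the shared resource here
```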

@@ -802,6 +803,42 @@ def _apply_padding(lhs_arr, rhs_arr, offset, pad_mode, direction):
working_slc[axis] = intersec_slc[axis]


def zscore(arr):
kohr-h (Member):

Is that your naming or is it a standard notion?

adler-j (Member Author) commented Aug 24, 2017
adler-j (Member Author) commented Aug 29, 2017

Merge after CI

@adler-j adler-j merged commit ea016dc into odlgroup:master Aug 29, 2017
@adler-j adler-j deleted the tensorflow_support branch August 29, 2017 17:26