WIP: Remove element classes #1475

Draft: wants to merge 136 commits into base: master
90434d5
Step 1: remove obvious parts of npy_tensors.py and other modules
kohr-h Feb 20, 2019
8e1cc44
Fix doctests of NumpyTensorSpace
kohr-h Feb 20, 2019
f95bd77
Fix doctests in base_tensors.py, make `show` a space method
kohr-h Feb 20, 2019
12def9c
Step 2: remove DiscreteLpElement and other obvious code
kohr-h Feb 20, 2019
85daddf
Fix doctests of DiscreteLp, make show a method of DiscreteLp
kohr-h Feb 20, 2019
68dd506
Fix convolution example
kohr-h Feb 20, 2019
e1cbff1
Fix scipy solvers example and some diff operators
kohr-h Feb 20, 2019
173a0a6
Step 3: remove ProductSpaceElement, use np.ndarray with object dtype …
kohr-h Feb 20, 2019
e87036f
Fix DiscreteLp.__contains__
kohr-h Feb 20, 2019
63b2b50
Probably fix diff_ops doctests
kohr-h Feb 20, 2019
4c7f14f
Fix ProductSpace.show and example show_productspace
kohr-h Feb 20, 2019
c767169
Step 4: Replace pspace weightings by functions
kohr-h Feb 21, 2019
5b5e011
Fix deformation operators
kohr-h Feb 21, 2019
e9fb81f
Step 5: Remove LinearSpaceElement and its imports
kohr-h Feb 21, 2019
08d728c
Temporarily fix ProductSpace.__repr__
kohr-h Feb 21, 2019
c6448b2
Fix deformation examples
kohr-h Feb 21, 2019
31b73af
Fix space diagnostics
kohr-h Feb 22, 2019
26548d6
Fix default_ops doctests
kohr-h Feb 22, 2019
9cd0d11
Fix doctests in operator.py
kohr-h Feb 22, 2019
e9400f9
Fix pspace element creation
kohr-h Feb 22, 2019
a480ed9
Fix pspace_ops doctests
kohr-h Feb 22, 2019
ce443eb
Fix doctests of tensor_ops.py up to weighting issue
kohr-h Feb 22, 2019
05c919a
Step 6: remove remaining weighting classes
kohr-h Feb 22, 2019
d71d905
Remove __array_ufunc__ functions
kohr-h Feb 22, 2019
74e7065
Fix lp_discr.py doctests
kohr-h Feb 23, 2019
53d4ff2
Fix doctests in discr_ops.py
kohr-h Feb 23, 2019
44c352d
Fix doctests in tensor_ops.py, add weighting_type to ProductSpace
kohr-h Feb 23, 2019
6be2e46
Remove `vector` function
kohr-h Feb 23, 2019
1f9f562
Fix doctests in pspace.py
kohr-h Feb 23, 2019
6f69636
Fix doctests in emission.py
kohr-h Feb 23, 2019
c9953f5
Fix doctests in geometric.py
kohr-h Feb 23, 2019
ebaef61
Fix doctests in misc_phantoms.py
kohr-h Feb 23, 2019
bee4cf2
Fix doctests in noise.py
kohr-h Feb 23, 2019
6bd5e2b
Fix doctests in transmission.py
kohr-h Feb 23, 2019
97d478b
Fix doctests in utility.py
kohr-h Feb 23, 2019
c15d65d
Fix doctests in default_functionals.py and proximal_operators.py
kohr-h Feb 23, 2019
13c9c53
Fix doctests in functional.py
kohr-h Feb 23, 2019
cdc6c30
Fix operator default in-place call
kohr-h Feb 23, 2019
830c38c
Fix doctests in derivatives.py
kohr-h Feb 23, 2019
e9b62dd
Fix doctests in callback.py
kohr-h Feb 23, 2019
d0803e0
Fix doctests in steplen.py
kohr-h Feb 23, 2019
6d35178
Fix ray transform and doctests in filtered_back_projection.py
kohr-h Feb 23, 2019
496b957
Fix doctests in diagnostics/operator.py
kohr-h Feb 23, 2019
8e79aca
Fix wrong weighting in PointwiseInnerAdjoint
kohr-h Feb 23, 2019
3040464
Fix Fourier transforms
kohr-h Feb 24, 2019
c3db9c3
Fix fourier_trafo example
kohr-h Feb 24, 2019
24f3746
Fix wavelet_trafo example
kohr-h Feb 24, 2019
2647c32
Fix simple_r example
kohr-h Feb 24, 2019
6fe1ca8
Fix simple_rn example
kohr-h Feb 24, 2019
3d38d2a
Add space argument to CallbackShow
kohr-h Feb 24, 2019
0e4137a
Fix show_1d example
kohr-h Feb 24, 2019
a684460
Fix show_2d example
kohr-h Feb 24, 2019
2f5b61b
Fix show_2d_complex example
kohr-h Feb 24, 2019
76536f5
Fix show_callback example
kohr-h Feb 24, 2019
13f2718
Fix show_update_1d example
kohr-h Feb 24, 2019
e83fe84
Fix show_update_2d example
kohr-h Feb 24, 2019
142fda1
Fix show_update_in_place_2d example
kohr-h Feb 24, 2019
4143a92
Fix show_vector example
kohr-h Feb 24, 2019
50a2151
Rename show_examples example
kohr-h Feb 24, 2019
e2d8231
Fix several smooth solvers
kohr-h Feb 24, 2019
24fc079
Fix Rosenbrock example
kohr-h Feb 24, 2019
e8cc5a7
Fix proximal_translation
kohr-h Feb 24, 2019
6f28e14
Fix proximal gradient solvers
kohr-h Feb 24, 2019
22f4b06
Fix proximal_gradient_denoising.py example
kohr-h Feb 24, 2019
97c0094
Add _multiply and _divide to DiscreteLp
kohr-h Feb 24, 2019
e611fbb
Allow field elements in multiply and divide
kohr-h Feb 24, 2019
a09ea15
Remove ufuncs from pytest_config
kohr-h Feb 24, 2019
7b8d941
Fix space_test unittests
kohr-h Feb 24, 2019
29c19a5
Remove space_utils_test unittests
kohr-h Feb 24, 2019
24e450d
WIP: fix PDHG and prox ops
kohr-h Feb 24, 2019
b76d577
WIP: fix tensors_test unittests and weighting
kohr-h Feb 24, 2019
eb9612a
Improve fixture printing, remove reduction fixture
kohr-h Feb 25, 2019
93bc9c4
Fix weighted dist on NumpyTensorSpace
kohr-h Feb 25, 2019
f9a890d
In tensors_test.py, remove obsolete tests and fix remaining ones
kohr-h Feb 25, 2019
6f55693
Minor fixes in tensors_test
kohr-h Feb 25, 2019
ba41a1a
WIP: fix pspace element creation and pspace_test unittests
kohr-h Feb 25, 2019
e368e52
Fix wrong LinearSpace.__pow__
kohr-h Feb 26, 2019
c2c18ed
WIP: fix pspace elements and shape
kohr-h Feb 26, 2019
c28b335
Silence DeprecationWarning in testutils.py
kohr-h Mar 7, 2019
627e529
Add ProductSpace.apply
kohr-h Mar 7, 2019
e005ece
Fix/remove old pspace tests
kohr-h Mar 7, 2019
637d0b0
ENH: implement base_space for power spaces
kohr-h Mar 8, 2019
c2b0972
Fix default_functionals doctests
kohr-h Mar 8, 2019
1e950a0
Remove product space version of matrix_representation, fix as_scipy_o…
kohr-h Mar 9, 2019
a769624
Fix pspace_ops doctests wrt array dtypes
kohr-h Mar 9, 2019
a594cdb
Fix NumpyTensorSpace __eq__ and __hash__
kohr-h Mar 9, 2019
eb21915
Rework ufunc operators and functionals
kohr-h Mar 10, 2019
0a4a782
Add back product space dtype to make ufunc_ops work
kohr-h Mar 11, 2019
840d3a3
Repair __repr__ of deformation operators and displacement storage
kohr-h Mar 11, 2019
3037696
Fix or xfail displacement unittests
kohr-h Mar 11, 2019
719b762
Warn instead of raise for unknown ufuncs
kohr-h Mar 11, 2019
90f9a7d
Remove ridiculous old classes
kohr-h Mar 11, 2019
5640270
ENH: make space methods more liberal in accepted input
kohr-h Mar 11, 2019
7fa1d69
Fix some of the smooth solvers
kohr-h Mar 11, 2019
4526715
Repair fixtures of smooth solvers tests
kohr-h Mar 11, 2019
8609a8b
Add pspace apply2 and restrict membership to object dtype
kohr-h Mar 11, 2019
42bbd92
Fix one remaining ufuncs occurrence
kohr-h Mar 11, 2019
d5837ca
Fix adam optimizer
kohr-h Mar 11, 2019
f123ef6
Make all_almost_equal work for object arrays
kohr-h Mar 11, 2019
b991804
Obvious fixes to functional tests
kohr-h Mar 11, 2019
9683721
WIP: make functionals and prox ops work for product spaces
kohr-h Mar 11, 2019
a719e65
Remove custom space exceptions, fix/remove obsolete tensor tests
kohr-h Mar 19, 2019
76d2ded
Fix and clean up lp_discr_test
kohr-h Mar 19, 2019
e426057
Fix diff_ops_test
kohr-h Mar 19, 2019
046bfee
Fix discr_ops_test
kohr-h Mar 19, 2019
1b20430
Fix/xfail oputils_test
kohr-h Mar 20, 2019
3413f60
Fix some issues in odl.operator, and fix/xfail tests in operator_test
kohr-h Mar 20, 2019
998dead
Fix some bugs in tensor_ops, and fix tensor_ops_test
kohr-h Mar 20, 2019
d3f1c20
Re-instantiate assign and set_zero, fix pspace_ops_test
kohr-h Mar 21, 2019
e3158f5
Big cleanup (space, inner, norm, dist), make CPU projectors work
kohr-h Mar 22, 2019
1b0f26f
Fix astra_setup_test
kohr-h Mar 22, 2019
d23616b
Fix iterative solvers and iterative_test
kohr-h Mar 22, 2019
d0ede57
WIP: fix non-smooth solvers
kohr-h Mar 23, 2019
d623bbb
Implement ufuncs on product spaces
kohr-h Mar 24, 2019
b7d6fc1
Add reductions to spaces
kohr-h Mar 24, 2019
fcde4fc
Port functionals to ufuncs and reductions
kohr-h Mar 25, 2019
f94c504
Fix further proximals and functionals
kohr-h Mar 26, 2019
504297c
WIP: fix functionals and tests
kohr-h Mar 28, 2019
d82f272
Fix some more functionals
kohr-h Mar 30, 2019
2cd4fa4
Fix rest of default_functionals_test
kohr-h Apr 1, 2019
682a8bc
Add LinearSpace.copy method
kohr-h Apr 1, 2019
94bd49c
Fix proximal operators and simplify tests
kohr-h Apr 1, 2019
61aaf2b
Fix KL cross entropy
kohr-h Apr 4, 2019
d2fb5fe
Fix unittests in functional_test
kohr-h Apr 4, 2019
7f5a2ab
Fix test of forward_backward and pdhg
kohr-h Apr 9, 2019
c5b0671
Remove FunctionalLeftVectorMult, no more use for it
kohr-h Apr 14, 2019
b9e7387
Replace .copy() method calls by copy function calls
kohr-h Apr 14, 2019
b369725
Make PointwiseNorm.derivative a bit more efficient and readable
kohr-h Apr 14, 2019
ed8d70d
Keep KL functionals from raising warnings
kohr-h Apr 14, 2019
7864ab3
Extend QuadraticForm with proximal and CC with operator
kohr-h Apr 14, 2019
eba5786
Improve memory efficiency of Douglas-Rachford
kohr-h Apr 14, 2019
68822b5
Make default_functionals tests more comprehensive
kohr-h Apr 14, 2019
5a52dff
Misc fixes
kohr-h Apr 14, 2019
756d073
Put __all__ amendments last in __init__.py files
kohr-h Apr 2, 2020
fed4e0b
Various fixes
kohr-h Apr 2, 2020
9a6aaf8
Remove references to DiscretizedSpaceElement
kohr-h Apr 13, 2020
29 changes: 15 additions & 14 deletions doc/source/getting_started/code/getting_started_convolution.py
@@ -10,17 +10,18 @@ class Convolution(odl.Operator):
The operator inherits from ``odl.Operator`` to be able to be used with ODL.
"""

def __init__(self, kernel):
def __init__(self, space, kernel):
"""Initialize a convolution operator with a known kernel."""

# Store the kernel
self.kernel = kernel

# Initialize the Operator class by calling its __init__ method.
# This sets properties such as domain and range and allows the other
# operator convenience functions to work.
super(Convolution, self).__init__(
domain=kernel.space, range=kernel.space, linear=True)
domain=space, range=space, linear=True
)

# Store the kernel
self.kernel = kernel

def _call(self, x):
"""Implement calling the operator by calling scipy."""
@@ -39,7 +40,7 @@ def adjoint(self):
kernel = odl.phantom.cuboid(space, [-0.05, -0.05], [0.05, 0.05])

# Create convolution operator
A = Convolution(kernel)
A = Convolution(space, kernel)

# Create phantom (the "unknown" solution)
phantom = odl.phantom.shepp_logan(space, modified=True)
@@ -48,9 +49,9 @@ def adjoint(self):
g = A(phantom)

# Display the results using the show method
kernel.show('kernel')
phantom.show('phantom')
g.show('convolved phantom')
space.show(kernel, 'kernel')
space.show(phantom, 'phantom')
space.show(g, 'convolved phantom')

# Landweber

@@ -59,13 +60,13 @@ def adjoint(self):

f = space.zero()
odl.solvers.landweber(A, f, g, niter=100, omega=1 / opnorm ** 2)
f.show('landweber')
space.show(f, 'landweber')

# Conjugate gradient

f = space.zero()
odl.solvers.conjugate_gradient_normal(A, f, g, niter=100)
f.show('conjugate gradient')
space.show(f, 'conjugate gradient')

# Tikhonov with identity

@@ -76,7 +77,7 @@ def adjoint(self):

f = space.zero()
odl.solvers.conjugate_gradient(T, f, b, niter=100)
f.show('Tikhonov identity conjugate gradient')
space.show(f, 'Tikhonov identity conjugate gradient')

# Tikhonov with gradient

@@ -87,7 +88,7 @@ def adjoint(self):

f = space.zero()
odl.solvers.conjugate_gradient(T, f, b, niter=100)
f.show('Tikhonov gradient conjugate gradient')
space.show(f, 'Tikhonov gradient conjugate gradient')

# Douglas-Rachford

@@ -114,4 +115,4 @@ def adjoint(self):
x = space.zero()
odl.solvers.douglas_rachford_pd(x, f, g_funcs, lin_ops,
tau=tau, sigma=sigma, niter=100)
x.show('TV Douglas-Rachford', force_show=True)
space.show(x, 'TV Douglas-Rachford', force_show=True)
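The diff above illustrates the PR's central pattern: `show` (and other element conveniences) move from element objects to the space, and operators receive an explicit `space` argument instead of inferring it from an element. A toy sketch of that design, using a hypothetical `SimpleSpace` class that only mimics the pattern and is not ODL's actual implementation:

```python
import numpy as np

# Minimal sketch (hypothetical class, not ODL's API) of the refactoring in
# this PR: elements become plain NumPy arrays, and methods such as ``show``
# move from the element class to the space object.
class SimpleSpace:
    """A toy discretized space whose elements are bare NumPy arrays."""

    def __init__(self, shape):
        self.shape = shape

    def zero(self):
        # Elements are plain ndarrays, not wrapper objects.
        return np.zeros(self.shape)

    def show(self, x, title=''):
        # Stand-in for plotting: just report what would be displayed.
        assert x.shape == self.shape, 'element not in this space'
        return 'showing {!r} with shape {}'.format(title, x.shape)


space = SimpleSpace((3,))
f = space.zero()
# New style: ``space.show(f, title=...)`` instead of ``f.show(...)``.
print(space.show(f, title='landweber'))
```

The key consequence, visible throughout the diff, is that every call site must now know which space an array belongs to, since the array itself no longer carries that information.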
20 changes: 11 additions & 9 deletions doc/source/getting_started/first_steps.rst
@@ -45,17 +45,19 @@ and create a wrapping `Operator` for it in ODL.
The operator inherits from ``odl.Operator`` to be able to be used with ODL.
"""

def __init__(self, kernel):
def __init__(self, space, kernel):
"""Initialize a convolution operator with a known kernel."""

# Store the kernel
self.kernel = kernel

# Initialize the Operator class by calling its __init__ method.
# This sets properties such as domain and range and allows the other
# operator convenience functions to work.
super(Convolution, self).__init__(
domain=kernel.space, range=kernel.space, linear=True)
domain=space, range=space, linear=True
)

# Store the kernel
self.kernel = kernel


def _call(self, x):
"""Implement calling the operator by calling scipy."""
@@ -75,7 +77,7 @@ ODL also provides a nice range of standard phantoms such as the `cuboid` and `sh
kernel = odl.phantom.cuboid(space, [-0.05, -0.05], [0.05, 0.05])

# Create convolution operator
A = Convolution(kernel)
A = Convolution(space, kernel)

# Create phantom (the "unknown" solution)
phantom = odl.phantom.shepp_logan(space, modified=True)
@@ -84,9 +86,9 @@ ODL also provides a nice range of standard phantoms such as the `cuboid` and `sh
g = A(phantom)

# Display the results using the show method
kernel.show('kernel')
phantom.show('phantom')
g.show('convolved phantom')
space.show(kernel, title='kernel')
space.show(phantom, title='phantom')
space.show(g, title='convolved phantom')

.. image:: figures/getting_started_kernel.png

4 changes: 2 additions & 2 deletions doc/source/guide/code/functional_indepth_example.py
@@ -27,7 +27,7 @@ def __init__(self, space, y):
# the functional and always needs to be implemented.
def _call(self, x):
"""Evaluate the functional."""
return x.norm() ** 2 + x.inner(self.y)
return self.domain.norm(x) ** 2 + self.domain.inner(x, self.y)

# Next we define the gradient. Note that this is a property.
@property
@@ -89,7 +89,7 @@ def __init__(self, space, y):

def _call(self, x):
"""Evaluate the functional."""
return (x - self.y).norm()**2 / 4.0
return self.domain.norm(x - self.y) ** 2 / 4


# Create a functional
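The hunks above replace element-level `x.norm()` / `x.inner(y)` calls with space-level `self.domain.norm(x)` / `self.domain.inner(x, self.y)`. A minimal runnable sketch of that pattern, using hypothetical `Rn` and `ResidualFunctional` stand-ins rather than ODL's real classes:

```python
import numpy as np

# Sketch of the pattern (hypothetical classes, not ODL's API): norm and
# inner become methods of the space, applied to plain-array elements.
class Rn:
    def __init__(self, n):
        self.n = n

    def inner(self, x, y):
        return float(np.dot(x, y))

    def norm(self, x):
        return float(np.sqrt(self.inner(x, x)))


class ResidualFunctional:
    """f(x) = ||x - y||^2 / 4, evaluated through the domain space."""

    def __init__(self, space, y):
        self.domain = space
        self.y = np.asarray(y, dtype=float)

    def __call__(self, x):
        return self.domain.norm(np.asarray(x, dtype=float) - self.y) ** 2 / 4


r3 = Rn(3)
func = ResidualFunctional(r3, [1.0, 1.0, 1.0])
print(func([3.0, 1.0, 1.0]))  # ||(2, 0, 0)||^2 / 4 = 1.0
```

With this shape, the functional never touches element methods, which is what lets the PR strip the element classes entirely.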
36 changes: 0 additions & 36 deletions doc/source/guide/faq.rst
@@ -49,42 +49,6 @@
This will yield a specific error message for an erroneous module that helps you debugging your
changes.

#. **Q:** When adding two space elements, the following error is shown::

TypeError: unsupported operand type(s) for +: 'DiscretizedSpaceElement' and 'DiscretizedSpaceElement'

This seems completely illogical since it works in other situations and clearly must be supported.
Why is this error shown?

**P:** The elements you are trying to add are not in the same space.
For example, the following code triggers the same error:

>>> x = odl.uniform_discr(0, 1, 10).one()
>>> y = odl.uniform_discr(0, 1, 11).one()
>>> x - y

In this case, the problem is that the elements have a different number of entries.
Other possible issues include that they are discretizations of different sets,
have different data types (:term:`dtype`), or implementation (for example CUDA/CPU).

**S:** The elements need to somehow be cast to the same space.
How to do this depends on the problem at hand.
To find what the issue is, inspect the ``space`` properties of both elements.
For the above example, we see that the issue lies in the number of discretization points:

>>> x.space
odl.uniform_discr(0, 1, 10)
>>> y.space
odl.uniform_discr(0, 1, 11)

* In the case of spaces being discretizations of different underlying spaces,
a transformation of some kind has to be applied (for example by using an operator).
In general, errors like this indicates a conceptual issue with the code,
for example a "we identify X with Y" step has been omitted.

* If the ``dtype`` or ``impl`` do not match, they need to be cast to each one of the others.
The most simple way to do this is by using the `DiscretizedSpaceElement.astype` method.

#. **Q:** I have installed ODL with the ``pip install --editable`` option, but I still get an
``AttributeError`` when I try to use a function/class I just implemented. The use-without-reinstall
thing does not seem to work. What am I doing wrong?
3 changes: 1 addition & 2 deletions doc/source/guide/functional_guide.rst
@@ -177,8 +177,7 @@ All available functional arithmetic, including which properties and methods that
| ``(S * a)(x)`` | ``S(a * x)`` | `FunctionalRightScalarMult` |
| | | - Retains all properties. |
+---------------------+-----------------+--------------------------------------------------------------------------------+
| ``(v * S)(x)`` | ``v * S(x)`` | `FunctionalLeftVectorMult` |
| | | - Results in an operator rather than a functional. |
| ``(v * S)(x)`` | ``v * S(x)`` | Not supported |
+---------------------+-----------------+--------------------------------------------------------------------------------+
| ``(S * v)(x)`` | ``S(v * x)`` | `FunctionalRightVectorMult` |
| | | - Retains gradient and convex conjugate. |
12 changes: 6 additions & 6 deletions doc/source/guide/numpy_guide.rst
@@ -123,14 +123,14 @@ The convolution operation, written as ODL operator, could look like this::
>>> class MyConvolution(odl.Operator):
... """Operator for convolving with a given kernel."""
...
... def __init__(self, kernel):
... def __init__(self, space, kernel):
... """Initialize the convolution."""
... self.kernel = kernel
...
... # Initialize operator base class.
... # This operator maps from the space of vector to the same space and is linear
... super(MyConvolution, self).__init__(
... domain=kernel.space, range=kernel.space, linear=True)
... domain=space, range=space, linear=True)
...
... self.kernel = kernel
...
... def _call(self, x):
... # The output of an Operator is automatically cast to an ODL object
@@ -139,7 +139,7 @@ This operator can then be called on its domain elements::
This operator can then be called on its domain elements::

>>> kernel = odl.rn(3).element([1, 2, 1])
>>> conv_op = MyConvolution(kernel)
>>> conv_op = MyConvolution(r3, kernel)
>>> conv_op([1, 2, 3])
rn(3).element([ 4., 8., 8.])

@@ -149,7 +149,7 @@ It can be also be used with any of the ODL operator functionalities such as mult
>>> scaled_op([1, 2, 3])
rn(3).element([ 8., 16., 16.])
>>> y = odl.rn(3).element([1, 1, 1])
>>> inner_product_op = odl.InnerProductOperator(y)
>>> inner_product_op = odl.InnerProductOperator(r3, y)
>>> # Create composition with inner product operator with [1, 1, 1].
>>> # When called on a vector, the result should be the sum of the
>>> # convolved vector.
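For reference, the doctest values in the hunks above can be reproduced with plain NumPy; this sketch assumes nothing about ODL beyond the numbers shown in the diff:

```python
import numpy as np

# The MyConvolution doctest above maps [1, 2, 3] to [4., 8., 8.] for the
# kernel [1, 2, 1]. Plain NumPy reproduces this with a 'same'-mode
# convolution (a sketch of the computation, not ODL's implementation):
kernel = np.array([1.0, 2.0, 1.0])
x = np.array([1.0, 2.0, 3.0])

result = np.convolve(x, kernel, mode='same')
print(result)  # [4. 8. 8.]

# Scaling commutes with the linear operator: (2 * A)(x) == 2 * A(x),
# matching the scaled_op doctest that yields [8., 16., 16.].
scaled = 2 * result
print(scaled)  # [ 8. 16. 16.]
```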
48 changes: 23 additions & 25 deletions doc/source/guide/operator_guide.rst
@@ -50,7 +50,7 @@ example::
class MatrixOperator(odl.Operator):
...
def _call(self, x, out):
self.matrix.dot(x, out=out.asarray())
self.matrix.dot(x, out=out)

In-place evaluation is usually more efficient and should be used
*whenever possible*.
@@ -86,8 +86,7 @@ avoided*.
# In-place evaluation
operator(x, out=y)

This public calling interface is (duck-)type-checked, so the private methods
can safely assume that their input data is of the operator domain element type.
This public calling interface is (duck-)type-checked, so the private methods can safely assume that their input data is of the operator domain element type.

Operator arithmetic
-------------------
@@ -108,9 +107,7 @@ Another example is matrix multiplication, which corresponds to operator composit

.. _functional: https://en.wikipedia.org/wiki/Functional_(mathematics)

All available operator arithmetic is shown below. ``A``, ``B`` represent arbitrary `Operator`'s,
``f`` is an `Operator` whose `Operator.range` is a `Field` (sometimes called a functional_), and
``a`` is a scalar.
All available operator arithmetic is shown below. ``A``, ``B`` represent arbitrary `Operator`'s, ``f`` is an `Operator` whose `Operator.range` is a `Field` (sometimes called a functional_), and ``a`` is a scalar.

+------------------+-----------------+----------------------------+
| Code | Meaning | Class |
@@ -123,7 +120,7 @@ All available operator arithmetic is shown below. ``A``, ``B`` represent arbitra
+------------------+-----------------+----------------------------+
| ``(A * a)(x)`` | ``A(a * x)`` | `OperatorRightScalarMult` |
+------------------+-----------------+----------------------------+
| ``(v * f)(x)`` | ``v * f(x)`` | `FunctionalLeftVectorMult` |
| ``(v * f)(x)`` | ``v * f(x)`` | Not supported (*) |
+------------------+-----------------+----------------------------+
| ``(v * A)(x)`` | ``v * A(x)`` | `OperatorLeftVectorMult` |
+------------------+-----------------+----------------------------+
@@ -132,23 +129,24 @@ All available operator arithmetic is shown below. ``A``, ``B`` represent arbitra
| not available | ``A(x) * B(x)`` | `OperatorPointwiseProduct` |
+------------------+-----------------+----------------------------+

(*) The range of such an expression, if interpreted as operator, cannot be inferred.

There are also a few derived expressions using the above:

+------------------+--------------------------------------+
| Code | Meaning |
+==================+======================================+
| ``(+A)(x)`` | ``A(x)`` |
+------------------+--------------------------------------+
| ``(-A)(x)`` | ``(-1) * A(x)`` |
+------------------+--------------------------------------+
| ``(A - B)(x)`` | ``A(x) + (-1) * B(x)`` |
+------------------+--------------------------------------+
| ``A**n(x)`` | ``A(A**(n-1)(x))``, ``A^1(x) = A(x)``|
+------------------+--------------------------------------+
| ``(A / a)(x)`` | ``A((1/a) * x)`` |
+------------------+--------------------------------------+
| ``(A @ B)(x)`` | ``(A * B)(x)`` |
+------------------+--------------------------------------+

Except for composition, operator arithmetic is generally only defined when `Operator.domain` and
`Operator.range` are either instances of `LinearSpace` or `Field`.
+------------------+-------------------------------------------+
| Code | Meaning |
+==================+===========================================+
| ``(+A)(x)`` | ``A(x)`` |
+------------------+-------------------------------------------+
| ``(-A)(x)`` | ``(-1) * A(x)`` |
+------------------+-------------------------------------------+
| ``(A - B)(x)`` | ``A(x) + (-1) * B(x)`` |
+------------------+-------------------------------------------+
| ``(A ** n)(x)`` | ``A((A ** (n-1))(x))``, ``A^1(x) = A(x)`` |
+------------------+-------------------------------------------+
| ``(A / a)(x)`` | ``A((1/a) * x)`` |
+------------------+-------------------------------------------+
| ``(A @ B)(x)`` | ``(A * B)(x)`` |
+------------------+-------------------------------------------+

Except for composition, operator arithmetic is generally only defined when `Operator.domain` and `Operator.range` are either instances of `LinearSpace` or `Field`.
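The arithmetic rules in the tables above can be sketched with a toy operator class (a hypothetical `Op`, not ODL's `Operator`); each dunder method implements the corresponding identity from the table:

```python
import numpy as np

# Toy sketch of the operator-arithmetic identities tabulated above:
# composition, sum, and right scalar multiplication.
class Op:
    def __init__(self, func):
        self.func = func

    def __call__(self, x):
        return self.func(np.asarray(x, dtype=float))

    def __mul__(self, other):
        if isinstance(other, Op):
            # (A * B)(x) == A(B(x))  -- composition
            return Op(lambda x: self(other(x)))
        # (A * a)(x) == A(a * x)  -- right scalar multiplication
        return Op(lambda x: self(other * x))

    def __add__(self, other):
        # (A + B)(x) == A(x) + B(x)
        return Op(lambda x: self(x) + other(x))


A = Op(lambda x: 2 * x)   # scaling by 2
B = Op(lambda x: x + 1)   # shift by 1

x = np.array([1.0, 2.0])
print((A * B)(x))   # A(B(x)) = 2 * (x + 1) -> [4. 6.]
print((A + B)(x))   # 2x + (x + 1)          -> [4. 7.]
print((A * 3)(x))   # A(3 * x) = 6x         -> [6. 12.]
```

Note that `(v * f)(x)` has no class here either: as the footnote above says, the range of such an expression cannot be inferred from `v` and `f` alone.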
12 changes: 7 additions & 5 deletions examples/deform/linearized_fixed_displacement.py
@@ -44,15 +44,15 @@
disp_field = disp_field_space.element(disp_func)

# Show template and displacement field
template.show('Template')
disp_field.show('Displacement field')
templ_space.show(template, 'Template')
disp_field_space.show(disp_field, 'Displacement field')


# --- Apply LinDeformFixedDisp and its adjoint --- #


# Initialize the deformation operator with fixed displacement
deform_op = odl.deform.LinDeformFixedDisp(disp_field)
deform_op = odl.deform.LinDeformFixedDisp(templ_space, disp_field)

# Apply the deformation operator to get the deformed template.
deformed_template = deform_op(template)
@@ -61,5 +61,7 @@
adj_result = deform_op.adjoint(template)

# Show results
deformed_template.show('Deformed template')
adj_result.show('Adjoint applied to the template', force_show=True)
templ_space.show(deformed_template, 'Deformed template')
templ_space.show(
adj_result, 'Adjoint applied to the template', force_show=True
)
17 changes: 10 additions & 7 deletions examples/deform/linearized_fixed_template.py
@@ -43,15 +43,15 @@
disp_field = disp_field_space.element(disp_func)

# Show template and displacement field
template.show('Template')
disp_field.show('Displacement field')
templ_space.show(template, 'Template')
disp_field_space.show(disp_field, 'Displacement field')


# --- Apply LinDeformFixedTempl, derivative and its adjoint --- #


# Initialize the deformation operator with fixed template
deform_op = odl.deform.LinDeformFixedTempl(template)
deform_op = odl.deform.LinDeformFixedTempl(templ_space, template)

# Apply the deformation operator to get the deformed template.
deformed_template = deform_op(disp_field)
@@ -68,7 +68,10 @@
deriv_adj_result = deform_op_deriv.adjoint(templ_space.one())

# Show results
deformed_template.show('Deformed template')
deriv_result.show('Operator derivative applied to one()')
deriv_adj_result.show('Adjoint of the derivative applied to one()',
force_show=True)
templ_space.show(deformed_template, 'Deformed template')
templ_space.show(deriv_result, 'Operator derivative applied to one()')
disp_field_space.show(
deriv_adj_result,
'Adjoint of the derivative applied to one()',
force_show=True,
)