Representing vector- and tensor-valued functions #908

Open
kohr-h opened this issue Feb 15, 2017 · 11 comments

@kohr-h
Member

kohr-h commented Feb 15, 2017

When #861 is in, we'll have to deal with the issue of how to represent vector- and tensor-valued functions properly. Questions are:

  1. How to effectively deal with continuous functions with vector or tensor range? In particular vectorization?
  2. In DiscreteLp: Where to put the additional axes for the function range?
  3. Do we need properties for the two different parts of shape and ndim? If yes, how to name them?
  4. How do we, in the long run, deal with maintaining compatibility with ProductSpace based implementations of the same thing?
  5. More that I've forgotten now

Answer to 1: clearly put the additional axes first, to keep compatibility with ProductSpace-based versions of vector-valued functions.

Otherwise: ?

@adler-j
Member

adler-j commented Feb 15, 2017

How to effectively deal with continuous functions with vector or tensor range? In particular vectorization?

My suggestion is to use np.dtype(('float', (3,))) to represent vector-valued functions.
With this syntax we would have:

>>> spc = odl.uniform_discr(0, 1, 10, dtype=(float, (3,)))
>>> spc.shape
(3, 10)
>>> spc.dspace.shape
(3, 10)
>>> spc.dtype
dtype(('<f8', (3,)))
>>> spc.dtype.shape
(3,)

pros: No need for another parameter, no redundancy
cons: Confusing to users not intimately familiar with numpy. Unsure if it works with pygpuarray
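
For reference, here is a minimal runnable sketch of how such a subarray dtype behaves in plain NumPy. Note that NumPy itself appends the extra axes last, whereas the proposal above puts them first:

>>> import numpy as np
>>> dt = np.dtype((np.float64, (3,)))
>>> dt.shape                   # the "value" part of the dtype
(3,)
>>> dt.base                    # the underlying scalar dtype
dtype('float64')
>>> # When used to build an array, the subarray axes are appended last:
>>> np.zeros(10, dtype=dt).shape
(10, 3)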

In DiscreteLp: Where to put the additional axes for the function range?

First is IMO the only option, sadly. This needs to match ProductSpace.

Do we need properties for the two different parts of shape and ndim? If yes, how to name them?

Certainly, but I'm not sure what to call them. Perhaps keep the "full" shape and "full" ndim as now, to match numpy. Then perhaps value_shape/domain_shape etc.?
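
To make the naming question concrete, a small sketch using the candidate names (value_shape and domain_shape are just the names floated here, not an existing API):

>>> spc = odl.uniform_discr(0, 1, 10, dtype=(float, (3,)))
>>> spc.shape            # "full" shape, matching numpy
(3, 10)
>>> spc.value_shape      # candidate name: shape of the function values
(3,)
>>> spc.domain_shape     # candidate name: shape of the sampled domain
(10,)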

How do we, in the long run, deal with maintaining compatibility with ProductSpace based implementations of the same thing?

No ideal solution here, but overall I'd say not to add too much to ProductSpace, e.g. do not port everything, but add as needed. Also try to make sure most things are transparent. We may want to reopen and solve #157 (Multidimensional product spaces) in order to make the syntax more similar for the basic cases.

More that I've forgotten now

I have trust in you!

@kohr-h
Member Author

kohr-h commented Feb 15, 2017

How to effectively deal with continuous functions with vector or tensor range? In particular vectorization?

My suggestion is to use np.dtype(('float', (3,))) to represent vector-valued functions.
pros: No need for another parameter, no redundancy
cons: Confusing to users not intimately familiar with numpy. Unsure if it works with pygpuarray

That's actually a very nice idea, and it makes total sense. It will also be a perfect match for continuous function spaces with out_dtype = np.dtype((float, (3,))).
I agree it's not obvious even to Numpy users (I'd guess the number of Numpy users that know this is around 20 % max) but it makes sense in every respect, and the value of not having to add a parameter is actually quite significant.
I'd also not worry too much about gpuarray; if there's no native support there, we can do the conversion ourselves.

Do we need properties for the two different parts of shape and ndim? If yes, how to name them?

Certainly, but I'm not sure what to call them. Perhaps keep the "full" shape and "full" ndim as now, to match numpy. Then perhaps value_shape/domain_shape etc.?

I had also thought of domain_shape, so let's check that in. Actually, value_shape is not bad either; it describes the actual construction very closely.

How do we, in the long run, deal with maintaining compatibility with ProductSpace based implementations of the same thing?

No ideal solution here, but overall I'd say not to add too much to ProductSpace, e.g. do not port everything, but add as needed. Also try to make sure most things are transparent. We may want to reopen and solve #157 (Multidimensional product spaces) in order to make the syntax more similar for the basic cases.

In some sense, the space ** value_shape implementation already does this, since that's what corresponds to the thing described above. So at least for this purpose, #157 is solved (although not multi-dim. indexing and such, but that gets messy).
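
For comparison, a small sketch of the ProductSpace-based counterpart discussed here, assuming the usual power-space shorthand (how a full value_shape tuple would nest is left open):

>>> import odl
>>> base = odl.uniform_discr(0, 1, 10)
>>> vec_spc = odl.ProductSpace(base, 3)   # 3-component vector field on the grid
>>> vec_spc == base ** 3                  # power-space shorthand
True
>>> len(vec_spc)
3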

@kohr-h
Member Author

kohr-h commented Mar 16, 2017

This is now largely done in #861, at least on the level of FunctionSpace, but that's where the hard work goes. I'm quite happy with the syntax that it enables:

>>> import numpy as np
>>> import odl
>>> intv = odl.IntervalProd(0, 1)
>>> scal_spc = odl.FunctionSpace(intv)
>>> elem = scal_spc.element(lambda x: x + 1)
>>> elem(0)
1.0
>>> elem([0, 1])
array([ 1.,  2.])
>>>
>>> vec_spc = odl.FunctionSpace(intv, out_dtype=(float, (2,)))  # 2 components
>>> # Possibility 1: sequence of functions
>>> elem1 = vec_spc.element([lambda x: x + 1, lambda x: x - 1])
>>> elem1(0)
array([ 1., -1.])
>>> elem1([0, 1])
array([[ 1.,  2.],
       [-1.,  0.]])
>>>
>>> # Possibility 2: function returning a sequence
>>> elem2 = vec_spc.element(lambda x: (x + 1, x - 1))
>>> elem2(0)
array([ 1., -1.])
>>> elem2([0, 1])
array([[ 1.,  2.],
       [-1.,  0.]])

Of course, things also work with higher-order tensors, and you can use constants in sequences:

>>> tens_spc = odl.FunctionSpace(intv, out_dtype=(float, (2, 3)))
>>> elem = tens_spc.element([[lambda x: x, np.abs, np.negative],
...                          [lambda x: x + 1, 0, 0]])
>>> elem(0)
array([[ 0.,  0.,  0.],
       [ 1.,  0.,  0.]])
>>> elem([0, 1])
array([[[ 0.,  1.],
        [ 0.,  1.],
        [ 0., -1.]],

       [[ 1.,  2.],
        [ 0.,  0.],
        [ 0.,  0.]]])

Support for out is still patchy, and the linear space methods may not work. Of course, no extra tests yet, but we're getting there.
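
As a side note, here is what vectorized evaluation with out could look like once support is complete; the layout of the out array (value shape followed by the shape of the input points) is an assumption based on the examples above:

>>> x = np.array([0.0, 0.5, 1.0])
>>> out = np.empty((2,) + x.shape)   # value shape + shape of the points (assumed)
>>> elem1(x, out=out)                # fills out in place and returns it
array([[ 1. ,  1.5,  2. ],
       [-1. , -0.5,  0. ]])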

@adler-j
Member

adler-j commented Mar 17, 2017

Looking great to me!

@adler-j
Member

adler-j commented Jul 3, 2017

This is closely related to #342.

adler-j mentioned this issue Jul 3, 2017
@kohr-h
Member Author

kohr-h commented Aug 10, 2017

I'll keep this issue open after the tensor branch is in, for the work on the discretized version of vector- and tensor-valued functions. That will remain a TODO.

@adler-j
Member

adler-j commented Aug 10, 2017

I agree, but that should be done ASAP once the branch is in.

kohr-h mentioned this issue Aug 11, 2017
@adler-j
Member

adler-j commented Nov 14, 2017

I guess this is on the table now.

kohr-h self-assigned this Nov 14, 2017
@kohr-h
Member Author

kohr-h commented Nov 14, 2017

I'm working on it, have a WIP branch.

@adler-j
Member

adler-j commented Nov 14, 2017

Might as well make a PR to make it visible then!

@kohr-h
Member Author

kohr-h commented Nov 15, 2017

#1238

kohr-h pushed a commit to kohr-h/odl that referenced this issue Dec 11, 2017
Changes in detail:
- Add dtype with shape to DiscreteLp (mostly __repr__, factory
  functions and some downstream methods). As a consequence,
  `shape_[in,out]` and `ndim_[in,out]` are added for the
  different types of axes, as well as `scalar_dtype`.
- Add `PerAxisWeighting` and make it the default for
  `DiscreteLp` type spaces. Reason: this way the `tspace`
  knows how to deal with removed axes etc. This is important
  for a smooth experience with indexing and reductions over
  axes.
  Helpers for slicing weightings help structure this task.
- Implement `__getitem__` for `TensorSpace` and `DiscreteLp`,
  including (hopefully) reasonable propagation of weights.
  The new `simulate_slicing` utility function simplifies
  this task.
- Allow indexing with ODL tensors of boolean or integer dtype.
- Implement correct weighting for backprojections with
  non-uniform angles, using per-axis weighting and a new
  helper `adjoint_weightings` to apply the weightings in an
  efficient way.
  The correct weighting from the range of `RayTransform`
  is determined by the new `proj_space_weighting` helper.
- Change the space `_*_impl` methods to always expect and
  return Numpy arrays, and adapt the calling code.
- Change behavior of `norm` and `dist` to ignore weights
  for `exponent=inf`, in accordance with the math.
- Improve speed of `all_equal` for comparison of arrays.
- Account for `None` entries in indices in the
  `normalized_index_expression` helper, thus allowing
  creation of new axes.
- Remove `discr_sequence_space`; it was largely unused and
  just a maintenance burden. Use a regular `uniform_discr`
  from zero to `shape` instead.
- Remove `Weighting.equiv()` methods, never used and hard
  to maintain (n^2 possibilities).
- Remove the (largely useless) `_weighting` helper to create
  weighting instances, since it would have been ambiguous
  with sequences of scalars (array or per axis?). Also remove
  the `npy_weighted_*` functions; they were useless, too.
- Remove some dead code from tomo/util.
- A bunch of minor fixes, as usual.

Closes: odlgroup#908, odlgroup#907, odlgroup#1113, odlgroup#965, odlgroup#286, odlgroup#267, odlgroup#1001
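
To illustrate the first items in the list above, here is a sketch of what the resulting DiscreteLp interface could look like (property names are taken from the commit message; the exact behavior is an assumption, since this lives on the linked branch):

>>> spc = odl.uniform_discr(0, 1, 10, dtype=(float, (3,)))
>>> spc.shape                      # output axes first, then domain axes
(3, 10)
>>> spc.shape_out, spc.shape_in
((3,), (10,))
>>> spc.ndim_out, spc.ndim_in
(1, 1)
>>> spc.scalar_dtype
dtype('float64')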