Current situation
The geometries differentiate between abstract "motion parameters" (the name could be better, maybe "system parameters") and detector parameters.
Motion parameters are usually one or two rotation angles in simple geometries, or a curve parameter describing positions on an acquisition curve, e.g. a helix. Physically, they typically describe how the imaging system "moves" over time relative to the static object coordinate system. But they could also mean something completely different, e.g. the distribution of transducers in an ultrasound experiment, where no movement of an imaging system is involved. I found it hard to come up with a name that is both short and general enough to be adequate.
Anyway, while the first set of parameters describes the imaging system as a whole, the second one describes the detector, and there are mappings from these parameters to the locations of detector points in space, given in the local coordinate system of the detector (i.e. for a fixed reference detector).
This corresponds to the splitting of the parameters into, e.g., angle and detector position.
However, mainly for convenience and without deeper reasoning, these parameter sets are lumped together, and the ASTRA-backed projectors use a discretized function space on that "lump" as their range.
Concrete example:
>>> angle_range = odl.Interval(0, np.pi)
>>> det_params = odl.Interval(-0.5, 0.5)  # total width of 1 (meter?)
>>> lumped_params = angle_range.insert(det_params)
>>> lumped_params
Rectangle([0.0, -0.5], [3.1415926535897931, 0.5])
So the operator range will be a discretization of a function space based on this rectangle.
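To make the "discretization of the lumped rectangle" concrete, here is a minimal numpy-only sketch of a uniform, cell-centered sampling of the parameter rectangle [0, pi] x [-0.5, 0.5]; the grid sizes and names are illustrative, not the actual odl discretization machinery.

```python
import numpy as np

# Illustrative grid sizes, not prescribed by the geometry.
n_angles, n_det = 180, 100

# Cell midpoints along each axis, mimicking a cell-centered discretization
# of the rectangle [0, pi] x [-0.5, 0.5].
angles = np.linspace(0, np.pi, n_angles, endpoint=False) + 0.5 * np.pi / n_angles
det_pos = np.linspace(-0.5, 0.5, n_det, endpoint=False) + 0.5 * 1.0 / n_det

# An element of the discretized operator range is then a function sampled
# on the tensor-product grid, i.e. an array of shape (n_angles, n_det).
sinogram = np.zeros((n_angles, n_det))
print(sinogram.shape)  # (180, 100)
```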
Alternative
Another viable way would be the following: given a list of angles, make the projector a ProductSpaceOperator with one operator part per angle (or group of angles). Something like that will be needed anyway if we want to implement Kaczmarz-type methods with split operators.
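To illustrate why a per-angle split of the projector helps, here is a minimal Kaczmarz-type sketch over a list of operators; the matrix blocks and data are toy stand-ins for projector parts, not the actual odl interface.

```python
import numpy as np

# Toy setup: 4 operator blocks (one per "angle group") acting on a
# 5-dimensional unknown, with consistent data generated from x_true.
rng = np.random.default_rng(0)
x_true = rng.standard_normal(5)
ops = [rng.standard_normal((3, 5)) for _ in range(4)]
data = [A @ x_true for A in ops]

x = np.zeros(5)
for _ in range(200):  # sweep cyclically over the blocks
    for A, b in zip(ops, data):
        # Block Kaczmarz update: project x onto the solution set of A x = b.
        r = b - A @ x
        x += A.T @ np.linalg.solve(A @ A.T, r)

print(np.linalg.norm(x - x_true))
```

Since the blocks are accessed one at a time, only the list-of-operators structure is needed, not a single big product space operator.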
Follow-up issues
Even in the case where we make a lump, we may still want to treat different dimensions differently when it comes to discretization. We may want to use linear interpolation on the detector while using nearest neighbor in the angles. Hence, there should be a way to define an interpolation scheme per dimension (shouldn't be all that hard as long as things stay tensor-structured).
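The per-dimension interpolation idea can be sketched by hand in numpy: nearest neighbor along the angle axis, linear along the detector axis. The sampling setup and function names are illustrative.

```python
import numpy as np

angles = np.linspace(0.0, np.pi, 6)     # axis 0 sample points (angle)
dets = np.linspace(-0.5, 0.5, 11)       # axis 1 sample points (detector)
values = np.add.outer(angles, dets**2)  # f(phi, s) = phi + s**2, sampled

def interp_nearest_linear(phi, s):
    """Evaluate via nearest neighbor in phi (axis 0), linear in s (axis 1)."""
    i = np.argmin(np.abs(angles - phi))            # nearest angle index
    j = np.clip(np.searchsorted(dets, s) - 1, 0, len(dets) - 2)
    t = (s - dets[j]) / (dets[j + 1] - dets[j])    # linear weight in s
    return (1 - t) * values[i, j] + t * values[i, j + 1]
```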
In the long run, we need more flexibility in defining norms and inner products on product spaces. Standard p-norms are all well and good, but not the end of the story.
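As a small example of a product space norm beyond a single global p-norm, here is a sketch of a weighted mix of a 2-norm on one component and a sup-norm on the other; the weights and components are illustrative.

```python
import numpy as np

def mixed_norm(x1, x2, w1=1.0, w2=2.0):
    """Weighted mix: 2-norm on the first component, sup-norm on the second."""
    return w1 * np.linalg.norm(x1, 2) + w2 * np.max(np.abs(x2))

x1 = np.array([3.0, 4.0])
x2 = np.array([-1.0, 0.5])
print(mixed_norm(x1, x2))  # 1*5.0 + 2*1.0 = 7.0
```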
Our current product space layout is not prepared for the situation where one has, e.g., a space of functions in 3 variables and wants to discretize only the first two, thus making a product space with the number of sampling points in the first two variables as the number of parts. And all that with the possibility to use some (matched) norm in the first two components and something else in the third. In some sense, this would require a partial (or generalized) discretization where the dspace is not an NtuplesBase but rather some product space, ideally equipped with a multi-dimensional structure. The corresponding mathematical structure is a Bochner space, which is just a fancy way of expressing different behavior in different variables of a function.
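A Bochner-style mixed norm can be sketched in numpy: treat the samples as an (nx, ny) array of functions of z, take a sup-type norm in z at each sample point, then a discrete L^2 norm over the (x, y) grid. The grid sizes and test function are illustrative.

```python
import numpy as np

nx, ny, nz = 4, 5, 50
z = np.linspace(0.0, 1.0, nz)

# f(x_i, y_j, z) = c_ij * sin(2*pi*z): an (nx, ny) family of functions of z.
f = np.random.default_rng(1).standard_normal((nx, ny, 1)) * np.sin(2 * np.pi * z)

sup_in_z = np.max(np.abs(f), axis=-1)   # pointwise sup over z at each (x, y)
cell_area = (1.0 / nx) * (1.0 / ny)     # weight for the discrete L^2 norm
bochner_norm = np.sqrt(cell_area * np.sum(sup_in_z**2))
print(bochner_norm)
```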
I think this is superseded by the way we think about Kaczmarz now, i.e. as something taking a list of operators rather than a single product space operator. Also the range representation seems to work well. Some specific cases like #152 and #671 still have to be sorted out, but we're getting there.