Add support functions for spreading and interpolating with scaling in cufinufft #308
Conversation
This support is only added for cufinufft at the moment; support in finufft will follow soon.
(force-pushed from 2e637b9 to 0e5f3f3)
(force-pushed from fe8abc5 to 7273f68)
I have rebased all my changes onto the latest version. I still need the cufinufft_spread and interp functions in order to expose them. @blackwer, can you please comment on this, and on whether the use of cuda_array_interface essentially makes this PR null?
I'm not sure how the …
If it does come up in the meeting, do mention that the spread and interpolate functions are very important, for the two reasons below:
…
I second @chaithyagr's message about the need for spreading/interp interface exposure. For me too, the interest is in density-compensation MRI. Perhaps it also ties nicely into any wishes for NUFFT inverse documentation or examples (#264), maybe a Pipe-Menon example or similar. As a start, simply exposing the spreading/interp would be greatly appreciated. Great work!
Hello there. More pragmatically, the exposed API could consist of a new argument/flag in the (cu)finufft plan struct (e.g., …)
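To make the plan-flag suggestion concrete, here is a toy sketch in NumPy (the name `spread_only`, the class, and the nearest-grid-point "spreading" are all illustrative assumptions, not the actual (cu)finufft API): a plan-level option makes `execute()` stop after the spreading step and return the fine-grid data, skipping the FFT stage.

```python
import numpy as np

class ToyNufftPlan:
    """Toy plan-style wrapper; `spread_only` is a hypothetical flag."""

    def __init__(self, n_modes, spread_only=False):
        self.n = n_modes
        self.spread_only = spread_only

    def _spread(self, x, c):
        # Toy nearest-grid-point "spreading" of strengths c at points x
        # (in [0, 2*pi)) onto a regular grid of size n.
        grid = np.zeros(self.n, dtype=complex)
        idx = np.round((x % (2 * np.pi)) / (2 * np.pi) * self.n).astype(int) % self.n
        np.add.at(grid, idx, c)
        return grid

    def execute(self, x, c):
        grid = self._spread(x, c)
        if self.spread_only:
            return grid            # expose the intermediate fine-grid data
        return np.fft.fft(grid)    # otherwise continue the usual pipeline

x = np.array([0.1, 2.5, 4.0])
c = np.array([1.0 + 0j, 2.0, -1.0])
spread = ToyNufftPlan(16, spread_only=True).execute(x, c)
full = ToyNufftPlan(16).execute(x, c)
```

The point is only the control flow: one plan, one extra boolean, and the downstream steps are skipped rather than exposing the internal spreader functions directly.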
@@ -14,6 +14,52 @@ using namespace cufinufft::memtransfer;
namespace cufinufft {
namespace spreadinterp {

template <typename T>
I'm confused where this code came from (surely it already exists in cufinufft?)
It did indeed; it was, however, removed during cleanup as part of 27c3621.
This was perhaps done because it was only used to test internal modules, but these are the very functions which I chose to expose (with a wrapper).
@@ -354,6 +354,115 @@ Notes: the type T means either single or double, matching the
    return ier;
}

template<typename T>
T calculate_scale_factor(int rank, finufft_spread_opts opts)
We have no idea what this code does without a mathematical description, and documentation of inputs/outputs.
T calculate_scale_factor(int rank, finufft_spread_opts opts)
{
    // Calculate the additional scalar for multiplying just for spread and interpolate
    int n = 100;
What is n here? This routine looks like a crude quadrature approximation to the integral of the spread kernel. But it is not accurate. A Gauss-Legendre rule would be needed.
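To illustrate the reviewer's point, here is a small NumPy sketch (not FINUFFT code; the kernel shape and the width parameter `beta` are assumed example values) comparing a uniform midpoint rule with a Gauss-Legendre rule for integrating an "exp-sqrt"-type kernel on [-1, 1]:

```python
import numpy as np

beta = 30.0  # illustrative kernel width parameter, not taken from the PR

def kernel(z):
    # exp-sqrt ("ES") shaped kernel; exponentially small near the endpoints
    return np.exp(beta * (np.sqrt(np.maximum(1.0 - z * z, 0.0)) - 1.0))

def midpoint(f, n):
    # composite midpoint rule with n uniform cells on [-1, 1]
    h = 2.0 / n
    z = -1.0 + (np.arange(n) + 0.5) * h
    return h * np.sum(f(z))

def gauss_legendre(f, n):
    # n-node Gauss-Legendre rule: exact for polynomials up to degree 2n-1
    z, w = np.polynomial.legendre.leggauss(n)
    return np.sum(w * f(z))

I_gl = gauss_legendre(kernel, 100)   # well-converged reference value
```

The contrast the reviewer is drawing: Gauss-Legendre carries a known accuracy guarantee for smooth integrands, whereas a uniform rule with a hard-coded n = 100 carries no such guarantee across kernel parameters.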
    sum *= h;
    sum *= sqrt(1.0 / opts.ES_c);
    T scale = sum;
    if (rank > 1) { scale *= sum; }
rank would need to be documented. This all suggests that this code should reside outside of (cu)FINUFFT, since it's not part of the NUFFT task.
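Reading the quoted snippet literally (and assuming $\psi$ denotes the spread kernel, $h$ the uniform node spacing, $z_i$ the quadrature nodes, and $c$ the parameter `ES_c`), the quantity being computed appears to be

$$
s \;=\; \sqrt{1/c}\; h \sum_{i=1}^{n} \psi(z_i) \;\approx\; \sqrt{1/c} \int \psi(z)\,dz,
\qquad
\mathrm{scale} \;=\;
\begin{cases}
s, & \mathrm{rank} = 1,\\
s^2, & \mathrm{rank} > 1,
\end{cases}
$$

i.e. a per-dimension kernel-integral normalization. Note that, as written, the `rank > 1` branch yields $s^2$ even for rank 3, where $s^3$ would appear to be the natural tensor-product value.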
Dear MRI users, we are glad you find (cu)FINUFFT useful, and want that to remain the case! But we (or at least I) are confused about what additional features are wanted from our library going forward; hence I moved #306 to a Discussion. What you're requesting (spread/interp and deconvolution) is not part of the interface to a NUFFT library, is unstable, and should remain under the hood. (E.g., one would not expect access to the internals of an FFT library.)

Digging into the MRI application, here's why I'm confused by the requests in this PR. In standard iterative recon papers (e.g. Fessler and Noll, ISMRM 2007, or more recent papers of Lustig et al.), the imaging forward model is a 2D type 2 NUFFT, usually given the matrix symbol A. Thus one wants to minimize ||Ax-y|| in some norm, plus some regularization term. In these works, the algorithms all involve applications of A or A^H, i.e. type 2 and type 1 NUFFTs. Those are what you should use FINUFFT for. (I should check that those are in fact what you are using it for - right?) In contrast, density compensation weights appear to affect convergence rates, and are computed in some distinct offline step (which could also happen to involve NUFFTs, e.g. the sinc^2 weights of Greengard, Inati, et al.).

BTW, I should add: if you take the output of the type 1 (i.e., the regular array) and send that to a sized-N FFT (which is very cheap), you get an interpolator from scattered data that makes no reference to window functions or density compensation. Maybe that is what you want?

Sorry about the length of this, but we all have to communicate about it successfully to know how to proceed. In conclusion: if you feel strongly about wanting these features that fall outside of the NUFFT interface, it would be great if you could please give us a literature reference that explains the precise mathematical formula desired, in a similar way to the A matrix in the above iterative MRI papers.
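The "type 1 followed by a sized-N FFT" idea above can be sketched in pure NumPy (naive O(NM) direct sums standing in for the library calls; not the FINUFFT API itself): the type-1 output f_k = sum_j c_j e^{i k x_j} for k = -N/2, ..., N/2-1, re-evaluated on the regular grid t_m = 2*pi*m/N via an FFT, reproduces the direct trigonometric sum, giving a grid interpolant of the scattered data.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32                                # number of Fourier modes / grid size
x = rng.uniform(0, 2 * np.pi, 7)      # scattered sample locations
c = rng.normal(size=7) + 1j * rng.normal(size=7)   # strengths

# Naive type-1 "NUFFT": f_k = sum_j c_j exp(i k x_j), k = -N/2 .. N/2-1
k = np.arange(-N // 2, N // 2)
f = np.array([(c * np.exp(1j * kk * x)).sum() for kk in k])

# Evaluate g(t_m) = sum_k f_k exp(i k t_m) on the grid t_m = 2*pi*m/N:
# once by direct summation, once with a cheap sized-N FFT.
t = 2 * np.pi * np.arange(N) / N
g_direct = np.array([(f * np.exp(1j * k * tt)).sum() for tt in t])

fhat = np.zeros(N, dtype=complex)
fhat[k % N] = f                        # place modes at wrapped FFT indices
g_fft = N * np.fft.ifft(fhat)          # matches the direct sum
```

In practice the direct sums above would be the library's type-1 transform; the cheap part being advertised is only the final FFT.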
Please see the 2.2.0 docs, which now have a simple 1D inverse type 2 tutorial:
Apologies to the users who want the interfaces from spreadinterp.cpp exposed directly; see the discussion for why we're not going to be able to do it. However, it will be easy for you to do in your forks. PS: spreadinterp just got 10-80% faster due to XSIMD vectorization. Best, Alex
This resolves #306.
I also have a bunch of fixes which did not make it through in the earlier refactoring PR.