Densities, random walks & travelling salesman #206
Conversation
@paquiteau I need to add a dependency to pywavelets.
Whoa, there is a lot to process, but this looks awesome. @philouc or @chaithyagr, you might want to take a look ;) For pywavelets, you can add it as an optional dependency (and give a proper error message when it is not found).
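For context, an import guard along those lines could look like this; a minimal sketch, assuming pywavelets stays optional (the helper name and error message are illustrative, not actual mri-nufft code):

```python
def _require_pywavelets():
    """Import pywavelets lazily, failing with an actionable message."""
    try:
        import pywt
    except ImportError as exc:
        raise ImportError(
            "The Chauffert densities require the optional dependency "
            "'pywavelets'. Install it with `pip install pywavelets`."
        ) from exc
    return pywt
```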
@Daval-G did you make sure to define the optimal target sampling density (TSD) for TSP solutions as a function of the dimension of the ambient space? In 2D (d=2), if \pi is supposed to be the optimal TSD, the TSD fed to the TSP should be parameterized as \pi^2, since the TSP itself acts as \pi^{(d-1)/d} [see section 4.3, p. 1976 in Chauffert et al., SIAM IS 2014], whereas in 3D the target TSD should be initialized as \pi^{3/2}.
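Spelled out, the relation referenced here is: drawing the TSP cities i.i.d. from a density q yields an empirical density along the tour proportional to q^{(d-1)/d} [Chauffert et al., SIAM IS 2014, Sec. 4.3], so reaching a target TSD \pi requires drawing the cities from

q \propto \pi^{d/(d-1)},

which gives \pi^2 in 2D (d=2) and \pi^{3/2} in 3D (d=3).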
@philouc Good catch, I overlooked that point. I followed the experiment proposed in the paper and presented in Fig 4.1, i.e. I generated 10,000 trajectories per sampling setup (Nc=10, Ns=100) in different situations and then built a 2D histogram to check whether the target density was actually respected. I based that experiment on a dummy TSD since, according to the paper, the result is not specific to Nicolas's optimal TSD. In the figure below, (a) is the TSD and (b-f) are normalized histograms corresponding to the empirical distributions. Note that:
Overall it seems that the TSD is already respected without additional correction (except for (e), but that was not expected anyway). I am not sure why that is the case here and not in the paper; I am still looking into it.
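For reference, the kind of check described above can be sketched in a few lines (a minimal example, assuming samples is an (N, 2) array of normalized k-space locations in [-0.5, 0.5]^2; names are illustrative):

```python
import numpy as np

def empirical_density_2d(samples, nbins=200):
    """Normalized 2D histogram of k-space samples, to be compared
    against a target sampling density of shape (nbins, nbins)."""
    hist, _, _ = np.histogram2d(
        samples[:, 0], samples[:, 1],
        bins=nbins, range=[[-0.5, 0.5], [-0.5, 0.5]],
    )
    return hist / hist.sum()
```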
On a first pass the code is looking good to me; I will let @chaithyagr or @philouc comment on the maths.
I guess you already plan to add more documentation and examples?
Also, some of the internal maths functions can be quite complex, so if you have some debugging setup that you wish to promote to unit-tests, feel free to do so as well.
@paquiteau Yes, I made a todo list in the original PR post about docstrings, examples & co. I converted the PR to a draft.
You're right @Daval-G, the illustration on a dummy TSD is fine. I was just referring to the mathematical relation allowing us to compute the sampling density along the TSP (points joined by straight lines) vs the original TSD which could be sampled independently. Based on this comment I'm not sure whether TSPx4 shown in panel (f) fulfills the equation, because if you start from the TSD in (a) you should end up with the sqrt of (a) in (f). Fig. 6 in Nicolas's paper illustrates that phenomenon pretty well when contrasting panels (d) and (g).
Starting my review...
return locations


def create_cutoff_decay_density(shape, cutoff, decay, resolution=None):
How about a more generic density, defined in Eq. 10 of https://onlinelibrary.wiley.com/doi/full/10.1002/mrm.29702?
It's a very short update.
Also, having support for enabling or disabling elliptical sampling can be helpful (we plan to disable it soon :P).
I am confused: this density is already based on that equation. What difference do you see?
And what do you mean by elliptical sampling? Anisotropic? Why do you plan to disable it, and from where?
Anisotropic and, more importantly, elliptical sampling: a way to enable or disable the elliptical mask.
Somehow that dodges all of my questions xD Could you reformulate, please?
We need to support both elliptical and non-elliptical densities.
We also need to support anisotropy, but I think we already do?
I had to fix it, but I think it can now handle anisotropy and elliptical densities.
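For illustration, a cutoff/decay density with an optional elliptical support could look roughly like this (a minimal sketch under my reading of the thread; the plateau/decay formula and argument names are assumptions, not the actual create_cutoff_decay_density implementation):

```python
import numpy as np

def cutoff_decay_density_sketch(shape, cutoff, decay, elliptical=True):
    # Normalized k-space coordinates in [-0.5, 0.5) along each axis.
    grids = np.meshgrid(
        *(np.linspace(-0.5, 0.5, s, endpoint=False) for s in shape),
        indexing="ij",
    )
    radius = np.sqrt(sum(g**2 for g in grids))

    # Flat plateau inside the cutoff radius, polynomial decay outside.
    density = np.ones(shape)
    outside = radius > cutoff
    density[outside] = (cutoff / radius[outside]) ** decay

    # Optional elliptical support: zero out the corners beyond radius 0.5.
    if elliptical:
        density[radius > 0.5] = 0

    return density / density.sum()
```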
fourier_vector = nf.ifftn(unit_vector)
coeffs = pw.wavedecn(
    fourier_vector, wavelet=wavelet_basis, level=nb_wavelet_scales
)
You seem to be doing \Psi F^*, but the paper describes F^* \Psi, right?
I based this on the original code from Nicolas:
- https://github.com/philouc/mri_acq_recon_tutorial/blob/master/matlab/TSP_Markov/tools/compute_optimal_distrib.m
- https://github.com/philouc/mri_acq_recon_tutorial/blob/master/matlab/TSP_Markov/tools/perform_wavortho_transfAnis.m
I checked that the code has identical behavior, and it does, except for one major difference: he used hardcoded wavelet banks whereas I use pywavelets instead, and their values seem to differ.
for k, s in enumerate(shape):
    unit_vector = np.zeros(s)
    density_1d = np.zeros(s)

    for i in range(s):
        unit_vector[i] = 1
        fourier_vector = nf.ifft(unit_vector)
        coeffs = pw.wavedec(
            fourier_vector, wavelet=wavelet_basis, level=nb_wavelet_scales
        )
        coeffs, _ = pw.coeffs_to_array(coeffs)
        density_1d[i] = np.max(np.abs(coeffs)) ** 2
        unit_vector[i] = 0
Is it possible to vectorize this? How slow is it right now for 3D?
@paquiteau, maybe numba?
create_fast_chauffert_density here is super fast for any matrix size. It is not the same for create_chauffert_density above. pywavelets allows the vectorization, but for create_chauffert_density we are speaking about an NxN matrix, with N the total number of voxels: we build the linear operator explicitly to find the max L2 norm of each row. There are probably smart ways to save some computations though.
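For completeness, the separable "fast" variant presumably combines the per-axis 1D densities computed above into an ND density; a minimal sketch, assuming an outer-product combination (this is my guess, not necessarily the rule used by create_fast_chauffert_density):

```python
from functools import reduce
import numpy as np

def combine_separable_densities(densities_1d):
    """Combine per-axis 1D densities (one array per dimension) into an
    ND density via an outer product, then renormalize."""
    densities_1d = [d / d.sum() for d in densities_1d]
    density = reduce(np.multiply.outer, densities_1d)
    return density / density.sum()
```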
return nf.ifftshift(density)


def create_fast_chauffert_density(shape, wavelet_basis, nb_wavelet_scales):
To keep in mind: https://arxiv.org/pdf/1910.14513, some other interesting results with Anna.
Is there something specific in the paper you would like to highlight? Should we open an issue to plan an extension in the future?
Also @chaithyagr, I see in the paper figure that there is a TWIRL implementation sleeping somewhere in our archives. Do you know where it is, so it can be added to the package?
Isn't TWIRL just TPI in 2D? It should be fairly straightforward to derive from our code.
Is there something specific in the paper you would like to highlight? Should we open an issue to plan an extension in the future?
I think @philouc can comment better than me. I just felt it would be good to add it to the docs. It has been a while since I went through this work, but from what I remember it is an extension of the earlier work to anisotropic sampling, which can be of interest too. But surely not to be pursued in this work.
I think the key missing ingredient here is this point by Guillaume:
I am assuming this means he is interpolating UNIFORMLY, with 3 extra points in between every 2 points of the trajectory (@Daval-G to correct me).
Actually I interpolate with a cubic spline; it is maybe the best way to account for 1st and 2nd order derivatives and get smooth transitions. It might be the reason.
That's a good point, so I carried out an additional test where I greatly increase the interpolation/oversampling while keeping the same number of TSP points (Ntsp). In the image below, we start with (b) at Nc=20 and Ns=200 but Ntsp=50, which already corresponds to 10% sampling (AF=10) compared to the 200x200 matrix size used for the density. The oversampling from Ntsp to Ns then goes from 4 to 64, which is HUGE. I provide example trajectories below (clustered by bands to accelerate computations). At this point I think we can say that we sampled every point on the path followed by the TSP, and the histogram/observed density is still not affected.
All the trajectories are defined over a normalized space [-0.5, 0.5]^d, so there is necessarily a resolution and/or readout duration for which they are compliant. Also, oversampling with cubic splines makes transitions quite smooth already, given enough oversampling. @chaithyagr At this point I think it is relevant anyway to add an option to account for the
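For reference, the cubic-spline oversampling mentioned above can be sketched with scipy (the function name and uniform parametrization are illustrative assumptions, not the actual package code):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def oversample_shot(points, n_samples):
    """Oversample one shot of control points (Ntsp, d) into (n_samples, d)
    locations with a cubic spline, keeping 1st/2nd derivatives continuous."""
    t = np.linspace(0, 1, len(points))
    spline = CubicSpline(t, points, axis=0)
    return spline(np.linspace(0, 1, n_samples))
```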
@chaithyagr I think the interpolation is indeed the key; I will carry out extra tests this week to check that. Edit: actually no, the results are similar to those in the figure above with/without clustering and with linear or quadratic interpolation.
I'm sorry for the confusion: when I said UNIFORMLY in my statement above, I meant that you added EXACTLY the same number of points in between any 2 points of the trajectory (which also makes sense).
@chaithyagr That was it, great job. The paper considers binary masks instead of continuous spaces, so the number of points within a k-space voxel doesn't matter as long as there is one or more, which affects the density in the way described by Nicolas. Below is an experiment showing that, with empirical densities obtained either from continuous spaces put into 2D histograms or from plain binary masks as presented. The density with binary masks is attenuated, and it is indeed fixed when sampling over \pi^{d/(d-1)}. That being said, using binary masks for k-space sampling is quite specific: it means one wants to undersample retrospectively without using NUFFT operators. That is quite the opposite of the goal of this package, and cannot be used for real-life acquisitions. I would advocate providing the correction as an option, with False as the default.
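For illustration, the opt-in correction discussed here boils down to re-exponentiating the target density before drawing the TSP cities; a minimal sketch, with a hypothetical flag name (not the actual API):

```python
import numpy as np

def prepare_tsp_density(density, dim, correct_for_binary_mask=False):
    """Return the density to draw TSP cities from.

    With `correct_for_binary_mask=True`, the target is raised to the power
    d/(d-1) so that the *binary mask* derived from the tour follows the
    original target; by default the target is used as-is (continuous case).
    """
    if correct_for_binary_mask:
        density = density ** (dim / (dim - 1))
    return density / density.sum()
```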
Great job on the experiment! Yes, I agree with you on the default as False.
Very nice job @Daval-G! Your illustration in 2D is clear and very convincing. Additionally, the suggestion about deactivating the density correction by default seems reasonable, as the main purpose of the MRI-NUFFT package is to implement undersampling over a continuous k-space domain.
Very sorry for the late feedback, I'm travelling every week!
I will open an issue later to add
@paquiteau I need to add dependencies to
Indeed, the dependency management in MRI-NUFFT could be improved. Then I think we could recommend people to
Maybe
the CI should simply install
@chaithyagr @paquiteau CI updated & passed. Let me know if it's good to merge; I am already planning the next PRs.
LGTM! Thanks once again for this significant contribution :)
@paquiteau The merge is blocked until you and Chaithya click approve :)
Given the large amount of docs and also an example, can we run the docs build in CI? You can get a run by adding
Otherwise this PR LGTM.
@chaithyagr Done
This PR adds densities, sampling, and trajectories based on them (random walks and travelling salesman). In particular, it reproduces 3 papers from Nicolas Chauffert (a 4th coming "soon"). The features are shown in the images below; in detail:
- maths.py is split into a submodule as the file became too big
- an inits submodule stores the trajectory initializations, because trajectory2D.py and trajectory3D.py are also starting to be too big

TODO:
- pywavelets dependency

Density features:

Trajectory features: