
warping landmarks using the output deformation field #2

Closed · Borda opened this issue Oct 7, 2019 · 13 comments

@Borda (Contributor) commented Oct 7, 2019

I would like to continue our discussion here, as I think it may also be useful for other users of your SW. For context, we are using DROP2 as one of the SOTA methods in BIRL for image registration, participating in the ANHIR challenge.

I will share with you two pairs of images:

  1. overlap of target and warped moving image
  2. visualization of the registration error; in this case, target landmarks, warped target landmarks (target landmarks plus the shift extracted from the deformation), and moving landmarks. The expectation is that the warped target landmarks should match the moving landmarks (the red line represents the registration error)

Some more details and assumptions:

  • I ran the registration on a tiny dataset consisting of three synthetic and two real image pairs with increasing deformation complexity
  • the registration used just an affine transformation: -l --ltype 0 --lsim 1 --llevels 4 4 4 --lsampling 0.2
  • an affine transformation should be invertible, so I use the deformation field to warp the target landmarks - if the registration is perfect, the warped target landmarks should match the sensed (moving) landmarks, right?
  • I take the landmarks (x, y) from the target frame and, to each of them, add the output deformation sampled at the landmark's position - is that correct?
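
In code, what I do is roughly the following (a minimal numpy sketch of my own; the (H, W) array layout of the deformation fields is an assumption):

```python
import numpy as np

def warp_landmarks(landmarks, deform_x, deform_y):
    """Add the displacement sampled at each landmark position.

    landmarks: (N, 2) array of (x, y) pixel coordinates in the target frame.
    deform_x, deform_y: per-pixel displacement fields of shape (H, W).
    """
    lnd = np.asarray(landmarks, dtype=float)
    xs = np.round(lnd[:, 0]).astype(int)
    ys = np.round(lnd[:, 1]).astype(int)
    # note: the row index is y and the column index is x
    shift = np.stack([deform_x[ys, xs], deform_y[ys, xs]], axis=1)
    return lnd + shift

# toy check: a uniform +5 px shift in x moves every landmark by (5, 0)
dx = np.full((10, 10), 5.0)
dy = np.zeros((10, 10))
warped = warp_landmarks([[2, 3], [7, 1]], dx, dy)  # -> [[7, 3], [12, 1]]
```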

For the simple cases with a translation or affine transformation it looks fine, with just a tiny error:
image_refence-warped
registration_visual_landmarks

but for the real images it is a different story: 1] the registration itself seems quite fine (see the alignment of the holes in the top-left and bottom-right corners), but 2] shows a quite large error
image_refence-warped
registration_visual_landmarks

Can the deformation be affected by different image sizes?

  • the first case has equal image sizes - 800x600px
  • the second case has - target: 890x733px ; source: 891x735px
@bglocker (Member) commented Oct 7, 2019

I think the problem is that you cannot really 'transfer' the displacements that are defined on the target image to the source image. In order to get a correct visualisation of displacement vectors of source landmarks visualised on the source image, you will need to swap the role of the images during registration (use your current source as target, and vice versa). Can you recreate the last image for this?

@Borda (Contributor, Author) commented Oct 7, 2019

Ok, I will do it. Is the registration deterministic and symmetric, meaning that if source -> target reaches the optimum, then target -> source should too?

@bglocker (Member) commented Oct 7, 2019

In an ideal (continuous) world they would be symmetric, but they won't be because of the discrete nature of the setting. The reference space is different for the two images, and inv(T_src_to_trg) != T_trg_to_src. Because of the way FFDs work (a uniform grid overlaid on the reference target frame), the deformations can in fact be significantly different between A->B and B->A.

Having said this, there are specific methods incorporating 'inverse consistency', going back to early work by Christensen & Johnson: Consistent image registration. IEEE TMI. 2001 Jul;20(7):568-82.
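
A tiny 1-D numpy sketch (illustration only, not drop2 code) of why simply negating a displacement field does not invert it: the naive inverse subtracts u evaluated at the displaced point rather than the original one, so any spatially varying field leaves a residual.

```python
import numpy as np

# forward map T(x) = x + u(x); the naive "inverse" subtracts u at T(x),
# which equals u(x) only for a constant field, so a residual remains
n = 100
x = np.arange(n, dtype=float)
u = 3.0 * np.sin(2 * np.pi * x / n)      # smoothly varying displacement
x_fwd = x + u                            # apply T
x_back = x_fwd - np.interp(x_fwd, x, u)  # apply the naive inverse
residual = np.abs(x_back - x).max()      # nonzero inverse-consistency error
```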

@Borda (Contributor, Author) commented Oct 24, 2019

just a quick question:

  • if I do not pass the --ocompose option, it does not generate any transformation field
  • also (inversely), if I do pass the --onoimage option, it does not give me the warped image
 --onoimage      OUTPUT: no warped image output (only transformation)
 --ocompose      OUTPUT: composed transformation field

Btw, would you consider renaming the options? Until a user sees the help, the n... intuitively suggests a number or non-linear; it would be clearer to have out_... (like out_noimage), linear_... (like linear_sampling), etc.

@bglocker (Member) commented Oct 25, 2019

If you run linear registration without --ocompose you don't get a transformation field, that's correct, as the displacement field without composing the affine transformation would be all zeros. So if you want the full displacement field for the affine transformation, you need to add --ocompose.

If non-linear registration is enabled, you will get the full field (including the affine transformation) when adding --ocompose. If you run non-linear registration without that option, you get only the displacement field of the non-linear part of the registration (this is often preferred where an analysis of the non-linear deformation is required without the affine part).
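
To illustrate the composition, here is a sketch under an assumed convention (my own illustration, not the actual drop2 code): the composed field stores, for every pixel x, the total displacement A(x + u(x)) - x, with the affine map applied after the non-linear part.

```python
import numpy as np

def compose_affine_with_field(A, t, ux, uy):
    # A: 2x2 affine matrix, t: translation; ux, uy: non-linear displacement
    # fields of shape (H, W); returns the composed per-pixel displacements
    h, w = ux.shape
    gy, gx = np.mgrid[0:h, 0:w].astype(float)
    px, py = gx + ux, gy + uy                # non-linear part first
    cx = A[0, 0] * px + A[0, 1] * py + t[0]  # then the affine map
    cy = A[1, 0] * px + A[1, 1] * py + t[1]
    return cx - gx, cy - gy                  # total displacement per pixel

# with zero non-linear fields and an identity A, a pure translation t shows
# up as a constant composed field -- without composing it would be all zeros
ux = np.zeros((4, 4))
uy = np.zeros((4, 4))
dx, dy = compose_affine_with_field(np.eye(2), (2.0, 0.0), ux, uy)
```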

I see your point about renaming the options. I will consider it.

@Borda Borda closed this as completed Oct 25, 2019
@Borda (Contributor, Author) commented Oct 27, 2019

I have been playing with the landmark warping and still fall into the same trap: for some images it is fine (the visualisation correlates with the expectation), but for some it is bad... Typically the synthetic images are fine, but the real ones are failing...
I found some "inconsistency" in the image warping/deformation field (or there is some mistake in my thinking). I have created a simple script which should warp the image according to the generated deformation field and compare the result with the warped image from DROP.

import os

import numpy as np
import nibabel
import matplotlib.pylab as plt
import skimage.color as sk_color
from scipy.interpolate import griddata

REAL = True

def compose_images(img1, img2):
    dims = np.max([img1.shape, img2.shape], axis=0)
    img = np.zeros(tuple(dims[:2]) + (3, ))
    for i, img_ in enumerate([img1, img2]):
        img[:img_.shape[0], :img_.shape[1], i] = img_[:img_.shape[0], :img_.shape[1], ...]
    return img

if REAL:
    PATH_SOURCE = os.path.expanduser('~/Dropbox/Workspace/BIRL/data_images/lesions_/scale-5pc')
    PATH_IMG_SRC = os.path.join(PATH_SOURCE, 'Izd2-29-041-w35_HE.jpg')
    PATH_IMG_REF = os.path.join(PATH_SOURCE, 'Izd2-29-041-w35_proSPC.jpg')
    PATH_REGIST = os.path.expanduser('~/Desktop/BmDROP2_20191027-180923/4')
else:
    PATH_SOURCE = os.path.expanduser('~/Dropbox/Workspace/BIRL/data_images/images')
    PATH_IMG_SRC = os.path.join(PATH_SOURCE, 'artificial_reference.jpg')
    # PATH_IMG_REF = os.path.join(PATH_SOURCE, 'artificial_moving-affine.jpg')
    PATH_IMG_REF = os.path.join(PATH_SOURCE, 'artificial_moving-elastic.jpg')
    PATH_REGIST = os.path.expanduser('~/Desktop/BmDROP2_20191027-180923/2')

PATH_IMG_WARP = os.path.join(PATH_REGIST, 'output.jpeg')
PATH_DEF_X = os.path.join(PATH_REGIST, 'output_field_x.nii.gz')
PATH_DEF_Y = os.path.join(PATH_REGIST, 'output_field_y.nii.gz')

img_src = sk_color.rgb2grey(plt.imread(PATH_IMG_SRC))
img_ref = sk_color.rgb2grey(plt.imread(PATH_IMG_REF))
img_warp = plt.imread(PATH_IMG_WARP) / 255.

fig, axarr = plt.subplots(nrows=2, ncols=2, figsize=(12, 12))
axarr[0, 0].set_title("Overlap: target & source image")
axarr[0, 0].imshow(compose_images(img_ref, img_src))
axarr[0, 1].set_title("Overlap: target & warped (DROP) image")
axarr[0, 1].imshow(compose_images(img_ref, img_warp))

h, w = img_src.shape[:2]
deform_x = nibabel.load(PATH_DEF_X).get_data()[:w, :h, 0].T
deform_y = nibabel.load(PATH_DEF_Y).get_data()[:w, :h, 0].T
grid_y, grid_x = np.mgrid[0:deform_x.shape[0], 0:deform_x.shape[1]]

img_warp2 = griddata(((grid_x - deform_x).ravel(), (grid_y - deform_y).ravel()), img_src.ravel(),
                     (grid_x, grid_y), method='linear')
axarr[1, 0].set_title("Overlap: target & warped (local) image")
axarr[1, 0].imshow(compose_images(img_ref, img_warp2))
axarr[1, 1].set_title("Overlap: DROP & local warped image")
axarr[1, 1].imshow(compose_images(img_warp, img_warp2))

fig.tight_layout()
plt.show()

The synthetic example seems to be well aligned
a1

But the real image is a very different story
a2

PS: the images used for the registration are here, and the configuration was:

-l --ltype 0 --lsim 1 --llevels 1 1 1 --lsampling 0.5 --ocompose

@Borda Borda reopened this Oct 27, 2019
@bglocker (Member)

From a quick look at this, I think the x and y displacements are possibly being swapped somewhere in the process. I need to have a more detailed look to find out where. I suspect that the transpose in nibabel.load(PATH_DEF_X).get_data()[:w, :h, 0].T has something to do with it.

The DROP-warped image and your own warped image should really be identical, but I can see differences even for the synthetic images.

@Borda (Contributor, Author) commented Oct 28, 2019

The transpose is there because the JPEG image has shape 800x600 but the deformation field is 600x800. We discussed this 90-degree rotation before, and then you added JPEG/PNG as a supported output format...
So you are saying the problem is either in DROP or in my image warping? (It took me some time wondering what I was doing wrong that made the warped images not match.)

@bglocker (Member)

Got it. The real images have a pixel spacing of ~0.3, and the displacements are stored in the physical units of the images, so they need to be rescaled to map them to pixels.

There were a few other issues in the way griddata works: the first argument, points, takes the original coordinates, and the argument xi takes the displaced points.

If you replace the last bit of your test code with the following, it should work. Note that I'm using SimpleITK to read the .nii.gz, but I think you can do similar with nibabel.

import SimpleITK as sitk
from skimage.transform import resize

img_src_resized = resize(img_src, img_ref.shape)
img_dx = sitk.ReadImage(PATH_DEF_X)
img_dy = sitk.ReadImage(PATH_DEF_Y)
spacing = img_dx.GetSpacing()
deform_x = sitk.GetArrayFromImage(img_dx)[0] / spacing[0]
deform_y = sitk.GetArrayFromImage(img_dy)[0] / spacing[1]
grid_y, grid_x = np.mgrid[0:deform_x.shape[0], 0:deform_x.shape[1]]

img_warp2 = griddata((grid_y.ravel(), grid_x.ravel()), img_src_resized.ravel(),
                     (grid_y + deform_y, grid_x + deform_x), method='linear')
axarr[1, 0].set_title("Overlap: target & warped (local) image")
axarr[1, 0].imshow(compose_images(img_ref, img_warp2))
axarr[1, 1].set_title("Overlap: DROP & local warped image")
axarr[1, 1].imshow(compose_images(img_warp, img_warp2))

fig.tight_layout()
plt.show()
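
Since the sample points here lie on a regular grid, the same backward warp can also be sketched with scipy.ndimage.map_coordinates, which is typically much faster than griddata on large images (my own note; the NaN fill for out-of-range pixels is a choice):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_backward(img, deform_y, deform_x):
    # sample img at (y + dy, x + dx) for every output pixel via fast
    # regular-grid bilinear interpolation; out-of-range pixels become NaN
    grid_y, grid_x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    coords = np.stack([grid_y + deform_y, grid_x + deform_x])
    return map_coordinates(img, coords, order=1, mode='constant', cval=np.nan)

# a constant +1 shift in x: output[y, x] = img[y, x + 1]
img = np.arange(25, dtype=float).reshape(5, 5)
out = warp_backward(img, np.zeros((5, 5)), np.ones((5, 5)))
```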

@Borda (Contributor, Author) commented Oct 29, 2019

cool, that seems to help... :)
registration_visual_landmarks

@Borda Borda closed this as completed Oct 29, 2019
@asmagen commented Mar 27, 2020

Hello @Borda & @bglocker,
This issue and these transformations seem very relevant to what I am trying to do: given the grayscale output from drop2, apply the transformation to the original RGB image to get the transformed version in RGB. What do I need to use from the discussion above to do that?
These are the relevant stains (CD20 aligned to CD3, output in CD20_to_CD3_test):
Screen Shot 2020-03-27 at 12 14 45 PM

@Borda (Contributor, Author) commented Mar 27, 2020

I would try to run the BIRL demo for DROP2.

@asmagen commented Mar 28, 2020

@bglocker I was wondering how long

img_warp2 = griddata((grid_y.ravel(), grid_x.ravel()), img_src_resized.ravel(), (grid_y + deform_y, grid_x + deform_x), method='linear')

as used above takes to execute. I am running it on a relatively small image (4302x2846 pixels) and it hasn't finished after at least 10 minutes. My target images are 5-10 times as big -- would it scale up, or is there an alternative?
Also, I got the CD20_to_CD3_test_field_z.nii.gz file even though I used mode2d in the registration. Is it okay to ignore this file when reproducing the above on my own set of images?

Thanks
