Cropping of the dataset #7
Hi, thanks for your interest in our work! The image size used in the TransMorph paper is larger than ours (160/192/224 vs. 144/192/160), resulting in larger GPU memory consumption. If you focus only on deformable registration of brain images, 144/192/160 is large enough to include the whole brain, as all brain images have been affine-registered to the MNI-152 brain template (i.e., they are located in the same position and all brains have a similar size smaller than 144/192/160). TransMorph used a larger image size possibly because it also aimed to solve affine registration within the network, which means the input image space should be larger to allow for global movements of brains. In any case, neither TransMorph nor CorrMLP is bound to a specific image size, so our image size can be used in TransMorph (this is what we did in our experiments). TransMorph's image size could also be used in CorrMLP, given a larger GPU or appropriately adjusted feature channel numbers (enc_channels and dec_channels).
Thank you very much for your reply! I have another question. When I crop the images, can I directly crop them from the size used by TransMorph (160, 192, 224) to (144, 192, 160)? Will this have any effect? Will reducing enc_channels and dec_channels harm the experimental results? Thank you again for taking the time to answer my questions!
You need to first check whether the images in size (160, 192, 224) are affine-registered. If they have been registered into the same position (e.g., registered with an atlas), you can directly crop them into (144, 192, 160) by choosing a fixed cropping position, i.e., [x1:x2, y1:y2, z1:z2]. This usually will not incur negative effects, as deformable registration only causes local movement of image pixels and (144, 192, 160) is large enough to include the whole brain and its local deformations.
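The fixed-position cropping described above can be sketched as plain array slicing. This is a minimal illustration, not the repository's code; the function name `crop_volume` and the starting offsets `(8, 0, 32)` are hypothetical examples and must be chosen for your own dataset so the whole brain is retained:

```python
import numpy as np

def crop_volume(img, start, size):
    """Crop a 3D volume at a fixed starting position [x1:x2, y1:y2, z1:z2].

    `start` must be validated per dataset (e.g. by checking that the
    brain lies fully inside the cropped window).
    """
    x1, y1, z1 = start
    dx, dy, dz = size
    return img[x1:x1 + dx, y1:y1 + dy, z1:z1 + dz]

# e.g. cropping a (160, 192, 224) volume down to (144, 192, 160)
vol = np.zeros((160, 192, 224), dtype=np.float32)
cropped = crop_volume(vol, start=(8, 0, 32), size=(144, 192, 160))
print(cropped.shape)  # (144, 192, 160)
```

Because all images are affine-registered to the same position, the same `start` can be reused for every image in the dataset.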
This will reduce the overall parameter number/model size and will slightly degrade the performance. Even so, the degraded performance should still be better than TransMorph, because the improvements of CorrMLP are not attributed to using more parameters.
Thank you very much for your reply!
Excuse me again, could you please give me the starting point for cropping the image? I would be grateful!
These starting points differ across datasets and pre-affine registration settings. Think about it: if the images were affine-registered to different positions, the starting points would accordingly differ. You should set the starting points for your dataset manually, e.g., by visual inspection.
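As a rough automated alternative to visual inspection (this is my own sketch, not the authors' procedure), one can locate the bounding box of the nonzero brain region and center the crop window on it; the function name `suggest_crop_start` and the intensity threshold are assumptions:

```python
import numpy as np

def suggest_crop_start(img, size, threshold=0.0):
    """Suggest a crop start that centers the above-threshold (brain) region.

    For each axis: project the foreground mask, take the midpoint of its
    bounding box, and clamp the window so it stays inside the volume.
    """
    mask = img > threshold
    starts = []
    for axis, target in enumerate(size):
        other_axes = tuple(i for i in range(3) if i != axis)
        proj = np.any(mask, axis=other_axes)
        idx = np.where(proj)[0]
        center = (idx[0] + idx[-1]) // 2 if idx.size else img.shape[axis] // 2
        start = int(np.clip(center - target // 2, 0, img.shape[axis] - target))
        starts.append(start)
    return tuple(starts)
```

Since all affine-registered images share the same position, computing this on one representative image (and eye-checking the result) should suffice for the whole dataset.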
I modified the size directly to (160, 192, 224), but perhaps this size is not suitable for finding a good starting point because it has already been cropped. So I was wondering whether I should find the cropping starting point in the original image size, or use more GPU memory to train on images of size (160, 192, 224). I am confused; can you give me some advice?
I think it doesn't matter whether you find the starting points from the cropped images (160, 192, 224) or from the original images. If all images have been affine-registered (to the same position), you can find appropriate starting points for cropping. As cropping just removes some black background and does not change the image resolution or content, you can always crop already-cropped images again.
Thank you for your answer!
For the ACDC dataset, the third dimension is less than 32; how should this be handled?
Before cropping, the images in the ACDC dataset were first resampled to a voxel spacing of 1.5 × 1.5 × 3.15 mm.
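Resampling to a fixed voxel spacing can be sketched as follows. This is an illustrative approximation using `scipy.ndimage.zoom`; the authors' actual preprocessing may use a different library or interpolation order:

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_spacing(img, orig_spacing, new_spacing=(1.5, 1.5, 3.15)):
    """Resample a 3D volume to a target voxel spacing (in mm).

    The zoom factor per axis is original spacing / target spacing,
    so coarser target spacing shrinks that axis and vice versa.
    """
    factors = [o / n for o, n in zip(orig_spacing, new_spacing)]
    return zoom(img, factors, order=1)  # linear interpolation
```

For segmentation label maps, nearest-neighbor interpolation (`order=0`) should be used instead, so label values are not blended.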
Some data remain smaller than 32 after resampling, such as patient 36.
In this case, just pad it with zeros.
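Zero-padding an undersized volume up to a minimum shape can be sketched with `np.pad`. The function name `pad_to_min_size` and symmetric padding are my assumptions; the point is simply that short axes (e.g. fewer than 32 slices) are padded with zeros rather than stretched:

```python
import numpy as np

def pad_to_min_size(img, min_size):
    """Zero-pad a volume symmetrically so each axis reaches min_size.

    Axes already at or above the minimum are left untouched.
    """
    pads = []
    for dim, target in zip(img.shape, min_size):
        extra = max(target - dim, 0)
        pads.append((extra // 2, extra - extra // 2))
    return np.pad(img, pads, mode="constant", constant_values=0)

# e.g. a scan with only 28 slices padded up to 32
padded = pad_to_min_size(np.ones((128, 128, 28)), (128, 128, 32))
print(padded.shape)  # (128, 128, 32)
```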
I have another question. I can't reproduce the 0.810 accuracy. Is it because I didn't use intra-patient images for training and only used the ES and ED frames directly? Does this training strategy have a significant impact?
If you only used ES and ED frames, there were only 200 image pairs for training -- too few. However, I don't know what DSC you obtained. If the gap is too large, there might be other unidentified problems.
My current best DSC is about 0.805. Is this normal?
This gap is minor and reasonable. You can try our training strategy, which might help you improve the results, possibly even beyond ours.
Thank you very much for your answer. I hope to see more of your published papers in the future.
Sorry to bother you, but during my training I only used the ES and ED frames. The performance on the validation set reached 0.805, but on the test set it only reached 0.76. Is this situation reasonable? Could it be due to issues with how I processed the dataset?
Hello, if we use TransMorph's dataset size, will training your model report an out-of-memory error? Is cropping the dataset to the dimensions in your paper applicable only to your method, or to TransMorph as well?