Where could I get the database and testInfant_list.txt? #1
Comments
Hi, thank you for your interest. I do not own the dataset, so I cannot publish the database right now. I'll try to upload some HDF5 files after the MICCAI deadline. Yes, testInfant_list.txt is the generated text file which contains the paths to the HDF5 files.
Thanks. I think I will use the IBSR dataset at https://www.nitrc.org/frs/?group_id=48
For step 1, what should I do to generate the HDF5 files (or which files should I run)? Assume that my input is
Yes, what you said is right. One more thing: pay attention to the patch size; the data patch must be the same size as the label patch. You can use https://github.com/ginobilinie/infantSeg/blob/master/readMedImg4CaffeCropNie4SingleS.py. I hope it helps.
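To make that step concrete, here is a minimal sketch of the kind of paired patch extraction and HDF5 writing described above. It is not the repository's readMedImg4CaffeCropNie4SingleS.py; the file names, dataset names (data, label), patch size, and step are assumptions and must match whatever your train prototxt expects.

```python
# Sketch: extract matching 32x32x32 data/label patches from a NIfTI pair and
# write them to an HDF5 file readable by Caffe's HDF5Data layer.
import h5py
import nibabel as nib
import numpy as np

patch, step = 32, 16  # illustrative values
img = nib.load('IBSR_01_ana_strip.nii.gz').get_fdata().astype(np.float32)
lab = nib.load('IBSR_01_segTRI_ana.nii.gz').get_fdata().astype(np.float32)

data_patches, label_patches = [], []
for x in range(0, img.shape[0] - patch + 1, step):
    for y in range(0, img.shape[1] - patch + 1, step):
        for z in range(0, img.shape[2] - patch + 1, step):
            # data and label patches must be cropped with identical coordinates
            data_patches.append(img[x:x+patch, y:y+patch, z:z+patch])
            label_patches.append(lab[x:x+patch, y:y+patch, z:z+patch])

# Caffe's 3D layout: n_sample x channel x depth x height x width
data = np.asarray(data_patches)[:, np.newaxis, ...]
label = np.asarray(label_patches)[:, np.newaxis, ...]

with h5py.File('train_patches.h5', 'w') as f:
    f.create_dataset('data', data=data)
    f.create_dataset('label', data=label)

# The list file referenced by the HDF5Data layer just lists the .h5 paths.
with open('trainInfant_list.txt', 'w') as f:
    f.write('train_patches.h5\n')
```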
Thanks for your information. Now I have copied all the nii files (raw + label) into the same folder.
I also modified the
I got an error about the index with ./Raw_Label_Images/IBSR_01_ana_strip.nii.gz ./Raw_Label_Images/IBSR_01_segTRI_ana.nii.gz
How should I fix it? In addition, I have three questions:
For this error, it is because you extracted more than 10000 patches; I assumed the patch number for a subject is less than 10000:
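The code the author posted at this point was not captured in the transcript. As a purely hypothetical illustration of how a per-subject limit of 10000 patches can produce an index error (the indexing scheme and names below are assumptions, not the repository's code):

```python
# Hypothetical sketch: if patches are keyed as subject_id * 10000 + patch_id,
# a subject with 10000 or more patches spills into the next subject's range.
MAX_PATCHES_PER_SUBJECT = 10000  # the assumption stated in the comment above

def patch_key(subject_id, patch_id, base=MAX_PATCHES_PER_SUBJECT):
    # Fail loudly instead of silently colliding with another subject's patches.
    if patch_id >= base:
        raise IndexError('subject %d produced more than %d patches; '
                         'increase the assumed maximum' % (subject_id, base))
    return subject_id * base + patch_id
```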
Grateful for your help. I can run it now. This is the error when using the 3D Caffe of V-Net (not fixed):
This is the error when I used DL_BigNeuron (built without USE_CUDNN, because I am using CUDA 8.0 and cuDNN 5.1). Hence, I changed
Finally, this is the error when I use BVLC Caffe and U-Net:
For the above error, we can remove lines 140 and 141 in
Hence, I think the Caffe version is very important. Could you please specify it in the README? It must support 3D operations.
The error is a size mismatch between
Thank you so much. You have really provided a lot of details which I hadn't paid much attention to. I'll update the README soon.
Thanks again.
You are welcome. In addition, the top of the softmax layer must be softmax, instead of prob as in your deploy.prototxt, right?
Of course, the deploy prototxt should add a softmax layer if you need to combine them. Since my previously uploaded one is not only for segmentation but also for regression, my uploaded deploy prototxt looks like that. Actually, I think the BVLC version supports xavier. For the long testing time issue, you can adjust the step size (it is 1,1,1 by default; you can make it 8,8,8).
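As a rough sketch of why a larger step size shortens testing, assuming a 32^3 patch slid over a 256x128x256 volume (sizes taken from this thread; the counting itself is only illustrative):

```python
# Count sliding-window positions for a 32^3 patch over a 256x128x256 volume.
# Step (1,1,1) evaluates the network at almost every position; (8,8,8) needs
# far fewer forward passes, which is why testing becomes much faster.
import numpy as np

patch, shape = 32, (256, 128, 256)

def num_windows(step):
    return int(np.prod([(s - patch) // step + 1 for s in shape]))

print(num_windows(1), num_windows(8))  # positions at step 1 vs. step 8
```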
It seems that the output dimensions are not consistent with the ground truth. I guess it's a problem with the input dimension order. Can you adjust the input dimension order? In my experience, even if the result is not good, the dimensions should still match.
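If the dimension order is the culprit, a plain NumPy transpose before (or after) inference is usually enough. A minimal sketch, assuming the volume was stored as (height, width, depth) but the network expects (depth, height, width); the actual target order depends on how your HDF5 was generated:

```python
# Reorder the axes of a 3D volume; the target order (2, 0, 1) is only an example.
import numpy as np

vol = np.random.rand(256, 128, 256).astype(np.float32)  # placeholder volume
reordered = np.transpose(vol, (2, 0, 1))
print(vol.shape, '->', reordered.shape)  # (256, 128, 256) -> (256, 256, 128)
```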
I am using MIPAV to show the result. It looks inconsistent between the ground truth and the output, but I checked the output dimensions and they are the same as the ground truth (still 256x128x256). I also checked the output values: they are not label values but float values. I think that is the reason.
I see. You should round the values to integers; you can use np.rint(x).
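For example:

```python
# Round float network outputs to integer label values, as suggested above.
import numpy as np

float_output = np.array([0.1, 0.9, 1.7, 2.4], dtype=np.float32)
labels = np.rint(float_output).astype(np.int16)  # -> [0, 1, 2, 2]
```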
Thanks, I added the code in the function
Although the output now contains label values, the result is still very bad. This is the configuration I used:
Is anything wrong in my procedure?
What's your test loss at the last iteration? The result is worse than my worst case.
Sorry for the wait. I tried to run it again. This is my log. I noticed that the .caffemodel is just 1.1 MB (very small).
I suggest you train for more iterations. And your loss should preferably be smaller than 0.1.
Thank you for your support. I will run more iterations and let you know the result.
where
If you use the 3D format, it is in this manner: n_sample x 1 x 32 x 32 x 32.
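A quick way to check that a generated HDF5 file really stores patches in that 5D layout (the file name train_patches.h5 is only an example):

```python
# Print each dataset's shape; for 32^3 patches the expected shape is
# (n_sample, 1, 32, 32, 32).
import h5py

with h5py.File('train_patches.h5', 'r') as f:
    for name, dset in f.items():
        print(name, dset.shape)
```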
Hello, after testing your code I got the following results (output, ground truth, and training loss). I cannot achieve the expected result, although I trained for 100,000 iterations. After running many versions of Caffe, I can guess some reasons:
Thank you so much.
For 1 and 2, you'd better use ND convolution; PR #3983 is okay, and I think the code from U-Net is also fine. Actually, the ND convolution is supported by cuDNN rather than such code, so you have to install cuDNN. For 3, you have to adjust the code yourself; I actually use normalization when generating the HDF5 files and during evaluation, so you have to normalize the data in readMedImg4CaffeCropNie4SingleS.py yourself.
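A minimal sketch of such a normalization step, assuming zero mean and unit standard deviation computed over the nonzero (brain) voxels; the exact scheme used in readMedImg4CaffeCropNie4SingleS.py may differ:

```python
# Normalize a volume to zero mean and unit standard deviation over nonzero voxels.
import numpy as np

def normalize_volume(vol):
    vol = vol.astype(np.float32)
    mask = vol > 0
    mean, std = vol[mask].mean(), vol[mask].std()
    vol[mask] = (vol[mask] - mean) / (std + 1e-8)
    return vol
```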
@John1231983 I read your result; it is bad. Do you really use 32x32x32 as input? And do you use 3x3x3 filters and 4x4x4 deconvolution filters? Even with very simple training, the result should be much better.
@John1231983 You don't necessarily have to follow everything as I published in this GitHub repo. MSRA initialization is actually better than xavier; you can use that.
@John1231983 Can you please give me an email address? I can send you code and demos for training and testing.
My email is [email protected]
Thank you for sharing your code.
I will download it, but I would like to ask you something about the database and the txt files.
In your code, you used training data (raw + ground truth). In your paper, you said it was done by manual segmentation from IBEAT. Is it possible to publish the database? Or how can I register for it?
If neither is possible, could you create HDF5 files and share them with us? I think HDF5 files would be fine.
Finally, I saw some txt files such as testInfant_list.txt. I think they contain the paths to the dataset. Could you also provide them for us?
Thank you so much.