increase descriptions of input files for us lay people #8
I would like to know how I can structure my own datasets, too. |
An example of a training entry pairs an input image with its ground-truth mask image. Hope this helps! |
OK. I think I understand a bit more now, and will elaborate here so that when I forget, I can just come back :).
So, if one wants to train KittiSeg on their own data, they must create the image masks on their own images, and remove the references to the road data. Does this sound about right? |
That sounds about right. I would usually call the second image the ground-truth (gt) image. |
I have created inputs.md, which is supposed to describe the input format in detail. Feel free to modify this file or add details if you feel this is needed for better understandability. As the author it is sometimes hard to see which aspects are obvious ;). |
Providing inputs.md should also fix this issue :).
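To make the listing format discussed above concrete, here is a minimal sketch of how one might generate a train.txt-style file for a custom dataset, assuming each line pairs an image path with the path of its ground-truth mask. The directory names, the shared filename convention and the .png suffix are assumptions for illustration, not part of KittiSeg itself.

```python
# Hedged sketch: write a train.txt-style listing that pairs each training
# image with its ground-truth mask. Folder names and the ".png" suffix are
# assumptions; adapt them to your own layout.
import os

image_dir = "my_data/images"   # hypothetical folder with input images
mask_dir = "my_data/masks"     # hypothetical folder with ground-truth masks

with open("my_data/train.txt", "w") as f:
    for name in sorted(os.listdir(image_dir)):
        if not name.endswith(".png"):
            continue
        image_path = os.path.join(image_dir, name)
        mask_path = os.path.join(mask_dir, name)  # assumes mask shares the filename
        f.write("{} {}\n".format(image_path, mask_path))
```

A val.txt can be produced the same way from a held-out subset of the pairs.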
Hello Marvin,
Thanks for the explanation on inputs.md!
Fortunately, my classification problem is a binary one, so I think I can take
advantage of the "Easy way".
Besides the two things you mentioned in the article, is there anything else I
have to modify, or any code to fix?
I just wonder if I can use the same files, like kitti_seg_input.py and
kitti_eval.py, for my problem.
I am a kind of novice, so please bear with me.
Thanks in advance, Marvin.
Andy
|
Hi kyoseoksong, are you doing classification or segmentation? The code is built for segmentation, which is basically "classification per pixel". If you are solving segmentation, you can use the code as given, including kitti_seg_input.py and kitti_eval.py. Just make sure that you set road_color and background_color correctly.
If you are actually doing classification, you need to use an input producer and evaluation module for classification. You can find examples for both in my KittiClass project. |
I am doing segmentation, Marvin.
Maybe I should just modify the RGB to black and white according to my ground truth images. I have lung CT scan images and masks. Now I think I grasp the concept. Thanks for your reply; I will try this today and get back to you, I expect, with a good result. I hope many people get to know and use your code because it's so cool.
Thanks,
Kyoseok
On Feb 17, 2017, at 8:56 PM, Marvin Teichmann <[email protected]> wrote:
Hi kyoseoksong,
are you doing classification or segmentation? The code is built for segmentation, which is basically "classification per pixel". If you are solving segmentation, you can use the code as given, including kitti_seg_input.py and kitti_eval.py. Just make sure that you set road_color and background_color correctly.
kitti_eval.py will compute the scores of the Kitti benchmark (maxF1 and averagePrecision). If you have your own dataset which comes with different metrics you might want to implement your own evaluation code. But this does not influence training; first make sure that it is running as is.
If you are actually doing classification, you need to write your own input producer, loss function and evaluation for classification. This is not too difficult, but takes some hand-tuning. I did write an input producer for classification at some point. Having said this, this file is most likely not compatible anymore with the current tensorvision/tensorflow version. It might however be a good starting point. Classification input is not that different from segmentation input. So we are talking about minor adjustments.
|
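As a side note on the road_color / background_color point in the reply above: the sketch below shows how an RGB ground-truth mask can be turned into two per-pixel boolean class maps. The colour values and the mask path are assumptions for illustration; this is only a sketch of the idea, not KittiSeg's actual implementation.

```python
# Hedged sketch: build two boolean per-pixel class maps from an RGB mask,
# one for the "road" colour and one for the "background" colour. The colour
# values and the file path below are placeholders; use whatever your hypes
# file specifies.
import numpy as np
from PIL import Image

road_color = [255, 0, 255]        # example value, not necessarily yours
background_color = [255, 0, 0]    # example value, not necessarily yours

gt_image = np.array(Image.open("my_data/masks/frame_000.png").convert("RGB"))

gt_road = np.all(gt_image == road_color, axis=2)       # True where the pixel matches road_color
gt_bg = np.all(gt_image == background_color, axis=2)   # True where the pixel matches background_color

# Pixels matching neither colour end up in neither map, which is a common
# source of trouble when the mask colours and the hypes values disagree.
```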
This makes the code compatible with greyscale data. Addresses comment in #8.
Happy to help. Don't forget to cite one of my papers if you get any chance ;). Since f7fdb24, using greyscale images works out of the box. The images will be converted to RGB upon load, so a black pixel has value [0,0,0], white [255,255,255], and some light grey can be [200, 200, 200]. |
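A quick illustration of the greyscale behaviour described above: converting a single-channel image to RGB simply repeats the grey value across the three channels. The mask path below is hypothetical.

```python
# Hedged sketch of the conversion described above: a greyscale image
# converted to RGB repeats its single channel three times, so black becomes
# [0, 0, 0], white [255, 255, 255] and a light grey e.g. [200, 200, 200].
import numpy as np
from PIL import Image

grey = Image.open("my_data/masks/frame_000.png")   # hypothetical greyscale mask
rgb = np.array(grey.convert("RGB"))                # shape (height, width, 3)
print(rgb.shape)
print(rgb[0, 0])   # a single grey value v shows up as [v, v, v]
```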
Hi Marvin,
I tried to read your inputs.md again, but I couldn't reach it. Any
reason to get rid of it?
https://github.com/MarvinTeichmann/KittiSeg/blob/master/inputs/inputs.md
Thanks,
Andy
|
Hi Andy, commit 0e8d548 moved it to docu/inputs.md. Marvin |
All right.
Thanks, Marvin.
|
Hello @MarvinTeichmann, I'm downloading this data right now (I got the link by e-mail and started tv-train again), but I don't see why I need it to train on my own data, or maybe I just did something wrong. Can you help me? |
I believe you should comment out line 127 in the script train.py; that's the line responsible for downloading the dataset. After commenting it out, it will just directly use the paths that you provided in the hypes for your own data. |
Thank you for your response, @Bendidi @MarvinTeichmann |
Yes, it's the 123rd line, sorry about that; I added some lines in my script that I forgot about. If you made your input configuration correctly, it should work now. |
@Bendidi Thank you for the clarification. Have you done anything differently in order to train this model on your custom data? |
I've done something different: I completely changed the input script so that I just have to give it a path and it loads the images directly, without the need for train.txt and val.txt. I also added a line to resize images to the same shape as the Kitti images. The images and the masks should be 'RGB', I believe (I don't know what you mean by 3D and 2D). |
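For reference, here is a rough sketch of the kind of directory-based loader Bendidi describes: walk a folder of images, find the matching mask, resize both to a fixed Kitti-like shape, and yield RGB arrays. The paths, the shared filename convention, the target size and the resampling choices are all assumptions, not Bendidi's actual code.

```python
# Hedged sketch of a directory-based loader with resizing. Everything here
# (paths, filename convention, target size) is an assumption for illustration.
import os
import numpy as np
from PIL import Image

TARGET_SIZE = (1248, 384)  # (width, height); pick whatever your network expects

def load_pairs(image_dir, mask_dir):
    for name in sorted(os.listdir(image_dir)):
        if not name.endswith(".png"):
            continue
        image = Image.open(os.path.join(image_dir, name)).convert("RGB")
        mask = Image.open(os.path.join(mask_dir, name)).convert("RGB")
        # NEAREST keeps mask colours exact; bilinear would blend class colours.
        image = image.resize(TARGET_SIZE, Image.BILINEAR)
        mask = mask.resize(TARGET_SIZE, Image.NEAREST)
        yield np.array(image), np.array(mask)
```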
By the input script, do you mean the "input_file" field inside the hypes JSON? By default it's this one: "../inputs/kitti_seg_input.py" |
Yes, it's the kitti_seg_input.py script. The problem might be that you use grayscale images for the mask, and the image reader reads them in RGB mode (as seen in line 123 of the input file), so it won't find the colors that you provided in the hypes file to create the training variables (lines 157 and 158). |
I changed the masks to be in RGB as well, but the problem persists. |
Nope, commentjson worked fine; I'm running it on Python 2.7, maybe that's why. When you changed the masks to be in RGB, did you make sure the two colors that are in the RGB mask are the ones you provided in the hypes? Aside from that, I cannot think of anything else... |
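One way to answer that question is to compare the colours that actually occur in a mask with the two colours given in the hypes. The sketch below assumes the hypes keys sit under "data" as "road_color" and "background_color" and uses hypothetical file paths; adjust both to your own setup.

```python
# Hedged sanity check: do the colours present in a mask match the two
# colours in the hypes? Key layout and paths are assumptions.
import numpy as np
import commentjson
from PIL import Image

with open("hypes/KittiSeg.json") as f:          # hypothetical hypes path
    hypes = commentjson.load(f)

road_color = hypes["data"]["road_color"]              # assumed key layout
background_color = hypes["data"]["background_color"]  # assumed key layout

mask = np.array(Image.open("my_data/masks/frame_000.png").convert("RGB"))
colors_in_mask = np.unique(mask.reshape(-1, 3), axis=0)
print("colours in the mask:", colors_in_mask.tolist())
print("hypes expects:      ", road_color, background_color)
```

If the printed lists disagree, the per-pixel class maps will come out empty and training labels will be wrong.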
@Bendidi |
I started training and MaxF1 started near 17, now it is near 26 after 4500 steps (out of 12000) and sometimes jumps around. Is it ok or should I do something? |
@lemhell How do I comment out line 123?
@1464256670 |
@lemhell By the way, thanks for your reply. Another question: to train on my own data, all I did was create my train.txt and val.txt. Is that enough, or is there anything else to do? |
@lemhell Is only running train.py enough, or should I run any other .py? |
@1464256670 |
The objective function also needs to change. |
Could you perhaps provide a description of val3.txt and train3.txt? For instance, is the file nomenclature somehow significant? It is not entirely clear how one would make their own training images.
I looked at the examples in TensorVision, but it too does not really explain what the masks are. It just labels them GT.
THANKS