Questions about Model and Training #1
Hey @mvpcom. I will release the dataset! About the benchmark: it is super slow; it takes about one day to run. The results are quite similar; there are a few differences because of changes in CARLA.
Thanks @felipecode. Please also check this commit; I put a comment there.
I wrote training code for the same model using the same dataset. First of all, I believe that during training you generated specific data for each branch according to the control input. Am I right? I'm also curious about the speed branch. These are all outputs of the network, one per branch: branch_config = [["Steer", "Gas", "Brake"], ["Steer", "Gas", "Brake"], ... The question is, do you use a sequence as input or just a single frame? I ask because to determine speed we need at least two frames; otherwise there is no useful information for the last branch. Why is the speed branch useful for training the whole network? Another question is about the augmentation parameters. Would you please let me know which parameters were used for each augmentation technique? I need all the details for Gaussian blur, additive Gaussian noise, pixel dropout, additive and multiplicative brightness variation, contrast variation, and saturation variation.
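For readers unfamiliar with the branched architecture discussed above, the sketch below shows one way such per-command control branches plus a speed branch could be built in Keras. Layer sizes, the number of branches, and all names are illustrative assumptions, not the repository's actual model.

```python
import tensorflow as tf

# Hypothetical backbone: image features fused with the measured speed.
image = tf.keras.Input(shape=(88, 200, 3), name="image")
speed = tf.keras.Input(shape=(1,), name="speed")

x = tf.keras.layers.Conv2D(32, 5, strides=2, activation="relu")(image)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Concatenate()(
    [x, tf.keras.layers.Dense(64, activation="relu")(speed)])

# One control branch per high-level command (e.g. follow, left, right, straight),
# each predicting [steer, gas, brake], plus one branch predicting speed.
control_branches = [
    tf.keras.layers.Dense(3, name="controls_branch_%d" % i)(
        tf.keras.layers.Dense(128, activation="relu")(x))
    for i in range(4)
]
speed_branch = tf.keras.layers.Dense(1, name="speed_branch")(
    tf.keras.layers.Dense(128, activation="relu")(x))

model = tf.keras.Model(inputs=[image, speed],
                       outputs=control_branches + [speed_branch])
```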
I use just one frame. The speed is also sent as input. For augmentation, I used the imgaug library. Here you have the parameters:
That is the dirty part of deep learning. The model is quite sensitive to the augmentation settings. Looking forward to seeing your results!
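The actual parameter listing did not survive in this thread. As a rough illustration only, an imgaug pipeline covering the operations mentioned above could look like the sketch below; the probabilities and value ranges are assumptions, not the values used in the released code.

```python
import imgaug.augmenters as iaa

# Apply each augmenter only some of the time (illustrative probability).
sometimes = lambda aug: iaa.Sometimes(0.4, aug)

seq = iaa.Sequential([
    sometimes(iaa.GaussianBlur(sigma=(0.0, 1.5))),                 # Gaussian blur
    sometimes(iaa.AdditiveGaussianNoise(scale=(0.0, 0.05 * 255))), # additive Gaussian noise
    sometimes(iaa.Dropout(p=(0.0, 0.10))),                         # pixel dropout
    sometimes(iaa.Add((-40, 40))),                                 # additive brightness variation
    sometimes(iaa.Multiply((0.5, 1.5))),                           # multiplicative brightness variation
    sometimes(iaa.LinearContrast((0.5, 1.5))),                     # contrast variation
    sometimes(iaa.AddToHueAndSaturation((-30, 30))),               # saturation (and hue) variation
], random_order=True)

# augmented = seq.augment_images(batch_of_uint8_images)
```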
I have one more question, dear @felipecode :D. It is about the total number of training steps. As you said in the paper, the model was trained for 294,000 steps. For how many days, and on what kind of hardware (GPUs?), did you train? Is there any plot showing train and validation loss? I couldn't find one in either paper. At the least, I would like to know the final train and validation loss after all 294,000 steps, to have an idea of when my model becomes comparable to your final model. For your information, I use the MSE of the control outputs/speed for each branch as the loss function; I want to make sure I didn't forget anything. With my current implementation and configuration (a GTX 1080), roughly 230 steps take about one hour. A sample of my training log, to give a better sense:
[training log sample not preserved]
Hey @mvpcom. Sorry for taking so long to answer.
Thanks, @felipecode. I am eager to see more detailed information. Although a Titan X Pascal is much faster than a GTX 1080, I'm not sure that alone explains why my implementation takes so much longer for 294,000 steps (roughly 12 days on the Titan X versus about 52 days on the 1080). I have to recheck my data-loading process.
Some things that may help: [suggestions not preserved in this thread]. For the rest, I would say you have some specific bug; 12 days is too much.
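The suggestions themselves are missing, but one common first check for this kind of slowdown is whether data loading and augmentation run concurrently with GPU compute. Below is a minimal, framework-agnostic prefetching sketch, offered purely as an illustration and not as the author's actual advice; `load_batches` and `train_step` are hypothetical names.

```python
import queue
import threading

def prefetch(batch_iterator, buffer_size=8):
    """Run batch loading/augmentation in a background thread so the GPU
    does not sit idle waiting for the next batch (illustrative sketch)."""
    q = queue.Queue(maxsize=buffer_size)
    sentinel = object()

    def worker():
        for batch in batch_iterator:
            q.put(batch)        # blocks when the buffer is full
        q.put(sentinel)         # signal end of the data stream

    threading.Thread(target=worker, daemon=True).start()

    while True:
        batch = q.get()
        if batch is sentinel:
            break
        yield batch

# Hypothetical usage:
# for images, targets in prefetch(load_batches()):
#     train_step(images, targets)
```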
@mvpcom can your training code be made public?
@zdx3578 Yes, if the CARLA team doesn't mind, because as far as I know they are waiting to publish their paper, which is why I wrote the training code myself from scratch.
@mvpcom Yes, sure! Please do it! Cheers
@zdx3578 @felipecode Here you can find the first draft of the code. It was part of a larger codebase, so I'm not sure it is bug-free. Besides that, the code still needs to be revised to speed up the training process, as we discussed above. If you find any bug, please let me know so I can fix it.
https://github.com/pathak22/noreward-rl is good work on training and can serve as a reference.
@felipecode Two questions for you, Felipe. First, I'm not sure how to do the masking so that backpropagation only goes through one branch; this would help reduce training time. Do you know any link that might help? Second, unfortunately I can't load your saved checkpoint into my model; it seems there are some differences between the two models. Is there anything in particular I have to take into account? It would also be great if you shared your input pipeline.
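One common way to restrict the gradient to a single command branch is to compute every branch's loss and multiply it by a one-hot mask derived from the high-level command, so only the active branch contributes. The sketch below illustrates the idea; the tensor shapes and the number of branches are assumptions, and it is not taken from this repository's code.

```python
import tensorflow as tf

NUM_BRANCHES = 4   # e.g. follow lane, left, right, straight (assumption)
NUM_ACTIONS = 3    # steer, gas, brake

def masked_branch_loss(branch_outputs, targets, command):
    """branch_outputs: [batch, NUM_BRANCHES, NUM_ACTIONS] predictions,
    targets:           [batch, NUM_ACTIONS] ground-truth controls,
    command:           [batch] integer command index in [0, NUM_BRANCHES).
    Only the branch selected by `command` receives gradient."""
    mask = tf.one_hot(command, NUM_BRANCHES)                       # [batch, NUM_BRANCHES]
    per_branch_mse = tf.reduce_mean(
        tf.square(branch_outputs - targets[:, None, :]), axis=2)   # [batch, NUM_BRANCHES]
    return tf.reduce_mean(tf.reduce_sum(per_branch_mse * mask, axis=1))
```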
I would like to know what exactly is meant by manual driving here. Is it a human driving on the roads of Town01, or can it be done manually in the simulator itself (if I'm not wrong, the standalone mode is just a video-game mode)? I would really appreciate it if someone could help me out with this. Thanks in advance!
@soham2017 As I understand from the paper, 80% of the data was collected in CARLA and 20% is real manual driving data collected with a small truck.
I built a PyTorch version to train the policy, in case you are interested.
Thanks for sharing this. It seems to me this is a trained model for testing purposes only, using your benchmark system. According to your paper, you trained the network with the Adam optimizer on a custom dataset (around 14 hours of manual and automatic driving in Town01). However, I need more detail about the perturbation technique. How can I implement the same method for data collection in CARLA using the Python interface? Since you're not going to release your dataset or the training code any time soon, I'm going to build my own dataset, at least until the training code is released. Another question is about the second table of results: does the benchmarking system produce the same results as the first table? (By tables I mean the tables in "CARLA: An Open Urban Driving Simulator".) And the last question: how long does the benchmark take? How many hours? I ran it recently and it is still running after more than two and a half hours on a GTX 1080 and a good computer :D
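On the perturbation technique: the idea described in the paper is to temporarily inject noise into the steering signal while recording, so the dataset contains the driver's corrective reactions. The sketch below only illustrates that idea; the triangular shape, duration, and magnitude are assumptions, and it is not tied to any particular CARLA API.

```python
def steering_perturbation(t, t_start, duration=1.0, magnitude=0.15, sign=1):
    """Triangular noise impulse added to the steering command between
    t_start and t_start + duration (seconds). Returns 0.0 outside that window."""
    dt = t - t_start
    if dt < 0.0 or dt > duration:
        return 0.0
    half = duration / 2.0
    # ramp up to `magnitude` at the midpoint, then back down to zero
    ramp = dt / half if dt <= half else (duration - dt) / half
    return sign * magnitude * ramp

# During data collection (hypothetical loop): add the perturbation to the
# steering command sent to the simulator, with `sign` chosen at random,
# but record the driver's corrective command as the training label.
# perturbed_steer = driver_steer + steering_perturbation(t, t_start, sign=+1)
```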