About data preparation #2
Comments
Thank you for your question. Since we are busy with other things these days, we will explain more details about data generation as soon as possible.
Could you provide the script or source you used to generate the annotated BEV images?
Please see the README for details.
Dear author, how do we generate the "ann_bev_dir" for the nuScenes dataset? I could not find the instructions or the script for it in the link.
Me too.
We have updated the guidance for generating BEV annotations in README.md. Follow the instructions under KITTI Annotations and nuScenes/Argoverse Annotations to get the ann_bev_dir.
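For readers wondering what a BEV annotation actually is before following the README: scripts like make_nuscenes_label.py rasterize map geometry (road, lane, etc.) around the ego vehicle into a top-down occupancy mask. The snippet below is an illustrative sketch only, not the repository's script; the grid size, metric extent, and the pure-NumPy point-in-polygon fill are my assumptions (real scripts query the nuScenes map API for the polygons).

```python
import numpy as np

def rasterize_bev_polygon(polygon_xy, grid_size=64, extent=40.0):
    """Rasterize a ground-plane polygon (ego-frame metres) into a square
    BEV occupancy mask. Illustrative sketch; parameters are assumptions.

    polygon_xy: sequence of (x, y) vertices in metres, ego at the origin.
    extent: half-width of the BEV window in metres.
    Returns a (grid_size, grid_size) uint8 mask (1 = inside the layout).
    """
    poly = np.asarray(polygon_xy, dtype=float)
    # Metric coordinate of each pixel centre.
    coords = (np.arange(grid_size) + 0.5) / grid_size * 2 * extent - extent
    xs, ys = np.meshgrid(coords, coords)
    # Even-odd (ray casting) point-in-polygon test, vectorised over pixels.
    inside = np.zeros_like(xs, dtype=bool)
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        crosses = (y1 > ys) != (y2 > ys)  # edge spans the pixel's y?
        with np.errstate(divide="ignore", invalid="ignore"):
            x_int = x1 + (ys - y1) * (x2 - x1) / (y2 - y1)
        inside ^= crosses & (xs < x_int)
    return inside.astype(np.uint8)

# Example: a 20 m x 20 m square road patch centred on the ego vehicle.
mask = rasterize_bev_polygon([(-10, -10), (10, -10), (10, 10), (-10, 10)])
```

The resulting mask can be saved as a PNG per sample, which is the general shape of what an ann_bev_dir contains.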
Dear author, about the KITTI RAW dataset: "For KITTI RAW and KITTI Odometry datasets, we manually annotate static layouts in bird's eye view." However, the ground-truth link returns 404 Not Found. I tried to contact the original author of MonoLayout but was unsuccessful. Could you upload it? Thanks.
Hello, I would like to ask about the labels generated by make_nuscenes_label.py that the author mentioned: the label filenames are tokens. During training, is the input the front-view camera image?
Thanks for your great work. I have some doubts about data preparation when reproducing your results. The data structure puzzles me because it is not the default structure of KITTI, nuScenes, or Argoverse, so I wonder how to convert the raw data into your layout. Could you please explain in detail in the README how to prepare the data, e.g. which camera to use and how to generate the calib.json file?
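On the calib.json part of the question: the authoritative schema is whatever the repository's loader expects, but per-sample calibration files of this kind usually just serialize the camera intrinsic matrix. A hedged sketch, assuming a simple {"K": 3x3 list} layout (the key name, matrix values, and file-per-sample convention are all hypothetical here):

```python
import json
import numpy as np

# Hypothetical intrinsics in the style of nuScenes CAM_FRONT
# "camera_intrinsic" (fx, fy, cx, cy); values are placeholders.
K = np.array([[1266.42,    0.0, 816.27],
              [   0.0, 1266.42, 491.51],
              [   0.0,    0.0,    1.0]])

# Write one calibration file per sample; the schema below is an
# assumption, not the repository's documented format.
with open("calib.json", "w") as f:
    json.dump({"K": K.tolist()}, f, indent=2)

# Reading it back for use in a dataloader.
with open("calib.json") as f:
    K_loaded = np.array(json.load(f)["K"])
```

If the repository documents a different schema (extrinsics, image size, distortion), the same json.dump pattern applies with extra keys.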