Weakly Supervised 3D Object Detection from Point Clouds (VS3D)

Created by Zengyi Qin, Jinglu Wang and Yan Lu. This repository contains an implementation of the ACM MM 2020 paper. Readers are strongly recommended to create and enter a virtual environment with Python 3.6 before running the code.
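
For example, with conda (assuming conda is installed; any Python 3.6 environment manager works equally well):

conda create -n vs3d python=3.6
conda activate vs3d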

Quick Demo with Jupyter Notebook

Clone this repository:

git clone https://github.com/Zengyi-Qin/Weakly-Supervised-3D-Object-Detection.git

Enter the main folder and install the dependencies:

pip install -r requirements.txt

Download the demo data to the main folder and run unzip vs3d_demo.zip. Readers can try out the quick demo with Jupyter Notebook:

cd core
jupyter notebook demo.ipynb

Training

Download the KITTI Object Detection Dataset (image, calib and label) and place them in data/kitti. Download the ground planes and front-view XYZ maps from here and run unzip vs3d_train.zip. Download the pretrained teacher network from here and run unzip vs3d_pretrained.zip. The data folder should have the following structure:

├── data
│   ├── demo
│   ├── kitti
│   │   ├── training
│   │   │   ├── calib
│   │   │   ├── image_2
│   │   │   ├── label_2
│   │   │   ├── sphere
│   │   │   ├── planes
│   │   │   └── velodyne
│   │   ├── train.txt
│   │   └── val.txt
│   └── pretrained
│       ├── student
│       └── teacher
The sphere folder contains the front-view XYZ maps converted from velodyne point clouds using the script in ./preprocess/sphere_map.py. After data preparation, readers can train VS3D from scratch by running:

cd core
python main.py --mode train --gpu GPU_ID
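
For example, to train on the first GPU:

python main.py --mode train --gpu 0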

The models are saved in ./core/runs/weights during training. Readers can refer to ./core/main.py for other training options.

Inference

Readers can run inference on the KITTI validation set with:

cd core
python main.py --mode evaluate --gpu GPU_ID --student_model SAVED_MODEL

Readers can also use the pretrained model directly for inference by passing --student_model ../data/pretrained/student/model_lidar_158000. Predicted 3D bounding boxes are saved in ./output/bbox in KITTI format.
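
Each output file follows the KITTI object label format, one object per line, with a detection score appended. A minimal parsing sketch (the file name below is an example):

def read_kitti_bboxes(path):
    """Parse a KITTI-format detection file: one object per line."""
    objects = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            objects.append({
                'type': fields[0],                           # e.g. 'Car'
                'truncated': float(fields[1]),
                'occluded': int(float(fields[2])),
                'alpha': float(fields[3]),                   # observation angle
                'bbox2d': [float(v) for v in fields[4:8]],   # left, top, right, bottom (px)
                'hwl': [float(v) for v in fields[8:11]],     # height, width, length (m)
                'xyz': [float(v) for v in fields[11:14]],    # location in camera coordinates (m)
                'ry': float(fields[14]),                     # rotation around camera Y axis
                'score': float(fields[15]) if len(fields) > 15 else None,
            })
    return objects

detections = read_kitti_bboxes('output/bbox/000001.txt')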

Citation

@inproceedings{qin2020vs3d,
  title={Weakly Supervised 3D Object Detection from Point Clouds},
  author={Zengyi Qin and Jinglu Wang and Yan Lu},
  booktitle={Proceedings of the ACM International Conference on Multimedia},
  year={2020}
}