This page provides specific tutorials on the usage of MMDetection3D for the KITTI dataset.
Note: This tutorial currently only covers LiDAR-based and multi-modality 3D detection methods. Contents related to monocular methods will be supplemented later.
You can download the KITTI 3D detection data HERE and unzip all zip files. In addition, the road planes can be downloaded HERE; they are optional and used for data augmentation during training to achieve better performance. The road planes are generated by AVOD; see HERE for more details.
Like the general way to prepare a dataset, it is recommended to symlink the dataset root to `$MMDETECTION3D/data`.
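For example, a minimal sketch of setting up the symlink from inside the repository root (assuming your extracted KITTI data lives at the hypothetical path `/path/to/kitti`; the equivalent `ln -s` shell command works just as well):

```python
import os

# Hypothetical location of your extracted KITTI data; adjust to your setup.
kitti_src = '/path/to/kitti'

# Create the data directory under the mmdetection3d root and link the dataset.
os.makedirs('data', exist_ok=True)
if not os.path.islink('data/kitti'):
    os.symlink(kitti_src, 'data/kitti')
```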
The folder structure should be organized as follows before our processing:

```
mmdetection3d
├── mmdet3d
├── tools
├── configs
├── data
│   ├── kitti
│   │   ├── ImageSets
│   │   ├── testing
│   │   │   ├── calib
│   │   │   ├── image_2
│   │   │   ├── velodyne
│   │   ├── training
│   │   │   ├── calib
│   │   │   ├── image_2
│   │   │   ├── label_2
│   │   │   ├── velodyne
│   │   │   ├── planes (optional)
```
To create KITTI point cloud data, we load the raw point cloud data and generate the relevant annotations, including object labels and bounding boxes. We also generate the point cloud of every single training object in the KITTI dataset and save them as `.bin` files in `data/kitti/kitti_gt_database`. Meanwhile, `.pkl` info files are also generated for training and validation. Subsequently, create KITTI data by running:
```bash
mkdir ./data/kitti/ && mkdir ./data/kitti/ImageSets

# Download data split
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/test.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/test.txt
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/train.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/train.txt
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/val.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/val.txt
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/trainval.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/trainval.txt

python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti --with-plane
```
Note that if your local disk does not have enough space for saving the converted data, you can change `--out-dir` to anywhere else, and you need to remove the `--with-plane` flag if the `planes` are not prepared.
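Once the conversion finishes, you can sanity-check the generated info files. Below is a minimal sketch using plain `pickle`, assuming the default output paths used above and the list-of-per-frame-dicts structure described later on this page:

```python
import pickle

# Path assumes the default --out-dir used above.
with open('./data/kitti/kitti_infos_train.pkl', 'rb') as f:
    infos = pickle.load(f)

print(f'number of training frames: {len(infos)}')
frame = infos[0]
print(frame.keys())            # expected: point_cloud, image, calib, annos
print(frame['point_cloud'])    # {'num_features': 4, 'velodyne_path': ...}
print(frame['annos']['name'])  # per-object ground-truth class names
```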
The folder structure after processing should be as below:

```
kitti
├── ImageSets
│   ├── test.txt
│   ├── train.txt
│   ├── trainval.txt
│   ├── val.txt
├── testing
│   ├── calib
│   ├── image_2
│   ├── velodyne
│   ├── velodyne_reduced
├── training
│   ├── calib
│   ├── image_2
│   ├── label_2
│   ├── velodyne
│   ├── velodyne_reduced
│   ├── planes (optional)
├── kitti_gt_database
│   ├── xxxxx.bin
├── kitti_infos_train.pkl
├── kitti_infos_val.pkl
├── kitti_dbinfos_train.pkl
├── kitti_infos_test.pkl
├── kitti_infos_trainval.pkl
├── kitti_infos_train_mono3d.coco.json
├── kitti_infos_trainval_mono3d.coco.json
├── kitti_infos_test_mono3d.coco.json
├── kitti_infos_val_mono3d.coco.json
```
- `kitti_gt_database/xxxxx.bin`: point cloud data included in each 3D bounding box of the training dataset.
- `kitti_infos_train.pkl`: training dataset infos; each frame info contains the following details:
  - info['point_cloud']: {'num_features': 4, 'velodyne_path': velodyne_path}.
  - info['annos']: {
    - location: x, y, z are the bottom center in the referenced camera coordinate system (in meters), an N x 3 array
    - dimensions: height, width, length (in meters), an N x 3 array
    - rotation_y: rotation ry around the Y-axis in camera coordinates [-pi..pi], an N array
    - name: ground truth name array, an N array
    - difficulty: KITTI difficulty, i.e. Easy, Moderate, or Hard
    - group_ids: used for multi-part objects }
  - (optional) info['calib']: {
    - P0: camera0 projection matrix after rectification, a 3x4 array
    - P1: camera1 projection matrix after rectification, a 3x4 array
    - P2: camera2 projection matrix after rectification, a 3x4 array
    - P3: camera3 projection matrix after rectification, a 3x4 array
    - R0_rect: rectifying rotation matrix, a 4x4 array
    - Tr_velo_to_cam: transformation from Velodyne coordinates to camera coordinates, a 4x4 array
    - Tr_imu_to_velo: transformation from IMU coordinates to Velodyne coordinates, a 4x4 array }
  - (optional) info['image']: {'image_idx': idx, 'image_path': image_path, 'image_shape': image_shape}.

Note: the info['annos'] is in the referenced camera coordinate system. Please refer to this for more details.
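As a concrete illustration of how the calib matrices above chain together, the following NumPy sketch (a hypothetical helper, not part of MMDetection3D) projects Velodyne points into the camera-2 image plane via `P2 @ R0_rect @ Tr_velo_to_cam`:

```python
import numpy as np

def project_velo_to_image(points, P2, R0_rect, Tr_velo_to_cam):
    """Project N x 3 Velodyne points into the camera-2 image plane.

    P2 is the 3x4 projection matrix; R0_rect and Tr_velo_to_cam are the
    4x4 matrices as stored in info['calib'].
    """
    # Homogeneous coordinates: N x 4.
    pts_hom = np.hstack([points, np.ones((points.shape[0], 1))])
    # Velodyne -> rectified camera -> image, combined into one 3x4 matrix.
    proj = P2 @ R0_rect @ Tr_velo_to_cam
    pts_img = pts_hom @ proj.T
    # Perspective division by depth; keep only points in front of the camera.
    depth = pts_img[:, 2]
    uv = pts_img[:, :2] / depth[:, None]
    return uv[depth > 0], depth[depth > 0]
```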
The core functions used to generate `kitti_infos_xxx.pkl` and `kitti_infos_xxx_mono3d.coco.json` are `get_kitti_image_info` and `get_2d_boxes`, respectively. Please refer to `kitti_converter.py` for more details.
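The `.coco.json` files follow the standard COCO annotation layout, so they can be inspected with plain `json` (a minimal sketch, assuming the standard COCO keys `images`, `annotations`, and `categories`):

```python
import json

with open('./data/kitti/kitti_infos_train_mono3d.coco.json') as f:
    coco = json.load(f)

# Standard COCO-style layout: images, annotations, categories.
print(len(coco['images']), 'images')
print(len(coco['annotations']), 'annotations')
print([c['name'] for c in coco['categories']])
```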
A typical training pipeline of 3D detection on KITTI is as below:
```python
train_pipeline = [
    dict(
        type='LoadPointsFromFile',
        coord_type='LIDAR',
        load_dim=4,  # x, y, z, intensity
        use_dim=4,   # x, y, z, intensity
        file_client_args=file_client_args),
    dict(
        type='LoadAnnotations3D',
        with_bbox_3d=True,
        with_label_3d=True,
        file_client_args=file_client_args),
    dict(type='ObjectSample', db_sampler=db_sampler),
    dict(
        type='ObjectNoise',
        num_try=100,
        translation_std=[1.0, 1.0, 0.5],
        global_rot_range=[0.0, 0.0],
        rot_range=[-0.78539816, 0.78539816]),
    dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5),
    dict(
        type='GlobalRotScaleTrans',
        rot_range=[-0.78539816, 0.78539816],
        scale_ratio_range=[0.95, 1.05]),
    dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range),
    dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range),
    dict(type='PointShuffle'),
    dict(type='DefaultFormatBundle3D', class_names=class_names),
    dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d'])
]
```
- Data augmentation:
  - `ObjectNoise`: apply noise to each GT object in the scene.
  - `RandomFlip3D`: randomly flip the input point cloud horizontally or vertically.
  - `GlobalRotScaleTrans`: randomly rotate and scale the input point cloud (see the sketch below).
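To make the geometric augmentations concrete, here is a minimal NumPy sketch of a global rotation plus scaling in the spirit of `GlobalRotScaleTrans` (an illustrative re-implementation, not the actual MMDetection3D code):

```python
import numpy as np

def global_rot_scale(points, rot_range=(-0.78539816, 0.78539816),
                     scale_ratio_range=(0.95, 1.05)):
    """Randomly rotate N x 4 (x, y, z, intensity) points around the z-axis
    and scale them, mirroring rot_range / scale_ratio_range above."""
    angle = np.random.uniform(*rot_range)
    scale = np.random.uniform(*scale_ratio_range)
    cos_a, sin_a = np.cos(angle), np.sin(angle)
    # Rotation around the vertical (z) axis in LiDAR coordinates.
    rot = np.array([[cos_a, -sin_a, 0.0],
                    [sin_a,  cos_a, 0.0],
                    [0.0,    0.0,   1.0]])
    out = points.copy()
    out[:, :3] = out[:, :3] @ rot.T * scale
    return out
```

Note that the real transform also applies the same rotation and scaling to the ground-truth boxes so annotations stay consistent with the points.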
An example of evaluating PointPillars with 8 GPUs using KITTI metrics is as follows:

```bash
bash tools/dist_test.sh configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py work_dirs/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class/latest.pth 8 --eval bbox
```
KITTI evaluates 3D object detection performance using mean Average Precision (mAP) and Average Orientation Similarity (AOS). Please refer to its official website and the original paper for more details. We also adopt this approach for evaluation on KITTI. An example of printed evaluation results is as follows, where the three numbers in each row correspond to the Easy, Moderate, and Hard difficulty levels:
```
Car AP@0.70, 0.70, 0.70:
bbox AP:97.9252, 89.6183, 88.1564
bev  AP:90.4196, 87.9491, 85.1700
3d   AP:88.3891, 77.1624, 74.4654
aos  AP:97.70, 89.11, 87.38
Car AP@0.70, 0.50, 0.50:
bbox AP:97.9252, 89.6183, 88.1564
bev  AP:98.3509, 90.2042, 89.6102
3d   AP:98.2800, 90.1480, 89.4736
aos  AP:97.70, 89.11, 87.38
```
An example of testing PointPillars on KITTI with 8 GPUs and generating a submission to the leaderboard is as follows:

```bash
mkdir -p results/kitti-3class

./tools/dist_test.sh configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py work_dirs/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class/latest.pth 8 --out results/kitti-3class/results_eval.pkl --format-only --eval-options 'pklfile_prefix=results/kitti-3class/kitti_results' 'submission_prefix=results/kitti-3class/kitti_results'
```
After generating the `results/kitti-3class/kitti_results/xxxxx.txt` files, you can submit these files to the KITTI benchmark. Please refer to the KITTI official website for more details.
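Each generated `.txt` file follows the standard KITTI label format: one object per line with the fields `type truncated occluded alpha bbox(4) dimensions(3) location(3) rotation_y score`. Below is a minimal sketch of a hypothetical helper for reading such a file back, e.g. to sanity-check results before submission:

```python
def read_kitti_result(txt_path):
    """Parse one KITTI-format result file into per-object dicts."""
    objects = []
    with open(txt_path) as f:
        for line in f:
            v = line.split()
            objects.append({
                'type': v[0],
                'truncated': float(v[1]),
                'occluded': int(float(v[2])),
                'alpha': float(v[3]),
                'bbox': [float(x) for x in v[4:8]],         # 2D box: x1, y1, x2, y2
                'dimensions': [float(x) for x in v[8:11]],  # h, w, l (meters)
                'location': [float(x) for x in v[11:14]],   # x, y, z (camera coords)
                'rotation_y': float(v[14]),
                'score': float(v[15]),
            })
    return objects
```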