This is the PyTorch implementation of Multimodal Transformer for Automatic 3D Annotation and Object Detection (MTrans), accepted at ECCV 2022.
The code has been tested on PyTorch v1.9.1.
The IoU loss is required for training. Before running, please install the IoU loss package by following this doc.
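The package computes exact IoU for rotated 3D boxes with custom CUDA ops. As a rough conceptual illustration only (this is not the package's API), an IoU-style loss for simple axis-aligned 3D boxes could be sketched as:

```python
# Simplified, axis-aligned 3D IoU loss for illustration only.
# The actual Rotated_IoU package handles rotated boxes and uses CUDA kernels.
import torch

def axis_aligned_iou_3d(pred, target, eps=1e-7):
    """pred, target: (N, 6) boxes as (x_min, y_min, z_min, x_max, y_max, z_max)."""
    lt = torch.max(pred[:, :3], target[:, :3])    # intersection min corner
    rb = torch.min(pred[:, 3:], target[:, 3:])    # intersection max corner
    whd = (rb - lt).clamp(min=0)                  # intersection extents
    inter = whd.prod(dim=1)
    vol_p = (pred[:, 3:] - pred[:, :3]).clamp(min=0).prod(dim=1)
    vol_t = (target[:, 3:] - target[:, :3]).clamp(min=0).prod(dim=1)
    return inter / (vol_p + vol_t - inter + eps)

def iou_loss_3d(pred, target):
    # Standard "1 - IoU" formulation averaged over the batch.
    return (1.0 - axis_aligned_iou_3d(pred, target)).mean()
```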
The KITTI 3D detection dataset can be downloaded from the official website: link.
To train MTrans on the KITTI dataset, simply run:
python train.py --cfg_file configs/MTrans_kitti.yaml
The trained checkpoint can be downloaded from here. Although we try to fix the random seeds, the result might not be exactly the same from run to run due to the randomness in some asynchronous CUDA operations and in data preprocessing (e.g., point sampling).
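For reference, a typical PyTorch seed-fixing setup looks like the sketch below (the exact seeding code used in this repo may differ); even with deterministic cuDNN enabled, some asynchronous CUDA kernels remain non-deterministic.

```python
# A minimal seed-fixing sketch; the repo's own seeding code may differ.
import random
import numpy as np
import torch

def set_seed(seed: int = 0):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Deterministic cuDNN kernels reduce (but do not eliminate) run-to-run
    # variance, usually at some cost in speed.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```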
The IoU loss module is borrowed from https://github.com/lilanxiao/Rotated_IoU. We thank the author for providing a neat implementation of the IoU loss.