By Tarasha Khurana*, Peiyun Hu*, David Held, and Deva Ramanan
* equal contribution
If you find our work useful, please consider citing:
```
@inproceedings{khurana2023point,
  title={Point Cloud Forecasting as a Proxy for 4D Occupancy Forecasting},
  author={Khurana, Tarasha and Hu, Peiyun and Held, David and Ramanan, Deva},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023},
}
```
- Download nuScenes, KITTI-Odometry and ArgoVerse2.0 (the code supports the Argoverse 2.0 LiDAR dataset, but the change needed for the Sensor dataset is minor). (Tip: see the Python scripts for how the dataset file paths are expected to be passed.)
- Create a conda environment from the given `environment.yml`. Additionally, install the `chamferdist` package provided inside `utils/chamferdist` by navigating to that directory and running `pip install .` (a quick sanity check is sketched after this list).
- All trained model checkpoints for all three datasets, for both 1s and 3s forecasting, are available in the `models/` folder.
- The given code has been tested with Python 3.8, CUDA 11.1.1, cuDNN 8.0.4.30, GCC 5.5, and an NVIDIA GeForce RTX 3090.
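If useful, here is a minimal sanity check that the bundled `chamferdist` package imports and runs after installation. It assumes the package follows the upstream `chamferdist` API (a `ChamferDistance` module called on two batched point clouds) and that a CUDA device is available; the tensor shapes are arbitrary.

```python
# Minimal sanity check for the bundled chamferdist package (assumes the
# upstream chamferdist API and a CUDA-capable GPU).
import torch
from chamferdist import ChamferDistance

chamfer = ChamferDistance()
source = torch.rand(1, 100, 3).cuda()  # arbitrary example point cloud (B, N, 3)
target = torch.rand(1, 128, 3).cuda()  # arbitrary example point cloud (B, M, 3)
print(chamfer(source, target))         # forward Chamfer distance
```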
The original code supported training with only L1, L2, or AbsRel losses. For increased flexibility in the choice of loss, we have added differentiable voxel rendering as a layer in PyTorch. Note that we do not use it ourselves, because it incurs a huge memory footprint (as gradients for the entire voxel grid are now retained in memory).
You can import the layer with:

```python
from utils.layers.differentiable_voxel_rendering import DifferentiableVoxelRendering
```
This layer is expected to be used (without any initialization) in `model.py` in place of `dvr.render` and `dvr.render_forward`, like below:

```python
pred_dist, gt_dist = DifferentiableVoxelRendering(
    sigma,
    output_origin,
    output_points,
    output_tindex
)
```
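As a rough illustration of the added flexibility, the sketch below wraps this call in a hypothetical helper (`rendering_l1_loss` is not part of the codebase) and applies a plain L1 penalty to the rendered ray distances. The tensor names follow the call above; in practice, rays without a valid ground-truth return may need to be masked out.

```python
import torch.nn.functional as F

from utils.layers.differentiable_voxel_rendering import DifferentiableVoxelRendering

def rendering_l1_loss(sigma, output_origin, output_points, output_tindex):
    """Hypothetical helper: render ray distances differentiably and penalize
    the predicted-vs-ground-truth gap with L1."""
    pred_dist, gt_dist = DifferentiableVoxelRendering(
        sigma,           # predicted occupancy volume from the network
        output_origin,   # sensor origins for the output frames
        output_points,   # ground-truth points defining the query rays
        output_tindex,   # per-point timestep indices
    )
    # Any differentiable penalty on the rendered distances works here;
    # plain L1 is shown purely as an example.
    return F.l1_loss(pred_dist, gt_dist)

# loss = rendering_l1_loss(sigma, output_origin, output_points, output_tindex)
# loss.backward()  # gradients flow back to `sigma` through the full voxel grid
```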
If participating in the CVPR '23 Argoverse2.0 4D Occupancy Forecasting challenge, please see the eval-kit.
Refer to `train.sh`.
Refer to `test.sh` for running the ray-based evaluation on all points, and `test_fgbg.sh` for evaluating separately on foreground and background points (only supported for nuScenes).
The ray tracing baseline is implemented and evaluated by `raytracing_baseline.sh` and `raytracing_baseline_fgbg.sh`.
To test a model trained on dataset X on a dataset other than X, change the `dataset` field in the respective model's config.
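As an illustration only, the sketch below assumes the model configs are YAML files; the config path and the `"kitti"` identifier are hypothetical, and only the `dataset` field itself comes from this README. It rewrites the field before running the evaluation scripts.

```python
import yaml  # assumes the model config is a YAML file

# Hypothetical example: point a nuScenes-trained model's config at KITTI-Odometry.
cfg_path = "configs/nusc_1s.yml"   # hypothetical path; use your trained model's config
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg["dataset"] = "kitti"           # hypothetical dataset identifier
with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f)
```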
The `chamferdist` package shipped with this codebase is a version of the original `chamferdist` package. Voxel rendering is an adaptation of the raycasting in our previous work.