PFGS: High Fidelity Point Cloud Rendering via Feature Splatting
Jiaxu Wang†, Ziyi Zhang†, Junhao He, Renjing Xu*
ECCV 2024
If you find this project useful, please cite us in your paper; it is the greatest support for our work.
- Linux
- Python == 3.8
- PyTorch == 1.13.0
- CUDA == 11.7
You can install the requirements directly via:
$ conda env create -f environment.yml
- Create Environment
$ conda create --name PFGS python=3.8
$ conda activate PFGS
- PyTorch (please check your CUDA version first)
$ conda install pytorch==1.13.0 torchvision==0.14.0 torchaudio==0.13.0 pytorch-cuda=11.7 -c pytorch -c nvidia
- Other Python packages: open3d, opencv-python, etc.
pip install ./submodules/diff-gaussian-rasterization
You can set NUM_SEMANTIC_CHANNELS
in submodules/diff-gaussian-rasterization/cuda_rasterizer/config.h
to any number of feature dimensions you want. ⭐ Thanks to the diff-gaussian-rasterization code from Feature-3DGS, which was a great help for our work.
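The edit above can be sketched as follows. This is a demonstration on a scratch copy, not the repository's exact header contents: the macro name and path follow the note above, while the concrete values 128 and 32 are made-up examples. In the actual repo, edit config.h in place and then rerun `pip install ./submodules/diff-gaussian-rasterization` so the CUDA code is rebuilt with the new dimension.

```shell
# Work on a stand-in copy of config.h (values here are illustrative only).
tmp=$(mktemp -d)
printf '#define NUM_SEMANTIC_CHANNELS 128\n' > "$tmp/config.h"

# Change the feature dimension to 32.
sed -i 's/NUM_SEMANTIC_CHANNELS [0-9]*/NUM_SEMANTIC_CHANNELS 32/' "$tmp/config.h"

cat "$tmp/config.h"   # -> #define NUM_SEMANTIC_CHANNELS 32
```

After changing the header in the real submodule, the rasterizer must be reinstalled for the change to take effect.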
python build_pkg.py
- Download and extract data from the original ScanNet-V2 preprocess.
- Dataset structure:
  scannet
  ├── scene0000_00
  │   ├── pose
  │   │   └── 1.txt
  │   ├── intrinsic
  │   │   └── *.txt
  │   ├── color
  │   │   └── 1.jpg
  │   ├── scene0000_00_vh_clean_2.ply
  │   └── images.txt
  └── scene0000_01
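A quick sanity check of one scene directory against the ScanNet layout described above can look like this ($data_path and the scene name are placeholders; adjust them to your data):

```shell
# Check that one ScanNet scene has the expected subdirectories and point cloud.
scene="$data_path/scannet/scene0000_00"
ok=1
for d in pose intrinsic color; do
  [ -d "$scene/$d" ] || { echo "missing directory: $scene/$d"; ok=0; }
done
[ -f "$scene/scene0000_00_vh_clean_2.ply" ] || { echo "missing point cloud: $scene/scene0000_00_vh_clean_2.ply"; ok=0; }
if [ "$ok" -eq 1 ]; then echo "scene layout looks complete"; fi
```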
- We reorganized the original datasets into our own format. Here we provide a demonstration on the test set of DTU, which can be downloaded here
- Pretrain
- Download the 3D models and extract data from the original THuman2.
- Render 36 views of each 3D model and sparsely sample points (80k) on the model surface with Blender.
- Demo and Pretrain
python train_stage1.py --dataset scannet --scene_dir $data_path --exp_name scannet_stage1 --img_wh 640 512
python train_stage1.py --dataset dtu --scene_dir $data_path --exp_name dtu_stage1 --img_wh 640 512
python train_stage1.py --dataset thuman2 --scene_dir $data_path --exp_name thuman2_stage1 --img_wh 512 512 --scale_max 0.0001
python train_stage2.py --dataset scannet --scene_dir $data_path --exp_name scannet_stage2 --img_wh 640 512 --ckpt_stage1 $ckpt_stage1_path
python train_stage2.py --dataset dtu --scene_dir $data_path --exp_name dtu_stage2 --img_wh 640 512 --ckpt_stage1 $ckpt_stage1_path
python train_stage2.py --dataset thuman2 --scene_dir $data_path --exp_name thuman2_stage2 --img_wh 512 512 --scale_max 0.0001 --ckpt_stage1 $ckpt_stage1_path
python train_stage2.py --dataset scannet --scene_dir $data_path --exp_name scannet_stage2_eval --img_wh 640 512 --resume_path $ckpt_stage2_path --val_mode test
python train_stage2.py --dataset dtu --scene_dir $data_path --exp_name dtu_stage2_eval --img_wh 640 512 --resume_path $ckpt_stage2_path --val_mode test
python train_stage2.py --dataset thuman2 --scene_dir $data_path --exp_name thuman2_stage2_eval --img_wh 512 512 --scale_max 0.0001 --resume_path $ckpt_stage2_path --val_mode test
The results will be saved in ./log/$exp_name.
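One way to pick the checkpoint to pass between stages is to take the newest checkpoint file under ./log/&lt;exp_name&gt;. Note the `.ckpt` extension and the exact log layout are assumptions here (the README only states that results go to ./log/$exp_name), so the idiom is demonstrated on a scratch directory:

```shell
# Pick the newest .ckpt in a log directory (shown on a stand-in directory).
log=$(mktemp -d)
touch -t 202401010000 "$log/epoch=10.ckpt"   # older file
touch -t 202401020000 "$log/epoch=20.ckpt"   # newer file
ckpt_stage1_path=$(ls -t "$log"/*.ckpt | head -n 1)
echo "$ckpt_stage1_path"   # newest checkpoint, e.g. to pass as --ckpt_stage1
```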
In this repository, we have used code and datasets from the following repositories. We thank all the authors for sharing their great code and datasets.
@misc{wang2024pfgshighfidelitypoint,
title={PFGS: High Fidelity Point Cloud Rendering via Feature Splatting},
author={Jiaxu Wang and Ziyi Zhang and Junhao He and Renjing Xu},
year={2024},
eprint={2407.03857},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.03857},
}