# [arXiv 2024] ViewExtrapolator: Novel View Extrapolation with Video Diffusion Priors

This repository contains the official implementation of the paper: Novel View Extrapolation with Video Diffusion Priors. We introduce ViewExtrapolator, a novel approach that leverages the generative priors of Stable Video Diffusion for novel view extrapolation, where the novel views lie far beyond the range of the training views.

## To Begin

Our code is tested with Python 3.11, PyTorch 2.2.0, and CUDA 12.1.
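The tested versions above can be reproduced in an isolated environment, for example with conda. This is only a sketch: the environment name and the exact install commands are assumptions, not taken from the repository, so adjust them to your setup.

```shell
# Create an isolated environment with the tested Python version
conda create -n viewextrapolator python=3.11 -y
conda activate viewextrapolator

# Install PyTorch 2.2.0 built against CUDA 12.1 (cu121 wheel index)
pip install torch==2.2.0 --index-url https://download.pytorch.org/whl/cu121
```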

1. Clone ViewExtrapolator:

   ```shell
   git clone https://github.com/Kunhao-Liu/ViewExtrapolator.git
   cd ViewExtrapolator
   ```

2. Refer to the `multiview` folder for novel view extrapolation with 3D Gaussian Splatting when multiview images are available.

3. Refer to the `monocular` folder for novel view extrapolation with point clouds when only a single view or a monocular video is available.

## Acknowledgements

Our work is built on Stable Video Diffusion and the gsplat implementation of 3D Gaussian Splatting. We thank the authors for their great work and for open-sourcing their code.

## Citation

Please consider citing our work if you find this project helpful.

```bibtex
@article{liu2024novel,
  title   = {Novel View Extrapolation with Video Diffusion Priors},
  author  = {Liu, Kunhao and Shao, Ling and Lu, Shijian},
  journal = {arXiv preprint arXiv:2411.14208},
  year    = {2024}
}
```