Our implementation is based on nerfmm (PyTorch).
A Portable Multiscopic Camera for Novel View and Time Synthesis in Dynamic Scenes
https://yuenfuilau.github.io/
paper | project website | demo video
We present a portable multiscopic camera system with a dedicated model for novel view and time synthesis in dynamic scenes. Our goal is to render high-quality images of a dynamic scene from any viewpoint at any time using our portable multiscopic camera. To achieve such novel view and time synthesis, we develop a physical multiscopic camera equipped with five cameras and use it to train a neural radiance field (NeRF) over both the temporal and spatial domains for dynamic scenes.
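As background for how a radiance field can be conditioned on time (the training code and full descriptions are not yet released), here is a minimal PyTorch sketch of a space-time NeRF MLP: each sampled point (x, y, z), its timestamp t, and the viewing direction are positionally encoded and mapped to density and view-dependent color. The class name, layer sizes, and encoding frequencies below are illustrative assumptions, not the architecture used in this repository.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs):
    """Map each coordinate to [sin(2^k * x), cos(2^k * x)] features."""
    freqs = 2.0 ** torch.arange(num_freqs, device=x.device)
    angles = x[..., None] * freqs                      # (..., dims, num_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)                   # (..., dims * 2 * num_freqs)

class TimeConditionedNeRF(nn.Module):
    """Illustrative space-time radiance field: (x, y, z, t, dir) -> (density, rgb)."""
    def __init__(self, pos_freqs=10, time_freqs=4, dir_freqs=4, hidden=256):
        super().__init__()
        self.pos_freqs, self.time_freqs, self.dir_freqs = pos_freqs, time_freqs, dir_freqs
        in_dim = 3 * 2 * pos_freqs + 1 * 2 * time_freqs
        dir_dim = 3 * 2 * dir_freqs
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)
        self.rgb_head = nn.Sequential(
            nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, t, view_dir):
        # Encode position and time together so the field varies over both domains.
        feat = self.trunk(torch.cat([
            positional_encoding(xyz, self.pos_freqs),
            positional_encoding(t, self.time_freqs),
        ], dim=-1))
        density = torch.relu(self.density_head(feat))
        rgb = self.rgb_head(torch.cat(
            [feat, positional_encoding(view_dir, self.dir_freqs)], dim=-1))
        return density, rgb

# Example query: 1024 sample points at normalized times t in [0, 1].
model = TimeConditionedNeRF()
xyz = torch.rand(1024, 3)
t = torch.rand(1024, 1)
view_dir = torch.randn(1024, 3)
view_dir = view_dir / view_dir.norm(dim=-1, keepdim=True)
density, rgb = model(xyz, t, view_dir)
```

In this sketch the timestamp is simply an extra encoded input alongside position; the released model may condition on time differently.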
- Release the inference code
- Release the pretrained model
- Write descriptions
- Release the training code
```
git clone https://github.com/YuenFuiLau/A-Portable-Multiscopic-Camera-for-Novel-View-and-Time-Synthesis-in-Dynamic-Scenes.git
cd A-Portable-Multiscopic-Camera-for-Novel-View-and-Time-Synthesis-in-Dynamic-Scenes
```
The environment can be set up from the provided `environment.yml`:

```
conda env create -f environment.yml
```
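After the environment is created, activate it before running any code. The environment name below is a placeholder, not taken from this repository; use the `name:` field in `environment.yml`:

```
# Hypothetical placeholder; check the "name:" field in environment.yml.
conda activate <env-name-from-environment.yml>
```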
Thanks to nerfmm (PyTorch) for sharing their code.