
ReMatching Dynamic Reconstruction Flow

Paper, Project Page

Sara Oblak, Despoina Paschalidou, Sanja Fidler, Matan Atzmon

Abstract: Reconstructing dynamic scenes from image inputs is a fundamental computer vision task with many downstream applications. Despite recent advancements, existing approaches still struggle to achieve high-quality reconstructions from unseen viewpoints and timestamps. This work introduces the ReMatching framework, designed to improve generalization quality by incorporating deformation priors into dynamic reconstruction models. Our approach advocates for velocity-field-based priors, for which we suggest a matching procedure that can seamlessly supplement existing dynamic reconstruction pipelines. The framework is highly adaptable and can be applied to various dynamic representations. Moreover, it supports integrating multiple types of model priors and enables combining simpler ones to create more complex classes. Our evaluations on popular benchmarks involving both synthetic and real-world dynamic scenes demonstrate a clear improvement in reconstruction accuracy of current state-of-the-art models.

Run our code

Environment setup

git clone https://github.com/nv-tlabs/ReMatchingReconstructionFlow.git --recursive
cd ReMatchingReconstructionFlow

conda create -n rematching python=3.7
conda activate rematching

pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
pip install -r requirements.txt
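
Optionally, you can verify that the CUDA-enabled PyTorch build installed correctly before training (this is just a sanity check, not part of the required setup):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"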

Setting ReMatching framework hyperparameters

The ReMatching framework hyperparameters are set in the configuration file ./rematching/arguments.conf; an example configuration excerpt is shown after the list below.

  • Prior selection

    We currently support the following prior classes (parameter names are consistent with the ones used in the paper):

    Prior               Prior parameters
    P1                  prior.name = P1
                        prior.P1.V = [selected base tensors]
    P3                  prior.name = P3
                        prior.adaptive_prior.K = [number of parts]
                        prior.P3.B = [basis function hyperparameter]
    P4                  prior.name = P4
                        prior.adaptive_prior.K = [number of parts]
    P1 + P3             prior.name = P1_P3
                        prior.adaptive_prior.K = [number of parts]
                        prior.P1.V = [selected base tensors]
                        prior.P3.B = [basis function hyperparameter]
    P1 + P4             prior.name = P1_P4
                        prior.adaptive_prior.K = [number of parts]
                        prior.P1.V = [selected base tensors]
    P3 (image level)    prior.name = P3_Image
                        prior.adaptive_prior.K = [number of parts]
                        prior.P3.B = [basis function hyperparameter]
                        prior.cam_time = [view selection for the image-level loss]
  • General hyperparameters

      general.rm_weight = [ReMatching loss weight]  
      prior.adaptive_prior.entropy_weight = [entropy loss weight for adaptive prior prediction]
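
As a concrete illustration, a configuration selecting the combined P1 + P3 prior might contain the excerpt below. The numeric values are illustrative placeholders, not recommended settings, and prior.P1.V keeps the placeholder from the table above since its format depends on the chosen base tensors:

    prior.name = P1_P3
    prior.adaptive_prior.K = 8
    prior.P1.V = [selected base tensors]
    prior.P3.B = 2
    general.rm_weight = 0.1
    prior.adaptive_prior.entropy_weight = 0.01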
    

Training

python train.py -s path/to/your/dataset -m output/exp-name

Datasets

We conducted our evaluation on the following three datasets:

  • D-NeRF: download and unzip the data folder. When running training, pass the path to a specific scene from the dataset, for example:

    python train.py -s [location of downloaded data folder]/data/jumpingjacks -m output/dnerf_jumpingjacks
  • HyperNeRF: download and unzip each selected scene. When running training, pass the path to the selected scene, for example:

    python train.py -s [location of downloaded data folder]/slice-banana -m output/hypernerf_banana
  • NVIDIA Dynamic Scenes: download and unzip each selected scene from the Nvidia_long folder, then run the following for each scene (a loop that batch-processes all scenes is shown after this list):

    mkdir [location of downloaded scene]/dense/sparse/0
    mv [location of downloaded scene]/dense/sparse/*.bin [location of downloaded scene]/dense/sparse/0
    

    When running training, pass the path to the selected scene, for example:

    python train.py -s [location of downloaded data folder]/Jumping/dense -m output/dynamicscenes_jumping
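
If you downloaded several Dynamic Scenes sequences, a small shell loop can apply the same restructuring to all of them at once. This sketch assumes every scene sits directly under Nvidia_long and follows the dense/sparse layout shown above:

    for scene in Nvidia_long/*; do
      mkdir -p "$scene/dense/sparse/0"
      mv "$scene"/dense/sparse/*.bin "$scene/dense/sparse/0"
    done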

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

License

Copyright © 2025, NVIDIA Corporation & affiliates. All rights reserved. This work is made available under the Nvidia Source Code License.

Acknowledgement

This repo is based on https://github.com/ingra14m/Deformable-3D-Gaussians.

Citation

@article{oblak2024rematching,
  title={ReMatching Dynamic Reconstruction Flow},
  author={Oblak, Sara and Paschalidou, Despoina and Fidler, Sanja and Atzmon, Matan},
  journal={arXiv preprint arXiv:2411.00705},
  year={2024}
}
