PyTorch implementation for our ECCV 2022 paper *Implicit field supervision for robust non-rigid shape matching*.
- Set up the conda environment from `ifmatch_env.yml` with `conda env create -f ifmatch_env.yml` and activate it.
- Install `pytorch-meta` with `cd pytorch-meta && python setup.py install`.
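If both steps succeed, the environment should import cleanly; a quick sanity check (note that `pytorch-meta` is imported as `torchmeta`, and we assume the installed release exposes `__version__`):

```python
# Sanity check for the environment: pytorch-meta installs as 'torchmeta'.
import torch
import torchmeta

print(torch.__version__, torchmeta.__version__)
```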
- We provide the datasets and variants used in our paper here.
- Once the datasets have been downloaded, pre-processing proceeds in two stages (a short sketch of both stages is given below, after the data layout):
  - Sampling SDF: to sample points with SDF values, we follow the DeepSDF scheme as given here. Place all the resulting `npz` files into (say) `/path/to/npz`.
  - Sampling surface with normals: for this, we use the `mesh-to-sdf` package. To perform this step, run `data_process.py`, providing the path to the `ply` files and to the `npz` files from the previous stage. Run with the `--help` option to see the other required parameters.
- Once the pre-processing is done, your data directory should contain three sub-directories: `free_space_pts` containing the SDF samples, `surface_pts_n_normal` containing the surface points along with normal information, and `vertex_pts_n_normal` containing vertex points (see the layout sketch below).
- Step 2 is repeated for both the training and the test datasets.
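For reference, the resulting layout should look roughly like this (the data root and shape file names are illustrative):

```
<data_root>/
├── free_space_pts/          # DeepSDF-style SDF samples (.npz)
├── surface_pts_n_normal/    # surface points + normals (.npz)
└── vertex_pts_n_normal/     # vertex points (.npz)
```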
- We provide pre-processed samples consisting of test-set shapes from the FAUST-Remesh dataset here: https://nuage.lix.polytechnique.fr/index.php/s/gb8D3KHBeb7zqNL
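For reference, here is a minimal sketch of what the two pre-processing stages produce and consume. It assumes the usual DeepSDF `npz` layout (`pos`/`neg` arrays of shape `(N, 4)` holding `x, y, z, sdf`) and the public `mesh-to-sdf` API; the file names and the `npz` key written below are hypothetical, and `data_process.py` remains the authoritative implementation:

```python
# Sketch only: shows the data each pre-processing stage works with.
# File names and the npz key below are hypothetical; data_process.py
# is the authoritative implementation.
import os

import numpy as np
import trimesh
from mesh_to_sdf import get_surface_point_cloud

# Stage 1 output, DeepSDF scheme: each npz stores 'pos' and 'neg'
# arrays of shape (N, 4) holding x, y, z and the signed distance.
sdf_samples = np.load("/path/to/npz/shape_000.npz")
pos, neg = sdf_samples["pos"], sdf_samples["neg"]
print("free-space samples:", pos.shape, neg.shape)

# Stage 2: sample the surface with normals via the mesh-to-sdf package.
mesh = trimesh.load("/path/to/ply/shape_000.ply")
cloud = get_surface_point_cloud(mesh, surface_point_method="scan",
                                calculate_normals=True)

# Stack points (M, 3) and normals (M, 3) into a single (M, 6) array.
surface = np.concatenate([cloud.points, cloud.normals], axis=1)
os.makedirs("surface_pts_n_normal", exist_ok=True)
np.savez("surface_pts_n_normal/shape_000.npz", point_cloud=surface)
```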
To train, run the following, replacing the parameters appropriately:

```sh
python train.py --config configs/train/<dataset>.yml --split split/train/<dataset>.txt --exp_name <my_exp>
```
Our evaluation is two-staged: first we find the optimal latent vector (MAP), then we solve for the point-to-point (P2P) map between shapes.

- To run the MAP step:

  ```sh
  python evaluate.py --config configs/eval/<dataset>.yml
  ```

- To obtain the P2P map:

  ```sh
  python run_matching.py --config configs/eval/<dataset>.yml --latent_opt
  ```
We performed three separate training runs in total for the results reported in the paper. The respective models can be downloaded from here.
If you find our work useful, please cite the arXiv version below. (To be updated soon...)
```bibtex
@misc{sundararaman2022implicit,
      title={Implicit field supervision for robust non-rigid shape matching},
      author={Ramana Sundararaman and Gautam Pai and Maks Ovsjanikov},
      year={2022},
      eprint={2203.07694},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
We thank the authors of DIF-Net and SIREN for graciously open-sourcing their code.