The Replica Dataset is a dataset of high quality reconstructions of a variety of indoor spaces. Each reconstruction has clean dense geometry, high resolution and high dynamic range textures, glass and mirror surface information, planar segmentation as well as semantic class and instance segmentation. See the technical report for more details.
The Replica SDK contained in this repository allows visual inspection of the datasets via the ReplicaViewer and gives an example of how to render out images from the scenes headlessly via the ReplicaRenderer.
For machine learning purposes each dataset also contains an export to the format employed by AI Habitat, and can therefore be used seamlessly within that framework for AI agent training and other ML tasks.
If you use the Replica dataset in your research, please cite the following technical report:
```
@article{replica19arxiv,
  title =   {The {R}eplica Dataset: A Digital Replica of Indoor Spaces},
  author =  {Julian Straub and Thomas Whelan and Lingni Ma and Yufan Chen and Erik Wijmans and Simon Green and Jakob J. Engel and Raul Mur-Artal and Carl Ren and Shobhit Verma and Anton Clarkson and Mingfei Yan and Brian Budge and Yajie Yan and Xiaqing Pan and June Yon and Yuyang Zou and Kimberly Leon and Nigel Carter and Jesus Briales and Tyler Gillingham and Elias Mueggler and Luis Pesqueira and Manolis Savva and Dhruv Batra and Hauke M. Strasdat and Renzo De Nardi and Michael Goesele and Steven Lovegrove and Richard Newcombe},
  journal = {arXiv preprint arXiv:1906.05797},
  year =    {2019}
}
```
This initial release includes 18 scenes.
Each Replica scene contains the following assets:

```
├── glass.sur
├── habitat
│   ├── mesh_semantic.ply
│   ├── mesh_semantic.navmesh
│   ├── info_semantic.json
│   ├── mesh_preseg_semantic.ply
│   ├── mesh_preseg_semantic.navmesh
│   └── info_preseg_semantic.json
├── mesh.ply
├── preseg.bin
├── preseg.json
├── semantic.bin
├── semantic.json
└── textures
    ├── 0-color-ptex.hdr
    ├── 0-color-ptex.w
    ├── 1-color-ptex.hdr
    ├── 1-color-ptex.w
    ├── ...
    └── parameters.json
```
The different files contain the following:

- `glass.sur`: parameterization of glass and mirror surfaces.
- `mesh.ply`: the quad mesh of the scene with vertex colors.
- `preseg.json` and `preseg.bin`: the presegmentation of the scene into planes and non-planes.
- `semantic.json` and `semantic.bin`: the semantic segmentation of the scene.
- `textures`: the high resolution and high dynamic range textures of the scene.
- `habitat/mesh_*semantic.ply`: the quad meshes including semantic or presegmentation information for AI Habitat.
- `habitat/info_*semantic.json`: mapping from instance IDs in the respective `mesh_*.ply` to semantic names.
- `habitat/mesh_*semantic.navmesh`: navigation grids for AI Habitat.
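As a minimal sketch of working with the semantic metadata, the helper below maps instance IDs to class names from `habitat/info_semantic.json`. It assumes the file holds an `id_to_label` array (instance ID to class ID, with negative values for unlabeled instances) and a `classes` list of `{"id": ..., "name": ...}` entries; verify the keys against your copy of the file before relying on it.

```python
import json

def instance_to_class_names(info_path):
    """Map each labeled instance ID to its semantic class name.

    Assumes info_semantic.json contains "id_to_label" and "classes"
    keys as described above -- check your file's actual schema.
    """
    with open(info_path) as f:
        info = json.load(f)
    class_names = {c["id"]: c["name"] for c in info["classes"]}
    return {
        inst_id: class_names.get(label, "unlabeled")
        for inst_id, label in enumerate(info["id_to_label"])
        if label >= 0
    }
```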
Make sure `pigz` and `wget` are installed:

```
# on Mac OS
brew install wget pigz

# on Ubuntu
sudo apt-get install pigz
```
To download and decompress the dataset use the `download.sh` script:

```
./download.sh /path/to/replica_v1
```
Execute `win_download.bat` to download Replica.
After installing the dependencies of Pangolin and Eigen, the Replica SDK can be compiled using the build script:

```
git submodule update --init
./build.sh
```

If you wish to use the headless renderer, ensure the libegl1-mesa-dev package is installed.
ReplicaViewer is an interactive UI to explore the Replica Dataset.
```
./build/bin/ReplicaViewer mesh.ply /path/to/atlases [mirrorFile]
```
The exposure value for rendering from the HDR textures can be adjusted on the top left.
The ReplicaRenderer shows how to render out images from a Replica for a programmatically defined trajectory without UI. This executable can be run headless on a server if so desired.
```
./build/bin/ReplicaRenderer mesh.ply textures glass.sur
```
To use Replica within AI Habitat, check out the AI Habitat Sim at https://github.com/facebookresearch/habitat-sim. After building the project you can launch the test viewer to verify that everything works:

```
./build/viewer /PATH/TO/REPLICA/apartment_0/habitat/mesh_semantic.ply
```
Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J. Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, Anton Clarkson, Mingfei Yan, Brian Budge, Yajie Yan, Xiaqing Pan, June Yon, Yuyang Zou, Kimberly Leon, Nigel Carter, Jesus Briales, Tyler Gillingham, Elias Mueggler, Luis Pesqueira, Manolis Savva, Dhruv Batra, Hauke M. Strasdat, Renzo De Nardi, Michael Goesele, Steven Lovegrove, and Richard Newcombe.
The Replica dataset would not have been possible without the hard work and contributions of Matthew Banks, Christopher Dotson, Rashad Barber, Justin Blosch, Ethan Henderson, Kelley Greene, Michael Thot, Matthew Winterscheid, Robert Johnston, Abhijit Kulkarni, Robert Meeker, Jamie Palacios, Tony Phan, Tim Petrvalsky, Sayed Farhad Sadat, Manuel Santana, Suruj Singh, Swati Agrawal, and Hannah Woolums.
See the LICENSE file for details.
The original ReplicaRenderer and ReplicaViewer remain the same. See above for usage.
For example, to render an example set of [left_ods, right_ods, equirect] images of room_0 (with a hardcoded position):
```
./build/ReplicaSDK/ReplicaRendererDataset dataset/room_0/mesh.ply dataset/room_0/textures dataset/room_0/glass.sur n y output/dir/ width height
```
To render on room_0 with txt files:
```
./build/ReplicaSDK/ReplicaRendererDataset dataset/room_0/mesh.ply dataset/room_0/textures dataset/room_0/glass.sur glob/room_0.txt y output/dir/ width height
```
Format of one line in the input text file (camera_parameters.txt) should be:
```
camera_position_x camera_position_y camera_position_z ods_baseline target1_offset_x target1_offset_y target1_offset_z target2_offset_x target2_offset_y target2_offset_z target3_offset_x target3_offset_y target3_offset_z
```
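To avoid hand-formatting these 13 whitespace-separated fields, a small helper can assemble one line; the pose and baseline values below are placeholders for illustration, not positions from any real scene.

```python
def ods_param_line(position, baseline, targets):
    """Format one camera_parameters.txt line for ReplicaRendererDataset:
    position (x, y, z), the ODS stereo baseline, then three target
    offsets -- 13 fields total, space-separated."""
    fields = list(position) + [baseline]
    for t in targets:
        fields += list(t)
    return " ".join(f"{v:.6f}" for v in fields)

# One line with a hypothetical pose and a 64 mm baseline:
line = ods_param_line((0.0, 1.5, 0.0), 0.064,
                      [(1.0, 0.0, 0.0), (0.0, 0.0, 1.0), (-1.0, 0.0, 0.0)])
```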
You can find all the existing text files in `glob/` and `test-glob/`.
To render on one scene:
```
./build/ReplicaSDK/ReplicaVideoRenderer path/to/scene/mesh.ply path/to/scene/textures path/to/scene/glass.sur camera_parameters.txt spherical[y/n] output/dir/ width height
```
The difference is the format of each line in the input text file (camera_parameters.txt), which for video should be:

```
camera_position_x camera_position_y camera_position_z lookat_x lookat_y lookat_z ods_baseline rotation_x rotation_y rotation_z target_x target_y target_z
```
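A minimal generator for such a path file might linearly interpolate camera positions between two waypoints; this is only a hand-rolled stand-in for glob/gen_video_path.py, with the rotation and target fields zeroed and all coordinates hypothetical.

```python
def video_path_lines(start, end, lookat, baseline=0.064, steps=30):
    """Emit `steps` lines in the video camera_parameters.txt format:
    interpolated position, fixed lookat, baseline, then zeroed
    rotation and target fields (13 fields per line)."""
    lines = []
    for i in range(steps):
        a = i / (steps - 1)  # interpolation fraction in [0, 1]
        pos = [s + a * (e - s) for s, e in zip(start, end)]
        fields = pos + list(lookat) + [baseline] + [0.0] * 6
        lines.append(" ".join(f"{v:.6f}" for v in fields))
    return lines

path = video_path_lines((0.0, 1.5, 0.0), (2.0, 1.5, 0.0), (2.0, 1.5, 1.0))
```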
You can use `glob/gen_video_path.py` to generate a candidate path text file; see `glob/example_script` for an example.
Example Usage:
```
./build/ReplicaSDK/DepthMeshRendererBatch TEST_FILES CAMERA_POSES OUT_DIR SPHERICAL<y|n> y y OUTPUT_WIDTH OUTPUT_HEIGHT
```
TEST_FILES should contain lines of format:
```
<ods_left_color>.png <ods_left_depth>.png
```
CAMERA_POSES should contain the desired camera offsets for rendering as below for each line:
```
<translate_x> <translate_y> <translate_z>
```
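Putting the two input files together, the snippet below writes a minimal TEST_FILES / CAMERA_POSES pair; the image file names and translation offsets are placeholders for illustration, not shipped assets.

```python
# Hypothetical inputs for a DepthMeshRendererBatch run.
pairs = [("frame0_left.png", "frame0_left_depth.png")]
offsets = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0)]

# TEST_FILES: one "<color>.png <depth>.png" pair per line.
with open("test_files.txt", "w") as f:
    for color, depth in pairs:
        f.write(f"{color} {depth}\n")

# CAMERA_POSES: one "<tx> <ty> <tz>" offset per line.
with open("camera_poses.txt", "w") as f:
    for x, y, z in offsets:
        f.write(f"{x} {y} {z}\n")
```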
See further instructions in `assets/EVALUATION_HOWTO`.