Pipeline and scripts for digitizing the scanned frescoes. This repository provides the framework and tools for the post-processing pipeline used to reconstruct 3D models of fresco fragments. The pipeline involves several stages of processing, as shown in the diagram.
For the scanning procedure we used the following sensors:
- the Polyga H3 3D scanner
- the Sony α7C mirrorless digital camera
To digitize the data acquired from the Polyga H3 3D scanner, we used the accompanying Flex3DScan software together with a rotary table as a peripheral, in order to capture the fragments from multiple perspectives. Moreover, we used a lighting box to ensure consistent, uniform ambient lighting conditions.
Each fresco piece is scanned from both the bottom and the upper side; the two scans are identified by the corresponding suffixes: `RPf_<id>a.ply` for the bottom and `RPf_<id>b.ply` for the upper part, respectively.
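For illustration, a minimal sketch (standard library only; the folder layout is an assumption) that pairs the two halves of each fragment by its `<id>`:

```python
import glob
import os

def pair_fragment_scans(folder):
    """Pair the bottom ('a') and upper ('b') scans of each fragment by its id."""
    pairs = {}
    for bottom in glob.glob(os.path.join(folder, "RPf_*a.ply")):
        stem = os.path.basename(bottom)[:-len("a.ply")]  # e.g. "RPf_0001"
        upper = os.path.join(folder, stem + "b.ply")
        if os.path.exists(upper):
            pairs[stem] = (bottom, upper)
    return pairs
```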
Once we have both the bottom and upper part of each fresco piece, we need to align them in order to create a single unified piece. This is done by performing Truncated Least Squares (TLS) registration using the TEASER++ library.
The alignment process involves extracting robust putative correspondences between the two scans and solving for the rigid transformation that aligns them, i.e.

$$\min_{R,\, t} \sum_{i} \min\left( \lVert b_i - R\, a_i - t \rVert^2,\; \bar{c}^2 \right),$$

where $(a_i, b_i)$ are the putative correspondence pairs, $(R, t)$ are the estimated rotation and translation, and $\bar{c}$ is the truncation threshold that bounds the influence of outlier correspondences.
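A minimal sketch of this registration step, assuming the `teaserpp_python` bindings are installed (the parameter values and the toy correspondences are illustrative, not the repository's actual settings):

```python
import numpy as np
import teaserpp_python

# Toy stand-ins for putative correspondences between the two scans (3xN arrays).
src = np.random.rand(3, 50)
angle = np.pi / 8
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
dst = R_true @ src + np.array([[0.1], [0.0], [0.05]])

# Configure the robust TLS solver (values here are illustrative).
params = teaserpp_python.RobustRegistrationSolver.Params()
params.cbar2 = 1.0               # TLS truncation constant
params.noise_bound = 0.01        # expected measurement noise of the scans
params.estimate_scaling = False  # both scans share the same metric scale
params.rotation_estimation_algorithm = (
    teaserpp_python.RobustRegistrationSolver.ROTATION_ESTIMATION_ALGORITHM.GNC_TLS
)

solver = teaserpp_python.RobustRegistrationSolver(params)
solver.solve(src, dst)

solution = solver.getSolution()
R, t = solution.rotation, solution.translation  # rigid transform aligning src to dst
```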
For the high-resolution texture maps we utilize the high-resolution images from the digital camera to generate a secondary 3D model, which facilitates mapping high-resolution texture information onto the model reconstructed from the 3D scanner. For this purpose, we employ Structure from Motion (SfM).
To reconstruct only the fragments rather than the entire scanning scene, we use segmentation masks that exclude the background environment surrounding each fragment during reconstruction.
To start the alignment process, run the `align_meshes_v1.py` script. You will need to specify the path where the upper and bottom parts are located; they must be in the same folder. This will create and save the unified piece as `RPf_<id>.ply` in the same folder as the individual `a` and `b` parts. The resulting mesh model, however, comes with a low-resolution texture map.
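Conceptually, the merging that the script performs amounts to something like the following sketch, assuming Open3D and a 4×4 transform `T` from the registration step (the actual script may differ in detail):

```python
import numpy as np
import open3d as o3d

# Placeholder: in practice T is the 4x4 rigid transform estimated by TEASER++.
T = np.eye(4)

bottom = o3d.io.read_triangle_mesh("RPf_0001a.ply")  # hypothetical fragment id
upper = o3d.io.read_triangle_mesh("RPf_0001b.ply")

bottom.transform(T)      # bring the bottom scan into the upper scan's frame
merged = bottom + upper  # concatenate the two aligned meshes
o3d.io.write_triangle_mesh("RPf_0001.ply", merged)
```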
To map the high-resolution texture acquired with the Sony α7C camera onto the mesh model extracted from the Polyga scanner, we need a set of RGB images covering the fresco from multiple views around it. If the images were captured directly, no extra pre-processing is needed. If the RGB data were captured as a video, the individual frames must first be extracted from it; this is done with the `video2imgs.py` script.
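For reference, frame extraction of this kind can be sketched with OpenCV (the function name, output naming, and sampling stride are assumptions, not necessarily what `video2imgs.py` does):

```python
import os
import cv2

def video_to_images(video_path, out_dir, stride=5):
    """Save every `stride`-th frame of the video as a numbered JPEG."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of stream
            break
        if idx % stride == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
```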
Thereafter, we create a mask of the fresco for each of these images, which helps the photogrammetry pipeline produce a better reconstruction. To extract the mask of the object, use the `create_bg_fg_mask.py` script.
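As an illustration of the idea, a naive foreground/background mask can be obtained with OpenCV's GrabCut; the actual `create_bg_fg_mask.py` script may use a different segmentation method:

```python
import numpy as np
import cv2

def simple_fg_mask(image_path, mask_path):
    """Estimate a binary foreground mask with GrabCut, seeded by a centered box."""
    img = cv2.imread(image_path)
    h, w = img.shape[:2]
    mask = np.zeros((h, w), np.uint8)
    rect = (w // 10, h // 10, 8 * w // 10, 8 * h // 10)  # assume fragment is centered
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    # Keep definite and probable foreground pixels as white.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
    cv2.imwrite(mask_path, fg.astype(np.uint8))
```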
Once the masks are created, you can run the `bundler_extractor_metashape_v1.py` script, which launches Metashape through the command line and creates the photogrammetric mesh model of the fragment with the high-resolution texture map. Thereafter, we only need to map this high-resolution texture onto the original scanner-based mesh model, which is done by running the `texture_mapping_from_images.py` script. This yields the final mesh file with a high-resolution texture map.
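For orientation, the core of such a headless Metashape run looks roughly like the sketch below (Metashape Python API; the paths and quality settings are assumptions, and the actual script includes additional steps such as applying the masks):

```python
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["/data/RPf_0001/frames/frame_00000.jpg"])  # add all RGB frames here

# Standard SfM + MVS steps: match features, align cameras, build the mesh.
chunk.matchPhotos(downscale=1)
chunk.alignCameras()
chunk.buildDepthMaps(downscale=2)
chunk.buildModel(source_data=Metashape.DepthMapsData)

# Parameterize the mesh and bake the high-resolution texture.
chunk.buildUV(mapping_mode=Metashape.GenericMapping)
chunk.buildTexture(blending_mode=Metashape.MosaicBlending, texture_size=8192)

doc.save("/data/RPf_0001/photogrammetry.psx")
```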
TEASER++: fast & certifiable 3D registration
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 964854
If you use this code in your research, please cite the following paper:

```bibtex
@inproceedings{repair2024,
  title={Re-assembling the Past: The RePAIR Dataset and Benchmark for Realistic 2D and 3D Puzzle Solving},
  author={Tsesmelis, Theodore and Palmieri, Luca and Khoroshiltseva, Marina and Islam, Adeela and Elkin, Gur and Shahar, Ofir Itzhak and Scarpellini, Gianluca and Fiorini, Stefano and Ohayon, Yaniv and Alal, Nadav and Aslan, Sinem and Moretti, Pietro and Vascon, Sebastiano and Gravina, Elena and Napolitano, Maria Cristina and Scarpati, Giuseppe and Zuchtriegel, Gabriel and Spühler, Alexandra and Fuchs, Michel E. and James, Stuart and Ben-Shahar, Ohad and Pelillo, Marcello and Del Bue, Alessio},
  booktitle={NeurIPS},
  year={2024}
}
```