Towards Realistic Example-based Modeling via 3D Gaussian Stitching

Using parts of existing models to build new models, commonly termed example-based modeling, is a classical methodology in computer graphics. Previous works mostly focus on shape composition, making them hard to use for realistic composition of 3D objects captured from real-world scenes. This has led to combining multiple NeRFs into a single 3D scene to achieve seamless appearance blending. However, the current SeamlessNeRF method struggles to achieve interactive editing and harmonious stitching for real-world scenes because of its gradient-based strategy and grid-based representation. To this end, we present an example-based modeling method that combines multiple Gaussian fields in a point-based representation using sample-guided synthesis. For composition, we create a GUI to segment and transform multiple fields in real time, easily obtaining a semantically meaningful composition of models represented by 3D Gaussian Splatting (3DGS). For texture blending, the discrete and irregular nature of 3DGS rules out straightforwardly applying gradient propagation as in SeamlessNeRF. We therefore propose a novel sampling-based cloning method that harmonizes the blend while preserving the original rich texture and content. Our workflow consists of three steps: 1) real-time segmentation and transformation of a Gaussian model using a well-tailored GUI, 2) KNN analysis to identify boundary points in the intersecting area between the source and target models, and 3) two-phase optimization of the target model using sampling-based cloning and gradient constraints. Extensive experimental results validate that our approach significantly outperforms previous works in terms of realistic synthesis, demonstrating its practicality.
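The KNN step of the workflow (step 2) could be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `boundary_points`, the brute-force distance computation, and the `k`/`radius` parameters are all assumptions; the idea is simply to flag source Gaussians whose k-th nearest target center lies within a small radius, i.e. points deep enough inside the intersecting region to count as boundary points.

```python
import numpy as np

def boundary_points(source_xyz, target_xyz, k=8, radius=0.05):
    """Flag source Gaussian centers lying in the intersecting region.

    A source point is a boundary point if its k-th nearest neighbor
    among the target centers is closer than `radius` (illustrative
    criterion; k and radius are hypothetical tuning parameters).
    """
    # pairwise distances, shape (n_source, n_target)
    d = np.linalg.norm(source_xyz[:, None, :] - target_xyz[None, :, :], axis=-1)
    # distance to the k-th nearest target center for each source Gaussian
    kth = np.sort(d, axis=1)[:, k - 1]
    return np.where(kth < radius)[0]
```

In practice a k-d tree (e.g. `scipy.spatial.cKDTree`) would replace the brute-force distance matrix for large Gaussian clouds.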

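The sampling-based cloning idea (step 3) might look like the single update below. The paper's actual two-phase optimization with gradient constraints is not specified here; this hedged sketch only shows the sampling notion, with hypothetical names (`clone_colors`), a plain RGB color per Gaussian instead of spherical-harmonic coefficients, and an assumed blend factor `alpha`: each boundary Gaussian in the target samples the colors of its k nearest source Gaussians and moves toward their distance-weighted average.

```python
import numpy as np

def clone_colors(target_xyz, target_rgb, source_xyz, source_rgb,
                 boundary_idx, k=4, alpha=0.5):
    """One illustrative cloning step: pull boundary colors toward a
    distance-weighted sample of the k nearest source colors."""
    # distances from each boundary Gaussian to every source center
    d = np.linalg.norm(target_xyz[boundary_idx, None, :]
                       - source_xyz[None, :, :], axis=-1)
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest source indices
    dk = np.take_along_axis(d, nn, axis=1)
    w = 1.0 / (dk + 1e-8)                      # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)
    sampled = (w[..., None] * source_rgb[nn]).sum(axis=1)
    out = target_rgb.copy()
    out[boundary_idx] = (1 - alpha) * out[boundary_idx] + alpha * sampled
    return out
```

In an optimization loop, such sampled targets would serve as soft constraints on the boundary Gaussians rather than a hard overwrite, so the interior texture of the target model is preserved.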