3D Gaussian Splatting (3DGS) has demonstrated remarkable performance in scene reconstruction and novel view synthesis tasks. Typically, the initialization of 3D Gaussian primitives relies on point clouds derived from Structure-from-Motion (SfM) methods. However, when a scene must be reconstructed from sparse viewpoints, the effectiveness of 3DGS is significantly constrained by the quality of these initial point clouds and the limited number of input images. In this study, we present Dust-GS, a novel framework specifically designed to overcome the limitations of 3DGS under sparse-viewpoint conditions. Instead of relying solely on SfM, Dust-GS introduces an innovative point cloud initialization technique that remains effective even with sparse input data. Our approach leverages a hybrid strategy that couples this initialization with an adaptive depth-based masking technique, thereby enhancing the accuracy and detail of the reconstructed scenes. Extensive experiments conducted on several benchmark datasets demonstrate that Dust-GS surpasses traditional 3DGS methods in scenarios with sparse viewpoints, achieving superior scene reconstruction quality with fewer input images.
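To make the idea of adaptive depth-based masking for point cloud initialization concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes per-view depth maps are already predicted (e.g., by a learned stereo/depth network), uses a hypothetical median-absolute-deviation threshold as the "adaptive" criterion, and back-projects only the retained pixels into a fused point cloud; all function names and thresholds here are assumptions for illustration.

```python
# Illustrative sketch (not the Dust-GS code): adaptive depth-based masking
# followed by back-projection to build an initial point cloud from sparse views.
import numpy as np

def adaptive_depth_mask(depth, k=2.0):
    """Keep pixels whose depth lies within k median-absolute-deviations of the
    per-view median, so the cutoff adapts to each view's depth statistics.
    The MAD criterion and k are hypothetical choices for this sketch."""
    med = np.median(depth)
    mad = np.median(np.abs(depth - med)) + 1e-8
    return np.abs(depth - med) < k * mad

def backproject(depth, K, cam_to_world):
    """Lift a depth map to world-space 3D points with pinhole intrinsics K."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    rays = np.linalg.inv(K) @ np.stack([u, v, np.ones_like(u)], 0).reshape(3, -1)
    pts_cam = rays * depth.reshape(1, -1)                    # 3 x (h*w), camera frame
    pts_h = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])
    return (cam_to_world @ pts_h)[:3].T                      # (h*w) x 3, world frame

def init_point_cloud(depths, Ks, poses, k=2.0):
    """Fuse masked back-projections from all sparse views into one point cloud
    that could then seed the 3D Gaussian primitives."""
    clouds = []
    for depth, K, pose in zip(depths, Ks, poses):
        mask = adaptive_depth_mask(depth, k).reshape(-1)
        clouds.append(backproject(depth, K, pose)[mask])
    return np.concatenate(clouds, axis=0)

# Toy usage with two synthetic "views", just to show the call pattern.
rng = np.random.default_rng(0)
depths = [rng.uniform(1.0, 5.0, size=(48, 64)) for _ in range(2)]
Ks = [np.array([[60.0, 0.0, 32.0], [0.0, 60.0, 24.0], [0.0, 0.0, 1.0]])] * 2
poses = [np.eye(4)] * 2
print(init_point_cloud(depths, Ks, poses).shape)
```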