SpikeGS: 3D Gaussian Splatting from Spike Streams with High-Speed Camera Motion

Novel view synthesis plays a crucial role in generating new 2D renderings from multi-view images of 3D scenes. However, capturing high-speed scenes with conventional cameras often leads to motion blur, which hinders 3D reconstruction. High-frame-rate dense 3D reconstruction therefore emerges as a vital technique, enabling detailed and accurate modeling of real-world objects and scenes in fields such as Virtual Reality and embodied AI. Spike cameras, a novel type of neuromorphic sensor, continuously record scenes at ultra-high temporal resolution, showing great potential for accurate 3D reconstruction. Despite this promise, existing approaches, such as applying Neural Radiance Fields (NeRF) to spike cameras, suffer from a time-consuming rendering process. To address this issue, we make the first attempt to introduce 3D Gaussian Splatting (3DGS) to spike cameras under high-speed capture, using 3DGS to provide dense and continuous cues across views, and thereby construct SpikeGS. Specifically, to train SpikeGS, we establish computational equations between the rendering process of 3DGS and the instantaneous-imaging and exposure-like-imaging processes of the continuous spike stream. Besides, we build a very lightweight yet effective mapping from spikes to instant images to support training. Furthermore, we introduce a new spike-based 3D rendering dataset for validation. Extensive experiments demonstrate that our method achieves high-quality novel view rendering, proving the tremendous potential of spike cameras for modeling 3D scenes.
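
To make the two spike-imaging processes mentioned above concrete, below is a minimal illustrative sketch (not the paper's implementation) of how a continuous binary spike stream can be turned into images in the two ways the abstract describes: an exposure-like image obtained by averaging spikes over a time window, and an instantaneous image obtained from the inter-spike interval at a given moment. The array layout `(T, H, W)`, the function names, and the windowing choices are assumptions made for this example.

```python
import numpy as np

def exposure_like_image(spikes: np.ndarray, t: int, window: int) -> np.ndarray:
    """Exposure-like imaging: firing rate averaged over a window centered at t.

    spikes: binary array of shape (T, H, W); spikes[k, y, x] == 1 means pixel
    (y, x) fired at time step k. Intensity is proportional to the firing rate.
    """
    lo = max(0, t - window // 2)
    hi = min(spikes.shape[0], t + window // 2 + 1)
    return spikes[lo:hi].mean(axis=0)

def instantaneous_image(spikes: np.ndarray, t: int) -> np.ndarray:
    """Instantaneous imaging: intensity ~ 1 / inter-spike interval around time t."""
    T, H, W = spikes.shape
    img = np.zeros((H, W), dtype=np.float32)
    for y in range(H):
        for x in range(W):
            times = np.flatnonzero(spikes[:, y, x])
            if len(times) < 2:
                continue  # not enough spikes to bracket time t
            idx = np.searchsorted(times, t)
            if 0 < idx < len(times):
                isi = times[idx] - times[idx - 1]  # interval bracketing t
                img[y, x] = 1.0 / isi
    return img
```

In a SpikeGS-style pipeline, images of these two kinds would serve as the supervision targets that the 3DGS rendering process is matched against; the exact loss formulation is defined in the paper itself.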
