2409.13392.md

File metadata and controls

5 lines (3 loc) · 3.42 KB

Elite-EvGS: Learning Event-based 3D Gaussian Splatting by Distilling Event-to-Video Priors

Event cameras are bio-inspired sensors that output asynchronous, sparse event streams instead of fixed-rate frames. Benefiting from distinct advantages such as high dynamic range and high temporal resolution, event cameras have been applied to 3D reconstruction, which is important for robotic mapping. Recently, neural rendering techniques such as 3D Gaussian splatting (3DGS) have proven successful for 3D reconstruction. However, how to develop an effective event-based 3DGS pipeline remains under-explored. In particular, because 3DGS typically depends on high-quality initialization and dense multi-view constraints, the inherent sparsity of event data poses a potential problem for 3DGS optimization. To this end, we propose a novel event-based 3DGS framework, named Elite-EvGS. Our key idea is to distill prior knowledge from off-the-shelf event-to-video (E2V) models to effectively reconstruct 3D scenes from events in a coarse-to-fine optimization manner. Specifically, to address the difficulty of initializing 3DGS from events, we introduce a novel warm-up initialization strategy that first optimizes a coarse 3DGS from frames generated by E2V models and then incorporates events to refine the details. We then propose a progressive event supervision strategy that employs a window-slicing operation to progressively reduce the number of events used for supervision. This subtly relieves the temporal randomness of the event frames, benefiting the optimization of both local textural and global structural details. Experiments on benchmark datasets demonstrate that Elite-EvGS reconstructs 3D scenes with better textural and structural details. Moreover, our method performs well on captured real-world data under diverse challenging conditions, such as fast motion and low-light scenes.
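The window-slicing idea in the abstract — accumulating events into supervision frames over a time window that shrinks as training progresses, so later iterations are supervised by fewer events — can be sketched as below. This is a minimal illustration, not the authors' implementation: the function names, the linear shrink schedule, and the toy event stream are all assumptions.

```python
import numpy as np


def accumulate_events(events, t_start, t_end, height, width):
    """Sum signed polarities of events inside [t_start, t_end) into a frame."""
    frame = np.zeros((height, width), dtype=np.float32)
    mask = (events["t"] >= t_start) & (events["t"] < t_end)
    # np.add.at handles repeated (y, x) pixel hits correctly.
    np.add.at(frame, (events["y"][mask], events["x"][mask]),
              events["p"][mask].astype(np.float32))
    return frame


def sliced_window(step, total_steps, t_ref, full_window, min_window):
    """Linearly shrink the accumulation window ending at t_ref (assumed schedule)."""
    frac = step / max(total_steps - 1, 1)
    window = full_window - frac * (full_window - min_window)
    return t_ref - window, t_ref


# Toy event stream: 1000 events over 1 second on a 4x4 sensor.
rng = np.random.default_rng(0)
events = {
    "t": np.sort(rng.uniform(0.0, 1.0, 1000)),
    "x": rng.integers(0, 4, 1000),
    "y": rng.integers(0, 4, 1000),
    "p": rng.choice([-1, 1], 1000),  # polarity: brightness up/down
}

for step in [0, 5, 9]:
    t0, t1 = sliced_window(step, 10, t_ref=1.0, full_window=1.0, min_window=0.1)
    n = int(((events["t"] >= t0) & (events["t"] < t1)).sum())
    print(f"step {step}: window [{t0:.2f}, {t1:.2f}) uses {n} events")
```

Early steps use the full window (many events, stable global structure); late steps use a short window (few events, less temporal averaging), mirroring the coarse-to-fine supervision the abstract describes.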
