Reality Fusion: Robust Real-time Immersive Mobile Robot Teleoperation with Volumetric Visual Data Fusion
We introduce Reality Fusion, a novel robot teleoperation system that localizes, streams, projects, and merges data from a typical onboard depth sensor with a photorealistic, high-resolution, high-framerate, and wide field of view (FoV) rendering of the complex remote environment represented as 3D Gaussian splats (3DGS). Our framework enables robust egocentric and exocentric robot teleoperation in immersive VR, with the 3DGS effectively extending the spatial information of the limited-FoV depth sensor while balancing the trade-off between data streaming cost and visual quality. We evaluated our framework in a user study with 24 participants, which revealed that Reality Fusion leads to significantly better user performance, situation awareness, and user preference. To support further research and development, we provide an open-source implementation with an easy-to-replicate custom-made telepresence robot, a high-performance virtual reality 3DGS renderer, and an immersive robot control package.
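To make the projection-and-merge step concrete, the following is a minimal sketch of one plausible fusion scheme: the live depth image is unprojected into the world frame using the robot's localization pose, and the result is composited per pixel with the pre-captured 3DGS render, preferring live sensor samples where they are valid and in front of the splat geometry. This assumes both images are resolved to the same virtual camera; the function names and the `eps` tolerance are illustrative and not taken from the paper's released codebase.

```python
import numpy as np

def unproject_depth(depth, K, T_world_cam):
    """Lift a metric depth image (H, W) into a world-frame point cloud.
    K is the 3x3 camera intrinsics matrix; T_world_cam is the 4x4
    camera-to-world pose supplied by the robot's localization."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel coordinates
    z = depth
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1)  # (H, W, 4)
    pts_world = pts_cam @ T_world_cam.T
    return pts_world[..., :3]

def fuse_views(splat_rgb, splat_depth, sensor_rgb, sensor_depth, eps=0.02):
    """Per-pixel composite of a 3DGS render and a live depth-sensor view
    rendered from the same virtual camera: keep the live sample where it
    is valid (depth > 0) and not occluded by the pre-captured geometry."""
    live = (sensor_depth > 0) & (sensor_depth < splat_depth + eps)
    return np.where(live[..., None], sensor_rgb, splat_rgb)
```

In this scheme the wide-FoV 3DGS render fills in everywhere the depth sensor has no coverage, while fresh sensor data overrides the static reconstruction wherever the scene has changed, which matches the division of labor the abstract describes.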