3D Gaussian Splatting (3DGS) has shown a powerful capability for novel view synthesis thanks to its detailed expressiveness and highly efficient rendering. Unfortunately, creating relightable 3D assets with 3DGS remains problematic, particularly for reflective objects, as its discontinuous representation makes it difficult to constrain geometry. As shown in previous works, the signed distance field (SDF) can serve as an effective form of geometry regularization. However, directly coupling Gaussians with an SDF significantly slows training. To this end, we propose GS-ROR, which relights reflective objects with 3DGS aided by SDF priors. At the core of our method is the mutual supervision of depth and normal between the deferred Gaussians and the SDF, which avoids the expensive volume rendering of the SDF. Thanks to this mutual supervision, the learned deferred Gaussians are well constrained at minimal time cost. Because the Gaussians are rendered in a deferred shading mode, the alpha-blended results are smooth, yet individual Gaussians may still be outliers and produce floater artifacts. We therefore introduce an SDF-aware pruning strategy that removes Gaussian outliers lying far from the surface defined by the SDF, avoiding the floater issue. Consequently, our method outperforms existing Gaussian-based inverse rendering methods in terms of relighting quality. It also achieves relighting quality competitive with NeRF-based methods at no more than 25% of their training time, while rendering at 200+ frames per second on an RTX 4090.
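As a rough illustration of the two components named above, the following PyTorch-style sketch shows (i) a mutual depth/normal supervision loss that queries the SDF only at surface points back-projected from the Gaussian depth map, so the SDF is never volume-rendered, and (ii) an SDF-aware pruning step that discards Gaussians whose centers lie far from the SDF zero level set. This is a minimal sketch under assumed inputs, not the paper's implementation: the names `sdf_mlp`, the rendered `depth`/`normal`/ray tensors, the loss weights, and the pruning threshold `k` are all illustrative placeholders.

```python
# Minimal sketch of the two ideas described above; not the authors' implementation.
# Assumed inputs: `sdf_mlp` maps (N, 3) points to (N, 1) signed distances;
# depth, normal, rays_o, rays_d come from a per-pixel deferred Gaussian render.
import torch
import torch.nn.functional as F


def sdf_value_and_normal(sdf_mlp, points):
    """Query the signed distance and its spatial gradient (used as the normal)."""
    if not points.requires_grad:
        points = points.requires_grad_(True)
    sdf = sdf_mlp(points)                                        # (N, 1)
    grad, = torch.autograd.grad(sdf.sum(), points, create_graph=True)
    return sdf, F.normalize(grad, dim=-1)                        # (N, 1), (N, 3)


def mutual_supervision_loss(sdf_mlp, depth, normal, rays_o, rays_d,
                            w_depth=1.0, w_normal=0.1):
    """Mutual depth/normal supervision between deferred Gaussians and the SDF.

    The SDF is queried point-wise only at surface points back-projected from
    the Gaussian depth map, so no SDF volume rendering is needed.
    """
    surface_pts = rays_o + rays_d * depth.unsqueeze(-1)          # (N, 3)
    sdf, sdf_normal = sdf_value_and_normal(sdf_mlp, surface_pts)

    # Depth term: surface points from the Gaussian depth should lie on the SDF
    # zero level set; gradients reach both the Gaussians and the SDF.
    loss_depth = sdf.abs().mean()

    # Normal term: each side is supervised by a detached copy of the other,
    # which is the "mutual" part of the supervision.
    loss_normal = (1.0 - (normal * sdf_normal.detach()).sum(-1)).mean() + \
                  (1.0 - (normal.detach() * sdf_normal).sum(-1)).mean()

    return w_depth * loss_depth + w_normal * loss_normal


@torch.no_grad()
def sdf_aware_prune_mask(sdf_mlp, centers, scales, k=3.0):
    """Mark Gaussians as outliers when their centers are far from the surface.

    A center farther from the SDF zero level set than k times the Gaussian's
    largest scale is treated as a floater and pruned.
    """
    dist = sdf_mlp(centers).squeeze(-1).abs()                    # (M,)
    return dist < k * scales.max(dim=-1).values                  # keep mask (M,)
```

In this sketch the SDF gradient plays the role of the surface normal, and the detached cross-terms let each representation learn from the other without volume-rendering the SDF; the paper's exact loss weights and pruning criterion may differ.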