
Mesh-based Gaussian Splatting for Real-time Large-scale Deformation

Neural implicit representations, including Neural Distance Fields and Neural Radiance Fields, have demonstrated significant capabilities for reconstructing surfaces with complicated geometry and topology and for generating novel views of a scene. Nevertheless, it is challenging for users to directly deform or manipulate these implicit representations with large deformations in real time. Gaussian Splatting (GS) has recently become a promising method with explicit geometry for representing static scenes and facilitating high-quality, real-time synthesis of novel views. However, it cannot be easily deformed, due to its use of discrete Gaussians and its lack of explicit topology. To address this, we develop a novel GS-based method that enables interactive deformation. Our key idea is to design an innovative mesh-based GS representation that is integrated into Gaussian learning and manipulation. 3D Gaussians are defined over an explicit mesh, and the two are bound to each other: the rendering of the 3D Gaussians guides mesh face splitting for adaptive refinement, and mesh face splitting directs the splitting of the 3D Gaussians. Moreover, the explicit mesh constraints help regularize the Gaussian distribution, suppressing poor-quality Gaussians (e.g., misaligned or long-and-narrow Gaussians), thus enhancing visual quality and avoiding artifacts during deformation. Based on this representation, we further introduce a large-scale Gaussian deformation technique to enable deformable GS, which alters the parameters of the 3D Gaussians according to the manipulation of the associated mesh. Our method also benefits from existing mesh deformation datasets, enabling more realistic data-driven Gaussian deformation. Extensive experiments show that our approach achieves high-quality reconstruction and effective deformation while maintaining promising rendering results at a high frame rate (65 FPS on average).
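To make the mesh binding concrete, below is a minimal Python/NumPy sketch of one plausible way to anchor each 3D Gaussian to a triangle via barycentric coordinates, so that manipulating the mesh vertices directly drives the Gaussian's mean and orientation. The names (`MeshBoundGaussian`, `face_frame`) and the specific parameterization are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def face_frame(v0, v1, v2):
    """Build an orthonormal frame from a triangle: a tangent along one
    edge, the face normal, and their cross product (columns of a 3x3
    rotation matrix). Assumed orientation convention, not the paper's."""
    t = v1 - v0
    t = t / np.linalg.norm(t)
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)
    b = np.cross(n, t)  # unit-length since n is perpendicular to t
    return np.stack([t, b, n], axis=1)

class MeshBoundGaussian:
    """Hypothetical binding of one 3D Gaussian to one mesh face."""

    def __init__(self, face_idx, bary, scale):
        self.face_idx = face_idx  # index of the anchoring triangle
        self.bary = bary          # barycentric coordinates (sum to 1)
        self.scale = scale        # learned per-axis Gaussian extents

    def world_params(self, vertices, faces):
        """Recompute the Gaussian's mean and rotation from the current
        (possibly deformed) mesh, so mesh edits carry over directly."""
        i0, i1, i2 = faces[self.face_idx]
        v0, v1, v2 = vertices[i0], vertices[i1], vertices[i2]
        mean = self.bary[0] * v0 + self.bary[1] * v1 + self.bary[2] * v2
        rot = face_frame(v0, v1, v2)  # orientation follows the face
        return mean, rot

# Usage: deform a vertex and re-derive the Gaussian's parameters.
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])
g = MeshBoundGaussian(face_idx=0,
                      bary=np.array([1/3, 1/3, 1/3]),
                      scale=np.array([0.1, 0.1, 0.01]))
mean, rot = g.world_params(vertices, faces)
vertices[2] += np.array([0.0, 0.0, 0.5])     # large mesh edit
mean2, rot2 = g.world_params(vertices, faces)  # Gaussian follows the mesh
```

Under this kind of binding, the Gaussians' world-space parameters are derived from the current vertex positions, so any mesh deformation (including large, data-driven ones) propagates to the splats for free, which is the property the deformation technique in the abstract relies on.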
