Update README.md
Add "EvGGS: A Collaborative Learning Framework for Event-based Generalizable Gaussian Splatting"
Add "GS-Hider: Hiding Messages into 3D Gaussian Splatting"
Add "HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting"
Add "DisC-GS: Discontinuity-aware Gaussian Splatting"
Add "GSDeformer: Direct Cage-based Deformation for 3D Gaussian Splatting"
Add "Feature Splatting for Better Novel View Synthesis with Low Overlap"
gqk committed May 27, 2024
1 parent 9018db7 commit 7b667b4
Showing 8 changed files with 80 additions and 0 deletions.
14 changes: 14 additions & 0 deletions Changelog.md
@@ -1,5 +1,19 @@
# Changelog

### 2024/05/27

Add "EvGGS: A Collaborative Learning Framework for Event-based Generalizable Gaussian Splatting"

Add "GS-Hider: Hiding Messages into 3D Gaussian Splatting"

Add "HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting"

Add "DisC-GS: Discontinuity-aware Gaussian Splatting"

Add "GSDeformer: Direct Cage-based Deformation for 3D Gaussian Splatting"

Add "Feature Splatting for Better Novel View Synthesis with Low Overlap"

### 2024/05/24

Add "Gaussian Time Machine: A Real-Time Rendering Methodology for Time-Variant Appearances"
36 changes: 36 additions & 0 deletions README.md
@@ -1580,3 +1580,39 @@ Institute of Automation, Chinese Academy of Sciences ⟐ University of Chinese A
- **🏫 单位**:The Chinese University of Hong Kong ⟐ Hong Kong University of Science and Technology ⟐ Huawei Noah’s Ark Lab
- **🔗 链接**[[中英摘要](./abs/2405.14475.md)] [[arXiv:2405.14475](https://arxiv.org/abs/2405.14475)] [Code]
- **📝 说明**

#### [253] EvGGS: A Collaborative Learning Framework for Event-based Generalizable Gaussian Splatting
- **🧑‍🔬 作者**:Jiaxu Wang, Junhao He, Ziyi Zhang, Mingyuan Sun, Jingkai Sun, Renjing Xu
- **🏫 单位**:Hong Kong University of Science and Technology, Guangzhou ⟐ Northeastern University, China
- **🔗 链接**[[中英摘要](./abs/2405.14959.md)] [[arXiv:2405.14959](https://arxiv.org/abs/2405.14959)] [Code]
- **📝 说明**:🏆 Accepted to ICML 2024

#### [254] GS-Hider: Hiding Messages into 3D Gaussian Splatting
- **🧑‍🔬 作者**:Xuanyu Zhang, Jiarui Meng, Runyi Li, Zhipei Xu, Yongbing Zhang, Jian Zhang
- **🏫 单位**:Peking University ⟐ Harbin Institute of Technology (Shenzhen)
- **🔗 链接**[[中英摘要](./abs/2405.15118.md)] [[arXiv:2405.15118](https://arxiv.org/abs/2405.15118)] [Code]
- **📝 说明**

#### [255] HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting
- **🧑‍🔬 作者**:Yuanhao Cai, Zihao Xiao, Yixun Liang, Yulun Zhang, Xiaokang Yang, Yaoyao Liu, Alan Yuille
- **🏫 单位**:Johns Hopkins University ⟐ HKUST (GZ) ⟐ Tsinghua University ⟐ Shanghai Jiao Tong University
- **🔗 链接**[[中英摘要](./abs/2405.15125.md)] [[arXiv:2405.15125](https://arxiv.org/abs/2405.15125)] [[Code](https://github.com/caiyuanhao1998/HDR-GS)]
- **📝 说明**

#### [256] DisC-GS: Discontinuity-aware Gaussian Splatting
- **🧑‍🔬 作者**:Haoxuan Qu, Zhuoling Li, Hossein Rahmani, Yujun Cai, Jun Liu
- **🏫 单位**:Singapore University of Technology and Design ⟐ Central South University ⟐ Lancaster University ⟐ Nanyang Technological University
- **🔗 链接**[[中英摘要](./abs/2405.15196.md)] [[arXiv:2405.15196](https://arxiv.org/abs/2405.15196)] [Code]
- **📝 说明**

#### [257] GSDeformer: Direct Cage-based Deformation for 3D Gaussian Splatting
- **🧑‍🔬 作者**:Jiajun Huang, Hongchuan Yu
- **🏫 单位**:Bournemouth University
- **🔗 链接**[[中英摘要](./abs/2405.15491.md)] [[arXiv:2405.15491](https://arxiv.org/abs/2405.15491)] [Code]
- **📝 说明**

#### [258] Feature Splatting for Better Novel View Synthesis with Low Overlap
- **🧑‍🔬 作者**:T. Berriel Martins, Javier Civera
- **🏫 单位**:University of Zaragoza
- **🔗 链接**[[中英摘要](./abs/2405.15518.md)] [[arXiv:2405.15518](https://arxiv.org/abs/2405.15518)] [Code]
- **📝 说明**
5 changes: 5 additions & 0 deletions abs/2405.14959.md
@@ -0,0 +1,5 @@
### EvGGS: A Collaborative Learning Framework for Event-based Generalizable Gaussian Splatting

Event cameras offer promising advantages such as high dynamic range and low latency, making them well-suited for challenging lighting conditions and fast-moving scenarios. However, reconstructing 3D scenes from raw event streams is difficult because event data is sparse and does not carry absolute color information. To unlock its potential in 3D reconstruction, we propose the first event-based generalizable 3D reconstruction framework, called EvGGS, which reconstructs scenes as 3D Gaussians from only event input in a feedforward manner and generalizes to unseen cases without any retraining. The framework consists of a depth estimation module, an intensity reconstruction module, and a Gaussian regression module. These submodules are connected in a cascade, and we train them collaboratively with a designed joint loss so that they promote one another. To facilitate related studies, we build a novel event-based 3D dataset with objects of various materials and calibrated labels of grayscale images, depth maps, camera poses, and silhouettes. Experiments show that jointly trained models significantly outperform those trained individually. Our approach outperforms all baselines in reconstruction quality and depth/intensity prediction, with satisfactory rendering speed.

事件相机以其高动态范围和低延迟的优势备受推崇,非常适合光照条件复杂和快速移动的场景。然而,从原始事件流重建三维场景非常困难,因为事件数据稀疏且不包含绝对颜色信息。为了释放其在三维重建中的潜力,我们提出了第一个基于事件的可泛化三维重建框架,称为 EvGGS,它可以仅从事件输入中以前馈方式重建场景为三维高斯模型,并能在未见过的案例中泛化,无需重新训练。该框架包括深度估计模块、强度重建模块和高斯回归模块。这些子模块以级联方式连接,并通过设计的联合损失函数进行协同训练,以促进它们的相互提升。为了促进相关研究,我们构建了一个新颖的基于事件的三维数据集,包括各种材料的对象和经过校准的灰度图像、深度图、相机姿态和剪影标签。实验显示,联合训练的模型显著优于单独训练的模型。我们的方法在重建质量、深度/强度预测方面均优于所有基线,并且渲染速度令人满意。
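
The abstract describes three submodules (depth estimation, intensity reconstruction, Gaussian regression) connected in a cascade and trained with a joint loss. As a minimal sketch of how such a joint objective could be wired, the Python snippet below sums one supervised term per submodule; the module names, loss terms, and weights are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class JointCascadeLoss(nn.Module):
    """Hypothetical joint objective over the three cascaded submodules.
    The individual terms and weights are assumptions for illustration."""
    def __init__(self, w_depth=1.0, w_intensity=1.0, w_render=1.0):
        super().__init__()
        self.w_depth, self.w_intensity, self.w_render = w_depth, w_intensity, w_render
        self.l1 = nn.L1Loss()

    def forward(self, pred_depth, gt_depth, pred_intensity, gt_intensity,
                rendered, gt_view):
        # Optimizing the sum lets gradients from the Gaussian-regression /
        # rendering stage flow back into the depth and intensity modules,
        # which is what collaborative training of the cascade amounts to.
        return (self.w_depth * self.l1(pred_depth, gt_depth)
                + self.w_intensity * self.l1(pred_intensity, gt_intensity)
                + self.w_render * self.l1(rendered, gt_view))
```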
5 changes: 5 additions & 0 deletions abs/2405.15118.md
@@ -0,0 +1,5 @@
### GS-Hider: Hiding Messages into 3D Gaussian Splatting

3D Gaussian Splatting (3DGS) has become an emerging research focus in 3D scene reconstruction and novel view synthesis. Given that training a 3DGS model requires significant time and computational cost, it is crucial to protect the copyright, integrity, and privacy of such 3D assets. Steganography, a crucial technique for encrypted transmission and copyright protection, has been studied extensively, but it has yet to be explored in depth for 3DGS. Unlike its predecessor NeRF, 3DGS has two distinct features: 1) an explicit 3D representation; and 2) real-time rendering speed. These characteristics make 3DGS point cloud files public and transparent, with each Gaussian point carrying a clear physical meaning. Ensuring the security and fidelity of the original 3D scene while embedding information into the 3DGS point cloud files is therefore an extremely challenging task. To address this issue, we propose GS-Hider, the first steganography framework for 3DGS, which can embed 3D scenes and images into original GS point clouds in an invisible manner and accurately extract the hidden messages. Specifically, we design a coupled secured feature attribute to replace the original 3DGS spherical harmonics coefficients, and then use a scene decoder and a message decoder to disentangle the original RGB scene and the hidden message. Extensive experiments demonstrate that GS-Hider can effectively conceal multimodal messages without compromising rendering quality, and that it offers exceptional security, robustness, capacity, and flexibility.

三维高斯喷溅技术(3DGS)已成为三维场景重建和新视角合成领域的新兴研究焦点。鉴于训练3DGS需要大量时间和计算成本,保护这类三维资产的版权、完整性和隐私至关重要。隐写术作为加密传输和版权保护的关键技术已经被广泛研究。然而,针对3DGS的深入探索仍然不足。与其前身NeRF不同,3DGS具有两个独特特征:1) 明确的三维表示;和2) 实时渲染速度。这些特性导致3DGS点云文件公开且透明,每个高斯点都具有明确的物理意义。因此,在将信息嵌入3DGS点云文件的同时确保原始三维场景的安全性和保真度是一项极其具有挑战性的任务。为了解决上述问题,我们首次提出一个针对3DGS的隐写框架,称为GS-Hider,它可以以隐形方式将三维场景和图像嵌入原始GS点云中,并准确提取隐藏信息。具体来说,我们设计了一个耦合的安全特征属性来替换原始3DGS的球谐系数,然后使用场景解码器和信息解码器来分离原始RGB场景和隐藏信息。广泛的实验表明,所提出的GS-Hider能够有效地隐藏多模态信息,同时不损害渲染质量,并且具有卓越的安全性、鲁棒性、容量和灵活性。
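
The core design in the abstract is a coupled feature attribute that replaces each Gaussian's spherical-harmonics coefficients, with two decoders that disentangle the rendered feature map into the original RGB scene and the hidden message. The sketch below illustrates only that two-headed decoding step; the feature dimension, layer shapes, and names are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CoupledFeatureDecoders(nn.Module):
    """Illustrative two-headed decoder: one head recovers the original RGB
    scene, the other extracts the hidden message, both from the same
    rendered feature map. Dimensions are placeholder assumptions."""
    def __init__(self, feat_dim=16, hidden=64):
        super().__init__()
        self.scene_decoder = nn.Sequential(
            nn.Conv2d(feat_dim, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 3, 1))     # original RGB scene
        self.message_decoder = nn.Sequential(
            nn.Conv2d(feat_dim, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 3, 1))     # hidden RGB message

    def forward(self, rendered_features):  # (B, feat_dim, H, W)
        return (self.scene_decoder(rendered_features),
                self.message_decoder(rendered_features))
```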
5 changes: 5 additions & 0 deletions abs/2405.15125.md
@@ -0,0 +1,5 @@
### HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting

High dynamic range (HDR) novel view synthesis (NVS) aims to create photorealistic images from novel viewpoints using HDR imaging techniques. The rendered HDR images capture a wider range of brightness levels and contain more scene detail than normal low dynamic range (LDR) images. Existing HDR NVS methods are mainly based on NeRF and suffer from long training times and slow inference. In this paper, we propose a new framework, High Dynamic Range Gaussian Splatting (HDR-GS), which can efficiently render novel HDR views and reconstruct LDR images for a user-specified exposure time. Specifically, we design a Dual Dynamic Range (DDR) Gaussian point cloud model that uses spherical harmonics to fit HDR color and employs an MLP-based tone-mapper to render LDR color. The HDR and LDR colors are then fed into two Parallel Differentiable Rasterization (PDR) processes to reconstruct the HDR and LDR views. To establish a data foundation for research on 3D Gaussian splatting-based methods in HDR NVS, we recalibrate the camera parameters and compute the initial positions for the Gaussian point clouds. Experiments demonstrate that HDR-GS surpasses the state-of-the-art NeRF-based method by 3.84 and 1.91 dB on LDR and HDR NVS, respectively, while enjoying 1000x faster inference and requiring only 6.3% of the training time.

高动态范围(HDR)新视角合成(NVS)旨在使用HDR成像技术从新的视角创建逼真的图像。渲染的HDR图像捕捉更广泛的亮度级别,包含比普通低动态范围(LDR)图像更多的场景细节。现有的HDR NVS方法主要基于NeRF,它们的缺点是训练时间长和推理速度慢。在本文中,我们提出了一个新框架,高动态范围高斯喷溅(HDR-GS),它可以高效地渲染新的HDR视角并根据用户输入的曝光时间重建LDR图像。具体来说,我们设计了一个双动态范围(DDR)高斯点云模型,使用球谐函数拟合HDR颜色,并采用基于MLP的色调映射器来渲染LDR颜色。然后将HDR和LDR颜色输入到两个并行可微光栅化(PDR)过程中,以重建HDR和LDR视图。为了为HDR NVS中基于三维高斯喷溅方法的研究建立数据基础,我们重新校准了相机参数并计算了高斯点云的初始位置。实验表明,我们的HDR-GS在LDR和HDR NVS上分别超过了最新的基于NeRF的方法3.84 dB和1.91 dB,同时享有1000倍的推理速度,仅需6.3%的训练时间。
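
The Dual Dynamic Range model fits HDR color with spherical harmonics and converts it to LDR color with an MLP tone-mapper conditioned on the user's exposure time. Below is a minimal sketch of such a tone-mapper; conditioning on the log exposure and the layer sizes are assumptions made for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MLPToneMapper(nn.Module):
    """Hypothetical tone-mapper: maps an HDR color plus a user-supplied
    exposure time to an LDR color in [0, 1]. Layer sizes are assumptions."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))

    def forward(self, hdr_rgb, exposure):
        # exposure: (N, 1) in seconds; conditioning on log-exposure means
        # doubling the exposure shifts the input by a constant offset.
        x = torch.cat([hdr_rgb, torch.log(exposure)], dim=-1)
        return torch.sigmoid(self.net(x))   # LDR color in [0, 1]

# usage sketch: ldr = MLPToneMapper()(torch.rand(8, 3), torch.full((8, 1), 1 / 60))
```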
5 changes: 5 additions & 0 deletions abs/2405.15196.md
@@ -0,0 +1,5 @@
### DisC-GS: Discontinuity-aware Gaussian Splatting

Recently, Gaussian Splatting, a method that represents a 3D scene as a collection of Gaussian distributions, has gained significant attention in addressing the task of novel view synthesis. In this paper, we highlight a fundamental limitation of Gaussian Splatting: its inability to accurately render discontinuities and boundaries in images due to the continuous nature of Gaussian distributions. To address this issue, we propose a novel framework enabling Gaussian Splatting to perform discontinuity-aware image rendering. Additionally, we introduce a Bézier-boundary gradient approximation strategy within our framework to keep the "differentiability" of the proposed discontinuity-aware rendering process. Extensive experiments demonstrate the efficacy of our framework.

最近,高斯喷溅法,一种将三维场景表示为高斯分布集合的方法,在解决新视角合成任务中获得了显著关注。在本文中,我们指出了高斯喷溅的一个基本局限性:由于高斯分布的连续性质,其无法准确渲染图像中的不连续性和边界。为了解决这个问题,我们提出了一个新框架,使高斯喷溅能够进行感知不连续性的图像渲染。此外,我们在框架中引入了一个贝塞尔边界梯度近似策略,以保持所提出的感知不连续性渲染过程的“可微分性”。广泛的实验表明我们框架的有效性。
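
The abstract's key ingredients are boundary curves and a Bézier-boundary gradient approximation that keeps the discontinuity-aware rendering differentiable. The sketch below only illustrates those ingredients in the simplest way: it evaluates a cubic Bézier boundary and replaces a hard inside/outside cut with a smooth sigmoid falloff so gradients still flow. This is not the paper's actual approximation scheme, and the signed distance to the boundary is assumed to be computed elsewhere.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier boundary curve at parameters t in [0, 1]."""
    t = t[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def soft_boundary_mask(signed_dist, sharpness=50.0):
    """Smooth stand-in for a hard discontinuity: ~1 inside, ~0 outside,
    yet differentiable everywhere, so gradient-based training still works."""
    return 1.0 / (1.0 + np.exp(-sharpness * signed_dist))

# usage sketch: sample a boundary curve, then attenuate a Gaussian's
# screen-space contribution by the soft mask of each pixel's signed distance.
boundary = cubic_bezier(np.array([0.0, 0.0]), np.array([0.3, 1.0]),
                        np.array([0.7, 1.0]), np.array([1.0, 0.0]),
                        np.linspace(0.0, 1.0, 64))
alpha = 0.8 * soft_boundary_mask(np.array([0.05, -0.02, 0.2]))
```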
5 changes: 5 additions & 0 deletions abs/2405.15491.md
@@ -0,0 +1,5 @@
### GSDeformer: Direct Cage-based Deformation for 3D Gaussian Splatting

We present GSDeformer, a method that achieves free-form deformation of 3D Gaussian Splatting (3DGS) without requiring any architectural changes. Our method extends cage-based deformation, a traditional mesh deformation technique, to 3DGS. This is done by converting the 3DGS into a novel proxy point cloud representation whose deformation can be used to infer the transformations to apply to the 3D Gaussians that make up the 3DGS. We also propose an automatic cage construction algorithm for 3DGS to minimize manual work. Because our method does not modify the underlying architecture of 3DGS, any existing trained vanilla 3DGS can be edited with it directly. We compare the deformation capability of our method against existing methods, demonstrating its ease of use and comparable quality while being more direct and thus easier to integrate with other concurrent developments on 3DGS.

我们介绍了GSDeformer,这是一种在三维高斯喷溅(3DGS)上实现自由形变的方法,无需对架构进行任何改变。我们的方法是将基于笼子的形变——一种传统的网格形变方法——扩展到3DGS上。这是通过将3DGS转换成一种新的代理点云表示来实现的,其中的形变可以用来推断应用于构成3DGS的三维高斯的变换。我们还提出了一种自动笼子构建算法,用于3DGS,以尽量减少手工操作。我们的方法不修改3DGS的底层架构。因此,任何现有的训练过的普通3DGS都可以通过我们的方法轻松编辑。我们将我们方法的形变能力与其他现有方法进行了比较,证明了我们方法的易用性和可比性质量,尽管它更直接,因此更容易与3DGS的其他同时发展集成。
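
Cage-based deformation moves a sparse set of cage vertices and interpolates that motion onto the points enclosed by the cage. The sketch below shows the simplest possible instance, an axis-aligned box cage with trilinear weights applied to Gaussian centers (the proxy points); the authors' automatic cage construction and coordinate scheme are more general, so treat this purely as an illustration of the idea.

```python
import numpy as np

def trilinear_cage_deform(points, cage_min, cage_max, deformed_corners):
    """Deform points inside an axis-aligned box cage (simplest cage case).

    points:            (N, 3) Gaussian centers / proxy points inside the cage
    cage_min/cage_max: (3,) corners of the undeformed box
    deformed_corners:  (8, 3) edited corner positions, ordered by the binary
                       index (x-bit, y-bit, z-bit) of each corner.
    """
    # Normalized coordinates inside the undeformed cage, in [0, 1]^3.
    u = (points - cage_min) / (cage_max - cage_min)
    out = np.zeros_like(points)
    for idx in range(8):
        bx, by, bz = (idx >> 2) & 1, (idx >> 1) & 1, idx & 1
        # Trilinear weight of corner `idx` for every point.
        w = ((u[:, 0] if bx else 1 - u[:, 0])
             * (u[:, 1] if by else 1 - u[:, 1])
             * (u[:, 2] if bz else 1 - u[:, 2]))
        out += w[:, None] * deformed_corners[idx]
    return out  # deformed centers; the same weights could drive covariances
```

If the deformed corners equal the original box corners, the points are returned unchanged, which is a quick sanity check for the weighting.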
5 changes: 5 additions & 0 deletions abs/2405.15518.md
@@ -0,0 +1,5 @@
### Feature Splatting for Better Novel View Synthesis with Low Overlap

3D Gaussian Splatting has emerged as a very promising scene representation, achieving state-of-the-art quality in novel view synthesis significantly faster than competing alternatives. However, its use of spherical harmonics to represent scene colors limits the expressivity of the 3D Gaussians and, as a consequence, the ability of the representation to generalize as we move away from the training views. In this paper, we propose to encode the color information of the 3D Gaussians into per-Gaussian feature vectors, which we denote Feature Splatting (FeatSplat). To synthesize a novel view, Gaussians are first "splatted" onto the image plane, the corresponding feature vectors are then alpha-blended, and finally the blended vector is decoded by a small MLP to render the RGB pixel values. To further inform the model, we concatenate a camera embedding to the blended feature vector, conditioning the decoding on the viewpoint information as well. Our experiments show that this novel model for encoding radiance considerably improves novel view synthesis for low-overlap views that are distant from the training views. Finally, we also show the capacity and convenience of our feature vector representation, demonstrating that it can generate not only RGB values for novel views but also their per-pixel semantic labels.

三维高斯喷溅作为一种非常有前景的场景表示方法,已在新视角合成方面实现了前所未有的质量,其速度远快于竞争性替代方法。然而,它使用球谐函数来表示场景颜色限制了三维高斯的表达能力,因此,当我们远离训练视图时,该表示的泛化能力也受到限制。在本文中,我们提出将三维高斯的颜色信息编码到每个高斯的特征向量中,我们将这种方法称为特征喷溅(FeatSplat)。为了合成一个新的视角,首先将高斯“喷溅”到图像平面上,然后将相应的特征向量进行alpha混合,最后通过一个小型的多层感知器(MLP)解码混合向量,渲染出RGB像素值。为了进一步提供模型信息,我们将相机嵌入向量与混合特征向量连接起来,以便也根据视点信息进行解码条件设置。我们的实验表明,这种新的辐射编码模型显著提高了对训练视图远离且重叠度低的新视图的合成质量。最后,我们还展示了我们特征向量表示的容量和便利性,证明了它不仅能为新视图生成RGB值,还能生成每个像素的语义标签。
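
Per the abstract, FeatSplat alpha-blends per-Gaussian feature vectors, concatenates a camera embedding, and decodes the result with a small MLP into RGB. A minimal sketch of that final decoding step is shown below; the feature and embedding sizes are assumptions, and splatting plus alpha-blending are assumed to have produced the per-pixel feature vectors upstream.

```python
import torch
import torch.nn as nn

class FeatSplatDecoder(nn.Module):
    """Hypothetical per-pixel decoder: blended per-Gaussian feature vector
    plus a camera embedding -> RGB. Dimensions are placeholder assumptions."""
    def __init__(self, feat_dim=24, cam_dim=8, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + cam_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))

    def forward(self, blended_features, cam_embedding):
        # blended_features: (H*W, feat_dim) from alpha-blending splatted Gaussians
        # cam_embedding:    (cam_dim,) learned viewpoint/camera code
        cam = cam_embedding.expand(blended_features.shape[0], -1)
        return torch.sigmoid(self.mlp(torch.cat([blended_features, cam], dim=-1)))
```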
