ReTR

Code for the paper 'ReTR: Modeling Rendering Via Transformer for Generalizable Neural Surface Reconstruction' (NeurIPS 2023)

Abstract: Generalizable neural surface reconstruction techniques have attracted great attention in recent years. However, they suffer from low-confidence depth distributions and inaccurate surface reasoning due to the oversimplified volume rendering process they employ. In this paper, we present Reconstruction TRansformer (ReTR), a novel framework that leverages the transformer architecture to redesign the rendering process, enabling complex render interaction modeling. It introduces a learnable meta-ray token and utilizes the cross-attention mechanism to simulate the interactions of the rendering process with the sampled points and to render the observed color. Meanwhile, by operating within a high-dimensional feature space rather than the color space, ReTR mitigates sensitivity to projected colors in source views. These improvements result in accurate surface assessment with high confidence. We demonstrate the effectiveness of our approach on various datasets, showing that our method outperforms the current state-of-the-art approaches in reconstruction quality and generalization ability.
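To make the core idea concrete, below is a minimal PyTorch sketch of a learnable meta-ray token that cross-attends to the features of points sampled along a ray and decodes a color. This is not the authors' implementation; all names, dimensions, and the color head are illustrative.

import torch
import torch.nn as nn

class MetaRayRenderer(nn.Module):
    """Illustrative sketch only: one learnable meta-ray token cross-attends
    to per-point features sampled along each ray, and the attended feature
    is decoded into a color. Dimensions and names are hypothetical."""
    def __init__(self, feat_dim=64, n_heads=4):
        super().__init__()
        self.meta_ray_token = nn.Parameter(torch.randn(1, 1, feat_dim))
        self.cross_attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)
        self.to_color = nn.Linear(feat_dim, 3)

    def forward(self, point_feats):
        # point_feats: (n_rays, n_samples, feat_dim) features of sampled points
        query = self.meta_ray_token.expand(point_feats.shape[0], -1, -1)
        ray_feat, attn = self.cross_attn(query, point_feats, point_feats)
        # attn: (n_rays, 1, n_samples); the attention weights play the role
        # that density-derived blending weights play in volume rendering
        return self.to_color(ray_feat.squeeze(1)), attn.squeeze(1)

renderer = MetaRayRenderer()
colors, weights = renderer(torch.randn(1024, 64, 64))  # 1024 rays, 64 samples each

Because the attention operates on high-dimensional point features rather than projected colors, this mirrors the paper's argument for reduced sensitivity to source-view colors.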

Overview

[Figure: ReTR pipeline overview]

Note

Following the comments from reviewers and ACs, we will change our title; the name in the camera-ready version will be 'ReTR: Modeling Rendering via Transformer for Generalizable Neural Surface Reconstruction'.

Installation

Requirements

  • python 3.9
  • CUDA 11.1
conda create --name retr python=3.9 pip
conda activate retr

pip install -r requirements.txt
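Assuming requirements.txt installs a CUDA-enabled PyTorch build (an assumption, though typical for this codebase), you can sanity-check the environment afterwards:

import torch  # assumed to be installed via requirements.txt

print(torch.__version__, torch.version.cuda)  # expect a CUDA 11.1 build
print(torch.cuda.is_available())              # should print True on a GPU machine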

Sparse View Reconstruction on DTU

Training

  • Download the pre-processed DTU training set and organize it as follows:

root_directory
├──Cameras
├──Rectified
└──Depths_raw
  • In train_dtu.sh, set DATASET to the root directory of the dataset and set LOG_DIR to the directory where checkpoints will be stored.

  • Train the model by running bash train_dtu.sh on a GPU.

Evaluation

Download Evaluation Data

  • Download the pre-processed DTU evaluation dataset, organized as follows (a sketch for parsing the camera files appears after the evaluation commands below):
root_directory
├── cameras
│   ├── 00000000_cam.txt
│   ├── 00000001_cam.txt
│   └── ...
├── pair.txt
├── scan24
├── scan37
│   ├── image
│   │   ├── 000000.png
│   │   ├── 000001.png
│   │   └── ...
│   └── mask
│       ├── 000.png
│       ├── 001.png
│       └── ...
  • Download the DTU ground-truth point clouds (SampleSet), organized as follows:
SampleSet
└── MVS Data
    └── Points
  • Following SparseNeuS and VolRecon, you can evaluate the results of your own training or directly use our checkpoints. After setting the paths as described in the comments in the scripts, run:
bash eval_dtu.sh               ## Render depth maps and images
bash tsdf_fusion.sh            ## Get the reconstructed meshes
bash clean_mesh.sh             ## Clean the raw mesh with object masks
bash eval_dtu_result.sh        ## Get the quantitative results
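
For intuition, tsdf_fusion.sh fuses the depth maps rendered by eval_dtu.sh into a mesh via TSDF integration. Below is a minimal sketch of that step using Open3D; the voxel size, truncation distance, and input format are assumptions for illustration, not the repo's actual settings.

import numpy as np
import open3d as o3d

def fuse_depth_maps(colors, depths, intrinsics, extrinsics, width, height):
    """Illustrative TSDF fusion. colors: list of (H, W, 3) uint8 arrays;
    depths: list of (H, W) float32 arrays; intrinsics: list of 3x3 K
    matrices; extrinsics: list of 4x4 world-to-camera matrices.
    Voxel size and truncation below are example values."""
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=0.005, sdf_trunc=0.02,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)
    for color, depth, K, T in zip(colors, depths, intrinsics, extrinsics):
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            o3d.geometry.Image(color), o3d.geometry.Image(depth),
            depth_scale=1.0, depth_trunc=10.0, convert_rgb_to_intensity=False)
        cam = o3d.camera.PinholeCameraIntrinsic(
            width, height, K[0, 0], K[1, 1], K[0, 2], K[1, 2])
        volume.integrate(rgbd, cam, T)
    return volume.extract_triangle_mesh()

The resulting mesh corresponds to the raw mesh that clean_mesh.sh then post-processes with the object masks.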
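The per-view files in cameras/ above (e.g. 00000000_cam.txt) typically follow the MVSNet-style text layout: an 'extrinsic' label with a 4x4 matrix, an 'intrinsic' label with a 3x3 matrix, then a line with depth range values. This layout is an assumption to verify against the downloaded files; a minimal reader sketch:

import numpy as np

def load_cam_txt(path):
    """Hypothetical reader assuming the MVSNet-style cam.txt layout:
    'extrinsic' + 4x4 matrix, 'intrinsic' + 3x3 matrix, then a line
    starting with depth_min and depth_interval."""
    with open(path) as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    extrinsic = np.array([lines[i].split() for i in range(1, 5)], dtype=np.float64)
    intrinsic = np.array([lines[i].split() for i in range(6, 9)], dtype=np.float64)
    depth_min, depth_interval = map(float, lines[9].split()[:2])
    return intrinsic, extrinsic, depth_min, depth_interval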

Citation

If you find this project useful for your research, please cite:

@article{liang2023ReTR,
  title={ReTR: Modeling Rendering Via Transformer for Generalizable Neural Surface Reconstruction},
  author={Liang, Yixun and He, Hao and Chen, Ying-cong},
  journal={arXiv preprint arXiv:2305.18832},
  year={2023}
}

Acknowledgement

Our code is based on VolRecon. Thanks for their excellent work!
