Pixel2Mesh-Pytorch

This repository implements the ECCV 2018 paper Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images in PyTorch. The official TensorFlow code is available online. Building on the proposed architecture, we replace the VGG image encoder with a U-Net-based autoencoder that also reconstructs the input image, which helps the network converge faster.
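
The image branch described above is sketched below. This is only a minimal illustration of the U-Net idea (an encoder-decoder with skip connections whose decoder reconstructs the image, while its intermediate feature maps can be pooled as per-vertex perceptual features); the layer widths and names are assumptions, not the exact network used in this repository.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # two 3x3 convolutions with ReLU, the usual U-Net building block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Illustrative U-Net autoencoder: the decoder reconstructs the input image
    (auxiliary loss), and the feature maps are returned for perceptual pooling."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec1 = conv_block(64 + 32, 32)
        self.head = nn.Conv2d(32, 3, 1)  # image reconstruction head

    def forward(self, x):
        f1 = self.enc1(x)               # full-resolution features
        f2 = self.enc2(self.pool(f1))   # half-resolution features
        d1 = self.dec1(torch.cat([self.up(f2), f1], dim=1))
        recon = self.head(d1)           # reconstructed image
        return recon, [f1, f2]          # feature maps for the mesh branch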

Requirements

  • PyTorch 1.0 (for sparse tensor support)
  • Python >= 3
  • CUDA >= 9.2 (for the Chamfer distance extension)
  • Visdom (for training visualization)

External Code

  • pygcn: base code for the GraphConvolution layer (a minimal sketch is given below)
  • atlasnet: Chamfer distance implementation
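
The graph convolution borrowed from pygcn essentially performs a per-vertex linear transform followed by aggregation over mesh neighbors through a sparse normalized adjacency matrix. A minimal sketch of that idea (not necessarily identical to the layer in this repository):

import math
import torch
import torch.nn as nn

class GraphConvolution(nn.Module):
    """pygcn-style layer: out = adj @ (x @ weight) + bias,
    where adj is a sparse normalized adjacency matrix of the mesh graph."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_features, out_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        stdv = 1.0 / math.sqrt(out_features)
        nn.init.uniform_(self.weight, -stdv, stdv)

    def forward(self, x, adj):
        support = torch.mm(x, self.weight)  # per-vertex linear transform
        out = torch.spmm(adj, support)      # aggregate over neighboring vertices
        return out + self.bias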

Getting Started

Installation

cd ./model/chamfer/
python setup.py install
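
The extension built above provides a fast CUDA Chamfer distance. For reference only, a naive pure-PyTorch version of the same metric (useful as a sanity check, but much slower and more memory-hungry than the compiled kernel) looks like this:

import torch

def chamfer_distance(p1, p2):
    """Symmetric Chamfer distance between point clouds p1 (B, N, 3) and p2 (B, M, 3).
    Naive O(N*M) reference implementation, not the repository's CUDA kernel."""
    diff = p1.unsqueeze(2) - p2.unsqueeze(1)  # (B, N, M, 3) pairwise differences
    dist = (diff ** 2).sum(dim=3)             # squared pairwise distances
    return dist.min(dim=2)[0].mean(dim=1) + dist.min(dim=1)[0].mean(dim=1)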

Dataset

We use the same dataset as the one used in Pixel2Mesh. The point clouds are from ShapeNet and the rendered views are from 3D-R2N2.

The whole dataset can be downloaded here.

Please respect the ShapeNet license when using the data.
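
The exact on-disk format depends on the download above, so the loader below is only a hypothetical sketch of how a rendered view can be paired with its ground-truth point cloud; the file layout, dictionary keys, and tensor shapes are assumptions, and the repository's own data loader is the reference.

import os
import pickle

import torch
from torch.utils.data import Dataset

class ShapeNetViews(Dataset):
    """Hypothetical loader pairing one rendered RGB view (3D-R2N2) with the
    ShapeNet point cloud of the same object. Paths and keys are assumptions."""
    def __init__(self, root, file_list):
        self.root = root
        with open(file_list) as f:
            self.items = [line.strip() for line in f if line.strip()]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        # assumed layout: one pickled dict per sample with "image" and "points"
        with open(os.path.join(self.root, self.items[idx]), "rb") as f:
            sample = pickle.load(f)
        image = torch.from_numpy(sample["image"]).float().permute(2, 0, 1)  # HWC -> CHW
        points = torch.from_numpy(sample["points"]).float()                 # (N, 3)
        return image, points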

Train

python train.py

The hyper-parameters can be changed from the command line. For the full list of options, run

python train.py -h
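
Training curves are visualized with Visdom. If you want the live plots, start a Visdom server in a separate terminal before launching training (this is Visdom's standard entry point, serving on port 8097 by default):

python -m visdom.server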

Validation

To evaluate the model on a single example, use the following command:

python evaluate.py --dataPath *** --modelPath ***

Some Examples

Due to hardware limitations, we trained our model on the airplane class only, instead of the whole dataset. A trained model is provided here.

Some test examples are shown below:

About

Final project for RecVis 18
