
Shape of Motion: 4D Reconstruction from a Single Video

Project Page | arXiv

Qianqian Wang1,2*, Vickie Ye1*, Hang Gao1*, Jake Austin1, Zhengqi Li2, Angjoo Kanazawa1

1UC Berkeley   2Google Research

* Equal Contribution

Installation

conda create -n som python=3.10
conda activate som

Update requirements.txt with the correct CUDA version for PyTorch and cuML, e.g., replacing cu122 and cu12 with the tags matching your CUDA toolkit.
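The tag replacement can be scripted. A minimal sketch, assuming the file pins PyTorch wheels with cu122 and cuML wheels with cu12; the cu118/cu11 defaults below are an example for a CUDA 11.8 toolkit, not a recommendation:

```python
from pathlib import Path

def pin_cuda(text: str, torch_tag: str = "cu118", cuml_tag: str = "cu11") -> str:
    """Swap the pinned CUDA wheel tags in a requirements.txt body."""
    # Replace the longer cu122 tag first so the cu12 pass does not clobber it.
    return text.replace("cu122", torch_tag).replace("cu12", cuml_tag)

req = Path("requirements.txt")
# req.write_text(pin_cuda(req.read_text()))  # uncomment to rewrite in place
```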

pip install -r requirements.txt
pip install git+https://github.com/nerfstudio-project/gsplat.git

Usage

Preprocessing

We depend on the third-party libraries in preproc to generate depth maps, object masks, camera estimates, and 2D tracks. Please follow the guide in the preprocessing README.

Fitting to a Video

python run_training.py --work-dir <OUTPUT_DIR> --data:davis --data.seq-name horsejump-low
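Fitting can also be scripted across several sequences. A hypothetical sketch that only assembles the command above per sequence (the sequence names and output layout are assumptions; the actual run line is left commented so the snippet is safe outside the repo):

```python
import subprocess

def davis_cmd(seq: str, work_dir: str) -> list[str]:
    """Build the DAVIS fitting command shown above for one sequence."""
    return ["python", "run_training.py",
            "--work-dir", work_dir,
            "--data:davis", "--data.seq-name", seq]

for seq in ["horsejump-low", "breakdance"]:  # hypothetical sequence names
    print(" ".join(davis_cmd(seq, f"outputs/{seq}")))
    # subprocess.run(davis_cmd(seq, f"outputs/{seq}"), check=True)
```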

Evaluation on iPhone Dataset

First, download our processed iPhone dataset from this link. To train on a sequence, e.g., paper-windmill, run:

python run_training.py --work-dir <OUTPUT_DIR> --port <PORT> --data:iphone --data.data-dir </path/to/paper-windmill/>

After optimization, the numerical results can be evaluated via:

PYTHONPATH='.' python scripts/evaluate_iphone.py --data_dir </path/to/paper-windmill/> --result_dir <OUTPUT_DIR> --seq_names paper-windmill  
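The evaluation call can likewise be scripted per sequence. A hypothetical sketch (the paths are placeholders, and the run line is commented out so the snippet executes anywhere):

```python
import os
import subprocess

def eval_cmd(data_dir: str, result_dir: str, seq: str) -> list[str]:
    """Build the iPhone evaluation command shown above."""
    return ["python", "scripts/evaluate_iphone.py",
            "--data_dir", data_dir,
            "--result_dir", result_dir,
            "--seq_names", seq]

# evaluate_iphone.py imports from the repo root, hence PYTHONPATH='.'
env = dict(os.environ, PYTHONPATH=".")
print(" ".join(eval_cmd("data/paper-windmill", "outputs/paper-windmill",
                        "paper-windmill")))
# subprocess.run(eval_cmd("data/paper-windmill", "outputs/paper-windmill",
#                         "paper-windmill"), env=env, check=True)
```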

Citation

@article{som2024,
  title   = {Shape of Motion: 4D Reconstruction from a Single Video},
  author  = {Wang, Qianqian and Ye, Vickie and Gao, Hang and Austin, Jake and Li, Zhengqi and Kanazawa, Angjoo},
  journal = {arXiv preprint arXiv:2407.13764},
  year    = {2024}
}
