We offer several ways to interact with LucidDreamer:
- A demo is available on the ironjr/LucidDreamer HuggingFace Space (including custom SD ckpt) and the ironjr/LucidDreamer-mini HuggingFace Space (minimal features; try here in case the former is down). We appreciate the HF / Gradio team for their support.
- Another demo is available on Colab, implemented by @camenduru (we greatly thank @camenduru for the contribution).
- You can use the Gradio demo locally by running `CUDA_VISIBLE_DEVICES=0 python app.py` (full features including HuggingFace model download; requires ~15GB) or `CUDA_VISIBLE_DEVICES=0 python app_mini.py` (minimum viable demo; uses only SD1.5).
- You can also run this with the command-line interface as described below.
- Linux: Ubuntu >= 18.04.
- CUDA >= 11.4 (higher versions are OK).
- Python == 3.9 (3.10 cannot be used due to open3d compatibility).
conda create -n lucid python=3.9
conda activate lucid
pip install peft diffusers scipy numpy imageio[ffmpeg] opencv-python Pillow open3d torchvision gradio
pip install torch==2.0.1 timm==0.6.7 # ZoeDepth
pip install plyfile==0.8.1 # Gaussian splatting
cd submodules/depth-diff-gaussian-rasterization-min
# sudo apt-get install libglm-dev # may be required for the compilation.
python setup.py install
cd ../simple-knn
python setup.py install
cd ../..
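After installation, a quick sanity check (not part of the official instructions; assumes the activated `lucid` environment) can confirm you are on the interpreter the project requires:

```shell
# Report the active interpreter version; it should print "Python 3.9",
# since open3d is the reason the environment is pinned to 3.9.
python -c "import sys; print('Python %d.%d' % sys.version_info[:2])"
```

If this prints anything other than Python 3.9, re-check which environment is active before installing the submodules.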
# Default Example
python run.py
To run with your own inputs and prompts, attach the following arguments after `run.py`:
- `-img`: path of the input image.
- `-t`: text prompt. Can be either a path to a txt file or the text itself.
- `-nt`: negative text prompt. Can be either a path to a txt file or the text itself.
- `-cg`: camera extrinsic path for generating scenes. Can be one of "Rotate_360", "LookAround", or "LookDown".
- `-cr`: camera extrinsic path for rendering videos. Can be one of "Back_and_forth", "LLFF", or "Headbanging".
- `--seed`: manual seed for Stable Diffusion inpainting.
- `--diff_steps`: number of denoising steps for Stable Diffusion inpainting. Default is 50.
- `-s`: path to save results.
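Putting the flags above together, a full invocation might look like the following sketch (the image path, prompts, and save directory are placeholders, not files shipped with the repository):

```shell
# Hypothetical example: generate a 360-degree scene from your own image and
# prompt, then render a back-and-forth video. Paths and prompts are placeholders.
CUDA_VISIBLE_DEVICES=0 python run.py \
    -img my_photo.png \
    -t "a cozy sunlit living room, photorealistic" \
    -nt "blurry, low quality" \
    -cg Rotate_360 \
    -cr Back_and_forth \
    --seed 1 \
    --diff_steps 50 \
    -s results/
```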
There are multiple available viewers / editors for Gaussian splatting `.ply` files.
- @playcanvas's Super-Splat project (live demo). This is the viewer we have used for our debugging, along with MeshLab.
- @antimatter15's WebGL viewer for Gaussian splatting (live demo).
- @splinetool's web-based viewer for Gaussian splatting. This is the version we have used in our project page's demo.
- ✅ December 8, 2023: HuggingFace Space demo is out. We deeply thank the HF team for their support!
- ✅ December 7, 2023: Colab implementation is now available thanks to @camenduru!
- ✅ December 6, 2023: Code release!
- ✅ November 22, 2023: We have released our paper, LucidDreamer, on arXiv.
Please cite us if you find our project useful!
@article{chung2023luciddreamer,
    title={LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes},
    author={Chung, Jaeyoung and Lee, Suyoung and Nam, Hyeongjin and Lee, Jaerin and Lee, Kyoung Mu},
    journal={arXiv preprint arXiv:2311.13384},
    year={2023}
}
We deeply appreciate ZoeDepth, Stability AI, and Runway for their models.
If you have any questions, please email [email protected], [email protected], or [email protected].