This repository implements the Video2VR pipeline, which transforms user-input videos into 3D models compatible with the WebXR Viewer. It is powered by NeRFStudio and MobileNeRF.
TODO: insert architecture image from drawio
| Preprocessing | Training | Postprocessing |
|---|---|---|
| Input videos are preprocessed using NeRFStudio. | Using MobileNeRF as the foundation, the model undergoes a two-stage training process. | After training, the model is postprocessed to ensure compatibility with the WebXR Viewer. |
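As a sketch of the preprocessing stage, NeRFStudio's `ns-process-data` command extracts frames and estimates camera poses from an input video. The file paths below are assumptions for illustration; check the flags against your installed NeRFStudio version:

```shell
# Assumption: nerfstudio is installed and the input video is at data/input.mp4.
# Extracts frames and runs COLMAP-based pose estimation into data/processed_scene.
ns-process-data video \
  --data data/input.mp4 \
  --output-dir data/processed_scene
```

The resulting directory can then be passed to the training stage as a standard NeRFStudio dataset.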
| Scene Type | Preview | Recommendations | Test Dataset Used | License |
|---|---|---|---|---|
| Forward-facing | - | Capture indoor or limited outdoor spaces from the front. Record videos from multiple angles facing forward. | nerf_llff_data | CC BY 4.0 |
| Unbounded | - | Capture indoor or limited outdoor spaces from the front. Record videos from multiple angles facing forward. Aim for 2–5 minute videos with detailed content. | Tanks and Temples, Mip-NeRF 360 | CC BY 4.0 |
| Indoor | - | Capture interior spaces from various angles and heights. If possible, capture the ceiling as well. Avoid capturing entire houses in one go; opt for room-sized data. | Deep Blending | Apache-2.0 |
- Clone this repository.
- Set up GCP credentials.
- Build the Docker image (note that Docker image names must be lowercase):

```shell
docker build -t my_image_name .
```

- Run the container with volume binding:

```shell
docker run -d \
  -p 8080:8080 \
  -v "$PWD":/app \
  my_image_name
```
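For the GCP credentials step above, one common approach is a service-account key exposed via Application Default Credentials. The key path and the mount location below are assumptions for illustration; adapt them to your project's auth setup (and note the lowercase image name, as Docker requires):

```shell
# Assumption: you have downloaded a GCP service-account key; this path is illustrative.
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/video2vr-sa.json"

# Mount the key read-only into the container and point client libraries at it.
docker run -d \
  -p 8080:8080 \
  -v "$PWD":/app \
  -v "$GOOGLE_APPLICATION_CREDENTIALS":/secrets/sa.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/sa.json \
  my_image_name
```

Google client libraries inside the container pick up the `GOOGLE_APPLICATION_CREDENTIALS` variable automatically, so no credential-handling code is needed in the application itself.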
- MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures [Link] [Apache 2.0]
- MobileNeRF + WebXR [Link] [Apache 2.0]
- HumanNeRF: Free-viewpoint Rendering of Moving People from Monocular Video [Link] [MIT]
- AvatarSDK [Link] [BSD-3-Clause]
- Google Draco [Link] [Apache 2.0]