Awj2021/mesh_demo

Introduction

This is a simple but real-time demo. We use PyQt6 to build an interactive interface that shows the original video streams captured from different camera IDs alongside people's mesh images composited onto backgrounds of different styles. Pressing a style button switches the background images.

Welcome to our groundbreaking demo of Metaverse 3D human digitalization! We have achieved real-time, multi-person 3D mesh recovery using just a single camera, making 3D human digitalization a more accessible and cost-effective solution for various Metaverse applications, such as virtual reality meetings or classes.

Traditionally, 3D mesh recovery has required expensive multi-camera systems and/or motion capture devices. Our research group has revolutionized this process by focusing on efficient 3D mesh recovery from 2D images. With our approach, there is no need for camera calibration, making it user-friendly and hassle-free. All the necessary 3D information for mesh reconstruction is self-estimated by our advanced AI model, streamlining the process and ensuring accuracy.

Furthermore, we have incorporated 5G technology into our pipeline to enable real-time remote 3D mesh reconstruction. By integrating 5G into our video stream collection device, our server can access the video stream remotely through a high-bandwidth, low-latency 5G network. This means that you only need a camera and our collection device to create 3D digitalizations from anywhere you go.

Additionally, we have implemented background virtualization to convert the realistic background into various styles, adding a touch of customization to your Metaverse experience. Join us and witness the power of our cutting-edge Metaverse 3D human digitalization technology!
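The final frame shown to the user combines the mesh rendering with the chosen style background. A minimal sketch of that compositing step, using plain alpha blending with NumPy (the function name and mask convention are ours, not taken from this repository):

```python
import numpy as np

def composite_mesh(background, mesh_rgb, mesh_mask):
    """Paste a rendered mesh onto a styled background image.

    background: (H, W, 3) uint8 style image
    mesh_rgb:   (H, W, 3) uint8 rendered mesh
    mesh_mask:  (H, W) float in [0, 1], 1 where the mesh covers a pixel
    """
    mask = mesh_mask[..., None]  # broadcast the mask over the colour channels
    out = (background.astype(np.float32) * (1.0 - mask)
           + mesh_rgb.astype(np.float32) * mask)
    return out.astype(np.uint8)
```

Switching styles then only requires swapping the `background` array; the mesh rendering pipeline is untouched.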

Key components

  • Two large windows: one shows the video captured from a camera, and the other shows the processed video stream.
  • Eight small windows showing videos from different monitors.
  • Four buttons that switch the background image style: Cyberpunk, Cartoon, Steampunk, and Science_Fiction.
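One plausible way to arrange the ten video panels described above is a `QGridLayout` with two large spanning cells on top and a row of small cells below. The helper below only computes hypothetical `(row, col, rowspan, colspan)` coordinates; the actual layout in `visualization.py` may differ:

```python
def panel_geometry():
    """Grid coordinates (row, col, rowspan, colspan) for a 2-large +
    8-small panel layout over an 8-column grid: the two big windows sit
    side by side on top, the eight monitor views fill the row below."""
    panels = {
        "raw": (0, 0, 2, 4),        # live camera feed
        "processed": (0, 4, 2, 4),  # mesh output on the styled background
    }
    for i in range(8):              # eight small monitor views
        panels[f"monitor_{i}"] = (2, i, 1, 1)
    return panels
```

Each tuple maps directly to the arguments of `QGridLayout.addWidget(widget, row, col, rowspan, colspan)`.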

Key Tech

  • FFMPEG: captures the video stream in real time.
  • ROMP: processes one frame and returns a mesh image.
  • PyQt6: provides the interactive interface.
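When FFmpeg is asked to emit decoded video as raw bytes (e.g. `ffmpeg -i <input> -f rawvideo -pix_fmt bgr24 pipe:`), each frame is exactly width × height × 3 bytes, so turning the byte stream into frames is a fixed-size slicing job. A sketch of that step (the function name is ours; the repository may read frames differently):

```python
import numpy as np

def frames_from_rawvideo(stream: bytes, width: int, height: int):
    """Split raw bgr24 bytes into (height, width, 3) uint8 frames.

    A trailing partial frame (fewer than width*height*3 bytes) is dropped,
    which matches what happens when a pipe is cut mid-frame.
    """
    frame_size = width * height * 3
    for offset in range(0, len(stream) - frame_size + 1, frame_size):
        chunk = stream[offset:offset + frame_size]
        yield np.frombuffer(chunk, np.uint8).reshape(height, width, 3)
```

In a live pipeline the same logic runs against `process.stdout.read(frame_size)` on the FFmpeg subprocess instead of a pre-collected byte string.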

Performance

A video of the demo's performance: demo_video

Environment

Reproduce the environment (using ONLY the environment.yml file):

conda env create -f environment.yml
conda activate py310

To install ROMP, please refer to the original repository:
https://github.com/Arthur151/ROMP We strongly suggest installing ROMP from source, i.e., via the provided *.sh build scripts.

Please make sure the following necessary packages are installed (included in the environment.yml):

gcc (used for PyQt6)
PyQt6
cv2
ffmpeg
imutils
romp (install using the method recommended in the ROMP repository: python setup.py install)

The detailed installation steps are listed below:

pip install --upgrade setuptools numpy cython
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch -c conda-forge
sudo apt install ffmpeg 
pip install pyqt6 imutils
pip install ipdb

Running

The latest code is the visualization.py file.

python visualization.py

If you want to run an older version of the code, you can try the visualization_bak files, e.g., visualization_bak405.py, but we cannot guarantee that they still work correctly.

  • If any new background images are added, please remove the bgs.json file first; the main code will regenerate it.
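Presumably bgs.json just indexes the available background images. A minimal sketch of how such a file could be regenerated by scanning a backgrounds directory; the directory layout, key names, and function name here are assumptions, not taken from this repository:

```python
import json
import os

def build_bgs_json(bg_dir, out_path="bgs.json"):
    """Scan bg_dir for per-style subfolders (e.g. Cyberpunk/, Cartoon/)
    and record the sorted image paths for each style."""
    styles = {}
    for style in sorted(os.listdir(bg_dir)):
        style_dir = os.path.join(bg_dir, style)
        if not os.path.isdir(style_dir):
            continue
        styles[style] = sorted(
            os.path.join(style_dir, name)
            for name in os.listdir(style_dir)
            if name.lower().endswith((".png", ".jpg", ".jpeg"))
        )
    with open(out_path, "w") as fh:
        json.dump(styles, fh, indent=2)
    return styles
```

Deleting bgs.json and re-running such a scan is what makes newly added images show up.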

Alternatively, you can run the shell script run.sh directly:

bash ./run.sh

Keypoints

Please modify the directory of the model that controls the gender of the mesh (see https://github.com/Arthur151/ROMP/releases/tag/V2.0), and then change the args parameters in:

cd Desktop/mesh_demo/ROMP/simple_romp/romp/main.py
parser.add_argument('--smpl_path', type=str, default=osp.join(osp.expanduser("~"),'.romp','SMPL_MALE.pth'), help = 'The path of smpl model file')

Version Recording

Several versions are saved separately under different file names for convenient showing. Only the file names are recorded here; the corresponding files are not uploaded.

  • [x] visualization_0414.py: add a reset button for the mesh QThread.
  • [x] visualization_button_layout.py: rearrange the buttons' layout and add a button for changing the webcam ID.
  • [x] Solve the delay accumulation problem and stop it immediately.
  • [x] Add more background images for different cameras; first generate a json file from the background images.
  • [x] Rearrange the button layout.
  • [x] Add more images to show on the README.md.
  • [x] Solve the delay problem when switching between cameras.
  • [ ] Refine the layout of the buttons.
  • [x] Regenerate the background images.
  • [x] Refine the Introduction; combine the commands into the shell file.
  • [x] Add an exit button for full-screen display.
  • [ ] Solve the overlapping problem when temporal_optimize==true; we want to use the track_ids in multi-avatar generation.

Reference

Thanks to the repositories below.
ROMP
ffmpeg-python

Authors

If you have any questions about this demo, please contact us.
Ai Wenjie [email protected]
Li Yanchao [email protected]
