Output is completely black #14

Closed
lucasjinreal opened this issue Aug 12, 2022 · 32 comments

lucasjinreal commented Aug 12, 2022

(screenshots attached)

output is empty

lucasjinreal changed the title from "can not the world output even optimziation process runed" to "can not get the world output even optimziation process runed" on Aug 12, 2022
Khrylx (Collaborator) commented Aug 12, 2022

Have you installed the latest requirements.txt and run install.sh?

iulsko commented Aug 21, 2022

Same problem!

Here is the output of conda list:

Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
absl-py 1.2.0 pypi_0 pypi
aiohttp 3.8.1 pypi_0 pypi
aiosignal 1.2.0 pypi_0 pypi
appdirs 1.4.4 pypi_0 pypi
async-timeout 4.0.2 pypi_0 pypi
attrs 22.1.0 pypi_0 pypi
autograd 1.4 pypi_0 pypi
blas 1.0 mkl
brotlipy 0.7.0 py39hb9d737c_1004 conda-forge
bzip2 1.0.8 h7f98852_4 conda-forge
ca-certificates 2022.07.19 h06a4308_0
cachetools 5.2.0 pypi_0 pypi
certifi 2022.6.15 py39h06a4308_0
cffi 1.14.6 py39he32792d_0 conda-forge
charset-normalizer 2.1.0 pyhd8ed1ab_0 conda-forge
chumpy 0.70 pypi_0 pypi
colorama 0.4.5 pyhd8ed1ab_0 conda-forge
cryptography 37.0.2 py39hd97740a_0 conda-forge
cudatoolkit 11.6.0 hecad31d_10 conda-forge
cycler 0.11.0 pypi_0 pypi
cython 0.29.32 pypi_0 pypi
dill 0.3.5.1 pypi_0 pypi
easydict 1.9 pypi_0 pypi
ffmpeg 4.3 hf484d3e_0 pytorch
ffmpeg-python 0.2.0 pypi_0 pypi
filterpy 1.4.5 pypi_0 pypi
fonttools 4.36.0 pypi_0 pypi
freetype 2.10.4 h0708190_1 conda-forge
freetype-py 2.3.0 pypi_0 pypi
frozenlist 1.3.1 pypi_0 pypi
fsspec 2022.7.1 pypi_0 pypi
future 0.18.2 pypi_0 pypi
fvcore 0.1.5.post20220512 pyhd8ed1ab_0 conda-forge
gmp 6.2.1 h58526e2_0 conda-forge
gnutls 3.6.13 h85f3911_1 conda-forge
google-auth 2.11.0 pypi_0 pypi
google-auth-oauthlib 0.4.6 pypi_0 pypi
grpcio 1.47.0 pypi_0 pypi
h5py 3.7.0 pypi_0 pypi
hybrik 0.1.0+db0b4bd dev_0
idna 3.3 pyhd8ed1ab_0 conda-forge
imageio 2.21.1 pypi_0 pypi
importlib-metadata 4.12.0 pypi_0 pypi
imutils 0.5.4 pypi_0 pypi
intel-openmp 2021.4.0 h06a4308_3561
iopath 0.1.9 py39 iopath
joblib 1.1.0 pypi_0 pypi
jpeg 9e h166bdaf_1 conda-forge
kiwisolver 1.4.4 pypi_0 pypi
lame 3.100 h7f98852_1001 conda-forge
lcms2 2.12 hddcbb42_0 conda-forge
ld_impl_linux-64 2.38 h1181459_1
libffi 3.3 he6710b0_2
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libiconv 1.17 h166bdaf_0 conda-forge
libpng 1.6.37 h21135ba_2 conda-forge
libstdcxx-ng 11.2.0 h1234567_1
libtiff 4.2.0 hf544144_3 conda-forge
libwebp-base 1.2.2 h7f98852_1 conda-forge
llvmlite 0.39.0 pypi_0 pypi
lmdb 1.3.0 pypi_0 pypi
lz4-c 1.9.3 h9c3ff4c_1 conda-forge
markdown 3.4.1 pypi_0 pypi
markupsafe 2.1.1 pypi_0 pypi
matplotlib 3.5.3 pypi_0 pypi
meshio 4.4.6 pypi_0 pypi
mkl 2021.4.0 h06a4308_640
mkl-service 2.4.0 py39h7e14d7c_0 conda-forge
mkl_fft 1.3.1 py39h0c7bc48_1 conda-forge
mkl_random 1.2.2 py39hde0f152_0 conda-forge
multi-person-tracker 0.1 dev_0
multidict 6.0.2 pypi_0 pypi
multiprocess 0.70.13 pypi_0 pypi
ncurses 6.3 h5eee18b_3
nettle 3.6 he412f7d_0 conda-forge
networkx 2.8.5 pypi_0 pypi
numba 0.56.0 pypi_0 pypi
numpy 1.22.4 pypi_0 pypi
oauthlib 3.2.0 pypi_0 pypi
olefile 0.46 pyh9f0ad1d_1 conda-forge
open3d-python 0.3.0.0 pypi_0 pypi
opencv-python 4.6.0.66 pypi_0 pypi
opendr 0.78 pypi_0 pypi
openh264 2.1.1 h780b84a_0 conda-forge
openjpeg 2.4.0 hb52868f_1 conda-forge
openssl 1.1.1q h7f8727e_0
packaging 21.3 pypi_0 pypi
pandas 1.4.3 pypi_0 pypi
pathos 0.2.9 pypi_0 pypi
pillow 9.2.0 pypi_0 pypi
pip 22.1.2 py39h06a4308_0
portalocker 2.5.1 py39hf3d152e_0 conda-forge
pox 0.3.1 pypi_0 pypi
ppft 1.7.6.5 pypi_0 pypi
progress 1.6 pypi_0 pypi
protobuf 3.19.4 pypi_0 pypi
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pycocotools 2.0.4 pypi_0 pypi
pycparser 2.21 pyhd8ed1ab_0 conda-forge
pydeprecate 0.3.0 pypi_0 pypi
pyglet 1.5.26 pypi_0 pypi
pynvml 11.4.1 pypi_0 pypi
pyopengl 3.1.0 pypi_0 pypi
pyopenssl 22.0.0 pyhd8ed1ab_0 conda-forge
pyparsing 3.0.9 pypi_0 pypi
pyrender 0.1.45 pypi_0 pypi
pysocks 1.7.1 py39hf3d152e_5 conda-forge
python 3.9.12 h12debd9_1
python-dateutil 2.8.2 pypi_0 pypi
python-graphviz 0.20.1 pypi_0 pypi
python_abi 3.9 2_cp39 conda-forge
pytorch 1.12.1 py3.9_cuda11.6_cudnn8.3.2_0 pytorch
pytorch-lightning 1.3.5 pypi_0 pypi
pytorch-mutex 1.0 cuda pytorch
pytorch3d 0.7.0 py39_cu116_pyt1121 pytorch3d-nightly
pytube 12.1.0 pypi_0 pypi
pytz 2022.2.1 pypi_0 pypi
pyvista 0.31.3 pypi_0 pypi
pywavelets 1.3.0 pypi_0 pypi
pyyaml 5.4.1 pypi_0 pypi
readline 8.1.2 h7f8727e_1
requests 2.28.1 pyhd8ed1ab_0 conda-forge
requests-oauthlib 1.3.1 pypi_0 pypi
rsa 4.9 pypi_0 pypi
scikit-image 0.19.3 pypi_0 pypi
scikit-learn 1.1.2 pypi_0 pypi
scipy 1.9.0 pypi_0 pypi
scooby 0.6.0 pypi_0 pypi
seaborn 0.11.2 pypi_0 pypi
setuptools 61.2.0 py39h06a4308_0
shapely 1.8.4 pypi_0 pypi
six 1.16.0 pyh6c4a22f_0 conda-forge
sklearn 0.0 pypi_0 pypi
smplx 0.1.28 pypi_0 pypi
sqlite 3.39.2 h5082296_0
tabulate 0.8.10 pyhd8ed1ab_0 conda-forge
tb-nightly 2.11.0a20220821 pypi_0 pypi
tensorboard 2.10.0 pypi_0 pypi
tensorboard-data-server 0.6.1 pypi_0 pypi
tensorboard-plugin-wit 1.8.1 pypi_0 pypi
tensorboardx 2.5.1 pypi_0 pypi
termcolor 1.1.0 pyhd8ed1ab_3 conda-forge
terminaltables 3.1.10 pypi_0 pypi
threadpoolctl 3.1.0 pypi_0 pypi
tifffile 2022.8.12 pypi_0 pypi
tk 8.6.12 h1ccaba5_0
torchaudio 0.12.1 py39_cu116 pytorch
torchmetrics 0.6.0 pypi_0 pypi
torchvision 0.13.1 py39_cu116 pytorch
tqdm 4.64.0 pyhd8ed1ab_0 conda-forge
transforms3d 0.3.1 pypi_0 pypi
trimesh 3.13.5 pypi_0 pypi
typing_extensions 4.3.0 pyha770c72_0 conda-forge
tzdata 2022a hda174b7_0
urllib3 1.26.11 pyhd8ed1ab_0 conda-forge
vtk 9.1.0 pypi_0 pypi
werkzeug 2.2.2 pypi_0 pypi
wheel 0.37.1 pyhd3eb1b0_0
wslink 1.7.0 pypi_0 pypi
xz 5.2.5 h7f8727e_1
yacs 0.1.8 pyhd8ed1ab_0 conda-forge
yaml 0.2.5 h7f98852_2 conda-forge
yarl 1.8.1 pypi_0 pypi
yolov3 0.1 dev_0
zipp 3.8.1 pypi_0 pypi
zlib 1.2.12 h7f8727e_2
zstd 1.5.0 ha95c52a_0 conda-forge

Python version is 3.9, CUDA is 11.6.

Khrylx (Collaborator) commented Aug 21, 2022

@iulsko what OS are you using?

iulsko commented Aug 21, 2022

@Khrylx hi! I am using Ubuntu 20.04.

Khrylx (Collaborator) commented Aug 21, 2022

Check out the solutions here:
#10 (comment)

The issue is mainly with pyvista. You can first try running their examples from the link above.
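
One quick sanity check for the "completely black" symptom is to pull a frame out of the saved render and verify whether its pixels are really all near-zero. The helper below is a hypothetical sketch (not part of GLAMR); it only needs numpy, and you could feed it a frame read with, e.g., imageio.imread:

```python
import numpy as np

def frame_is_black(img, tol=5):
    # An image counts as "black" if its brightest channel value is below
    # `tol` (a small tolerance absorbs video-compression noise).
    return bool(np.asarray(img).max() <= tol)

# Toy frames standing in for frames of pose_est/render.mp4:
black_frame = np.zeros((4, 4, 3), dtype=np.uint8)
lit_frame = black_frame.copy()
lit_frame[0, 0] = (200, 150, 100)  # a single rendered pixel

print(frame_is_black(black_frame))  # True
print(frame_is_black(lit_frame))    # False
```

If the frames are truly all-black (not just dark), the renderer is producing nothing, which points at the OpenGL/pyvista setup rather than at the fitted poses.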

iulsko commented Aug 21, 2022

@Khrylx yes, I have already done so. I can generate shadow.png and have installed pyopengl (it shows up in my conda list as pyopengl 3.1.0 pypi_0 pypi). It does not help :(

Khrylx (Collaborator) commented Aug 21, 2022

Are you able to pop up a visualization GUI with the --vis flag when running the demo?

iulsko commented Aug 21, 2022

@Khrylx yes, it does.

So, the GUI window popped up after the Hybrik fitting was done, and I can find the video in pose_est/render.mp4, but it is completely black except for the coordinate frame at the bottom left. Moreover, the fitting process for GLAMR is not running.

Khrylx (Collaborator) commented Aug 22, 2022

I'm perplexed that the window is completely black. The fitting process only runs once, since it uses the cached results saved on the first run.

iulsko commented Aug 22, 2022

@Khrylx the cached results are only stored in the out_dir, correct?

Khrylx (Collaborator) commented Aug 22, 2022

Yes, that's correct.

iulsko commented Aug 22, 2022

@Khrylx could you maybe confirm the structure of the folder?

GLAMR
│  README.md
|  mesa_18.3.3-0.deb
|  mesa_18.3.3-0.deb.1
│   ...  
│
└─── data
│   │   J_regressor_extra.npy
│   └─── body_models
│          └─── smpl
│              │  neutral_smpl_with_cocoplus_reg.pkl
│              │  SMPL_FEMALE.pkl
│              │  SMPL_MALE.pkl
│              │  SMPL_NEUTRAL.pkl
│
└─── results
│   └─── motion_filler
│   |   └─── motion_infiller_demo
│   |   |   └─── version_0
│   |   |   |   └─── checkpoints
│   |   |   |   |    model-best-epoch=0307.ckpt
│   └─── traj_pred
│   |   └─── traj_pred_demo
│   |   |   └─── version_0
│   |   |   |   └─── checkpoints
│   |   |   |   |    model-best-epoch=1556.ckpt

I am not including the contents of the Hybrik folder as I assume they are OK.
P.S. I am not getting any "file not found" errors.

Khrylx (Collaborator) commented Aug 22, 2022

Looks good, I think it's mostly an issue with the renderer. I attached my conda env file here for your reference.

name: glamr
channels:
  - pytorch
  - conda-forge
  - defaults
dependencies:
  - _libgcc_mutex=0.1=main
  - _openmp_mutex=5.1=1_gnu
  - blas=1.0=mkl
  - ca-certificates=2022.07.19=h06a4308_0
  - certifi=2022.6.15=py38h06a4308_0
  - cudatoolkit=11.1.1=ha002fc5_10
  - intel-openmp=2021.4.0=h06a4308_3561
  - ld_impl_linux-64=2.38=h1181459_1
  - libffi=3.3=he6710b0_2
  - libgcc-ng=11.2.0=h1234567_1
  - libgomp=11.2.0=h1234567_1
  - libstdcxx-ng=11.2.0=h1234567_1
  - libuv=1.40.0=h7b6447c_0
  - mkl=2021.4.0=h06a4308_640
  - mkl-service=2.4.0=py38h7f8727e_0
  - mkl_fft=1.3.1=py38hd3c417c_0
  - mkl_random=1.2.2=py38h51133e4_0
  - ncurses=6.3=h5eee18b_3
  - ninja=1.10.2=h06a4308_5
  - ninja-base=1.10.2=hd09550d_5
  - numpy-base=1.22.3=py38hf524024_0
  - openssl=1.1.1q=h7f8727e_0
  - pip=22.1.2=py38h06a4308_0
  - python=3.8.13=h12debd9_0
  - pytorch=1.8.0=py3.8_cuda11.1_cudnn8.0.5_0
  - readline=8.1.2=h7f8727e_1
  - setuptools=61.2.0=py38h06a4308_0
  - six=1.16.0=pyhd3eb1b0_1
  - sqlite=3.38.5=hc218d9a_0
  - tk=8.6.12=h1ccaba5_0
  - torchaudio=0.8.0=py38
  - typing_extensions=4.1.1=pyh06a4308_0
  - wheel=0.37.1=pyhd3eb1b0_0
  - xz=5.2.5=h7f8727e_1
  - zlib=1.2.12=h7f8727e_2
  - pip:
    - absl-py==1.0.0
    - addict==2.4.0
    - aiohttp==3.8.1
    - aiosignal==1.2.0
    - appdirs==1.4.4
    - async-timeout==4.0.2
    - attributee==0.1.5
    - attrs==21.4.0
    - autograd==1.4
    - blessings==1.7
    - cachetools==4.2.4
    - charset-normalizer==2.0.12
    - chumpy==0.70
    - click==8.1.3
    - cycler==0.11.0
    - cython==0.29.30
    - dill==0.3.5.1
    - docker-pycreds==0.4.0
    - dotty-dict==1.3.0
    - easydict==1.9
    - fonttools==4.33.3
    - freetype-py==2.3.0
    - frozenlist==1.3.0
    - fsspec==2022.5.0
    - future==0.18.2
    - gitdb==4.0.9
    - gitpython==3.1.27
    - glx==0.6
    - google-auth==1.35.0
    - google-auth-oauthlib==0.4.6
    - gpustat==0.6.0
    - grpcio==1.46.3
    - h5py==3.7.0
    - idna==3.3
    - imageio==2.19.3
    - importlib-metadata==4.11.4
    - imutils==0.5.4
    - joblib==1.1.0
    - kiwisolver==1.4.2
    - lap==0.4.0
    - llvmlite==0.36.0
    - lmdb==1.3.0
    - mako==1.2.0
    - markdown==3.3.7
    - markupsafe==2.1.1
    - matplotlib==3.5.2
    - meshio==4.4.6
    - mmcls==0.23.1
    - mmcv-full==1.5.2
    - mmdet==2.25.0
    - motmetrics==1.2.5
    - multidict==6.0.2
    - multiprocess==0.70.13
    - networkx==2.8.2
    - numba==0.53.0
    - numpy==1.22.4
    - nvidia-ml-py3==7.352.0
    - oauthlib==3.2.0
    - open3d-python==0.3.0.0
    - opencv-python==4.5.5.64
    - packaging==21.3
    - pandas==1.3.5
    - pathos==0.2.9
    - pathtools==0.1.2
    - pillow==9.2.0
    - pox==0.3.1
    - ppft==1.7.6.5
    - progress==1.6
    - promise==2.3
    - protobuf==3.19.4
    - psutil==5.9.1
    - pyasn1==0.4.8
    - pyasn1-modules==0.2.8
    - pycocotools==2.0.2
    - pydeprecate==0.3.0
    - pyglet==1.5.26
    - pynvml==11.4.1
    - pyopengl-accelerate==3.1.5
    - pyparsing==3.0.9
    - pyrender==0.1.45
    - python-dateutil==2.8.2
    - python-graphviz==0.20
    - pytorch-lightning==1.3.5
    - pytube==12.1.0
    - pytz==2022.1
    - pyvista==0.31.3
    - pywavelets==1.3.0
    - pyyaml==5.4.1
    - rectangle==0.4
    - requests==2.27.1
    - requests-oauthlib==1.3.1
    - rsa==4.8
    - scikit-image==0.19.2
    - scikit-learn==1.1.1
    - scipy==1.7.3
    - scooby==0.5.12
    - seaborn==0.11.2
    - sentry-sdk==1.5.12
    - setproctitle==1.2.3
    - setuptools-scm==6.4.2
    - shapely==1.8.2
    - shortuuid==1.0.9
    - sklearn==0.0
    - smmap==5.0.0
    - smplx==0.1.28
    - tensorboard==2.9.0
    - tensorboard-data-server==0.6.1
    - tensorboard-plugin-wit==1.8.1
    - terminaltables==3.1.10
    - threadpoolctl==3.1.0
    - tifffile==2022.5.4
    - tomli==2.0.1
    - torchmetrics==0.3.2
    - tqdm==4.64.0
    - transforms3d==0.3.1
    - trimesh==3.12.5
    - typing-extensions==4.2.0
    - urllib3==1.26.9
    - vtk==9.1.0
    - wandb==0.12.17
    - werkzeug==2.1.2
    - wslink==1.6.4
    - xmltodict==0.13.0
    - yapf==0.32.0
    - yarl==1.7.2
    - zipp==3.8.0

iulsko commented Aug 22, 2022

@Khrylx absolutely no idea. I have:

  1. uninstalled the environment
  2. built it back up with Python 3.8.13, just like yours
  3. modified requirements.txt to match your package versions exactly

I still only see the Hybrik output.

Khrylx (Collaborator) commented Aug 22, 2022

Can you try running this file directly? https://github.com/NVlabs/GLAMR/blob/62e9f30f7aaa38e6a8b3142f83e0457063928f85/lib/utils/visualizer3d.py
If it doesn't work, then something is wrong with the visualizer.

iulsko commented Aug 22, 2022

@Khrylx well, if I do python lib/utils/visualizer3d.py I get ModuleNotFoundError: No module named 'lib', but maybe that's not the way to run it 🙈
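
The ModuleNotFoundError just means the repo root is not on Python's import path when a script in a subdirectory is launched, so the top-level lib package can't be found. A minimal workaround (a sketch, assuming you launch from the GLAMR repo root) is to prepend the working directory to sys.path before the lib imports:

```python
import os
import sys

# Make top-level packages such as `lib` importable when executing a
# script that lives in a subdirectory; this assumes the current
# working directory is the repo root.
sys.path.insert(0, os.getcwd())
```

Equivalently, from the shell: PYTHONPATH=. python lib/utils/visualizer3d.py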

Khrylx (Collaborator) commented Aug 22, 2022

I made some changes and you can now run it directly.

iulsko commented Aug 22, 2022

@Khrylx here is my visualization:
(screenshot attached)

Still nothing. At the end of the fitting I see:

saving world animation for "video"
WARNING: You are using a SMPL model, with only 10 shape coefficients.
set camera focal: (-0.06933883598203301, 0.046888127091186727, 0.8211618708134137)
set camera pos: (5.870481791729302, 0.12487453547217336, 1.1867617195413058)
saving cam animation for "video"
WARNING: You are using a SMPL model, with only 10 shape coefficients.
saving side-by-side animation for "video"

Khrylx (Collaborator) commented Aug 22, 2022

This is something, right? Do you see this when running GLAMR, or do you see an entirely black screen?

iulsko commented Aug 22, 2022

@Khrylx this is the visualization I get from running python lib/utils/visualizer3d.py; it does not work with the video, though.

Khrylx (Collaborator) commented Aug 22, 2022

This is just a test video to see if pyvista works.

It seems something is wrong with vis_grecon.py. Since I can't reproduce the error, it's really hard for me to figure out what exactly is wrong. One way to debug is to gradually remove things from vis_grecon.py until the output is no longer completely black.
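
The "gradually remove things" approach can be made systematic: re-enable the scene components one at a time and note which addition turns the render black. A generic sketch (renders_black and the component names are hypothetical stand-ins for the actors vis_grecon.py adds, e.g. the floor mesh, smpl_actors, skeleton_actors, and the camera markers):

```python
def find_black_culprit(components, renders_black):
    """Enable components one at a time, in order, and return the first
    component whose addition makes the render come out black.
    Returns None if the full scene renders fine."""
    enabled = []
    for comp in components:
        enabled.append(comp)
        if renders_black(enabled):
            return comp
    return None

# Toy run: pretend the SMPL meshes are what breaks the renderer.
fake_render = lambda subset: 'smpl_actors' in subset
print(find_black_culprit(['floor', 'camera', 'smpl_actors', 'skeleton_actors'],
                         fake_render))  # smpl_actors
```

In practice, renders_black would rebuild the visualizer with only the listed components enabled and run a pixel check on a saved screenshot.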

iulsko commented Aug 22, 2022

@Khrylx I don't quite understand how that technique would work. visualizer3d.py works and is used in vis_grecon.py to initialize a class, but I already have a blank output. If I keep removing parts of the code in vis_grecon.py, I will only get errors, not come to a point where the output becomes blank.

Khrylx (Collaborator) commented Aug 22, 2022

vis_grecon.py gives you something black, right? So if you gradually reduce the things in vis_grecon.py, e.g., by removing the smpl_actors etc., you should eventually get what you get from visualizer3d.py, which is not black.

iulsko commented Aug 22, 2022

@Khrylx wait, exactly the reverse is happening:

  1. if I run python lib/utils/visualizer3d.py I get the viz you saw above
  2. if I run python global_recon/vis/vis_grecon.py I don't get any output: no errors, no new windows

Khrylx (Collaborator) commented Aug 22, 2022

global_recon/vis/vis_grecon.py cannot be run directly. To show a window, you have to run the demo with the --vis flag, and it will pop up a GUI, which you said is black. Black is wrong, so I just want to find out what in vis_grecon.py causes it to be black.

Khrylx (Collaborator) commented Aug 22, 2022

import os, sys
sys.path.append(os.path.join(os.getcwd()))
import os.path as osp
import pyvista
import time
import torch
import numpy as np
import glob
import vtk
from collections import defaultdict
from lib.utils.visualizer3d import Visualizer3D
from lib.models.smpl import SMPL, SMPL_MODEL_DIR
from motion_infiller.vis.vis_smpl import SMPLActor, SkeletonActor
from traj_pred.utils.traj_utils import convert_traj_world2heading
from lib.utils.vis import hstack_videos, make_checker_board_texture, vtk_matrix_to_nparray
from lib.utils.torch_transform import angle_axis_to_quaternion, quaternion_to_angle_axis, quat_apply


class GReconVisualizer(Visualizer3D):

    def __init__(self, data, coord, device=torch.device('cpu'), use_y_up_coord=False, use_est_traj=False, background_img_dir=None,
                 show_gt_pose=False, show_est_pose=True, show_smpl=True, show_skeleton=False, show_camera=True, align_pose=False, view_dist=13.0, render_cam_pos=None, render_cam_focus=None, **kwargs):
        super().__init__(use_floor=False, **kwargs)
        self.device = device
        self.use_y_up_coord = use_y_up_coord
        self.use_est_traj = use_est_traj
        self.show_gt_pose = show_gt_pose
        self.show_est_pose = show_est_pose
        self.show_smpl = show_smpl
        self.show_skeleton = show_skeleton
        self.show_camera = show_camera
        self.align_pose = align_pose
        self.view_dist = view_dist
        self.render_cam_pos = render_cam_pos
        self.render_cam_focus = render_cam_focus
        self.background_img_dir = background_img_dir
        self.background_img_arr = sorted(glob.glob(f'{self.background_img_dir}/*.png')) + sorted(glob.glob(f'{self.background_img_dir}/*.jpg'))
        self.has_background = False
        self.hide_env = background_img_dir is not None

        self.smpl = SMPL(SMPL_MODEL_DIR, pose_type='body26fk', create_transl=False).to(device)
        faces = self.smpl.faces.copy()
        self.smpl_faces = faces = np.hstack([np.ones_like(faces[:, [0]]) * 3, faces])
        self.smpl_joint_parents = self.smpl.parents.cpu().numpy()
        self.last_fr = None
        self.align_freq = 150
        self.load_scene(data, coord)

    def get_aligned_orient_trans(self, pose_dict, exist_frames):
        orient_q = angle_axis_to_quaternion(pose_dict['smpl_orient_world'][exist_frames])
        trans = pose_dict['root_trans_world'][exist_frames]
        pose_dict['aligned_orient_q'] = []
        pose_dict['aligned_trans'] = []
        for i in range(int(np.ceil((orient_q.shape[0] / self.align_freq)))):
            sind = i * self.align_freq - int(i > 0)
            eind = min((i + 1) * self.align_freq, orient_q.shape[0])
            aligned_orient_q, aligned_trans = convert_traj_world2heading(orient_q[sind:eind], trans[sind:eind], apply_base_orient_after=True)
            res_start = int(i > 0)
            pose_dict['aligned_orient_q'].append(aligned_orient_q[res_start:])
            pose_dict['aligned_trans'].append(aligned_trans[res_start:])
        pose_dict['aligned_orient_q'] = torch.cat(pose_dict['aligned_orient_q'])
        pose_dict['aligned_trans'] = torch.cat(pose_dict['aligned_trans'])
        pose_dict['smpl_orient_world'][exist_frames] = quaternion_to_angle_axis(pose_dict['aligned_orient_q'])
        pose_dict['root_trans_world'][exist_frames] = pose_dict['aligned_trans']

    def load_scene(self, data, coord):
        self.coord = coord 
        assert coord in {'cam', 'world', 'cam_in_world'}
        self.data = data
        self.scene_dict = data['person_data']
        self.gt = data['gt']
        self.gt_meta = data['gt_meta']

        self.focal_length = next(iter(self.scene_dict.values()))['cam_K'][0][[0, 1], [0, 1]]

        if len(self.gt) == 0:
            self.num_fr = self.scene_dict[0]['max_len']
            self.num_person = len(self.scene_dict)
        else:
            self.num_fr = self.gt[0]['pose'].shape[0]
            self.num_person = len(self.gt)
        self.unit = 0.001
        """ GT """
        suffix = '' if coord in {'world', 'cam_in_world'} else '_cam'
        for idx, pose_dict in self.gt.items():
            est_dict = self.scene_dict[idx]
            if self.use_est_traj:
                smpl_motion = self.smpl(
                    global_orient=torch.tensor(est_dict[f'smpl_orient_world']),
                    body_pose=torch.tensor(pose_dict['pose'][:, 3:]).float(),
                    betas=torch.tensor(est_dict['smpl_beta']),
                    root_trans = torch.tensor(est_dict[f'root_trans_world']),
                    root_scale = torch.tensor(est_dict['scale']) if est_dict['scale'] is not None else None,
                    return_full_pose=True,
                    orig_joints=True
                )
            else:
                pose_dict[f'smpl_orient_world'] = torch.tensor(pose_dict[f'pose{suffix}'][:, :3]).float()
                pose_dict[f'root_trans_world'] = torch.tensor(pose_dict[f'root_trans{suffix}']).float()
                if self.align_pose:
                    self.get_aligned_orient_trans(pose_dict, est_dict['exist_frames'])
                smpl_motion = self.smpl(
                    global_orient=pose_dict[f'smpl_orient_world'],
                    body_pose=torch.tensor(pose_dict['pose'][:, 3:]).float(),
                    betas=torch.tensor(pose_dict['shape']).float().repeat(pose_dict['pose'].shape[0], 1),
                    root_trans = pose_dict[f'root_trans_world'],
                    root_scale = None,
                    return_full_pose=True,
                    orig_joints=True
                )
            pose_dict['smpl_verts'] = smpl_motion.vertices.numpy()
            pose_dict['smpl_joints'] = smpl_motion.joints.numpy()
            if 'fr_start' not in pose_dict:
                pose_dict['fr_start'] = np.where(pose_dict['visible'])[0][0]

        """ Estimation """
        suffix = '' if coord == 'cam' else '_world'
        for pose_dict in self.scene_dict.values():
            if 'smpl_pose' in pose_dict:
                if not isinstance(pose_dict[f'smpl_orient{suffix}'], torch.Tensor):
                    pose_dict[f'smpl_orient{suffix}'] = torch.tensor(pose_dict[f'smpl_orient{suffix}'])
                    pose_dict[f'root_trans{suffix}'] = torch.tensor(pose_dict[f'root_trans{suffix}'])
                if self.align_pose and suffix == '_world':
                    self.get_aligned_orient_trans(pose_dict, pose_dict['exist_frames'])
                smpl_motion = self.smpl(
                    global_orient=pose_dict[f'smpl_orient{suffix}'],
                    body_pose=torch.tensor(pose_dict['smpl_pose']),
                    betas=torch.tensor(pose_dict['smpl_beta']),
                    root_trans = pose_dict[f'root_trans{suffix}'],
                    root_scale = torch.tensor(pose_dict['scale']) if pose_dict['scale'] is not None else None,
                    return_full_pose=True,
                    orig_joints=True
                )
                pose_dict['smpl_verts'] = smpl_motion.vertices.numpy()
                pose_dict['smpl_joints'] = smpl_motion.joints.numpy()
            if 'smpl_joint_pos' in pose_dict:
                orient = torch.tensor(pose_dict[f'smpl_orient{suffix}'])
                joints = torch.tensor(pose_dict['smpl_joint_pos'])
                trans = torch.tensor(pose_dict[f'root_trans{suffix}'])
                joints = torch.cat([torch.zeros_like(joints[..., :3]), joints], dim=-1).view(*joints.shape[:-1], -1, 3)
                orient_q = angle_axis_to_quaternion(orient).unsqueeze(-2).expand(joints.shape[:-1] + (4,))
                joints_world = quat_apply(orient_q, joints) + trans.unsqueeze(-2)
                pose_dict['smpl_joints'] = joints_world

        if 'exist_frames' in self.scene_dict[0]:
            self.init_est_root_pos = np.concatenate([x['smpl_joints'][x['exist_frames'], 0] for x in self.scene_dict.values()]).mean(axis=0)
        else:
            self.init_est_root_pos = np.concatenate([x['smpl_joints'][:, 0] for x in self.scene_dict.values()]).mean(axis=0)
        self.init_focal_point = self.init_est_root_pos

    def init_camera(self):
        pass
        # if self.coord in {'cam', 'cam_in_world'}:
        #     self.pl.camera_position = 'zy'
        #     self.pl.camera.focal_point = (0, 0, 1)
        #     self.pl.camera.position = (0, 0, 0)
        #     self.pl.camera.up = (0, -1, 0)
        #     self.pl.camera.elevation = 0
        #     self.pl.camera.azimuth = 0
        #     self.set_camera_instrinsics(fx=self.focal_length[0], fy=self.focal_length[1])
        # else:
        #     focal_point = self.init_focal_point
        #     if self.use_y_up_coord:
        #         focal_point[2] += 3.0
        #         self.pl.camera.position = (focal_point[0] + self.view_dist, focal_point[1] + 2, focal_point[2])
        #     else:
        #         self.pl.camera.position = (focal_point[0] + self.view_dist, focal_point[1], focal_point[2] + 2)
        #         cam_pose_inv = self.data['cam_pose_inv'][self.fr]
        #         cam_origin = cam_pose_inv[:3, 3].copy()
        #         cam_origin = (cam_origin - focal_point) * 1.5 + focal_point

        #         if self.render_cam_focus is not None:
        #             self.pl.camera.focal_point = self.render_cam_focus
        #             print('set camera focal:', self.pl.camera.focal_point)
        #         if self.render_cam_pos is not None:
        #             self.pl.camera.position = self.render_cam_pos
        #             print('set camera pos:', self.pl.camera.position)

        #     self.pl.camera.up = (0, 1, 0)

    def init_scene(self, init_args):
        if init_args is None:
            init_args = dict()
        super().init_scene(init_args)
        
        """ floor """
        whl = (20.0, 0.05, 20.0) if self.use_y_up_coord else (20.0, 20.0, 0.05)
        if self.coord == 'cam':
            center = np.array([0, whl[1] * 0.5 + 2, 7])
        else:
            center = self.init_focal_point
            if self.use_y_up_coord:
                center[1] = 0.0
            else:
                center[2] = -0.2

        if not self.hide_env:
            self.floor_mesh = pyvista.Cube(center, *whl)
            self.floor_mesh.t_coords *= 2 / self.floor_mesh.t_coords.max()
            tex = pyvista.numpy_to_texture(make_checker_board_texture('#81C6EB', '#D4F1F7'))
            self.pl.add_mesh(self.floor_mesh, texture=tex, ambient=0.2, diffuse=0.8, specular=0.8, specular_power=5, smooth_shading=True)
        
        if self.coord == 'world' and self.show_camera:
            self.cam_sphere = pyvista.Sphere(radius=0.05, center=(0, 0, 2))
            self.cam_arrow_z = pyvista.Arrow(start=(0, 0, 2), direction=(0, 0, 1), scale=0.4)
            self.cam_arrow_y = pyvista.Arrow(start=(0, 0, 2), direction=(0, 0, 1), scale=0.4)
            self.cam_arrow_x = pyvista.Arrow(start=(0, 0, 2), direction=(0, 0, 1), scale=0.4)
            self.pl.add_mesh(self.cam_sphere, color='green', ambient=0.5, diffuse=0.8, specular=0.8, specular_power=5, smooth_shading=True)
            self.pl.add_mesh(self.cam_arrow_z, color='yellow', ambient=0.5, diffuse=0.8, specular=0.8, specular_power=5, smooth_shading=True)
            self.pl.add_mesh(self.cam_arrow_y, color='red', ambient=0.5, diffuse=0.8, specular=0.8, specular_power=5, smooth_shading=True)
            self.pl.add_mesh(self.cam_arrow_x, color='blue', ambient=0.5, diffuse=0.8, specular=0.8, specular_power=5, smooth_shading=True)

        
        vertices = self.gt[0]['smpl_verts'][0] if len(self.gt) > 0 else self.scene_dict[0]['smpl_verts'][0]
        colors = ['#33b400', '#8e95f2', 'orange', 'white', 'purple', 'cyan', 'blue', 'pink', 'red', 'green', 'yellow', 'black']
        self.smpl_actors = [SMPLActor(self.pl, vertices, self.smpl_faces, visible=False, color=color) for _, color in zip(range(self.num_person), colors)]
        self.smpl_gt_actors = [SMPLActor(self.pl, vertices, self.smpl_faces, visible=False) for _ in range(self.num_person)]
        self.skeleton_actors = [SkeletonActor(self.pl, self.smpl_joint_parents, bone_color='yellow', joint_color='green', visible=False) for _ in range(self.num_person)]
        self.skeleton_gt_actors = [SkeletonActor(self.pl, self.smpl_joint_parents, bone_color='red', joint_color='purple', visible=False) for _ in range(self.num_person)]
        self.smpl_actor_main = self.smpl_actors[0]
        
    def update_camera(self, interactive):
        pass
        # if self.coord == 'cam_in_world':
        #     cam_pose_inv = self.data['cam_pose_inv'][self.fr]
        #     cam_origin = cam_pose_inv[:3, 3]
        #     view_vec = cam_pose_inv[:3, 2]
        #     up_vec = -cam_pose_inv[:3, 1]
        #     new_focal = cam_origin + view_vec
        #     self.pl.camera.up = up_vec.tolist()
        #     self.pl.camera.focal_point = new_focal.tolist()
        #     self.pl.camera.position = cam_origin.tolist()
        # elif self.coord == 'cam':
        #     view_vec = np.asarray(self.pl.camera.position) - np.asarray(self.pl.camera.focal_point)
        #     new_focal = np.array(self.pl.camera.focal_point)
        #     new_pos = new_focal + view_vec
        #     self.pl.camera.up = (0, -1, 0)
        #     self.pl.camera.focal_point = new_focal.tolist()
        #     self.pl.camera.position = new_pos.tolist()
        # else:
        #     if self.use_y_up_coord:
        #         self.pl.camera.up = (0, 1, 0)
        #     else:
        #         self.pl.camera.up = (0, 0, 1)

    def update_scene(self):
        super().update_scene()
        if self.verbose:
            print(self.fr)

        """ Background """
        # if self.fr < len(self.background_img_arr):
        #     if self.interactive:
        #         if self.has_background:
        #             self.pl.remove_background_image()
        #         self.pl.add_background_image(self.background_img_arr[self.fr])
        #         self.has_background = True
        #     else:
        #         self.background_img = self.background_img_arr[self.fr]


        """ Estimation """     
        if self.show_est_pose:
            i = 0
            j = 0
            for idx, pose_dict in self.scene_dict.items():
                actor = self.smpl_actors[i]
                sk_actor = self.skeleton_actors[j]
                if self.fr in pose_dict['frames']:
                    pind = pose_dict['frame2ind'][self.fr]
                    # smpl actor
                    if self.show_smpl and 'smpl_verts' in pose_dict:
                        if 'exist_frames' in pose_dict and not pose_dict['exist_frames'][self.fr]:
                            actor.set_visibility(False)
                        else:
                            actor.set_visibility(True)
                            verts_i = pose_dict['smpl_verts'][pind]
                            actor.update_verts(verts_i)
                            full_opacity = 0.7 if self.show_skeleton else 1.0
                            opacity = 0.4 if pose_dict['invis_frames'][self.fr] else full_opacity
                            actor.set_opacity(opacity)
                        i += 1
                    # skeleton actor
                    if self.show_skeleton and 'smpl_joints' in pose_dict:
                        sk_actor.set_visibility(True)
                        joints_i = pose_dict['smpl_joints'][pind]
                        sk_actor.update_joints(joints_i)
                        opacity = 0.4 if pose_dict['invis_frames'][self.fr] else 1.0
                        sk_actor.set_opacity(opacity)
                        j += 1
            for k in range(i, self.num_person):
                self.smpl_actors[k].set_visibility(False)
            for k in range(j, self.num_person):
                self.skeleton_actors[k].set_visibility(False)

        """ GT """
        if self.show_gt_pose:
            for i, actor in enumerate(self.smpl_gt_actors):
                pose_dict = self.gt[i]
                sk_actor = self.skeleton_gt_actors[i]
                # smpl actor
                if self.show_smpl:
                    actor.set_visibility(True)
                    verts_i = pose_dict['smpl_verts'][self.fr]
                    actor.update_verts(verts_i)
                    actor.set_opacity(0.5)
                else:
                    actor.set_visibility(False)
                # skeleton actor
                if self.show_skeleton:
                    sk_actor.set_visibility(True)
                    joints_i = pose_dict['smpl_joints'][self.fr]
                    sk_actor.update_joints(joints_i)
                    sk_actor.set_opacity(0.5)
                else:
                    sk_actor.set_visibility(False)

        if self.coord == 'world' and self.show_camera:
            cam_pose_inv = self.data['cam_pose_inv'][self.fr]
            new_sphere = pyvista.Sphere(radius=0.05, center=cam_pose_inv[:3, 3].tolist())
            new_arrow_z = pyvista.Arrow(start=cam_pose_inv[:3, 3].tolist(), direction=cam_pose_inv[:3, 2].tolist(), scale=0.4)
            new_arrow_y = pyvista.Arrow(start=cam_pose_inv[:3, 3].tolist(), direction=cam_pose_inv[:3, 1].tolist(), scale=0.4)
            new_arrow_x = pyvista.Arrow(start=cam_pose_inv[:3, 3].tolist(), direction=cam_pose_inv[:3, 0].tolist(), scale=0.4)
            self.cam_sphere.points[:] = new_sphere.points
            self.cam_arrow_z.points[:] = new_arrow_z.points
            self.cam_arrow_y.points[:] = new_arrow_y.points
            self.cam_arrow_x.points[:] = new_arrow_x.points

    def setup_key_callback(self):
        super().setup_key_callback()

        def go_to_frame():
            self.fr = 50
            if self.verbose:
                print(self.fr)
            self.paused = True
            self.update_scene()

        def print_camera():
            print(f"'cam_focus': {self.pl.camera.focal_point},")
            print(f"'cam_pos': {self.pl.camera.position}")

        def toggle_smpl():
            self.show_smpl = not self.show_smpl

        def toggle_skeleton():
            self.show_skeleton = not self.show_skeleton

        self.pl.add_key_event('t', print_camera)
        self.pl.add_key_event('z', go_to_frame)
        self.pl.add_key_event('j', toggle_smpl)
        self.pl.add_key_event('k', toggle_skeleton)

Could you try this version and let me know whether the output is still black?
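The per-frame bookkeeping in `update_scene` above (count the actors that received data this frame, then hide the rest) can be sketched independently of PyVista. This is a minimal illustration, not GLAMR code: `FakeActor` is a hypothetical stand-in for the SMPL/skeleton actors, assuming only `set_visibility` matters here.

```python
class FakeActor:
    """Hypothetical stand-in for an SMPL or skeleton actor."""
    def __init__(self):
        self.visible = True

    def set_visibility(self, flag):
        self.visible = flag


def hide_unused(actors, num_used):
    # Mirrors the trailing loops in update_scene: the first num_used
    # actors were updated this frame, so every later actor is hidden.
    for k in range(num_used, len(actors)):
        actors[k].set_visibility(False)


actors = [FakeActor() for _ in range(4)]
hide_unused(actors, 2)
print([a.visible for a in actors])  # [True, True, False, False]
```

The same pattern is applied twice in `update_scene`, once with counter `i` for SMPL actors and once with `j` for skeleton actors.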

@iulsko

iulsko commented Aug 22, 2022

@Khrylx the output is completely blank; now the original video is not showing in the central "GLAMR (Cam)" frame.
Maybe a related question: is it necessary to have the preprocessed datasets downloaded for the demo?

@Khrylx Khrylx changed the title can not get the world output even optimziation process runed Output is completely black Aug 23, 2022
@Khrylx
Collaborator

Khrylx commented Aug 23, 2022

Ok, I found out it's a problem with anti_aliasing, so you just need to set anti_aliasing to False here:
https://github.com/NVlabs/GLAMR/blob/main/lib/utils/visualizer3d.py#L20

Please confirm if this fix works for you.
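To make the fix concrete, here is a minimal sketch of how such an `anti_aliasing` flag would typically be wired through a visualizer's constructor. This is illustrative only: `FakePlotter` is a hypothetical stand-in for `pyvista.Plotter` (whose real API does provide `disable_anti_aliasing()`), and the class below is not the actual `visualizer3d.py` implementation.

```python
class FakePlotter:
    """Hypothetical stand-in for pyvista.Plotter."""
    def __init__(self):
        self.aa_enabled = True  # anti-aliasing on by default

    def disable_anti_aliasing(self):
        self.aa_enabled = False


class Visualizer3D:
    # Default flipped to False, matching the suggested fix: with some
    # driver/OpenGL setups, anti-aliasing caused the all-black output.
    def __init__(self, anti_aliasing=False):
        self.pl = FakePlotter()
        if not anti_aliasing:
            self.pl.disable_anti_aliasing()


vis = Visualizer3D()
print(vis.pl.aa_enabled)  # False
```

In the actual repo the change amounts to passing `anti_aliasing=False` (or changing the default) at the line linked above.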

@iulsko

iulsko commented Aug 23, 2022

@Khrylx Yes, thank you so much! Works like a charm. Thanks for taking the time to look into this 😊

@Khrylx Khrylx pinned this issue Aug 23, 2022
@Khrylx
Collaborator

Khrylx commented Aug 23, 2022

No problem!

Fixed in commit: f0602fd

@Khrylx Khrylx closed this as completed Aug 23, 2022
@lucasjinreal
Author

Is the black output fixed now?

@Khrylx
Collaborator

Khrylx commented Aug 23, 2022

@jinfagang Yes, it should be fixed now.
