
How to convert depth maps obtained by orthographic cameras into a 3D point cloud #979

Closed
flamehaze1115 opened this issue Sep 26, 2023 · 2 comments
Labels
question Question, not yet a bug ;)

Comments

@flamehaze1115

Describe the issue

Hello. Thanks for the great work.
I am using BlenderProc with orthographic cameras to render color images and depth maps. When I try to convert the depth maps into a 3D point cloud, I run into a problem.
I know an orthographic camera is unlike a pinhole camera, so I use this code for the conversion:
[screenshot: conversion code]
When I fuse two depth maps using the cam2world matrices, the fused point clouds roughly match, but they are not aligned well. What could the reason be? Maybe my conversion code is wrong?

[screenshots: fused point clouds]
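(Editor's note: the fusion step described above is just transforming each view's camera-space points into world space with its cam2world matrix and concatenating. A minimal NumPy sketch, with illustrative function and variable names not taken from the poster's code:)

```python
import numpy as np

def fuse_pointclouds(points_cam_list, cam2world_list):
    """Transform per-view camera-space points into world space and merge.

    points_cam_list: list of (N_i, 3) arrays, one per camera frame
    cam2world_list:  list of (4, 4) camera-to-world matrices
    """
    fused = []
    for pts, c2w in zip(points_cam_list, cam2world_list):
        # Promote to homogeneous coordinates, then apply cam2world.
        homo = np.concatenate([pts, np.ones((pts.shape[0], 1))], axis=1)
        fused.append((homo @ c2w.T)[:, :3])
    return np.concatenate(fused, axis=0)
```

If the per-view clouds are correct in camera space, any residual misalignment after this step points at the unprojection itself (as it turned out below in this thread) rather than at the poses.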

Minimal code example

No response

Files required to run the code

No response

Expected behavior

Hoping for your help.

BlenderProc version

main branch

@flamehaze1115 flamehaze1115 added the question Question, not yet a bug ;) label Sep 26, 2023
@flamehaze1115
Author

When I use ortho_scale=1, the point cloud from the two depth maps looks like this:
[screenshot: fused point clouds with ortho_scale=1]

@flamehaze1115
Author

I found the problem. Blender's ortho scale corresponds to a unit cube, but my code maps to a (-1, 1) cube, so the conversion should be `q = q / ortho_scale * 0.5`.
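(Editor's note: the point is that the (-1, 1) NDC range of an orthographic camera covers `ortho_scale` world units in total, so the scale factor is `ortho_scale / 2`, not `ortho_scale`. A minimal sketch of the full unprojection with that factor; names are illustrative, and it assumes a square image and the OpenGL camera convention BlenderProc uses:)

```python
import numpy as np

def ortho_depth_to_pointcloud(depth, ortho_scale, cam2world):
    """Unproject an orthographic depth map into a world-space point cloud.

    depth:       (H, W) depth along the camera's viewing axis
    ortho_scale: Blender's orthographic scale, i.e. the full width of the
                 view in world units (square image assumed)
    cam2world:   (4, 4) camera-to-world matrix (OpenGL convention:
                 camera looks down -Z, +Y is up)
    """
    h, w = depth.shape
    # Pixel centers in normalized device coordinates, spanning (-1, 1).
    u = (np.arange(w) + 0.5) / w * 2.0 - 1.0
    v = 1.0 - (np.arange(h) + 0.5) / h * 2.0  # flip so +Y points up
    uu, vv = np.meshgrid(u, v)
    # The key factor: the (-1, 1) NDC range covers ortho_scale world
    # units in total, so scale by ortho_scale / 2, not ortho_scale.
    x_cam = uu * ortho_scale * 0.5
    y_cam = vv * ortho_scale * 0.5
    z_cam = -depth  # camera looks along -Z
    pts = np.stack([x_cam, y_cam, z_cam, np.ones_like(z_cam)], axis=-1)
    return (pts.reshape(-1, 4) @ cam2world.T)[:, :3]
```

Unlike the pinhole case, x and y here do not depend on depth at all, which is why a wrong lateral scale shows up as a rigid misalignment between views rather than a distortion.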
