
Blender Camera Pose #5

Closed
YJonmo opened this issue Apr 23, 2020 · 3 comments



YJonmo commented Apr 23, 2020

Thanks for this work.
My question might be a bit tangential to this project.

I have created a simulated knee environment in Blender, and for every frame I have the depth, the camera pose, and the rendered image. Using these three ingredients, I should be able to reconstruct the 3D point cloud and mesh of the environment with approaches such as TSDF fusion (https://github.com/andyzeng/tsdf-fusion-python). But that is not happening, even though I am using the correct camera intrinsic matrix (I think).

To cross-check the Blender camera pose information, I used only the depth images and Kinect Fusion to estimate the camera poses, and fed those estimated poses together with the depth images into TSDF fusion; it still did not work. I tried the same approach (estimating the camera poses with Kinect Fusion) on the TSDF demo data, and there it worked (Kinect Fusion estimated poses + ground-truth depth). That means Kinect Fusion can produce camera poses as good as the ground truth for use with TSDF fusion, which narrows my issue down to the ground-truth depth that Blender produces.

My question is: do I need to apply any transformation to the depth images that Blender produces?
I see you mention a conversion between the camera pose and DSO; I wonder whether I need to do the same to get the TSDF algorithm working.


GSORF commented Apr 23, 2020

Hello @YJonmo

Thank you for your question. Unfortunately I have no experience with TSDF yet (thanks for the link).

In my view you do not need to transform the depth maps from Blender, as they are expressed relative to the Blender camera coordinate system and not with respect to a specific world coordinate frame. However, depending on the specific TSDF implementation, you might need to triangulate or reproject the 3D points, for which you in turn need the correct camera pose. Your issue might therefore indeed be linked to a wrong Blender camera pose.
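To make that concrete, here is a minimal sketch (not code from this repository) of how such a reprojection lifts a depth map into world space; the function name, `fx`/`fy`/`cx`/`cy` (pinhole intrinsics in pixels), and the 4x4 `cam_to_world` pose are illustrative assumptions:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy, cam_to_world):
    """Lift an (H, W) metric depth map to an (N, 3) world-space point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx                       # camera-frame X
    y = (v - cy) * depth / fy                       # camera-frame Y
    pts_cam = np.stack([x, y, depth, np.ones_like(depth)], axis=-1).reshape(-1, 4)
    pts_world = (cam_to_world @ pts_cam.T).T        # apply the camera-to-world pose
    return pts_world[:, :3]
```

Plotting the resulting cloud for two or three frames is a quick way to see whether your poses and depth agree before running a full fusion.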

You can check how I did the conversion in the Python addon. The coordinate transformation from Blender to DSO can be found in lines 34 to 113: https://github.com/GSORF/Visual-GPS-SLAM/blob/master/02_Utilities/BlenderAddon/addon_vslam_groundtruth_Blender280.py#L34
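For context, the core of such a conversion is usually a change of camera axis conventions: Blender cameras look down their local -Z axis with +Y up, while DSO/OpenCV-style pipelines assume +Z forward and +Y down. Below is a minimal sketch of that flip, to be run inside Blender; it is an assumption-based illustration, not the addon's exact code:

```python
import bpy
import numpy as np

def blender_pose_to_cv(cam_obj):
    """Return a 4x4 camera-to-world matrix in the vision convention."""
    t_blender = np.array(cam_obj.matrix_world)  # camera-to-world, Blender axes
    flip_yz = np.diag([1.0, -1.0, -1.0, 1.0])   # 180 deg rotation about camera X
    return t_blender @ flip_yz                  # flips the camera Y and Z axes

# Hypothetical object name, adjust to your scene:
print(blender_pose_to_cv(bpy.data.objects["Camera"]))
```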

You might also want to double-check your calculation of the camera intrinsics against mine, which you can find in lines 326 to 329: https://github.com/GSORF/Visual-GPS-SLAM/blob/master/02_Utilities/BlenderAddon/addon_vslam_groundtruth_Blender280.py#L326
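For a standard Blender camera, the pinhole intrinsics follow from the focal length, sensor width, and render resolution. A hedged sketch of that calculation (assuming a horizontal sensor fit and square pixels; not the addon's exact code):

```python
import bpy

def blender_intrinsics(scene, cam_data):
    """Pinhole intrinsics (in pixels) for a Blender camera, horizontal sensor fit."""
    scale = scene.render.resolution_percentage / 100.0
    width = scene.render.resolution_x * scale
    height = scene.render.resolution_y * scale
    fx = cam_data.lens / cam_data.sensor_width * width  # focal length in pixels
    fy = fx                                             # square pixels assumed
    cx, cy = width / 2.0, height / 2.0                  # principal point at center
    return fx, fy, cx, cy

scene = bpy.context.scene
print(blender_intrinsics(scene, bpy.data.objects["Camera"].data))
```

If these values differ from what you pass to the TSDF code, the backprojected points will land in the wrong place even with perfect poses.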

Hope this helps. Kind regards,
Adam


YJonmo commented Apr 24, 2020

Thanks a lot, man. It worked.
I have put a star on your great work.


GSORF commented Apr 24, 2020

> Thanks a lot, man. It worked. I have put a star on your great work.

Awesome @YJonmo, that is great to read! Thanks for the update, and much success with your project.

Kind regards,
Adam

GSORF closed this as completed Apr 24, 2020