
blenderproc.camera.pointcloud_from_depth() returns points in the Blender world coordinate frame instead of the camera frame #1162

Open
Seekerzero opened this issue Nov 26, 2024 · 1 comment
Labels
documentation Improvements or additions to documentation

Comments

@Seekerzero

Describe the issue

When generating a point cloud from a depth image with blenderproc.camera.pointcloud_from_depth(), the resulting points are expressed in the Blender world coordinate frame instead of the coordinate frame of the camera pose.

Minimal code example

import os

import blenderproc as bproc
import numpy as np
import open3d as o3d

# Scene setup (bproc.init(), loading `objs`, defining the output paths) is omitted here.
# `camera` is the path to a text file with one pose per line:
# x y z euler_x euler_y euler_z
pose_number = 0
with open(camera, "r") as f:
    for line in f.readlines():
        line = [float(x) for x in line.split()]
        position, euler_rotation = line[:3], line[3:6]
        matrix_world = bproc.math.build_transformation_mat(position, euler_rotation)
        # convert_matrix_world = (
        #     bproc.math.change_source_coordinate_frame_of_transformation_matrix(
        #         matrix_world, ["X", "Y", "Z"]
        #     )
        # )
        # bproc.camera.add_camera_pose(convert_matrix_world)
        bproc.camera.add_camera_pose(matrix_world)
        pose_number += 1

# for i in range(pose_number):
#     bvh_tree = bproc.object.create_bvh_tree_multi_objects(objs)
#     depth = bproc.camera.depth_via_raytracing(bvh_tree, i)
#     points = bproc.camera.pointcloud_from_depth(depth, i)
#     points = points.reshape(-1, 3)

#     # using numpy to filter out nan values
#     points = points[~np.isnan(points).any(axis=1)]

#     open3d_points = o3d.geometry.PointCloud()
#     open3d_points.points = o3d.utility.Vector3dVector(points)
#     # save the point cloud
#     o3d.io.write_point_cloud(
#         os.path.join(point_cloud_output, f"point_cloud_{i}.pcd"), open3d_points
#     )

# enable depth rendering; antialiasing is disabled so depth values stay exact
bproc.renderer.enable_depth_output(False)


data = bproc.renderer.render()
# data["depth"] = bproc.postprocessing.add_kinect_azure_noise(
#     data["depth"], data["colors"]
# )

bproc.writer.write_hdf5(point_cloud_ref_image_raw_output, data)


for i in range(pose_number):
    depth = data["depth"][i]
    points = bproc.camera.pointcloud_from_depth(depth, i)
    points = points.reshape(-1, 3)

    # filter out NaN values (pixels without a depth hit) using NumPy
    points = points[~np.isnan(points).any(axis=1)]

    open3d_points = o3d.geometry.PointCloud()
    open3d_points.points = o3d.utility.Vector3dVector(points)
    # save the point cloud
    o3d.io.write_point_cloud(
        os.path.join(point_cloud_output, f"point_cloud_{i}.pcd"), open3d_points
    )

Files required to run the code

No response

Expected behavior

When creating a point cloud from a depth image, one usually expects the generated points to be expressed in the camera frame of the corresponding pose, not in the world coordinate frame. Neither the tutorial nor the API documentation mentions that pointcloud_from_depth() returns world-frame points, which can confuse users.
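
For anyone hitting this in the meantime, here is a minimal workaround sketch that transforms the world-frame output back into the camera frame. It assumes that bproc.camera.get_camera_pose(i) returns the 4x4 camera-to-world matrix of frame i (this helper exists in BlenderProc 2.x, but treat the exact behavior as an assumption). Note also that Blender's camera convention looks along its local -Z axis with +Y up, so an extra axis flip may be needed for OpenCV-style frames.

import numpy as np

# points: (N, 3) world-frame array from pointcloud_from_depth, NaNs removed
cam2world = bproc.camera.get_camera_pose(i)   # 4x4 camera-to-world matrix
world2cam = np.linalg.inv(cam2world)          # invert to get world-to-camera

points_h = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous coords
points_cam = (world2cam @ points_h.T).T[:, :3]                 # camera-frame points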

BlenderProc version

2.7.0

@Seekerzero Seekerzero added the question Question, not yet a bug ;) label Nov 26, 2024
@cornerfarmer
Member

Thanks for the hint. I agree this should be made clearer in the documentation.

@cornerfarmer cornerfarmer added documentation Improvements or additions to documentation and removed question Question, not yet a bug ;) labels Nov 27, 2024