
Questions on Transforms Performed on the KITTI Dataset #29

Open
kevintsq opened this issue May 25, 2024 · 0 comments
kevintsq commented May 25, 2024

Thanks for your great work! I have some questions about the transformations performed on the KITTI dataset, as I want to test on longer scenes such as KITTI Odometry, whose poses are provided in cam0's coordinate frame. Your answers would be greatly appreciated.

  1. Why do you perform auto_orient_and_center_poses on the KITTI dataset, but not on the Waymo dataset?
    # Orients and centers the poses
    oriented = torch.from_numpy(np.array(cam_poses_tracking).astype(np.float32))  # (n_frames, 3, 4)
    oriented, transform_matrix = auto_orient_and_center_poses(
        oriented
    )  # oriented (n_frames, 3, 4), transform_matrix (3, 4)
    row = torch.tensor([0, 0, 0, 1], dtype=torch.float32)
    zeros = torch.zeros(oriented.shape[0], 1, 4)
    oriented = torch.cat([oriented, zeros], dim=1)
    oriented[:, -1] = row  # (n_frames, 4, 4)
    transform_matrix = torch.cat([transform_matrix, row[None, :]], dim=0)  # (4, 4)
    cam_poses_tracking = oriented.numpy()
    transform_matrix = transform_matrix.numpy()
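For reference, here is my understanding of what `auto_orient_and_center_poses` does, as a minimal numpy sketch. This assumes the nerfstudio-style behavior (center on the mean camera origin, rotate so the mean camera "up" axis aligns with world +z); the function name is the only thing taken from the snippet above, the body is my own reconstruction:

```python
import numpy as np

def orient_and_center_sketch(poses):
    """Sketch of an auto-orient-and-center step (assumption: nerfstudio-style).

    poses: (n_frames, 3, 4) camera-to-world matrices.
    Returns the transformed poses and the (3, 4) transform that was applied.
    """
    # Center: subtract the mean camera origin.
    origins = poses[:, :3, 3]
    translation = origins.mean(axis=0)

    # Orient: rotate so the mean camera "up" axis (the y column here,
    # an assumption about the camera convention) aligns with world +z.
    up = poses[:, :3, 1].mean(axis=0)
    up = up / np.linalg.norm(up)
    z = np.array([0.0, 0.0, 1.0])

    # Rodrigues-style rotation taking `up` onto `z`.
    v = np.cross(up, z)
    s, c = np.linalg.norm(v), np.dot(up, z)
    V = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    R = np.eye(3) if s < 1e-8 else np.eye(3) + V + V @ V * ((1 - c) / s**2)

    # Compose [R | -R t] and apply it to every pose.
    transform = np.concatenate([R, (-R @ translation)[:, None]], axis=1)  # (3, 4)
    new_poses = np.einsum("ij,njk->nik", transform[:, :3], poses)
    new_poses[:, :3, 3] += transform[:, 3]
    return new_poses, transform
```

After this, the mean camera origin sits at the world origin and the average up direction points along +z, which is presumably why KITTI (whose trajectories can drift far from the origin) gets this treatment.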
  2. Why do you "transform camera axis from kitti to opengl"? The camera coordinate convention of KITTI should already be the same as that of 3DGS.
    opengl2kitti = np.array([[1, 0, 0, 0],
                             [0, -1, 0, 0],
                             [0, 0, -1, 0],
                             [0, 0, 0, 1]])

    # transform camera axis from kitti to opengl for nerf:
    cam_i_camrect = np.matmul(Tr_cam_i2camrect, opengl2kitti)
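For context on what this flip does: KITTI cameras follow the OpenCV convention (x right, y down, z forward), while NeRF pipelines typically use the OpenGL convention (x right, y up, z backward), and `diag(1, -1, -1, 1)` converts between the two. A small self-contained check of that interpretation (my own sketch, not code from the repo):

```python
import numpy as np

# KITTI / OpenCV camera axes: x right, y down, z forward.
# OpenGL / NeRF camera axes:  x right, y up,   z backward.
opengl2kitti = np.array([[1, 0, 0, 0],
                         [0, -1, 0, 0],
                         [0, 0, -1, 0],
                         [0, 0, 0, 1]], dtype=np.float64)

# A point one unit in front of an OpenGL camera lies along -z ...
p_gl = np.array([0.0, 0.0, -1.0, 1.0])
# ... and the flip maps it to +z, i.e. "in front" in the KITTI convention.
p_kitti = opengl2kitti @ p_gl

# The flip is its own inverse: applying it twice is the identity.
roundtrip = opengl2kitti @ opengl2kitti
```

So my question is really about why this conversion is needed here at all, given that 3DGS already expects the OpenCV-style convention.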
  3. Why are the following magic Euler angles applied to the Tr_cam2camrect matrix?
    #####################
    # Debug Camera offset
    if scene_no == 2:
        yaw = np.deg2rad(0.7)     ## Affects camera rig roll: High --> counterclockwise
        pitch = np.deg2rad(-0.5)  ## Affects camera rig yaw: High --> Turn Right
        # pitch = np.deg2rad(-0.97)
        roll = np.deg2rad(0.9)    ## Affects camera rig pitch: High --> up
        # roll = np.deg2rad(1.2)
    elif scene_no == 1:
        if exp:
            yaw = np.deg2rad(0.3)     ## Affects camera rig roll: High --> counterclockwise
            pitch = np.deg2rad(-0.6)  ## Affects camera rig yaw: High --> Turn Right
            # pitch = np.deg2rad(-0.97)
            roll = np.deg2rad(0.75)   ## Affects camera rig pitch: High --> up
            # roll = np.deg2rad(1.2)
        else:
            yaw = np.deg2rad(0.5)     ## Affects camera rig roll: High --> counterclockwise
            pitch = np.deg2rad(-0.5)  ## Affects camera rig yaw: High --> Turn Right
            roll = np.deg2rad(0.75)   ## Affects camera rig pitch: High --> up
    else:
        yaw = np.deg2rad(0.05)
        pitch = np.deg2rad(-0.75)
        # pitch = np.deg2rad(-0.97)
        roll = np.deg2rad(1.05)
        # roll = np.deg2rad(1.2)
    cam_debug = np.eye(4)
    cam_debug[:3, :3] = get_rotation(roll, pitch, yaw)
    Tr_cam2camrect = tracking_calibration["Tr_cam2camrect"]
    Tr_cam2camrect = np.matmul(Tr_cam2camrect, cam_debug)
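Part of what confuses me is how these offsets act. Right-multiplying `Tr_cam2camrect` by `cam_debug` applies the rotation in the matrix's local (source) frame rather than the world frame, which could explain the swapped roll/pitch/yaw effects noted in the comments. A sketch of that, where `get_rotation_sketch` is my hypothetical stand-in for the repo's `get_rotation` (an extrinsic x-y-z Euler composition; the real implementation is not shown above):

```python
import numpy as np

def get_rotation_sketch(roll, pitch, yaw):
    """Hypothetical stand-in for get_rotation: Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

T = np.eye(4)
T[:3, :3] = get_rotation_sketch(0.0, 0.0, np.pi / 2)        # a 90-degree yaw
D = np.eye(4)
D[:3, :3] = get_rotation_sketch(np.deg2rad(1.0), 0.0, 0.0)  # a small roll offset

left = D @ T   # offset applied in the world frame
right = T @ D  # offset applied in T's local frame -- what T @ cam_debug does
```

Because rotations about different axes do not commute, `left` and `right` differ, so a "roll" in `cam_debug`'s frame can show up as pitch or yaw of the rig after the surrounding axis conversions. What I would like to understand is where the specific hand-tuned angle values come from.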

    Thank you for your time!