Hi, I am new to nerfstudio and there's something about the tool that I don't understand. I have a sequence of images of an airport together with labeled camera poses, and I am trying to use the images and poses to train a NeRF. I created a dataset following the same format that ns-process-data outputs, and built transforms.json by filling in the camera poses and camera parameters. However, when training the model, I notice in the ns viewer that the orientation of the images seems to be flipped, and by examining some of the key frames I see that a consistent transformation (a constant rotation matrix plus a translation based on camera position) is applied to the camera poses from transforms.json. Where is this transformation coming from? Is it because the scale of the scene is too large (over 10000 meters)? Thanks in advance.
This may be related to a similar issue I faced when using ORB-SLAM results for pose estimation and then running nerfacto. If your camera poses are in the OpenCV convention, you will have to transform them into the OpenGL convention - #3101 (comment). Hope that helps.
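For reference, a minimal sketch of that OpenCV-to-OpenGL conversion (the function name and the 4x4 camera-to-world layout are my assumptions, not from nerfstudio's API): OpenCV uses x-right, y-down, z-forward, while the OpenGL/NeRF convention in transforms.json uses x-right, y-up, z-backward, so the fix is to negate the y and z axis columns of each camera-to-world rotation.

```python
import numpy as np

def opencv_to_opengl(c2w: np.ndarray) -> np.ndarray:
    """Convert a 4x4 camera-to-world pose from the OpenCV convention
    (x right, y down, z forward) to the OpenGL convention
    (x right, y up, z backward). Hypothetical helper for illustration."""
    pose = c2w.copy()
    # Negate the y and z columns of the rotation block: this flips the
    # camera's y and z axes while leaving the camera position (last
    # column) untouched.
    pose[0:3, 1:3] *= -1
    return pose

# Example: an identity pose at the origin
pose_cv = np.eye(4)
pose_gl = opencv_to_opengl(pose_cv)
```

Applying this to every `transform_matrix` entry before writing transforms.json should fix the flipped orientation, assuming your poses really are camera-to-world in the OpenCV convention.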