Runtime error in data loader due to variable size of tensors #2
Comments
In data_utils.py, the 3DMatch preprocessing for their experiment runs voxelization on well-aligned point clouds, which means the two clouds are guaranteed to have the same number of voxels after overlap detection. Please check data_utils.py lines 93-97, where the well-aligned point clouds are voxelized, and lines 121-125, where p1 is transformed; in effect, the registration result is known before registering. So apparently, in your case, voxelization is not running on well-aligned point clouds, and this produces the problem. This might be a bug; I also emailed the author and am waiting for a reply.
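For readers following along, here is a minimal sketch of the preprocessing order described above, for illustration only. The helper arithmetic, the assumption that gt_pose is a 4x4 matrix, and the argument order of points_to_voxel_second() are assumptions, not the repository's actual API; see data_utils.py lines 93-97 and 121-125 for the real code.

```python
import numpy as np

def sketch_3dmatch_preprocess(p0, p1_aligned, gt_pose,
                              voxel_size, coords_range, max_points):
    # The two clouds are already aligned here, so voxelizing them over the
    # same grid yields matching voxel sets (equal counts) in the overlap
    # region (cf. data_utils.py:93-97).
    voxels0, coords0, _ = points_to_voxel_second(
        p0, coords_range, voxel_size, max_points)
    voxels1, coords1, _ = points_to_voxel_second(
        p1_aligned, coords_range, voxel_size, max_points)

    # Only afterwards is p1 moved away with the ground-truth pose "x"
    # (cf. data_utils.py:121-125), so the pose the network must recover is
    # known before registration is run; this only works when the inputs
    # are well aligned to begin with.
    R, t = gt_pose[:3, :3], gt_pose[:3, 3]
    p1_moved = p1_aligned @ R.T + t
    return voxels0, voxels1, p1_moved
```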
Thanks, @Jarrome, for your insights. What I understood is that the 3DMatch point cloud data is already well-aligned, while in my case it might not be. Can you please clarify what "well-aligned point cloud data" means?
Hi @praj441. I mean they are registered before voxelization. From Xueqian Li's email reply, you may need another global registration step before voxelization, if I understand correctly. Please check whether it works when combined with some RANSAC registration. But I keep asking, because in the 3DMatch test the cloud is then transformed back with the ground-truth pose "x", so it is not a refinement setting.
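As a concrete starting point for the RANSAC suggestion above, here is a hedged sketch of a coarse global alignment using Open3D's feature-based RANSAC (Open3D >= 0.12 API). The voxel size, radii, and thresholds are placeholder values that would need tuning for your data; this is not part of the PointNetLK_Revisited codebase.

```python
import open3d as o3d

def coarse_align(src_points, tgt_points, voxel_size=0.05):
    # Build Open3D point clouds from Nx3 numpy arrays.
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(src_points))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(tgt_points))

    # Downsample and estimate normals (needed for FPFH features).
    src_d = src.voxel_down_sample(voxel_size)
    tgt_d = tgt.voxel_down_sample(voxel_size)
    for pcd in (src_d, tgt_d):
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 2, max_nn=30))

    # Compute FPFH descriptors on the downsampled clouds.
    search = o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 5, max_nn=100)
    src_f = o3d.pipelines.registration.compute_fpfh_feature(src_d, search)
    tgt_f = o3d.pipelines.registration.compute_fpfh_feature(tgt_d, search)

    # Feature-based RANSAC gives a rough global pose; apply it to the source
    # cloud before voxelization, then let the local method refine it.
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_d, tgt_d, src_f, tgt_f, True, voxel_size * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3, [], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation
```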
Hey @praj441, you can try: 1. finding the overlapping area between the point clouds, since we only need to compute the features for the overlapping areas; 2. after finding the overlapping area, outputting the voxel indices from the points_to_voxel_second() function and trying to find the corresponding voxels. Another reminder: you should not use voxelization during training. And thanks for @Jarrome's reply; please bear in mind that our method is still a local registration method that needs some overlap between the source and the target point clouds. So depending on your application, you may want to find a global estimate first.
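A minimal sketch of suggestion 2 above: matching voxels between the two clouds by their integer grid coordinates. It assumes points_to_voxel_second() returns (voxels, coordinates, num_points_per_voxel), like a SECOND-style voxelizer, and that the grid has fewer than 1000 cells per axis; check the actual return values in data_utils.py before relying on this.

```python
import numpy as np

def corresponding_voxels(p0, p1, voxel_size, coords_range, max_points):
    v0, c0, _ = points_to_voxel_second(p0, coords_range, voxel_size, max_points)
    v1, c1, _ = points_to_voxel_second(p1, coords_range, voxel_size, max_points)

    # Hash each integer grid coordinate to a scalar key (toy hash; assumes
    # grid dimensions < 1000 per axis).
    def keys(c):
        return c[:, 0] * 1_000_000 + c[:, 1] * 1_000 + c[:, 2]

    k0, k1 = keys(c0), keys(c1)
    common = np.intersect1d(k0, k1)          # voxels present in both clouds
    idx0 = np.nonzero(np.isin(k0, common))[0]
    idx1 = np.nonzero(np.isin(k1, common))[0]

    # Sort both sides by key so the i-th voxel in each output refers to the
    # same grid cell.
    order0 = idx0[np.argsort(k0[idx0])]
    order1 = idx1[np.argsort(k1[idx1])]
    return v0[order0], v1[order1]            # equal voxel counts, paired
```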
Thanks, @Lilac-Lee and @Jarrome, for the helpful comments. I will try to implement it accordingly.
In the synthetic setting there is no problem, since the zero-mean is taken with the point-cloud mean, which only needs the point cloud itself. But in the 3DMatch test, the zero-mean is taken with the voxel mean, and that voxel mean is computed from the two aligned point clouds. This is the difference, and that is why I say the bug is only in the voxelization. "Then p1 is transformed back with a pose "x" for registration." For coarse-to-fine registration, P ---T1---> P^{1} ---T2---> P^{2}. I also checked: with voxel_zero_mean=True and voxel=1 on the 3DMatch test, it is as if PointNetLK_Revisited merely solves for the rotation during registration. Please check!
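To make the contrast concrete, here is a toy illustration of the two centring schemes being compared, written from the description in this thread rather than from the repository code. The array shapes (num_voxels, max_points, 3) for voxels and (num_voxels,) for num_points_per_voxel are assumptions.

```python
import numpy as np

def zero_mean_per_cloud(points):
    # Synthetic setting: each cloud is centred with its own mean, which
    # needs nothing but the cloud itself.
    return points - points.mean(axis=0)

def zero_mean_per_voxel(voxels, num_points_per_voxel):
    # 3DMatch test setting as described above: each voxel is centred with
    # the voxel mean. If the voxel grid was built from the two *aligned*
    # clouds, that mean couples the two clouds, which is the issue @Jarrome
    # points out. (Padded zero rows are ignored via num_points_per_voxel.)
    counts = np.maximum(num_points_per_voxel[:, None, None], 1)
    means = voxels.sum(axis=1, keepdims=True) / counts
    return voxels - means
```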
Hi @Jarrome, zero_mean is required by our method. The voxelization part is analogous to the synthetic part. voxel_mean is required when computing the global Jacobian. Also, you can open a new issue, or we can continue discussing this through email. Thanks.
When trying to run train.py on my custom dataset, I am getting the following error:
"RuntimeError: stack expects each tensor to be equal size, but got [4, 1000, 3] at entry 0 and [2, 1000, 3] at entry 1"
The data loader defined in data_utils.py outputs tensors of variable size, because points_to_voxel_second() returns a variable number of voxels per sample.
Can you suggest a solution for this?
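For reference, a generic workaround for the stacking error itself (separate from the alignment issue discussed in the comments) is a custom collate_fn that pads every sample to the largest voxel count in the batch. This sketch assumes each dataset item is a (voxels, label) pair with voxels shaped (num_voxels, points_per_voxel, 3) and tensor labels; adapt the fields to the actual dataset output.

```python
import torch

def pad_voxel_collate(batch):
    voxels, labels = zip(*batch)
    max_v = max(v.shape[0] for v in voxels)
    padded, masks = [], []
    for v in voxels:
        pad = max_v - v.shape[0]
        # Pad the voxel dimension (dim 0) at the end with zeros.
        padded.append(torch.nn.functional.pad(v, (0, 0, 0, 0, 0, pad)))
        # Mask marks which voxels are real (1) vs padding (0).
        masks.append(torch.cat([torch.ones(v.shape[0]), torch.zeros(pad)]))
    return torch.stack(padded), torch.stack(masks), torch.stack(labels)

# Usage (hypothetical dataset):
# loader = torch.utils.data.DataLoader(dataset, batch_size=4,
#                                      collate_fn=pad_voxel_collate)
```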