I greatly appreciate the wonderful tool, PixSfM. It has become a favourite of mine.
I was wondering if you know of any ways to improve reconstruction quality for images with low texture. Specifically, I'm working with some pictures of Rubik's Cubes (Link). I used PixSfM to estimate the camera locations and create a point cloud, and I'd like to mention that PixSfM seemed to have fewer issues with camera location estimation than COLMAP. I used COLMAP to visualise the output of PixSfM, as shown in the following video:
Colmap.cube.mp4
Assuming all our scenes are 360° views, I also want to know whether it is possible to extract the central point cloud (the region of interest enclosed by the camera views).
More precisely, I want the area marked in red, as shown below:
P.S. I am currently utilizing the following code with PixSfM: https://github.com/amughrabi/pre-nerf/blob/main/pixelsfm.py
Here is some sample data: https://drive.google.com/drive/folders/1TJnBwlgQXoJ8YyQDA_nReAdz4_0YPHvK?usp=sharing
P.S. A faster estimation approach would be highly appreciated.
P.S. By "low-textured objects" we mean scenes without much detail, such as a ball or a banana on a table, where the surfaces of the objects are not highly detailed.
Looking forward to hearing from you soon,
Thanks a million,
Ahmad
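For context, here is a minimal sketch of the kind of hloc + PixSfM pipeline involved (this is not the linked script; the paths, the "superpoint_max"/"superglue" configurations, and the default PixSfM settings are placeholders, and the exact API may differ between versions):

```python
from pathlib import Path

from hloc import extract_features, match_features, pairs_from_exhaustive
from pixsfm.refine_hloc import PixSfM

# Placeholder paths: adjust to your dataset layout.
images = Path("datasets/cube/images")
outputs = Path("outputs/cube")
sfm_pairs = outputs / "pairs-sfm.txt"

feature_conf = extract_features.confs["superpoint_max"]
matcher_conf = match_features.confs["superglue"]

# Extract keypoints and match all image pairs (the scene is a small 360° capture).
features = extract_features.main(feature_conf, images, outputs)
pairs_from_exhaustive.main(sfm_pairs, features=features)
matches = match_features.main(matcher_conf, sfm_pairs, feature_conf["output"], outputs)

# Featuremetric keypoint adjustment + SfM + bundle adjustment with PixSfM
# (default configuration; pass a conf dict to customize).
refiner = PixSfM()
model, _ = refiner.reconstruction(outputs / "sfm", images, sfm_pairs, features, matches)
print(model.summary())
```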
amughrabi changed the title from "Inquiry about the possibility of minimizing the noise in Point Cloud" to "Inquiry about the possibility of minimizing the noise in Point Cloud for low-textured objects" on Jul 5, 2023.
PixSfM uses S2DNet, which is trained on MegaDepth and thus on strong textures. As a consequence, it does not perform as well on low-textured scenes. You could, however, try refining your 3D model with LIMAP, which builds a 3D line map (and also refines your camera poses) and works really well in low-textured scenes.
I am not aware of a direct way to extract the center point cloud, but if you are solely interested in the Rubik's Cube, you could run image segmentation on your images, check which 2D points fall within the area of interest in each image, and filter the 3D points based on this.
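Something along these lines should work (an untested sketch: it assumes you export one binary mask per image under a hypothetical masks/<image_name>.png naming scheme, with white marking the cube, and that you read the refined model with pycolmap; the 50% threshold is arbitrary):

```python
from pathlib import Path

import numpy as np
import pycolmap
from PIL import Image

# Assumed paths: "sfm" is the refined COLMAP model written by PixSfM,
# and masks/<image_name>.png are binary masks of the object (white = cube).
model_dir = Path("outputs/cube/sfm")
mask_dir = Path("outputs/cube/masks")

rec = pycolmap.Reconstruction(str(model_dir))

# Load one binary mask per registered image.
masks = {}
for image_id, image in rec.images.items():
    mask_path = mask_dir / (Path(image.name).stem + ".png")
    masks[image_id] = np.array(Image.open(mask_path).convert("L")) > 0

kept_xyz = []
for p3D_id, p3D in rec.points3D.items():
    inside, total = 0, 0
    # Check every 2D observation of this 3D point against the image mask.
    for el in p3D.track.elements:
        mask = masks.get(el.image_id)
        if mask is None:
            continue
        x, y = rec.images[el.image_id].points2D[el.point2D_idx].xy
        r, c = int(round(y)), int(round(x))
        if 0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]:
            total += 1
            inside += bool(mask[r, c])
    # Keep the point if most of its observations fall inside the object mask.
    if total > 0 and inside / total >= 0.5:
        kept_xyz.append(p3D.xyz)

kept_xyz = np.array(kept_xyz)
np.savetxt("cube_points.txt", kept_xyz)  # or write a PLY for visualization
print(f"Kept {len(kept_xyz)} / {len(rec.points3D)} points")
```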
Thanks for sharing the info! I totally agree that quality work (PixSfM) always stands out. :) You rock! 🥇
I'm actually thinking of giving LIMAP a try myself to see how effective it is. It wasn't my first choice, since I was hoping to avoid techniques that rely on reference data, such as semantic segmentation, but it looks like we don't have any other options at the moment.