Hengshuang et al. state in their paper that "we apply self-attention locally, which enables scalability to large scenes with millions of points". However, this implementation can hardly be trained with num_point > 8k (NVIDIA RTX 3090).
Any suggestions on how to train/apply this implementation to large point clouds?
You may borrow some ideas from PointNet++, which splits the whole scene into a set of chunks, e.g., 1m x 1m x 1m. Then, within each chunk, run the Point Transformer to predict the label for each point. You can find more details in the ScanNet data pipeline of the PointNet++ code (https://github.com/charlesq34/pointnet2/blob/master/scannet/scannet_dataset.py).
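A minimal sketch of that chunking idea in NumPy (the function name `split_into_chunks`, the 1 m `chunk_size`, and the fixed `num_point` per chunk are illustrative assumptions, not part of this repo or the PointNet++ code):

```python
import numpy as np

def split_into_chunks(points, chunk_size=1.0, num_point=4096):
    """Split an (N, 3 + C) point cloud into cubic chunks of side
    `chunk_size` and return a fixed-size index set per chunk, roughly
    following the whole-scene pipeline in the linked PointNet++ code."""
    coords = points[:, :3]
    min_xyz = coords.min(axis=0)
    # Integer grid cell that each point falls into.
    cell = np.floor((coords - min_xyz) / chunk_size).astype(np.int64)
    chunks = []
    for key in np.unique(cell, axis=0):
        idx = np.where(np.all(cell == key, axis=1))[0]
        # Resample (with replacement if the cell is sparse) so every
        # chunk feeds the network exactly `num_point` points.
        choice = np.random.choice(idx, num_point,
                                  replace=idx.size < num_point)
        chunks.append(choice)
    return chunks  # list of index arrays into `points`
```

At inference you would run the network on `points[chunk]` for each chunk and scatter the per-point predictions back to the full cloud; since sparse cells resample points with replacement, accumulating or majority-voting the logits per original index yields one label per point.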