Leverage GPU acceleration for point cloud processing #644
Comments
Are there any particular operations you are using that are problematic on your hardware, or can you share more information on the number of lidars you are trying to use and your resource utilization expectations for the given hardware?
@matthew-lidar Our software takes about 40% of an NVIDIA Jetson Xavier's CPU capacity (evaluated through ouster-ros about 2 years ago).
Please let us know if this is an issue with the ROS driver specifically rather than the SDK. I've found the SDK quite performant on NVIDIA ARM and have been able to implement whole object detection pipelines from Python that run on less than a single core, but I would be very interested in your specific setup.
We are using the ROS driver (not the SDK) with a combination of OS0 and OS1 sensors. Unfortunately I can't currently provide details on the exact number of sensors or the exact processing we are doing. In addition to running a ROS driver instance for each sensor, we also do point cloud fusion and motion compensation on the same hardware, alongside other autonomy processing. When CPU utilization is too high, it can also start leading to packet drops. With a GPU present, many of the intensive per-point operations (e.g. polar-to-cartesian conversion) could be offloaded to free up CPU cycles.
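As a rough illustration of the kind of offload being discussed, here is a minimal CUDA sketch of a polar-to-cartesian kernel. It is not taken from ouster-ros or the SDK; the buffer layout, angle conventions, and function names are assumptions made purely for the example.

```cuda
// Hypothetical sketch only; not the ouster-ros implementation.
// One thread per point: x = r*cos(alt)*cos(az), y = r*cos(alt)*sin(az), z = r*sin(alt)
#include <cuda_runtime.h>

__global__ void polar_to_cartesian(const float* range,     // range [m], n points
                                   const float* azimuth,   // azimuth angle [rad]
                                   const float* altitude,  // altitude angle [rad]
                                   float3* xyz,            // output Cartesian points
                                   int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float r = range[i];
    float ca = cosf(altitude[i]);
    xyz[i] = make_float3(r * ca * cosf(azimuth[i]),
                         r * ca * sinf(azimuth[i]),
                         r * sinf(altitude[i]));
}

// Host-side launch for one scan; all pointers are device buffers,
// error handling omitted for brevity.
void convert_scan(const float* d_range, const float* d_azimuth,
                  const float* d_altitude, float3* d_xyz, int n)
{
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    polar_to_cartesian<<<blocks, threads>>>(d_range, d_azimuth, d_altitude, d_xyz, n);
    cudaDeviceSynchronize();
}
```

On a Jetson the range/angle buffers could sit in unified memory and be converted in a single launch per scan, keeping the per-point trigonometry off the CPU.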
Your request is totally legit (with or without ROS), and I had previously prototyped offloading some of the operations to the GPU but didn't have the capacity to push it to production. I'll see if we can slot something in within the next release.
Awesome thanks @Samahu |
Is your feature request related to a problem? Please describe.
On systems with lower CPU capability (e.g. Jetson), or when multiple lidars are in use, CPU utilization can be very high, which leaves little headroom for additional CPU work.
Describe the solution you'd like
Be able to leverage a GPU, if present, to perform computationally expensive operations and reduce CPU utilization, either through a different binary release or a configuration setting. Other lidar vendors offer similar binary solutions that can leverage a GPU.
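A minimal sketch of the "leverage a GPU if present" behavior, assuming a runtime probe with a CPU fallback; the names are placeholders and do not correspond to any existing ouster-ros option or binary.

```cuda
// Hypothetical sketch: probe for a CUDA-capable device at startup and fall
// back to the existing CPU path when none is found.
#include <cuda_runtime.h>
#include <cstdio>

bool gpu_available()
{
    int count = 0;
    // cudaGetDeviceCount returns an error on machines without a CUDA driver/device.
    return cudaGetDeviceCount(&count) == cudaSuccess && count > 0;
}

int main()
{
    if (gpu_available()) {
        std::printf("CUDA device found: offloading point cloud conversion\n");
        // e.g. launch the polar-to-cartesian kernel sketched above
    } else {
        std::printf("No CUDA device: keeping the current CPU conversion path\n");
    }
    return 0;
}
```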
Describe alternatives you've considered
Making local changes
Targeted Platform (please complete the following information only if applicable, otherwise put N/A):