
Align processing block - CUDA implementation #2670

Merged — 7 commits, Nov 12, 2018

Conversation

matkatz
Contributor

@matkatz matkatz commented Nov 4, 2018

This PR improves the performance of the align processing block when building librealsense with CUDA (#2257).
The align processing block was also split into three separate implementations (CPU, SSE, and CUDA).

The following table demonstrates the performance improvement as measured on an NVIDIA Jetson TX2 with power-saving mode turned off (via jetson_clocks.sh):
[Image: align_results benchmark table]
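For context, the per-pixel work this block performs (and which the CUDA path parallelizes, one thread per depth pixel) is roughly: deproject each depth pixel to a 3D point, transform it through the depth-to-color extrinsics, and re-project it into the color image. Below is a minimal CPU sketch of that transform chain; the struct and function names here are hypothetical simplifications (they mirror the shape of librealsense's `rs2_intrinsics`/`rs2_extrinsics` but are not the library's API):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical, simplified pinhole camera model (no distortion terms).
struct Intrinsics { float fx, fy, ppx, ppy; };
// Column-major 3x3 rotation plus translation, as in rs2_extrinsics.
struct Extrinsics { float rotation[9]; float translation[3]; };

// Deproject a depth pixel (u, v) with depth z (meters) to a 3D point p.
void deproject(const Intrinsics& in, float u, float v, float z, float p[3]) {
    p[0] = z * (u - in.ppx) / in.fx;
    p[1] = z * (v - in.ppy) / in.fy;
    p[2] = z;
}

// Transform a point from the depth sensor's frame to the color sensor's frame.
void transform(const Extrinsics& e, const float p[3], float q[3]) {
    for (int i = 0; i < 3; ++i)
        q[i] = e.rotation[i]     * p[0]
             + e.rotation[i + 3] * p[1]
             + e.rotation[i + 6] * p[2]
             + e.translation[i];
}

// Project a 3D point q back onto the color image plane at (u, v).
void project(const Intrinsics& in, const float q[3], float& u, float& v) {
    u = q[0] / q[2] * in.fx + in.ppx;
    v = q[1] / q[2] * in.fy + in.ppy;
}
```

Since each output pixel is computed independently, this loop maps naturally onto a CUDA kernel, which is what makes the GPU implementation in this PR effective.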

@matkatz matkatz requested a review from dorodnic November 4, 2018 17:10
@dorodnic
Contributor

dorodnic commented Nov 4, 2018

#2569 #2257

@dorodnic dorodnic added the cuda label Nov 4, 2018
@dorodnic dorodnic mentioned this pull request Nov 6, 2018
@dorodnic
Contributor

Need to fix tabs all over

@dorodnic dorodnic merged commit c48c88c into IntelRealSense:development Nov 12, 2018
@ev-mp
Collaborator

ev-mp commented Nov 19, 2018

Addresses #2321, #2376

@stefangordon

My TX1 at 640x480 went from 7 FPS to 19 FPS with this - was previously unusable and now works great! Thanks!

@vagrant-drift

vagrant-drift commented Dec 19, 2018

@matkatz After updating the library from 2.13 to 2.17, align processing performance is great, but memory usage grows by about 0.1 MB each time the thread reads a frame. Could you help me with this? Thank you!

5 participants