Inference Acceleration with Mobile GPU #5469

Closed
hedaoyuan opened this issue Nov 8, 2017 · 0 comments
hedaoyuan commented Nov 8, 2017

Mobile GPU

Most mainstream mobile phones now ship with a GPU, and mobile GPU performance has improved substantially in recent years. As the data in these links show, Adreno 540 vs Adreno 530 vs Adreno 430, Adreno 430 vs Adreno 420, the Adreno 430 offers roughly a 30% performance increase over the Adreno 420, the Adreno 530 offers a 30%-40% increase over the Adreno 430, and the Adreno 540 (released in Q2 2017) offers a 30%-40% increase over the Adreno 530. The Adreno WIKI shows the same trend.

In addition, mobile GPUs have also improved greatly in computational performance for deep learning. As the example Matrix Multiply on Adreno GPUs shows, an OpenCL-based optimized matrix multiplication of size 1024 runs in 44 ms, 38 ms, and 23 ms on the Adreno 420, Adreno 430, and Adreno 530, respectively. And with the Snapdragon NPE's GPU acceleration, some cases achieve 5x better performance on the Adreno GPU compared to a generic CPU implementation.
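For context, the kernel below is a minimal sketch of what an OpenCL matrix multiply looks like in its simplest (naive, untiled) form. It is illustrative only; the Adreno timings quoted above come from the tiled, vectorized kernels described in the linked article, not from this version.

```c
// Naive OpenCL kernel for C = A * B with N x N row-major matrices.
// One work-item computes one output element.
__kernel void matmul_naive(const int N,
                           __global const float* A,
                           __global const float* B,
                           __global float* C) {
    int col = get_global_id(0);
    int row = get_global_id(1);
    if (row >= N || col >= N) return;

    float acc = 0.0f;
    for (int k = 0; k < N; ++k) {
        acc += A[row * N + k] * B[k * N + col];
    }
    C[row * N + col] = acc;
}
```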

Why OpenCL

We are considering OpenCL for supporting Android GPUs, mainly for the following reasons.

  • OpenCL is based on standard C/C++ and does not require a special compiler; kernel source is compiled at runtime by the vendor driver (see the host-side sketch after this list).
  • All mainstream GPUs support development with OpenCL, and OpenCL is a mature solution.
  • A framework (wrapper) built on top of OpenCL can also be used to accelerate model training on server GPUs (e.g., AMD GPUs).
  • Using OpenCL allows direct interoperation with other OpenCL-based libraries (such as Eigen and the ARM Compute Library).
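As a rough illustration of the first point, the host-side sketch below uses only the standard OpenCL C API to pick a GPU device, build the kernel source at runtime through the vendor driver, and launch it. This is a minimal sketch, not existing PaddlePaddle code; error checking is omitted for brevity and should be added in real code.

```cpp
#include <CL/cl.h>
#include <cstdio>
#include <vector>

// Kernel source string, compiled at runtime by the OpenCL driver,
// so no device-specific compiler is needed at build time.
static const char* kSource = R"CLC(
__kernel void matmul_naive(const int N,
                           __global const float* A,
                           __global const float* B,
                           __global float* C) {
    int col = get_global_id(0);
    int row = get_global_id(1);
    if (row >= N || col >= N) return;
    float acc = 0.0f;
    for (int k = 0; k < N; ++k) acc += A[row * N + k] * B[k * N + col];
    C[row * N + col] = acc;
}
)CLC";

int main() {
    const int N = 1024;
    std::vector<float> a(N * N, 1.0f), b(N * N, 1.0f), c(N * N, 0.0f);

    // Select the first platform and the first GPU device on it.
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, nullptr);

    // Build the kernel from source at runtime.
    cl_program program = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, nullptr);
    clBuildProgram(program, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(program, "matmul_naive", nullptr);

    // Copy inputs to device buffers and set kernel arguments.
    size_t bytes = sizeof(float) * N * N;
    cl_mem dA = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, a.data(), nullptr);
    cl_mem dB = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, b.data(), nullptr);
    cl_mem dC = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, bytes, nullptr, nullptr);
    clSetKernelArg(kernel, 0, sizeof(int), &N);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &dA);
    clSetKernelArg(kernel, 2, sizeof(cl_mem), &dB);
    clSetKernelArg(kernel, 3, sizeof(cl_mem), &dC);

    // Launch one work-item per output element and read the result back.
    size_t global[2] = {static_cast<size_t>(N), static_cast<size_t>(N)};
    clEnqueueNDRangeKernel(queue, kernel, 2, nullptr, global, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, dC, CL_TRUE, 0, bytes, c.data(), 0, nullptr, nullptr);

    printf("C[0] = %f (expected %d)\n", c[0], N);

    clReleaseMemObject(dA); clReleaseMemObject(dB); clReleaseMemObject(dC);
    clReleaseKernel(kernel); clReleaseProgram(program);
    clReleaseCommandQueue(queue); clReleaseContext(ctx);
    return 0;
}
```

The same host code runs unchanged on desktop GPUs that expose an OpenCL driver, which is what makes a single OpenCL-based wrapper reusable for server-side training acceleration as noted above.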
@hedaoyuan hedaoyuan self-assigned this Nov 8, 2017
@Xreki Xreki closed this as completed Apr 26, 2018