Object Recognition: We should default to GPU enabled tensorflow during install. #149

pburma opened this issue Apr 5, 2019 · 0 comments


Today, enabling TensorFlow's GPU support requires manually adding the jni_gpu jar file to the libs directory.

I am probably doing this with unnecessary extra steps, since I haven't taken the time to work out how the dependencies affect each other, but what I do today is edit the build.gradle file for the objectRecognition project: I specify the latest TensorFlow version and add the GPU jar reference:

compile "org.tensorflow:tensorflow:1.12.0"
compile "org.tensorflow:libtensorflow_jni:1.12.0"
compile "org.tensorflow:libtensorflow_jni_gpu:1.12.0"
compile "org.apache.commons:commons-math3:3.6.1"

I am fairly sure you need both the _jni.jar and the _jni_gpu.jar, but I could be wrong about that.

After the jars are built, I go into the libs directory under the build folder, rename "libtensorflow_jni.jar" to "libtensorflow_jni.original", and then rename "libtensorflow_jni_gpu.jar" to "libtensorflow_jni.jar".
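For what it's worth, the manual rename could be scripted rather than done by hand. Here is a rough sketch as a Gradle task; the task name and the libs path are my own guesses based on a default Gradle layout, not anything that exists in this project:

```groovy
// Hypothetical sketch: automate the post-build rename described above.
// Assumes the jars end up in the default ${buildDir}/libs directory.
task useGpuJni {
    doLast {
        def libs = file("${buildDir}/libs")
        def cpuJar = new File(libs, 'libtensorflow_jni.jar')
        def gpuJar = new File(libs, 'libtensorflow_jni_gpu.jar')
        // Keep the CPU jar around as *.original, then put the GPU jar in its place
        cpuJar.renameTo(new File(libs, 'libtensorflow_jni.original'))
        gpuJar.renameTo(new File(libs, 'libtensorflow_jni.jar'))
    }
}
```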

When I start up the objectRecognition process, TensorFlow now uses the GPU.

IMO, we should be including GPU support by default. This will require having the TensorFlow-compatible NVIDIA libraries, as specified at https://www.tensorflow.org/install/gpu:

The following NVIDIA® software must be installed on your system:

- NVIDIA® GPU drivers (CUDA 10.0 requires 410.x or higher)
- CUDA® Toolkit (TensorFlow supports CUDA 10.0 for TensorFlow >= 1.13.0)
- CUPTI, which ships with the CUDA Toolkit
- cuDNN SDK (>= 7.4.1)
- (Optional) TensorRT 5.0 to improve latency and throughput for inference on some models

Support for CUDA 10.0 seems to be recent; up until now I have always had to install CUDA 9.0.

Back to my original request: we should either run with GPU by default, or provide a flag that makes it easy to run in GPU mode, so I don't have to modify the Gradle build file or rename the jars after the install.
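A flag-based approach could live in the build script itself. The following is a rough sketch assuming a hypothetical -PuseGpu project property; the property name and version variable are illustrative inventions, not part of the project:

```groovy
// Hypothetical sketch: pick the CPU or GPU JNI artifact based on a
// -PuseGpu project property, e.g. `gradle build -PuseGpu`.
def tfVersion = '1.12.0'

dependencies {
    compile "org.tensorflow:tensorflow:${tfVersion}"
    if (project.hasProperty('useGpu')) {
        // GPU build: pulls in the CUDA-backed native library
        compile "org.tensorflow:libtensorflow_jni_gpu:${tfVersion}"
    } else {
        // Default CPU-only build
        compile "org.tensorflow:libtensorflow_jni:${tfVersion}"
    }
    compile "org.apache.commons:commons-math3:3.6.1"
}
```

With something like this in place, the jar renaming step would no longer be needed: the default stays CPU-only, and GPU users just pass the flag.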
