To enable TensorFlow's use of the GPU, it is currently necessary to manually add the jni_gpu jar file to the libs directory.
I am probably doing this with unnecessary extra steps, since I haven't spent the time to work out how the dependencies interact, but today what I do is edit the build.gradle file for the objectRecognition project. I have specified the latest version of TensorFlow and added the GPU jar reference:
```gradle
compile "org.tensorflow:tensorflow:1.12.0"
compile "org.tensorflow:libtensorflow_jni:1.12.0"
compile "org.tensorflow:libtensorflow_jni_gpu:1.12.0"
compile "org.apache.commons:commons-math3:3.6.1"
```
I am pretty sure you need both the _jni.jar and the _jni_gpu.jar but I could be wrong about that.
After the jars are built, I go into the libs directory under the build folder, rename "libtensorflow_jni.jar" to "libtensorflow_jni.original", and then rename "libtensorflow_jni_gpu.jar" to "libtensorflow_jni.jar".
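The rename steps above can be sketched as a small shell snippet. The `build/libs` path and the `mkdir`/`touch` sandbox lines are assumptions standing in for the real build output; only the two `mv` commands reflect the swap described here.

```shell
# Demo sandbox standing in for the project's build output
# (the build/libs path is an assumption; adjust to your layout).
mkdir -p build/libs
touch build/libs/libtensorflow_jni.jar build/libs/libtensorflow_jni_gpu.jar

# The manual swap: keep the CPU-only JNI jar as a backup, then rename the
# GPU jar into its place so the loader picks up the GPU-enabled natives.
mv build/libs/libtensorflow_jni.jar     build/libs/libtensorflow_jni.original
mv build/libs/libtensorflow_jni_gpu.jar build/libs/libtensorflow_jni.jar

ls build/libs
```

After the swap, the jar named `libtensorflow_jni.jar` actually contains the GPU native library, which is why the process starts up with GPU support.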
When I start up the objectRecognition process, TensorFlow now uses the GPU.
IMO we should include GPU support by default. This will require the TensorFlow-compatible NVIDIA libraries specified here:
https://www.tensorflow.org/install/gpu

The following NVIDIA® software must be installed on your system:
- NVIDIA® GPU drivers (CUDA 10.0 requires 410.x or higher)
- CUDA® Toolkit (TensorFlow supports CUDA 10.0 for TensorFlow >= 1.13.0)
- CUPTI (ships with the CUDA Toolkit)
- cuDNN SDK (>= 7.4.1)
- (Optional) TensorRT 5.0 to improve latency and throughput for inference on some models
Support for CUDA 10.0 seems to be recent; until now I have always had to install CUDA 9.0.
Back to my original request: we should either run on the GPU by default or provide a flag that makes it easy to run in GPU mode, so I don't have to modify the Gradle build file or rename the jars after the install.
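One hedged sketch of what such a flag could look like, using a Gradle project property to pick the JNI artifact at build time. The property name `useGpu` is an assumption for illustration; the artifact coordinates are the ones already used above.

```gradle
// Hypothetical sketch: select the JNI artifact from a -P flag, e.g.
//   ./gradlew build -PuseGpu=true
// (the "useGpu" property name is an assumption, not an existing flag)
def jniArtifact = project.hasProperty('useGpu') && project.useGpu == 'true'
        ? 'libtensorflow_jni_gpu'
        : 'libtensorflow_jni'

dependencies {
    compile "org.tensorflow:tensorflow:1.12.0"
    compile "org.tensorflow:${jniArtifact}:1.12.0"
    compile "org.apache.commons:commons-math3:3.6.1"
}
```

Something like this would avoid both the build-file edit and the jar rename, since only one JNI jar would land in the libs directory in the first place.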