Build with custom cuda/cudnn version #770
Comments
Hi @Cospel! Thanks for bringing this up. We actually use the same script for identifying CUDA as TF-Core, so it should be pretty straightforward to enable this. We're in the middle of some pretty big BUILD revamping (enabling Windows, etc.), so this might get pushed a week or two, but we'll be sure to support this within the config script. Currently it does check environment variables for alternative paths (cuDNN/CUDA), but there may be some breakage and it hasn't been tested with other versions.
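For illustration, here is a minimal Python sketch of what that kind of environment-variable override could look like in a configure script. The variable names (TF_CUDA_VERSION, TF_CUDNN_VERSION, CUDA_TOOLKIT_PATH, CUDNN_INSTALL_PATH) and the defaults are assumptions borrowed from TensorFlow's own configure conventions, not a description of the actual Addons script.

```python
# Minimal sketch of honoring environment-variable overrides instead of
# hard-coding CUDA 10.1. Variable names and defaults are illustrative only.
import os


def get_cuda_build_config():
    """Collect CUDA/cuDNN settings, falling back to the current defaults."""
    return {
        "TF_CUDA_VERSION": os.environ.get("TF_CUDA_VERSION", "10.1"),
        "TF_CUDNN_VERSION": os.environ.get("TF_CUDNN_VERSION", "7"),
        "CUDA_TOOLKIT_PATH": os.environ.get("CUDA_TOOLKIT_PATH", "/usr/local/cuda"),
        "CUDNN_INSTALL_PATH": os.environ.get(
            "CUDNN_INSTALL_PATH", "/usr/lib/x86_64-linux-gnu"
        ),
    }


def write_bazelrc(config, path=".bazelrc"):
    """Append the collected settings as --action_env entries for Bazel."""
    with open(path, "a") as f:
        for key, value in config.items():
            f.write('build --action_env {}="{}"\n'.format(key, value))


if __name__ == "__main__":
    write_bazelrc(get_cuda_build_config())
```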
I have good news for you.
This should be significantly easier to manage if we can use an exported TF method for finding CUDA. This looks possible if tensorflow/tensorflow#38964 merges.
Thanks for the great news! Let's hope that it will be merged soon 👍
After many attempts and late nights, we were able to build tf-addons without the segmentation faults from #1298 and #1277. Here is what I did: https://gist.github.com/Cospel/fb9c313cdb83d5e474aa0e3f956d14e0 Here is the built pip wheel for tf-addons on CUDA 10, cuDNN 7.5 for CentOS with Python 3.6 and TF 2.2: https://github.com/Ximilar-com/tf2-wheels/blob/master/README.md#tf2-addons
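For readers who cannot open the gist, a rough sketch of that kind of build driver is below. The environment variable names, paths, and Bazel targets are assumptions based on the usual from-source build flow for Addons; adjust them to whatever your checkout and CUDA installation actually use.

```python
# Rough sketch of driving a custom CUDA 10.0 / cuDNN 7.5 build of tf-addons,
# in the spirit of the linked gist. Variable names, paths, and targets are
# assumptions, not the exact contents of the gist.
import os
import subprocess

env = dict(
    os.environ,
    TF_NEED_CUDA="1",
    TF_CUDA_VERSION="10.0",                      # assumed CUDA version override
    TF_CUDNN_VERSION="7",                        # assumed cuDNN version override
    CUDA_TOOLKIT_PATH="/usr/local/cuda-10.0",    # assumed toolkit location
    CUDNN_INSTALL_PATH="/usr/local/cuda-10.0",   # assumed cuDNN location
)

# Generate the Bazel configuration from the environment, then build the wheel.
subprocess.check_call(["python3", "./configure.py"], env=env)
subprocess.check_call(["bazel", "build", "build_pip_pkg"], env=env)
subprocess.check_call(["bazel-bin/build_pip_pkg", "artifacts"], env=env)
```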
TensorFlow Addons is transitioning to a minimal maintenance and release mode. New features will not be added to this repository. For more information, please see our public messaging on this decision: Please consider sending feature requests / contributions to other repositories in the TF community with similar charters to TFA:
Describe the feature and the current behaviour/state.
Right now the build is locked to CUDA 10.1. However, TensorFlow 2+ can be built from source with different CUDA versions (by specifying the version through configure). Can this package mimic the TF2 build process?
For example, I work mostly with Docker containers and server instances that have CUDA 10.0, and I do custom builds of TF2 for that CUDA version.
Right now we are unable to install or build Addons because it is locked to 10.1. It would be great if users could set their own CUDA version during configuration, as they can with TensorFlow 2+.
This would also help when future versions of CUDA become available and we want to test them.
Relevant information
Which API type would this fall under (layer, metric, optimizer, etc.)
Who will benefit from this feature?
Everyone.
Any other info.