OOM when allocating tensor #7
Comments
I haven't trained with TensorFlow yet, but I'll look into it. In the meantime, try using Theano with CNMeM enabled.
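For anyone unfamiliar with the flag: a minimal sketch of enabling CNMeM under the old Theano GPU backend. The `0.8` preallocation fraction is illustrative, not a value recommended in this thread:

```python
import os

# Select the Theano backend for Keras and enable CNMeM (Theano's GPU memory
# pool) BEFORE theano/keras are imported; the flags are read at import time.
# lib.cnmem is the fraction of GPU memory to preallocate (0.8 is illustrative).
os.environ["KERAS_BACKEND"] = "theano"
os.environ["THEANO_FLAGS"] = "device=gpu,floatX=float32,lib.cnmem=0.8"

import theano  # noqa: E402  (must come after the flags are set)
```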
Maybe the reason for this is the same as for the TensorFlow implementation here: ibab/tensorflow-wavenet#4 (comment)
I was wondering why Keras requires the dilation values to be equal in both dimensions when using TensorFlow; it uses tf.nn.atrous_conv2d, which only takes a single scalar rate.
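For context, a minimal TensorFlow 1.x sketch (shapes and names are illustrative) showing why the dilation is necessarily equal in both spatial dimensions: the op exposes one scalar `rate`, not a per-dimension pair:

```python
import tensorflow as tf  # TensorFlow 1.x API

x = tf.placeholder(tf.float32, [None, 64, 64, 32])  # NHWC input
w = tf.get_variable("w", [3, 3, 32, 32])            # HWIO filter

# `rate` is a single scalar, so the same dilation applies to height and
# width; there is no way to request something like rate=(1, 2) through
# this op, which is why Keras enforces equal dilation values here.
y = tf.nn.atrous_conv2d(x, w, rate=4, padding="SAME")
```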
I'm closing this on the assumption that it has probably been fixed in TensorFlow by now. If not, please let me know.
Nope. This is not fixed in TensorFlow as of yet. I'm getting the EXACT same error as the OP. Trying to run with the Theano backend to see if it works.
Did it work using the Theano backend?
I would like to let you all know that it is fixed in TensorFlow 1.10. Works like a charm. I'm using the unmodified current master (well, technically I modified a single line in dataset to make the code work in Python 3.x).
@meridion Thanks! Would you mind sending a pull request so other users can easily benefit from your fix?
This is also solved with Python 2.7 and tensorflow-gpu 1.8.0.
I have a 12 GB GPU, but attempting to train anything with the default settings produces an OOM on the first epoch. I had to dial batch_size and dilation_depth way down before it would even start (see the sketch below). What settings are you using when you train?
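A hedged illustration of the kind of reduction meant above. The names batch_size and dilation_depth come from the comment, but the values (and the receptive-field arithmetic, which assumes a filter of width 2 with dilations 1, 2, 4, ..., 2**dilation_depth per stack) are assumptions, not the project's actual defaults:

```python
# Hypothetical values for illustration only; the project's real defaults differ.
batch_size = 4        # reduced to fit activations in 12 GB of GPU memory
dilation_depth = 7    # fewer dilated layers => much smaller activations

# With filter width 2 and dilations doubling up to 2**dilation_depth, one
# stack's receptive field is roughly 2**(dilation_depth + 1) samples, and
# activation memory grows with batch_size times that receptive field.
receptive_field = 2 ** (dilation_depth + 1)
print("receptive field: %d samples" % receptive_field)  # 256
```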