Hi,

I have fairly complex code that computes d(dy/dx)/dp, where y is a scalar output, x is the input, and p are the parameters being optimized. While everything works perfectly on 1080 Ti and V100 GPUs, I get NaN values for the gradients on A100 GPUs. I am using double precision.
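For concreteness, here is a minimal sketch of that quantity, assuming the code uses JAX (the mention of XLA suggests it); the model `f` and the names `params` and `x` below are hypothetical stand-ins, not the actual code:

```python
# Minimal sketch of d(dy/dx)/dp, assuming JAX. All names are placeholders.
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)  # double precision, as in the post

def f(params, x):
    # Stand-in for the actual model: scalar output y = f(p, x).
    return jnp.sum(params["w"] * jnp.tanh(x)) + params["b"]

# dy/dx: gradient of the scalar output with respect to the input x.
dydx = jax.grad(f, argnums=1)

# d(dy/dx)/dp: Jacobian of dy/dx with respect to the parameters p.
ddydx_dp = jax.jacrev(dydx, argnums=0)

params = {"w": jnp.ones(3), "b": jnp.float64(0.0)}
x = jnp.array([0.1, 0.2, 0.3])
print(ddydx_dp(params, x))
```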
I wonder what the best way is to debug such a problem. I assume this is probably related to architecture-specific XLA optimizations.
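As a first step, again assuming JAX, the built-in NaN checker tends to localize the failing operation; this is a hedged sketch, not a definitive recipe:

```python
# With debug_nans enabled, JAX re-runs the offending operation outside of jit
# and raises FloatingPointError at the first NaN, which usually pinpoints the
# failing primitive.
import jax

jax.config.update("jax_debug_nans", True)
# Optionally rule out XLA fusion/jit entirely (slow, but a useful bisect step):
# jax.config.update("jax_disable_jit", True)
```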
Replies: 1 comment

You might also try setting the environment variable …
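The variable name in the reply is truncated in the source, so the following is not a claim about which variable was meant; it is only a list of environment variables commonly used to triage architecture-specific NaNs with JAX on NVIDIA GPUs. All of them must be set before JAX initializes its backend:

```python
# Commonly used triage knobs; none of these is necessarily the variable the
# commenter meant, since the reply is cut off before the name.
import os

os.environ["JAX_DEBUG_NANS"] = "True"                   # env-var form of jax_debug_nans
os.environ["XLA_FLAGS"] = "--xla_gpu_autotune_level=0"  # curb per-GPU kernel autotuning
os.environ["NVIDIA_TF32_OVERRIDE"] = "0"                # disable TF32 on Ampere; should be
                                                        # moot under float64, but cheap to try
# os.environ["JAX_PLATFORMS"] = "cpu"                   # re-run on CPU as a reference result

import jax  # import after the variables are set so they take effect
```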