Get NaN in loss while running TACoS dataset #7

Open
lumiaomiao opened this issue Sep 30, 2020 · 5 comments

@lumiaomiao

Hi, I configured my environment following your instructions and got torch==1.6.0.
When running the TACoS dataset the loss is NaN, but I can run the ActivityNet dataset normally.
Do you know the reason?

@onlyonewater commented Sep 30, 2020

Maybe you can try using pytorch==1.4.0 or pytorch==1.2.0
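
For reference, a downgrade in a fresh environment might look like the following; the torchvision pairing is an assumption, not something stated above.

```
pip install torch==1.4.0 torchvision==0.5.0
```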

@lumiaomiao (Author)

> Maybe you can try using pytorch==1.4.0 or pytorch==1.2.0

The NaN doesn't disappear with pytorch==1.4.0, and I can't set up pytorch==1.2.0.

@onlyonewater commented Oct 16, 2020

Sorry, I haven't encountered this problem myself.

@DW-Lay commented Nov 11, 2020

You can check whether there is a division by zero (x/0 or 0/0) in your code; that will produce NaN, and the final loss will not converge.
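
A minimal sketch of that kind of check in PyTorch; the `safe_div` helper and the tensor names are illustrative, not taken from this codebase:

```python
import torch

# Ask autograd to report the operation that first produced NaN in the backward pass.
torch.autograd.set_detect_anomaly(True)

def safe_div(numerator, denominator, eps=1e-8):
    """Guard divisions so x/0 or 0/0 cannot produce inf/NaN."""
    return numerator / (denominator + eps)

# Toy example: a zero-length union would otherwise yield 0/0 = NaN.
intersection = torch.tensor([0.0], requires_grad=True)
union = torch.tensor([0.0])
iou = safe_div(intersection, union)

loss = (1.0 - iou).mean()
assert torch.isfinite(loss).all(), "loss became NaN/inf"
loss.backward()
```

Guarding every ratio this way, or asserting on the loss each step, usually pinpoints where the NaN first appears.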

@rookiecm

> Hi, I configured my environment following your instructions and got torch==1.6.0.
> When running the TACoS dataset the loss is NaN, but I can run the ActivityNet dataset normally.
> Do you know the reason?

Hi, have you solved the problem? I'm running into the same issue.
