The program stops automatically #3

When running this program, the GPU memory usage fluctuates constantly. After several epochs, the program stops automatically.
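One way to observe the fluctuation described above is to log PyTorch's CUDA allocator statistics once per epoch. This is a generic diagnostic sketch, not part of this repo's code; the helper name `log_gpu_memory` is made up here:

```python
import torch

def log_gpu_memory(tag: str) -> None:
    """Print current, reserved, and peak CUDA memory usage (diagnostic helper)."""
    if not torch.cuda.is_available():
        return
    allocated = torch.cuda.memory_allocated() / 1024**2
    reserved = torch.cuda.memory_reserved() / 1024**2
    peak = torch.cuda.max_memory_allocated() / 1024**2
    print(f"[{tag}] allocated={allocated:.1f} MiB "
          f"reserved={reserved:.1f} MiB peak={peak:.1f} MiB")

# Call once per epoch inside the training loop, e.g.:
#   log_gpu_memory(f"epoch {epoch}")
```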
Comments
I have found the problem; it is this line that causes it. In train.py I commented out torch.cuda.empty_cache(), and it works. The reason CPU memory increases while training is that the DataLoader is configured with too many workers.
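For reference, a minimal sketch of the two changes described above. The dataset and model below are placeholders standing in for the repo's real ones in train.py, not the project's actual code:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data and model; the repo's real ones live in train.py.
train_dataset = TensorDataset(torch.randn(256, 10), torch.randn(256, 1))
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

use_cuda = torch.cuda.is_available()
if use_cuda:
    model = model.cuda()

# Fewer workers limits host (CPU) memory growth during training.
loader = DataLoader(train_dataset, batch_size=16, shuffle=True, num_workers=2)

for epoch in range(3):
    for inputs, targets in loader:
        if use_cuda:
            inputs, targets = inputs.cuda(), targets.cuda()
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
    # torch.cuda.empty_cache()  # the reported culprit: leaving this call out
    #                           # avoided the automatic stop in this run
```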
Hello, may I ask how you got the code running?
Hello, I used a different dataset and modified the data-loading code according to the settings in the evoconv2-config.yaml file under the config directory.
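A hedged illustration of that kind of adaptation: load the YAML config and use its values to point the data loader at a different dataset. The key names `data_root` and `batch_size` are hypothetical examples, not necessarily the actual schema of evoconv2-config.yaml:

```python
import yaml

# Read the repo's config file (path taken from the comment above).
with open("config/evoconv2-config.yaml") as f:
    cfg = yaml.safe_load(f)

# "data_root" and "batch_size" are hypothetical key names; match them to
# whatever the actual config defines before wiring up your own dataset.
data_root = cfg.get("data_root", "./data")
batch_size = cfg.get("batch_size", 16)
print(f"Loading data from {data_root} with batch size {batch_size}")
```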
Hello, could I add you on WeChat? 1033898863
Did you manage to run it successfully? I keep running into matrix dimension errors. Could you leave contact information? Thanks.