
The program stops automatically #3

Open
wanzhixiao opened this issue Feb 10, 2021 · 5 comments
@wanzhixiao commented Feb 10, 2021

When running this program, the GPU memory usage keeps fluctuating. After running several epochs, it stops automatically.

@wanzhixiao (Author)

I found the problem: it is the call to torch.cuda.empty_cache() in train.py. After commenting out that line, training runs fine. The reason CPU memory increases while training is that the DataLoader was configured with too many workers.
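
For reference, a minimal sketch of the two fixes described above, assuming a generic PyTorch training loop; the model, data, and hyperparameters are placeholders, not the repository's actual train.py:

```python
# Sketch only: stand-in model and data, not the real train.py.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset


def main():
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Linear(16, 1).to(device)  # stand-in for the real model
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    dataset = TensorDataset(torch.randn(1024, 16), torch.randn(1024, 1))
    # Fix 2: keep num_workers small. Each worker is a separate process
    # holding its own copy of the dataset state, so a large num_workers
    # inflates host (CPU) memory.
    loader = DataLoader(dataset, batch_size=64, num_workers=2)

    for epoch in range(10):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
        # Fix 1: do not call torch.cuda.empty_cache() every epoch. It only
        # returns cached allocator blocks to the driver and forces costly
        # reallocation, which is what made GPU memory usage fluctuate here.
        # torch.cuda.empty_cache()


if __name__ == "__main__":
    main()
```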

@shuqincao

Hi, how did you get the code to run?

@wanzhixiao (Author)

Hi, I used a different dataset and modified the data-loading code according to the settings in the evoconv2-config.yaml file under the config directory.
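
A minimal sketch of what such an adaptation might look like; the keys read below (input_dim, batch_size, num_workers) are assumptions for the sake of the example, not the actual schema of config/evoconv2-config.yaml:

```python
# Illustrative only: config keys here are hypothetical.
import yaml
import torch
from torch.utils.data import DataLoader, TensorDataset

with open("config/evoconv2-config.yaml") as f:
    cfg = yaml.safe_load(f)

# Stand-in data; replace with loading your own dataset in the shape
# the config expects.
dataset = TensorDataset(torch.randn(256, cfg.get("input_dim", 16)))

loader = DataLoader(
    dataset,
    batch_size=cfg.get("batch_size", 32),
    num_workers=cfg.get("num_workers", 2),
    shuffle=True,
)
```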

@shuqincao

Hi, could I add you on WeChat? 1033898863

@eeluo commented Jun 2, 2021

Did you manage to run it successfully? I keep running into matrix dimension errors. Could you leave a contact? Thanks.
