Problem with FFCM in distributed training #7
Hello, and thank you for sharing this excellent work.

When we applied the FFCM module from your model on its own to a semantic segmentation task, distributed training failed with the following error:

RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by making sure all forward function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 0: 128 129 130 131 252 253 254 255 452 453 454 455
In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error

Our investigation suggests the gradient problem may be caused by the Fourier transform. Have you encountered anything similar in your work?
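For reference, a minimal sketch of the two remedies the error message itself suggests: wrapping the model with find_unused_parameters=True and setting TORCH_DISTRIBUTED_DEBUG for per-parameter diagnostics. Here `build_ffcm_seg_model()` is a hypothetical stand-in for a segmentation network embedding FFCM, and the script is assumed to be launched with torchrun (which populates LOCAL_RANK):

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Surface per-parameter diagnostics if the error reappears; the variable
# should be set before the process group is initialized.
os.environ.setdefault("TORCH_DISTRIBUTED_DEBUG", "DETAIL")

dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
torch.cuda.set_device(local_rank)

# build_ffcm_seg_model() is a hypothetical stand-in for a segmentation
# network that embeds the FFCM module.
model = build_ffcm_seg_model().cuda(local_rank)

# find_unused_parameters=True makes DDP's reducer detect parameters that
# took no part in producing the loss, instead of waiting for gradients
# that never arrive -- the fix the RuntimeError itself recommends.
model = DDP(model, device_ids=[local_rank], find_unused_parameters=True)
```

If DETAIL logging shows the unused indices always fall inside the FFCM branch, checking for conditional paths in its forward pass (rather than torch.fft itself) would be a natural next step.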
Comments

We did not actually encounter this problem on image restoration tasks, but we are not sure whether it is related to an interaction between torch.fft itself and torch.nn.parallel.DistributedDataParallel, because our training used the older nn.DataParallel.

Hello, I encountered a similar problem. How did you solve it?
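For context, a minimal sketch of the older single-process setup the authors describe above, assuming a hypothetical `build_model()` helper. nn.DataParallel replicates the module across GPUs inside one process and has no cross-process gradient reducer, which is why it cannot raise this particular RuntimeError:

```python
import torch
import torch.nn as nn

# build_model() is a hypothetical stand-in for the FFCM-based network.
model = build_model().cuda()

# nn.DataParallel scatters each batch across all visible GPUs and gathers
# the outputs within a single process; unused parameters are harmless here
# because no bucketed all-reduce is waiting on their gradients.
model = nn.DataParallel(model)

out = model(torch.randn(8, 3, 256, 256).cuda())  # dummy batch for illustration
```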