RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`;

As suggested, I modified the model by adding `find_unused_parameters=True`, as follows: `model = DistributedDataParallel(model, device_ids=[rank], find_unused_parameters=True).to(device)`. However, I still get the same error. Were you able to train normally with multiple GPUs? Any suggestions to fix this?
Many Thanks.
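
For reference, here is a minimal sketch of the setup I am describing (the process-group initialization and rank handling are assumed to happen elsewhere; one common pitfall is calling `.to(device)` on the DDP wrapper after construction instead of moving the model to its GPU first):

```python
import torch
from torch.nn.parallel import DistributedDataParallel

def wrap_model(model: torch.nn.Module, rank: int) -> DistributedDataParallel:
    # Assumes torch.distributed.init_process_group() has already been
    # called in this process.
    device = torch.device(f"cuda:{rank}")
    # Move parameters to the target GPU *before* constructing DDP, which
    # broadcasts the rank-0 weights to all other ranks at construction time.
    model = model.to(device)
    # find_unused_parameters=True tells DDP to detect parameters that
    # receive no gradient in a given iteration, instead of raising the
    # "Expected to have finished reduction" RuntimeError.
    return DistributedDataParallel(
        model,
        device_ids=[rank],
        find_unused_parameters=True,
    )
```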