
Fixing loss device handling #62

Merged: 1 commit merged into sacdallago:main from the loss-hotfix branch on Nov 24, 2022
Conversation

@SebieF (Collaborator) commented Nov 23, 2022

Training on CUDA is broken at the moment, because the loss function does not get moved to CUDA (or x and y do not get moved to the CPU).

Moving the loss function to the GPU when necessary might also improve performance compared to calling x.cpu() and y.cpu(): https://discuss.pytorch.org/t/what-does-it-mean-to-move-a-loss-function-to-device-gpu/52832
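A minimal sketch of the fix described above, assuming a standard PyTorch training step; the model, tensor shapes, and variable names are illustrative placeholders, not taken from the repository:

```python
import torch
import torch.nn as nn

# Pick the training device: CUDA if available, CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(128, 10).to(device)  # placeholder model

# Move the loss module to the same device as the model and data.
# CrossEntropyLoss holds no parameters, so this is a no-op here, but
# losses with buffers (e.g. class weights) must live on that device.
loss_function = nn.CrossEntropyLoss().to(device)

x = torch.randn(32, 128).to(device)         # inputs on the training device
y = torch.randint(0, 10, (32,)).to(device)  # targets on the training device

prediction = model(x)
loss = loss_function(prediction, y)  # all tensors and modules share one device
loss.backward()
```

Keeping the loss, the model, and the batch tensors on one device avoids the implicit device-to-host copies that calling x.cpu() and y.cpu() on every step would introduce.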
@SebieF added the bug (Something isn't working) label on Nov 23, 2022
@SebieF added this to the Version 1.0.0 milestone on Nov 23, 2022
@SebieF self-assigned this on Nov 23, 2022
@SebieF merged commit d4f14eb into sacdallago:main on Nov 24, 2022
@SebieF deleted the loss-hotfix branch on November 24, 2022 15:08
@SebieF mentioned this pull request on Dec 5, 2022