Follow-up tensors on wrong device #1023
Conversation
Codecov Report
@@            Coverage Diff             @@
##             main    #1023      +/-   ##
==========================================
+ Coverage   90.20%   90.26%   +0.06%
==========================================
  Files          21       21
  Lines        4748     4737      -11
==========================================
- Hits         4283     4276       -7
+ Misses        465      461       -4
Model Benchmark
accelerator="cuda" working nicely on my GPU!
I made a commit to add the meta is None condition at the beginning of the forward.
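A minimal sketch of what such a guard could look like; the module, argument, and attribute names here are assumptions for illustration, not the project's actual code:

```python
import torch
import torch.nn as nn
from typing import Optional


class ExampleModel(nn.Module):
    # Hypothetical module, invented for this sketch only.
    def __init__(self, dim: int = 8) -> None:
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, meta: Optional[torch.Tensor] = None) -> torch.Tensor:
        # Guard at the very start of forward: only use `meta` when it is provided,
        # and move it to the input's device to avoid mixing CPU and GPU tensors.
        if meta is None:
            return self.proj(x)
        return self.proj(x) + meta.to(x.device)
```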
It's working on my GPU now.
🔬 Background
As pointed out in #1002, some tensors remained on the CPU when training on a GPU. Although this was previously addressed in #1010, it seems some tensors still ended up on the wrong device.
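For reference, a minimal illustration (not taken from this repository) of the failure mode: a helper tensor created without an explicit device lands on the CPU and clashes with a CUDA batch.

```python
import torch

x = torch.randn(4, 8, device="cuda")  # batch already on the GPU
w = torch.ones(8)                      # helper tensor silently created on the CPU

# Raises: RuntimeError: Expected all tensors to be on the same device,
# but found at least two devices, cuda:0 and cpu!
y = x * w
```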
🔮 Key changes
- Checked all tensor-creating calls (torch.zeros, torch.ones, torch.tensor, located via Ctrl+F) so that they are created on the correct device (see the sketch after this list).
- Handled self.meta_used_in_model in the forward function.
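As a rough sketch of the pattern behind these changes (the class and tensor names are made up, not this repository's code): tensors created inside forward should inherit the device and dtype of the incoming batch instead of relying on the CPU default of torch.zeros / torch.ones.

```python
import torch
import torch.nn as nn


class TinyHead(nn.Module):
    # Illustrative module only; not part of this repository.
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Create helper tensors on the same device (and dtype) as the input.
        scale = torch.ones(x.shape[-1], device=x.device, dtype=x.dtype)
        bias = torch.zeros(x.shape[-1], device=x.device, dtype=x.dtype)
        return x * scale + bias


model = TinyHead().to("cuda")
out = model(torch.randn(4, 8, device="cuda"))  # no device-mismatch error
```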
📋 Review Checklist
Please make sure to follow our best practices in the Contributing guidelines.