[lightning] GPU support #961
Conversation
Codecov Report
@@ Coverage Diff @@
## main #961 +/- ##
==========================================
- Coverage 87.68% 87.43% -0.26%
==========================================
Files 17 17
Lines 4433 4448 +15
==========================================
+ Hits 3887 3889 +2
- Misses 546 559 +13
Can you make test case(s) for this to improve the codecov?
@Kevin-Chen0 Not sure if that makes sense in this case, because the machine would need to have a GPU / accelerator aside from the CPU to test the feature. As far as I know, that's not available / feasible in GitHub Actions.
Can you resolve the merge conflict first? |
Karl, can you address the two questions I have before I approve? In the meantime, I'm changing the status back to needs fix. Thx.
Code looks good to me and no merge conflicts.
I agree with @Kevin-Chen0 that it would be great to have test cases, yet as @karl-richter pointed out, there is no GitHub Actions option I'd be aware of. If someone knows a good way, let's create test cases in a follow-up issue 👍
Running with CUDA does not work: `RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0!`
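That error typically means the model's parameters were moved to the accelerator while some input tensor was still created on the CPU. A minimal sketch of the usual fix pattern, assuming PyTorch is installed (`to_model_device` and the variable names are hypothetical, not NeuralProphet's actual code):

```python
import torch

def to_model_device(model, *tensors):
    # Move each tensor to the device the model's parameters live on,
    # so the forward pass sees all operands on a single device.
    device = next(model.parameters()).device
    return tuple(t.to(device) for t in tensors)

# Hypothetical usage: the model sits wherever Lightning placed it
# (cpu, cuda:0, mps, ...); inputs are aligned before the forward pass.
model = torch.nn.Linear(4, 2)
x = torch.randn(3, 4)              # created on CPU by default
(x,) = to_model_device(model, x)   # now on the model's device
out = model(x)                     # no cross-device mismatch
```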
Adds functionality to train the TimeNet model using an accelerator.
Supports GPU and MPS (M1) automatically, and all other available PyTorch Lightning accelerators via manual declaration (e.g. TPU). Updated after the Lightning merge.
Closes #420
Closes #938
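The accelerator handling described above can be sketched as a small helper that maps the user's choice to PyTorch Lightning `Trainer` keyword arguments. This is an illustrative sketch, not NeuralProphet's actual API; `build_trainer_kwargs` and its argument are hypothetical names:

```python
def build_trainer_kwargs(accelerator=None):
    """Map a user-supplied accelerator choice to PyTorch Lightning
    Trainer keyword arguments (a sketch, not the library's real code)."""
    if accelerator is None:
        # No accelerator requested: train on CPU as before.
        return {"accelerator": "cpu"}
    if accelerator == "auto":
        # Let Lightning pick GPU / MPS / CPU, whichever is available.
        return {"accelerator": "auto", "devices": "auto"}
    # Manual declaration, e.g. "gpu", "mps", or "tpu".
    return {"accelerator": accelerator, "devices": 1}

# The resulting dict would be splatted into pl.Trainer(**kwargs).
print(build_trainer_kwargs("gpu"))
```

With `accelerator="auto"`, Lightning's own device discovery decides at runtime, which is what makes the GPU and MPS cases work without user configuration.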