Generalize internal checks for precision plugin type, training type, accelerator type #10821
Comments
This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, PyTorch Lightning Team!
This seems cleaner and easier to debug. Let's do it.
Since this is invisible to the user, this doesn't need to happen strictly before 1.6, so I'll move it out and we can update the milestone whenever this gets done.
NVM my previous comment, @justusschock will take it
@justusschock Do you think you could finish this for 1.8?
Definitely!
You failed 😄
Sorry, couldn't resist with the stupid comment 😄 But in all seriousness, I think we completed this in the meantime already. It seems all enum types got removed. Or do you see anything left to do?
Proposed refactor
Internally, our checks against the accelerator type, precision plugin type, and strategy type are not robust against custom instances passed in by the user.
Motivation
Internally, some operations in optimization, logging, etc. need a different code path depending on 1) the accelerator type (CPU, GPU), 2) the precision type (apex, native), or 3) the strategy type (ddp, ddp-spawn, ...). Currently we have this pattern:
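Roughly, the pattern looks like the sketch below; names such as `DistributedType`, `trainer._distrib_type`, and `run_optimizer_step` are illustrative stand-ins, not the actual Lightning internals. The point is that behaviour is selected by comparing against a specific enum member or concrete class:

```python
from enum import Enum


class DistributedType(Enum):
    """Hypothetical stand-in for an internal strategy enum."""

    DDP = "ddp"
    DDP_SPAWN = "ddp_spawn"


def run_optimizer_step(trainer) -> None:
    # Equality comparison against enum members: a custom strategy instance
    # passed in by the user never equals any member, so it silently falls
    # through to the generic branch even if it subclasses the built-in
    # ddp-spawn behaviour.
    if trainer._distrib_type == DistributedType.DDP_SPAWN:
        ...  # ddp-spawn-specific handling
    elif trainer._distrib_type == DistributedType.DDP:
        ...  # ddp-specific handling
    else:
        ...  # generic handling
```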
Pitch
Change these checks to:
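A plausible shape of the proposed check, again using illustrative stand-in classes (`Strategy`, `DDPStrategy`, `DDPSpawnStrategy`) rather than the real plugin hierarchy, is to dispatch with `isinstance` against the base classes, so that custom subclasses passed in by the user take the same code path as the built-in implementation they extend:

```python
class Strategy:
    """Hypothetical stand-in for the strategy / training type plugin base class."""


class DDPStrategy(Strategy):
    pass


class DDPSpawnStrategy(DDPStrategy):
    pass


def run_optimizer_step(trainer) -> None:
    # isinstance checks respect inheritance: a user-defined
    # `class MySpawnStrategy(DDPSpawnStrategy)` passed to the Trainer is
    # routed through the same code path as the built-in ddp-spawn strategy.
    if isinstance(trainer.strategy, DDPSpawnStrategy):
        ...  # ddp-spawn-specific handling
    elif isinstance(trainer.strategy, DDPStrategy):
        ...  # ddp-specific handling
    else:
        ...  # generic handling
```

The more specific subclass is tested first so it is not shadowed by its parent; the same idea applies to the accelerator and precision checks.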
This has the following benefits: the checks remain correct for custom instances passed in by the user, and the resulting code is cleaner and easier to debug.
Additional context
Discussion started in #10596
If you enjoy Lightning, check out our other projects! ⚡
Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
Lite: enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
cc @Borda @justusschock @awaelchli @rohitgr7 @kaushikb11 @akihironitta @ananthsub