Hey @four4fish
Thanks for looking at this.
This is not something we need to deprecate, since it's already declared as internal and not exposed to the user.
It can simply be removed together with the last part of #10821
four4fish changed the title from "Deprecate trainer._device_type in favor of check Accelerator class" to "Remove trainer._device_type in favor of check Accelerator class" on Dec 9, 2021
Proposed refactor
Follow-up to #11001: generalize the internal checks for precision plugin type, training type, and accelerator type
Motivation
Code simplification
Pitch
After #11001, `_device_type` is not needed anymore.
Instead, in tests and wherever we need to check the device type, use `isinstance(trainer.accelerator, XAccelerator)`, as sketched below.
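A minimal sketch of what such a check could look like, assuming `GPUAccelerator` as the example accelerator class (exact class names and `Trainer` arguments may differ across Lightning versions):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.accelerators import GPUAccelerator

trainer = Trainer(accelerator="gpu", devices=1)

# Old style: compare the internal ``trainer._device_type`` attribute
# (the attribute this issue proposes to remove).

# New style: check the concrete accelerator class instead.
if isinstance(trainer.accelerator, GPUAccelerator):
    print("Running on GPU")
```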
Additional context
If you enjoy Lightning, check out our other projects! ⚡
Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
Lite: enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers, leveraging PyTorch Lightning, Transformers, and Hydra.
cc @justusschock @awaelchli @akihironitta