Consolidate get_gpu_id functions to device utils #11427
The code in So should we instead move the gpu-specific utilities in
@carmocca I think that makes sense. Device parsing logic should live in X_accelerators as static functions, since the logic is also used by accelerator_connector, which can be called before the accelerator is initialized. I will add more details in the next few days with the accelerator_connector draft.
This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, PyTorch Lightning Team!
Proposed refactor
get_gpu_id is a common utility function, and we currently have duplicated logic doing the same thing. This issue proposes to consolidate the duplicates.
Motivation
Code simplification
Pitch
Move _get_gpu_ids from gpu_stats_monitor.py
https://github.com/PyTorchLightning/pytorch-lightning/blob/948cfd24de4f64a2980395581f15544e5e37eab0/pytorch_lightning/callbacks/gpu_stats_monitor.py#L192-L198
and the GPU accelerator's get_gpu_id()
https://github.com/PyTorchLightning/pytorch-lightning/blob/948cfd24de4f64a2980395581f15544e5e37eab0/pytorch_lightning/accelerators/gpu.py#L129-L134
to the device utilities.
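As a rough illustration of the consolidation, here is a minimal sketch of what a single shared helper could look like. The function name `get_gpu_ids`, its placement, and the fallback behavior when `CUDA_VISIBLE_DEVICES` is unset are assumptions for this sketch, not the actual Lightning API:

```python
import os
from typing import List


def get_gpu_ids(device_ids: List[int]) -> List[str]:
    """Map logical PyTorch device indices to the real (unmasked) GPU ids.

    Hypothetical consolidated utility; in practice the default when
    ``CUDA_VISIBLE_DEVICES`` is unset would likely come from
    ``torch.cuda.device_count()``.
    """
    visible = os.getenv("CUDA_VISIBLE_DEVICES")
    if visible is None:
        # Assumption for this sketch: with no masking, logical and
        # physical ids coincide.
        return [str(i) for i in device_ids]
    cuda_visible_devices = visible.split(",")
    # Translate each logical index through the CUDA_VISIBLE_DEVICES mask.
    return [cuda_visible_devices[i].strip() for i in device_ids]
```

For example, with `CUDA_VISIBLE_DEVICES=3,5,7`, logical devices `[0, 2]` map back to physical GPUs `["3", "7"]`. Both current call sites (the GPU stats monitor and the GPU accelerator) could then delegate to this one function.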
Additional context
If you enjoy Lightning, check out our other projects! ⚡
Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
Lite: enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
cc @justusschock @awaelchli @akihironitta @rohitgr7 @kaushikb11 @carmocca