Adopt PEP 563, PEP 585, and PEP 604 #11205
Comments
Looks like this would break the
I tried the
I guess this could be done for the rest of the code that does not depend on pydantic, though it might be a much more manual process.
Yes. We could do the upgrade separately for fabric, the trainer, and apps. There might be newer ways to upgrade the codebase. The instructions in the top post are quite old by now.
I tried to upgrade only
Then I had to manually fix the `__future__` import. With this, there is a different error. Upgrading the code does not seem like a simple task.
If you open a PR with the changes, I might be able to help with fixing the circular import.
Created #17779.
Proposed refactor
We've dropped support for Python 3.6, which means we can use the new annotations format.
Motivation
New shiny things are nice
Pitch
Adopt PEP 585, PEP 604, and PEP 563.
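For illustration, here is a minimal before/after sketch of what these PEPs change (a hypothetical function, not code from this repository):

```python
# PEP 563: postpone evaluation of annotations, so the new syntax below is
# only ever stored as strings and does not require Python 3.9/3.10 at runtime.
from __future__ import annotations


# Before (typing-module generics, required while Python 3.6 was supported):
#     from typing import Dict, List, Optional
#     def summarize(metrics: Dict[str, List[float]], step: Optional[int] = None) -> str: ...

# After (PEP 585 builtin generics + PEP 604 union syntax):
def summarize(metrics: dict[str, list[float]], step: int | None = None) -> str:
    prefix = f"step {step}: " if step is not None else ""
    return prefix + ", ".join(f"{k}={sum(v) / len(v):.3f}" for k, v in metrics.items())
```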
We can do this automatically with the following sequence of commands:
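A sketch of what that sequence could look like, assuming `isort` and `pyupgrade` are the tools used; the package path and flags below are illustrative, not the exact commands from this issue:

```bash
pip install isort pyupgrade

# PEP 563: add the future import everywhere first, so that pyupgrade is
# allowed to rewrite annotations while older Python versions are still
# supported. (This is the step that touches every file, including ones
# that do not actually need the import.)
isort --add-import "from __future__ import annotations" pytorch_lightning

# PEP 585 / PEP 604: rewrite typing.List[int] -> list[int],
# Optional[int] -> int | None, etc. in annotations.
find pytorch_lightning -name "*.py" -exec pyupgrade --py37-plus {} +
```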
Although some manual cleanup will be necessary, because isort will add the `from __future__ import annotations` import to all files, even those that don't need it.
Additional context
Regarding non-quoted annotations, there is discussion of replacing PEP 563 with PEP 649. I don't think it impacts our project, but depending on how the latter is resolved, the import statement might change.
Also, we might want to delay this until 1.6 is released, to avoid conflicts with the bug-fix branch.
cc @justusschock @awaelchli @akihironitta