Consistent imports in docs for core APIs (#18869)
Co-authored-by: Sebastian Raschka <[email protected]>
awaelchli and rasbt authored Oct 27, 2023
Parent c1437cc · commit f6a36cf
Showing 23 changed files with 103 additions and 93 deletions.
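For context, a minimal sketch of the import convention this commit standardizes on (the bare ``Trainer()`` call is illustrative, not taken from the changed docs):

.. code-block:: python

    # Old style, used across the docs before this change:
    # import lightning.pytorch as pl
    # trainer = pl.Trainer()

    # Unified style after this change:
    import lightning as L

    trainer = L.Trainer()

Pages that need the class names directly switch to ``from lightning.pytorch import LightningModule, Trainer`` instead, as the DeepSpeed hunks below show.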
4 changes: 2 additions & 2 deletions docs/source-pytorch/accelerators/tpu_faq.rst
@@ -88,10 +88,10 @@ How to setup the debug mode for Training on TPUs?

 .. code-block:: python

-    import lightning.pytorch as pl
+    import lightning as L

     my_model = MyLightningModule()
-    trainer = pl.Trainer(accelerator="tpu", devices=8, strategy="xla_debug")
+    trainer = L.Trainer(accelerator="tpu", devices=8, strategy="xla_debug")
     trainer.fit(my_model)

 Example Metrics report:
8 changes: 4 additions & 4 deletions docs/source-pytorch/accelerators/tpu_intermediate.rst
@@ -44,10 +44,10 @@ To use a full TPU pod skip to the TPU pod section.

 .. code-block:: python

-    import lightning.pytorch as pl
+    import lightning as L

     my_model = MyLightningModule()
-    trainer = pl.Trainer(accelerator="tpu", devices=8)
+    trainer = L.Trainer(accelerator="tpu", devices=8)
     trainer.fit(my_model)

 That's it! Your model will train on all 8 TPU cores.

@@ -113,10 +113,10 @@ By default, TPU training will use 32-bit precision. To enable it, do

 .. code-block:: python

-    import lightning.pytorch as pl
+    import lightning as L

     my_model = MyLightningModule()
-    trainer = pl.Trainer(accelerator="tpu", precision="16-true")
+    trainer = L.Trainer(accelerator="tpu", precision="16-true")
     trainer.fit(my_model)

 Under the hood the xla library will use the `bfloat16 type <https://en.wikipedia.org/wiki/Bfloat16_floating-point_format>`_.
13 changes: 6 additions & 7 deletions docs/source-pytorch/advanced/model_parallel/deepspeed.rst
@@ -132,12 +132,11 @@ For even more speed benefit, DeepSpeed offers an optimized CPU version of ADAM c

 .. code-block:: python

-    import lightning.pytorch
-    from lightning.pytorch import Trainer
+    from lightning.pytorch import LightningModule, Trainer
     from deepspeed.ops.adam import DeepSpeedCPUAdam


-    class MyModel(pl.LightningModule):
+    class MyModel(LightningModule):
         ...

         def configure_optimizers(self):

@@ -180,7 +179,7 @@ Also please have a look at our :ref:`deepspeed-zero-stage-3-tips` which contains

     from deepspeed.ops.adam import FusedAdam


-    class MyModel(pl.LightningModule):
+    class MyModel(LightningModule):
         ...

         def configure_optimizers(self):

@@ -202,7 +201,7 @@ You can also use the Lightning Trainer to run predict or evaluate with DeepSpeed

     from lightning.pytorch import Trainer


-    class MyModel(pl.LightningModule):
+    class MyModel(LightningModule):
         ...

@@ -228,7 +227,7 @@ This reduces the time taken to initialize very large models, as well as ensure w

     from deepspeed.ops.adam import FusedAdam


-    class MyModel(pl.LightningModule):
+    class MyModel(LightningModule):
         ...

         def configure_model(self):

@@ -367,7 +366,7 @@ This saves memory when training larger models, however requires using a checkpoi

     import deepspeed


-    class MyModel(pl.LightningModule):
+    class MyModel(LightningModule):
         ...

         def configure_model(self):
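A minimal sketch of the direct-import style these DeepSpeed pages now use; the tiny linear model is illustrative, and plain ``torch.optim.Adam`` stands in for ``deepspeed.ops.adam.DeepSpeedCPUAdam``/``FusedAdam``, which require a DeepSpeed installation:

.. code-block:: python

    import torch
    from lightning.pytorch import LightningModule, Trainer


    class MyModel(LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(32, 2)

        def training_step(self, batch, batch_idx):
            # Assumes each batch is a single feature tensor of shape (N, 32).
            return self.layer(batch).sum()

        def configure_optimizers(self):
            # The docs return DeepSpeedCPUAdam or FusedAdam here when the DeepSpeed strategy is enabled.
            return torch.optim.Adam(self.parameters(), lr=1e-3)


    trainer = Trainer(accelerator="auto", devices=1)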
4 changes: 2 additions & 2 deletions docs/source-pytorch/advanced/training_tricks.rst
@@ -398,7 +398,7 @@ The :class:`~lightning.pytorch.core.datamodule.LightningDataModule` class provid

 .. code-block:: python

-    class MNISTDataModule(pl.LightningDataModule):
+    class MNISTDataModule(L.LightningDataModule):
         def prepare_data(self):
             MNIST(self.data_dir, download=True)

@@ -421,7 +421,7 @@ For this, all data pre-loading should be done on the main process inside :meth:`

 .. code-block:: python

-    class MNISTDataModule(pl.LightningDataModule):
+    class MNISTDataModule(L.LightningDataModule):
         def __init__(self, data_dir: str):
             self.mnist = MNIST(data_dir, download=True, transform=T.ToTensor())
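A self-contained sketch of the updated DataModule style; it assumes ``torchvision`` is installed for the MNIST dataset these snippets use:

.. code-block:: python

    import lightning as L
    from torch.utils.data import DataLoader
    from torchvision import transforms as T
    from torchvision.datasets import MNIST


    class MNISTDataModule(L.LightningDataModule):
        def __init__(self, data_dir: str = "./data"):
            super().__init__()
            self.data_dir = data_dir

        def prepare_data(self):
            # Runs once on the main process: download only, assign no state here.
            MNIST(self.data_dir, download=True)

        def setup(self, stage=None):
            self.mnist = MNIST(self.data_dir, transform=T.ToTensor())

        def train_dataloader(self):
            return DataLoader(self.mnist, batch_size=32)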
2 changes: 1 addition & 1 deletion docs/source-pytorch/cli/lightning_cli_advanced.rst
@@ -164,7 +164,7 @@ to the class constructor. For example, your model is defined as:

 .. code:: python

     # model.py
-    class MyModel(pl.LightningModule):
+    class MyModel(L.LightningModule):
         def __init__(self, criterion: torch.nn.Module):
             self.criterion = criterion
11 changes: 7 additions & 4 deletions docs/source-pytorch/common/checkpointing_advanced.rst
@@ -54,9 +54,9 @@ Modify a checkpoint anywhere
 ****************************

 When you need to change the components of a checkpoint before saving or loading, use the :meth:`~lightning.pytorch.core.hooks.CheckpointHooks.on_save_checkpoint` and :meth:`~lightning.pytorch.core.hooks.CheckpointHooks.on_load_checkpoint` of your ``LightningModule``.

-.. code:: python
+.. code-block:: python

-    class LitModel(pl.LightningModule):
+    class LitModel(L.LightningModule):
         def on_save_checkpoint(self, checkpoint):
             checkpoint["something_cool_i_want_to_save"] = my_cool_pickable_object

@@ -65,9 +65,12 @@ When you need to change the components of a checkpoint before saving or loading,

 Use the above approach when you need to couple this behavior to your LightningModule for reproducibility reasons. Otherwise, Callbacks also have the :meth:`~lightning.pytorch.callbacks.callback.Callback.on_save_checkpoint` and :meth:`~lightning.pytorch.callbacks.callback.Callback.on_load_checkpoint` which you should use instead:

-.. code:: python
+.. code-block:: python
+
+    import lightning as L

-    class LitCallback(pl.Callback):
+    class LitCallback(L.Callback):
         def on_save_checkpoint(self, checkpoint):
             checkpoint["something_cool_i_want_to_save"] = my_cool_pickable_object
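A hedged sketch of the callback variant with the new import; it uses the ``lightning.pytorch`` 2.x ``Callback`` hook signature, which receives the trainer and module in addition to the checkpoint dict:

.. code-block:: python

    import lightning as L


    class LitCallback(L.Callback):
        def on_save_checkpoint(self, trainer, pl_module, checkpoint):
            # Stash any extra picklable object in the checkpoint dict before it is written.
            checkpoint["something_cool_i_want_to_save"] = {"note": "hello"}

        def on_load_checkpoint(self, trainer, pl_module, checkpoint):
            # Read it back when the checkpoint is restored.
            self.restored = checkpoint.get("something_cool_i_want_to_save")


    trainer = L.Trainer(callbacks=[LitCallback()])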
4 changes: 2 additions & 2 deletions docs/source-pytorch/common/checkpointing_basic.rst
@@ -127,7 +127,7 @@ In some cases, we may also pass entire PyTorch modules to the ``__init__`` metho

 .. code-block:: python

-    class LitAutoencoder(pl.LightningModule):
+    class LitAutoencoder(L.LightningModule):
         def __init__(self, encoder, decoder):
             ...

@@ -160,7 +160,7 @@ For example, let's pretend we created a LightningModule like so:

         ...


-    class Autoencoder(pl.LightningModule):
+    class Autoencoder(L.LightningModule):
         def __init__(self, encoder, decoder, *args, **kwargs):
             ...
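A short sketch of reloading when ``nn.Module`` objects are passed to ``__init__``; the ``save_hyperparameters(ignore=...)`` call and the checkpoint path are illustrative assumptions:

.. code-block:: python

    import lightning as L
    import torch


    class LitAutoencoder(L.LightningModule):
        def __init__(self, encoder, decoder):
            super().__init__()
            # Skip the module arguments so they are not stored as hyperparameters.
            self.save_hyperparameters(ignore=["encoder", "decoder"])
            self.encoder = encoder
            self.decoder = decoder


    encoder = torch.nn.Linear(8, 2)
    decoder = torch.nn.Linear(2, 8)

    # Modules that were not saved as hyperparameters must be passed back when reloading.
    model = LitAutoencoder.load_from_checkpoint(
        "path/to/checkpoint.ckpt", encoder=encoder, decoder=decoder
    )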
2 changes: 1 addition & 1 deletion docs/source-pytorch/common/checkpointing_intermediate.rst
@@ -27,7 +27,7 @@ Any value that has been logged via *self.log* in the LightningModule can be moni

 .. code-block:: python

-    class LitModel(pl.LightningModule):
+    class LitModel(L.LightningModule):
         def training_step(self, batch, batch_idx):
             self.log("my_metric", x)
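The value logged with ``self.log`` above is what a checkpoint callback can monitor; a short sketch (the metric name comes from the snippet, ``mode`` and ``save_top_k`` are assumptions):

.. code-block:: python

    import lightning as L
    from lightning.pytorch.callbacks import ModelCheckpoint

    # Keep the single best checkpoint according to the metric logged in training_step.
    checkpoint_callback = ModelCheckpoint(monitor="my_metric", mode="min", save_top_k=1)
    trainer = L.Trainer(callbacks=[checkpoint_callback])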
6 changes: 3 additions & 3 deletions docs/source-pytorch/common/evaluation_basic.rst
@@ -39,7 +39,7 @@ To add a test loop, implement the **test_step** method of the LightningModule

 .. code:: python

-    class LitAutoEncoder(pl.LightningModule):
+    class LitAutoEncoder(L.LightningModule):
         def training_step(self, batch, batch_idx):
             ...

@@ -99,7 +99,7 @@ To add a validation loop, implement the **validation_step** method of the Lightn

 .. code:: python

-    class LitAutoEncoder(pl.LightningModule):
+    class LitAutoEncoder(L.LightningModule):
         def training_step(self, batch, batch_idx):
             ...

@@ -127,5 +127,5 @@ To run the validation loop, pass in the validation set to **.fit**

     model = LitAutoEncoder(...)

     # train with both splits
-    trainer = pl.Trainer()
+    trainer = L.Trainer()
     trainer.fit(model, train_loader, valid_loader)
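Putting the hunks above together, a runnable toy version of the train/validation/test loops under the new import; the linear autoencoder and random tensors are illustrative:

.. code-block:: python

    import lightning as L
    import torch
    from torch.utils.data import DataLoader, TensorDataset


    class LitAutoEncoder(L.LightningModule):
        def __init__(self):
            super().__init__()
            self.encoder = torch.nn.Linear(8, 2)
            self.decoder = torch.nn.Linear(2, 8)

        def _reconstruction_loss(self, batch):
            (x,) = batch
            return torch.nn.functional.mse_loss(self.decoder(self.encoder(x)), x)

        def training_step(self, batch, batch_idx):
            return self._reconstruction_loss(batch)

        def validation_step(self, batch, batch_idx):
            self.log("val_loss", self._reconstruction_loss(batch))

        def test_step(self, batch, batch_idx):
            self.log("test_loss", self._reconstruction_loss(batch))

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=1e-3)


    data = TensorDataset(torch.randn(64, 8))
    train_loader = DataLoader(data, batch_size=16)
    valid_loader = DataLoader(data, batch_size=16)

    # train with both splits
    trainer = L.Trainer(max_epochs=1)
    trainer.fit(LitAutoEncoder(), train_loader, valid_loader)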
2 changes: 1 addition & 1 deletion docs/source-pytorch/common/evaluation_intermediate.rst
@@ -121,7 +121,7 @@ you can also pass in an :doc:`datamodules <../data/datamodule>` that have overri

 .. code-block:: python

-    class MyDataModule(pl.LightningDataModule):
+    class MyDataModule(L.LightningDataModule):
         ...

         def test_dataloader(self):
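A hedged sketch of passing such a datamodule directly to the evaluation entry points; the random tensors are illustrative, and the ``trainer.test`` call is commented out because it needs a model with a ``test_step``:

.. code-block:: python

    import lightning as L
    import torch
    from torch.utils.data import DataLoader, TensorDataset


    class MyDataModule(L.LightningDataModule):
        def test_dataloader(self):
            return DataLoader(TensorDataset(torch.randn(32, 4)), batch_size=8)


    trainer = L.Trainer()
    # trainer.test(model, datamodule=MyDataModule())  # `model`: any LightningModule defining test_step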
42 changes: 21 additions & 21 deletions docs/source-pytorch/common/lightning_module.rst
@@ -84,13 +84,13 @@ Here are the only required methods.

 .. code-block:: python

-    import lightning.pytorch as pl
+    import lightning as L
     import torch

     from lightning.pytorch.demos import Transformer


-    class LightningTransformer(pl.LightningModule):
+    class LightningTransformer(L.LightningModule):
         def __init__(self, vocab_size):
             super().__init__()
             self.model = Transformer(vocab_size=vocab_size)

@@ -118,7 +118,7 @@ Which you can train by doing:

     dataloader = DataLoader(dataset)
     model = LightningTransformer(vocab_size=dataset.vocab_size)

-    trainer = pl.Trainer(fast_dev_run=100)
+    trainer = L.Trainer(fast_dev_run=100)
     trainer.fit(model=model, train_dataloaders=dataloader)

 The LightningModule has many convenient methods, but the core ones you need to know about are:

@@ -157,7 +157,7 @@ To activate the training loop, override the :meth:`~lightning.pytorch.core.Light

 .. code-block:: python

-    class LightningTransformer(pl.LightningModule):
+    class LightningTransformer(L.LightningModule):
         def __init__(self, vocab_size):
             super().__init__()
             self.model = Transformer(vocab_size=vocab_size)

@@ -235,7 +235,7 @@ override the :meth:`~lightning.pytorch.LightningModule.on_train_epoch_end` metho

 .. code-block:: python

-    class LightningTransformer(pl.LightningModule):
+    class LightningTransformer(L.LightningModule):
         def __init__(self, vocab_size):
             super().__init__()
             self.model = Transformer(vocab_size=vocab_size)

@@ -269,7 +269,7 @@ To activate the validation loop while training, override the :meth:`~lightning.p

 .. code-block:: python

-    class LightningTransformer(pl.LightningModule):
+    class LightningTransformer(L.LightningModule):
         def validation_step(self, batch, batch_idx):
             inputs, target = batch
             output = self.model(inputs, target)

@@ -306,7 +306,7 @@ and calling :meth:`~lightning.pytorch.trainer.trainer.Trainer.validate`.

 .. code-block:: python

     model = LightningTransformer(vocab_size=dataset.vocab_size)
-    trainer = pl.Trainer()
+    trainer = L.Trainer()
     trainer.validate(model)

 .. note::

@@ -327,7 +327,7 @@ Note that this method is called before :meth:`~lightning.pytorch.LightningModule

 .. code-block:: python

-    class LightningTransformer(pl.LightningModule):
+    class LightningTransformer(L.LightningModule):
         def __init__(self, vocab_size):
             super().__init__()
             self.model = Transformer(vocab_size=vocab_size)

@@ -366,7 +366,7 @@ The only difference is that the test loop is only called when :meth:`~lightning.

     model = LightningTransformer(vocab_size=dataset.vocab_size)
     dataloader = DataLoader(dataset)
-    trainer = pl.Trainer()
+    trainer = L.Trainer()
     trainer.fit(model=model, train_dataloaders=dataloader)

     # automatically loads the best weights for you

@@ -377,7 +377,7 @@ There are two ways to call ``test()``:

 .. code-block:: python

     # call after training
-    trainer = pl.Trainer()
+    trainer = L.Trainer()
     trainer.fit(model=model, train_dataloaders=dataloader)

     # automatically auto-loads the best weights from the previous run

@@ -387,7 +387,7 @@ There are two ways to call ``test()``:

     model = LightningTransformer.load_from_checkpoint(PATH)
     dataset = WikiText2()
     test_dataloader = DataLoader(dataset)
-    trainer = pl.Trainer()
+    trainer = L.Trainer()
     trainer.test(model, dataloaders=test_dataloader)

 .. note::

@@ -420,7 +420,7 @@ For the example let's override ``predict_step``:

 .. code-block:: python

-    class LightningTransformer(pl.LightningModule):
+    class LightningTransformer(L.LightningModule):
         def __init__(self, vocab_size):
             super().__init__()
             self.model = Transformer(vocab_size=vocab_size)

@@ -447,7 +447,7 @@ There are two ways to call ``predict()``:

 .. code-block:: python

     # call after training
-    trainer = pl.Trainer()
+    trainer = L.Trainer()
     trainer.fit(model=model, train_dataloaders=dataloader)

     # automatically auto-loads the best weights from the previous run

@@ -457,7 +457,7 @@ There are two ways to call ``predict()``:

     model = LightningTransformer.load_from_checkpoint(PATH)
     dataset = WikiText2()
     test_dataloader = DataLoader(dataset)
-    trainer = pl.Trainer()
+    trainer = L.Trainer()
     predictions = trainer.predict(model, dataloaders=test_dataloader)

 Inference in Research

@@ -469,7 +469,7 @@ If you want to perform inference with the system, you can add a ``forward`` meth

 .. code-block:: python

-    class LightningTransformer(pl.LightningModule):
+    class LightningTransformer(L.LightningModule):
         def __init__(self, vocab_size):
             super().__init__()
             self.model = Transformer(vocab_size=vocab_size)

@@ -500,7 +500,7 @@ such as text generation:

 .. code-block:: python

-    class Seq2Seq(pl.LightningModule):
+    class Seq2Seq(L.LightningModule):
         def forward(self, x):
             embeddings = self(x)
             hidden_states = self.encoder(embeddings)

@@ -514,7 +514,7 @@ In the case where you want to scale your inference, you should be using

 .. code-block:: python

-    class Autoencoder(pl.LightningModule):
+    class Autoencoder(L.LightningModule):
         def forward(self, x):
             return self.decoder(x)

@@ -538,7 +538,7 @@ For cases like production, you might want to iterate different models inside a L

     from torchmetrics.functional import accuracy


-    class ClassificationTask(pl.LightningModule):
+    class ClassificationTask(L.LightningModule):
         def __init__(self, model):
             super().__init__()
             self.model = model

@@ -590,7 +590,7 @@ Tasks can be arbitrarily complex such as implementing GAN training, self-supervi

 .. code-block:: python

-    class GANTask(pl.LightningModule):
+    class GANTask(L.LightningModule):
         def __init__(self, generator, discriminator):
             super().__init__()
             self.generator = generator

@@ -643,7 +643,7 @@ checkpoint, which simplifies model re-instantiation after training.

 .. code-block:: python

-    class LitMNIST(pl.LightningModule):
+    class LitMNIST(L.LightningModule):
         def __init__(self, layer_1_dim=128, learning_rate=1e-2):
             super().__init__()
             # call this to save (layer_1_dim=128, learning_rate=1e-4) to the checkpoint

@@ -667,7 +667,7 @@ parameters should be provided back when reloading the LightningModule. In this c

 .. code-block:: python

-    class LitMNIST(pl.LightningModule):
+    class LitMNIST(L.LightningModule):
         def __init__(self, loss_fx, generator_network, layer_1_dim=128):
             super().__init__()
             self.layer_1_dim = layer_1_dim
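Assembling this file's snippets under the new convention, a sketch of the basic workflow; ``WikiText2`` from ``lightning.pytorch.demos`` downloads a small dataset on first use, and the SGD optimizer choice is filled in as an assumption:

.. code-block:: python

    import lightning as L
    import torch
    from torch.utils.data import DataLoader
    from lightning.pytorch.demos import Transformer, WikiText2


    class LightningTransformer(L.LightningModule):
        def __init__(self, vocab_size):
            super().__init__()
            self.model = Transformer(vocab_size=vocab_size)

        def training_step(self, batch, batch_idx):
            inputs, target = batch
            output = self.model(inputs, target)
            return torch.nn.functional.nll_loss(output, target.view(-1))

        def configure_optimizers(self):
            return torch.optim.SGD(self.model.parameters(), lr=0.1)


    dataset = WikiText2()
    dataloader = DataLoader(dataset)
    model = LightningTransformer(vocab_size=dataset.vocab_size)

    # fast_dev_run limits the run to a handful of batches, matching the docs example
    trainer = L.Trainer(fast_dev_run=100)
    trainer.fit(model=model, train_dataloaders=dataloader)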