Remove metrics references from docs
kaushikb11 committed Nov 22, 2021
1 parent eb13e1d commit b20a503
Showing 5 changed files with 2 additions and 13 deletions.
2 changes: 1 addition & 1 deletion docs/source/advanced/multi_gpu.rst
@@ -90,7 +90,7 @@ This is done by adding ``sync_dist=True`` to all ``self.log`` calls in the valid
 This ensures that each GPU worker has the same behaviour when tracking model checkpoints, which is important for later downstream tasks such as testing the best checkpoint across all workers.
 The ``sync_dist`` option can also be used in logging calls during the step methods, but be aware that this can lead to significant communication overhead and slow down your training.

-Note if you use any built in metrics or custom metrics that use the :doc:`Metrics API <../extensions/metrics>`, these do not need to be updated and are automatically handled for you.
+Note if you use any built in metrics or custom metrics that use `TorchMetrics <https://torchmetrics.readthedocs.io/>`_, these do not need to be updated and are automatically handled for you.

 .. testcode::
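The ``sync_dist`` behaviour discussed in the hunk above can be illustrated with a toy sketch in plain Python (not Lightning internals; the helper name ``sync_dist_mean`` is invented for illustration): the logged value is reduced to a common mean across workers, which is why every GPU worker then tracks the same checkpoint metric.

```python
def sync_dist_mean(per_worker_values):
    """Toy stand-in for sync_dist=True: average a logged value across
    workers so every GPU process sees the same reduced number."""
    return sum(per_worker_values) / len(per_worker_values)

# Four workers each computed a slightly different validation loss;
# after the mean reduction, all of them log the same value (~0.5),
# so checkpoint callbacks behave identically on every worker.
losses = [0.52, 0.48, 0.50, 0.50]
print(sync_dist_mean(losses))
```

This also makes the warning about overhead concrete: the real reduction requires a communication round among all workers at every logging call, which is cheap per epoch in validation but costly if done every training step.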
2 changes: 1 addition & 1 deletion docs/source/extensions/logging.rst
@@ -111,7 +111,7 @@ The :func:`~pytorch_lightning.core.lightning.LightningModule.log` method has a
 .. note::

 - Setting ``on_epoch=True`` will cache all your logged values during the full training epoch and perform a
-  reduction in ``on_train_epoch_end``. We recommend using the :doc:`metrics <../extensions/metrics>` API when working with custom reduction.
+  reduction in ``on_train_epoch_end``. We recommend using `TorchMetrics <https://torchmetrics.readthedocs.io/>`_ when working with custom reduction.

 - Setting both ``on_step=True`` and ``on_epoch=True`` will create two keys per metric you log with
   suffix ``_step`` and ``_epoch``, respectively. You can refer to these keys e.g. in the `monitor`
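The caching and reduction described in the note above can be sketched in plain Python (a toy model, not Lightning's implementation; the class name ``EpochCache`` is invented for illustration): values logged at each step are appended to a buffer during the epoch and mean-reduced at epoch end, mirroring the default reduction behind ``on_epoch=True``.

```python
class EpochCache:
    """Toy model of on_epoch=True: cache step values, reduce at epoch end."""

    def __init__(self):
        self.values = []

    def log_step(self, value):
        self.values.append(value)  # cached during the epoch ("_step" value)
        return value

    def on_train_epoch_end(self):
        # Default reduction is the mean of all cached step values
        # (the "_epoch" value).
        return sum(self.values) / len(self.values)


cache = EpochCache()
for v in [1.0, 2.0, 3.0]:
    cache.log_step(v)
print(cache.on_train_epoch_end())  # 2.0
```

When a custom reduction is needed (anything other than this running mean), that is where the note points you to TorchMetrics, whose metric objects own their accumulation state instead of relying on the logger's cache.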
9 changes: 0 additions & 9 deletions docs/source/extensions/metrics.rst

This file was deleted.

1 change: 0 additions & 1 deletion docs/source/index.rst
@@ -84,7 +84,6 @@ PyTorch Lightning
    extensions/callbacks
    extensions/datamodules
    extensions/logging
-   extensions/metrics
    extensions/plugins
    extensions/loops
1 change: 0 additions & 1 deletion pyproject.toml
@@ -43,7 +43,6 @@ module = [
     "pytorch_lightning.core.*",
     "pytorch_lightning.loggers.*",
     "pytorch_lightning.loops.*",
-    "pytorch_lightning.metrics.*",
     "pytorch_lightning.overrides.*",
     "pytorch_lightning.plugins.environments.*",
     "pytorch_lightning.plugins.training_type.*",
