diff --git a/docs/source/advanced/multi_gpu.rst b/docs/source/advanced/multi_gpu.rst
index 3e4a7b2335b628..8384209aff0e87 100644
--- a/docs/source/advanced/multi_gpu.rst
+++ b/docs/source/advanced/multi_gpu.rst
@@ -90,7 +90,7 @@ This is done by adding ``sync_dist=True`` to all ``self.log`` calls in the valid
This ensures that each GPU worker has the same behaviour when tracking model checkpoints, which is important for later downstream tasks such as testing the best checkpoint across all workers.
The ``sync_dist`` option can also be used in logging calls during the step methods, but be aware that this can lead to significant communication overhead and slow down your training.
-Note if you use any built in metrics or custom metrics that use the :doc:`Metrics API <../extensions/metrics>`, these do not need to be updated and are automatically handled for you.
+Note that if you use any built-in metrics or custom metrics that use `TorchMetrics <https://torchmetrics.readthedocs.io/en/stable/>`_, these do not need to be updated and are automatically handled for you.
.. testcode::
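For context on the ``sync_dist`` behaviour described above, here is a minimal sketch of the reduction it triggers: each distributed worker contributes its local value, and all workers end up logging the same reduced number (the default reduction is a mean). This is an illustration of the idea, not Lightning's actual implementation, and ``sync_dist_mean`` is a hypothetical helper name.

```python
def sync_dist_mean(per_worker_values):
    """Illustrative stand-in for the cross-worker reduction that
    ``sync_dist=True`` performs: every GPU worker supplies its local
    metric value, and each worker then logs the same mean, so model
    checkpoint tracking behaves identically on all workers."""
    return sum(per_worker_values) / len(per_worker_values)

# Two workers observed different local validation losses;
# after syncing, both log the same reduced value.
print(sync_dist_mean([0.25, 0.75]))  # 0.5
```

This also shows why using ``sync_dist`` inside frequently-called step methods is costly: every such log call implies a communication round across workers.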
diff --git a/docs/source/extensions/logging.rst b/docs/source/extensions/logging.rst
index 1facdb93373eb1..e652adbecc4196 100644
--- a/docs/source/extensions/logging.rst
+++ b/docs/source/extensions/logging.rst
@@ -111,7 +111,7 @@ The :func:`~pytorch_lightning.core.lightning.LightningModule.log` method has a
.. note::
- Setting ``on_epoch=True`` will cache all your logged values during the full training epoch and perform a
- reduction in ``on_train_epoch_end``. We recommend using the :doc:`metrics <../extensions/metrics>` API when working with custom reduction.
+ reduction in ``on_train_epoch_end``. We recommend using `TorchMetrics <https://torchmetrics.readthedocs.io/en/stable/>`_ when working with custom reduction.
- Setting both ``on_step=True`` and ``on_epoch=True`` will create two keys per metric you log with
suffix ``_step`` and ``_epoch``, respectively. You can refer to these keys e.g. in the `monitor`
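The key-naming rule in the note above can be sketched as follows. This is an illustration of the documented behaviour, not Lightning's internal code, and ``logged_keys`` is a hypothetical helper name.

```python
def logged_keys(name, on_step, on_epoch):
    """Sketch of the metric-key naming rule: when both on_step and
    on_epoch are True, two keys are created with the ``_step`` and
    ``_epoch`` suffixes; otherwise the metric is logged under its
    bare name."""
    if on_step and on_epoch:
        return [f"{name}_step", f"{name}_epoch"]
    return [name]

print(logged_keys("val_loss", on_step=True, on_epoch=True))
# ['val_loss_step', 'val_loss_epoch']
```

These suffixed keys are the names you would pass to ``monitor`` in callbacks such as ``ModelCheckpoint``.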
diff --git a/docs/source/extensions/metrics.rst b/docs/source/extensions/metrics.rst
deleted file mode 100644
index 74a4a15deb2be2..00000000000000
--- a/docs/source/extensions/metrics.rst
+++ /dev/null
@@ -1,9 +0,0 @@
-#######
-Metrics
-#######
-
-``pytorch_lightning.metrics`` has been moved to a separate package `TorchMetrics <https://torchmetrics.readthedocs.io/en/stable/>`_.
-We will preserve compatibility for the next few releases, nevertheless, we encourage users to update to use this stand-alone package.
-
-.. warning::
- ``pytorch_lightning.metrics`` is deprecated from v1.3 and will be removed in v1.5.
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 72da9c3e354c47..c1b20b958591be 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -84,7 +84,6 @@ PyTorch Lightning
extensions/callbacks
extensions/datamodules
extensions/logging
- extensions/metrics
extensions/plugins
extensions/loops
diff --git a/pyproject.toml b/pyproject.toml
index 08b7b50eee7708..c527ffaa856cfd 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -43,7 +43,6 @@ module = [
"pytorch_lightning.core.*",
"pytorch_lightning.loggers.*",
"pytorch_lightning.loops.*",
- "pytorch_lightning.metrics.*",
"pytorch_lightning.overrides.*",
"pytorch_lightning.plugins.environments.*",
"pytorch_lightning.plugins.training_type.*",