docs: Fix broken links (#9523)
tara-det-ai authored Jun 13, 2024
1 parent 9adc092 commit b51bc93
Showing 5 changed files with 16 additions and 18 deletions.
7 changes: 3 additions & 4 deletions docs/model-dev-guide/dtrain/dtrain-introduction.rst
@@ -102,12 +102,11 @@ large batch sizes:
 - Start with the ``original learning rate`` used for a single GPU and gradually increase it to
   ``number of slots`` * ``original learning rate`` throughout the first several epochs. For more
   details, see `Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour
-  <https://arxiv.org/pdf/1706.02677.pdf>`_.
+  <http://arxiv.org/pdf/1706.02677>`_.
 
 - Use custom optimizers designed for large batch training, such as `RAdam
-  <https://github.com/LiyuanLucasLiu/RAdam>`_, `LARS <https://arxiv.org/pdf/1708.03888.pdf>`_, or
-  `LAMB <https://arxiv.org/pdf/1904.00962.pdf>`_. In our experience, RAdam has been particularly
-  effective.
+  <https://github.com/LiyuanLucasLiu/RAdam>`_, `LARS <http://arxiv.org/pdf/1708.03888>`_, or `LAMB
+  <http://arxiv.org/pdf/1904.00962>`_. In our experience, RAdam has been particularly effective.
 
 Applying these techniques often requires hyperparameter modifications. To help automate this
 process, use the :ref:`hyperparameter-tuning` capabilities in Determined.
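The gradual warmup this hunk describes, ramping from the single-GPU learning rate up to ``number of slots`` times that rate, can be sketched as a simple schedule. This is a minimal illustration of the linear scaling rule from the linked paper; the function name and step-based granularity are assumptions, not Determined's API:

```python
def warmup_lr(step: int, warmup_steps: int, base_lr: float, n_slots: int) -> float:
    """Linearly ramp from the single-GPU learning rate to n_slots * base_lr.

    During the first warmup_steps steps the rate grows linearly; after that
    it stays at the scaled target value.
    """
    target = n_slots * base_lr  # "number of slots" * "original learning rate"
    if step >= warmup_steps:
        return target
    return base_lr + (target - base_lr) * (step / warmup_steps)
```

With 8 slots and a base rate of 0.1, for example, the schedule starts at 0.1 and ends at 0.8 after warmup.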
4 changes: 2 additions & 2 deletions docs/model-dev-guide/hyperparameter/search-methods/_index.rst
@@ -8,8 +8,8 @@ Determined supports a :ref:`variety of hyperparameter search algorithms <hyperpa
 Aside from the ``single`` searcher, a searcher runs multiple trials and decides the hyperparameter
 values to use in each trial. Every searcher is configured with the name of the validation metric to
 optimize (via the ``metric`` field), in addition to other searcher-specific options. For example,
-the ``adaptive_asha`` searcher (`arXiv:1810.0593 <https://arxiv.org/pdf/1810.05934.pdf>`_), suitable
-for larger experiments with many trials, is configured with the maximum number of trials to run, the
+the ``adaptive_asha`` searcher (`arXiv:1810.0593 <http://arxiv.org/pdf/1810.05934>`_), suitable for
+larger experiments with many trials, is configured with the maximum number of trials to run, the
 maximum training length allowed per trial, and the maximum number of trials that can be worked on
 simultaneously:
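The core idea behind the searcher named in this hunk is successive halving: evaluate many configurations at a small training budget, keep the best fraction, and repeat with a larger budget. A minimal synchronous sketch, assuming a caller-supplied ``evaluate(config, budget)`` returning a loss to minimize (Determined's actual ``adaptive_asha`` is asynchronous and more sophisticated):

```python
def successive_halving(configs, evaluate, min_budget=1, eta=4):
    """Return the best config by repeated culling.

    Each round, every surviving config is scored at the current budget,
    only the top 1/eta survive, and the budget is multiplied by eta.
    """
    survivors = list(configs)
    budget = min_budget
    while len(survivors) > 1:
        scored = sorted(survivors, key=lambda c: evaluate(c, budget))
        survivors = scored[: max(1, len(scored) // eta)]
        budget *= eta
    return survivors[0]
```

With ``eta=4`` and 16 starting configurations, only one configuration ever reaches the largest budget, which is what makes the method cheap for large searches.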

2 changes: 1 addition & 1 deletion docs/release-notes.rst
@@ -69,7 +69,7 @@ Agent Resource Manager:
 - AMD GPUs: Due to limited usage, we will limit supported accelerators to NVIDIA GPUs. If you have
   a use case requiring AMD GPU support with the Agent Resource Manager, please reach out to us via
   a `GitHub Issue <https://github.com/determined-ai/determined/issues>`__ or `community slack
-  <https://join.slack.com/t/determined-community/shared_invite/zt-1f4hj60z5-JMHb~wSr2xksLZVBN61g_Q>`__!
+  <https://determined-community.slack.com/join/shared_invite/zt-1f4hj60z5-JMHb~wSr2xksLZVBN61g_Q>`__!
   This does not impact Kubernetes or Slurm environments.
 
 Machine Architectures: PPC64/POWER builds across all environments are no longer supported.
2 changes: 1 addition & 1 deletion docs/setup-cluster/slurm/slurm-known-issues.rst
@@ -290,7 +290,7 @@ Some constraints are due to differences in behavior between Docker and Singulari
   Unable to allocate resources: Requested node configuration is not available``.
 
 Slurm 22.05.5 through 22.05.8 are not supported due to `Slurm Bug 15857
-<https://bugs.schedmd.com/show_bug.cgi?id=15857>`__. The bug was addressed in 22.05.09 or
+<https://support.schedmd.com/show_bug.cgi?id=15857>`__. The bug was addressed in 22.05.09 or
 23.02.00.
 
 - A Determined experiment remains ``QUEUEUED`` for an extended period:
19 changes: 9 additions & 10 deletions docs/tutorials/pachyderm-cat-dog.rst
@@ -9,9 +9,8 @@
 .. meta::
    :description: Follow along with this batch inferencing tutorial to see how to leverage Determined and Pachyderm together to streamline complex tasks.
 
-In this guide, we'll help you create a simple batch inferencing project in `Pachyderm
-<https://docs.pachyderm.com/latest/learn/glossary/pipeline/>`__, train your model using a Determined
-cluster, and then use the model in an inferencing pipeline.
+In this guide, we'll help you create a simple batch inferencing project in Pachyderm, train your
+model using a Determined cluster, and then use the model in an inferencing pipeline.

.. note::

@@ -23,8 +22,8 @@ cluster, and then use the model in an inferencing pipeline.
 ************
 
 After completing the steps in this tutorial, you will have a a fully-built batch inferencing
-`pipeline <https://docs.pachyderm.com/latest/learn/glossary/pipeline/>`__ with results and you will
-understand how to leverage Pachyderm when working with your Determined cluster.
+pipeline with results and you will understand how to leverage Pachyderm when working with your
+Determined cluster.

By following these instructions, you will:

Expand All @@ -51,8 +50,9 @@ The following prerequisites are required:
- To set up **Determined** locally, visit the quick installation instructions: :ref:`basic`

- To set up **Pachyderm** locally, visit `First-Time Setup
<https://docs.pachyderm.com/latest/get-started/first-time-setup/>`__ or `Pachyderm Local
Deployment Guide <https://docs.pachyderm.com/latest/set-up/local-deploy/>`_
<https://docs.ai-solutions.ext.hpe.com/products/mldm/latest/get-started/first-time-setup/>`__
or `Pachyderm Local Deployment Guide
<https://docs.ai-solutions.ext.hpe.com/products/mldm/latest/set-up/local-deploy/>`_

************************
Get the Tutorial Files
@@ -130,9 +130,8 @@ You are now ready to create a project repo.
 Create Repos in Pachyderm for Training Data
 *********************************************
 
-To manage our training data effectively, we'll first need to create `repos
-<https://docs.pachyderm.com/latest/learn/basic-concepts/#basic-concepts-repositories-repo>`_ for
-storing the data. We'll use a typical 80:20 train/test split.
+To manage our training data effectively, we'll first need to create repos for storing the data.
+We'll use a typical 80:20 train/test split.
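An 80:20 split like the one mentioned in this hunk can be produced with a short, reproducible shuffle before files are pushed to the two repos. This sketch is illustrative only; the function name and fixed seed are assumptions, not part of the tutorial's tooling:

```python
import random

def train_test_split(paths, train_frac=0.8, seed=42):
    """Shuffle file paths reproducibly, then split them train/test.

    Returns (train_paths, test_paths) with roughly train_frac of the
    files in the first list.
    """
    shuffled = list(paths)
    random.Random(seed).shuffle(shuffled)  # seeded so reruns give the same split
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]
```

Each list can then be uploaded to its corresponding repo so the pipeline sees a stable train/test partition.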

To create the train/test repos, run the following commands:

