diff --git a/docs/user/algorithms.rst b/docs/user/algorithms.rst
index 6275448f6..c61abf12c 100644
--- a/docs/user/algorithms.rst
+++ b/docs/user/algorithms.rst
@@ -16,7 +16,7 @@ The following algorithms are implemented in the Spinning Up package:
 - `Twin Delayed DDPG`_ (TD3)
 - `Soft Actor-Critic`_ (SAC)
 
-They are all implemented with `MLP`_ (non-recurrent) actor-critics, making them suitable for fully-observed, non-image-based RL environments, eg the `Gym Mujoco`_ environments.
+They are all implemented with `MLP`_ (non-recurrent) actor-critics, making them suitable for fully-observed, non-image-based RL environments, e.g. the `Gym Mujoco`_ environments.
 
 .. _`Gym Mujoco`: https://gym.openai.com/envs/#mujoco
 .. _`Vanilla Policy Gradient`: ../algorithms/vpg.html
@@ -83,7 +83,7 @@ Next, there is a single function which runs the algorithm, performing the follow
 
     10) Setting up model saving through the logger
 
-    11) Defining functions needed for running the main loop of the algorithm (eg the core update function, get action function, and test agent function, depending on the algorithm)
+    11) Defining functions needed for running the main loop of the algorithm (e.g. the core update function, get action function, and test agent function, depending on the algorithm)
 
     12) Running the main loop of the algorithm: