diff --git a/docs/api/paddle/optimizer/lr/LRScheduler_cn.rst b/docs/api/paddle/optimizer/lr/LRScheduler_cn.rst
index 3b987c684ae..82d912caa73 100644
--- a/docs/api/paddle/optimizer/lr/LRScheduler_cn.rst
+++ b/docs/api/paddle/optimizer/lr/LRScheduler_cn.rst
@@ -39,6 +39,8 @@ LRScheduler
 * :code:`CyclicLR`: Cyclic learning rate decay. The schedule is treated as a series of cycles, and the learning rate keeps oscillating between the minimum and maximum learning rate at a fixed frequency. See :ref:`cn_api_paddle_optimizer_lr_CyclicLR`.

+* :code:`LinearLR`: The learning rate increases linearly with the step count until it reaches the specified learning rate. See :ref:`cn_api_paddle_optimizer_lr_LinearLR`.
+
 You can inherit from this base class to implement any learning rate strategy. Import the base class with ``from paddle.optimizer.lr import LRScheduler`` and override its ``get_lr()`` method; otherwise a ``NotImplementedError`` exception is raised.

diff --git a/docs/api/paddle/optimizer/lr/LinearLR_cn.rst b/docs/api/paddle/optimizer/lr/LinearLR_cn.rst
new file mode 100644
index 00000000000..7953d27d24d
--- /dev/null
+++ b/docs/api/paddle/optimizer/lr/LinearLR_cn.rst
@@ -0,0 +1,49 @@
+.. _cn_api_paddle_optimizer_lr_LinearLR:
+
+LinearLR
+-----------------------------------
+
+.. py:class:: paddle.optimizer.lr.LinearLR(learning_rate, total_steps, start_factor=1./3, end_factor=1.0, last_epoch=-1, verbose=False)
+
+
+This scheduler adjusts the learning rate with a linear schedule: the learning rate starts at `learning_rate * start_factor` and increases linearly to `learning_rate * end_factor` over `total_steps` steps.
+
+
+Parameters
+::::::::::::
+
+    - **learning_rate** (float) - The base learning rate, used to compute the initial and final learning rates.
+    - **total_steps** (int) - The number of steps over which the learning rate grows linearly from the initial to the final learning rate.
+    - **start_factor** (float) - Factor for the initial learning rate, which is `learning_rate * start_factor`.
+    - **end_factor** (float) - Factor for the final learning rate, which is `learning_rate * end_factor`.
+    - **last_epoch** (int, optional) - The index of the last epoch; set it to the last epoch when resuming training. Default: -1, which means starting from the initial learning rate.
+    - **verbose** (bool, optional) - If ``True``, a message is printed to the standard output `stdout` at each update. Default: ``False``.
+
+Returns
+::::::::::::
+A ``LinearLR`` instance used to adjust the learning rate.
+
+Code Examples
+::::::::::::
+
+COPY-FROM: paddle.optimizer.lr.LinearLR:code-dynamic
+COPY-FROM: paddle.optimizer.lr.LinearLR:code-static
+
+Methods
+::::::::::::
+step(epoch=None)
+'''''''''
+
+``step`` should be called after `optimizer.step()`. It updates the learning rate according to the epoch count, and the updated learning rate is used the next time the optimizer updates the parameters.
+
+**Parameters**
+
+  - **epoch** (int, optional) - The epoch index. Default: None, in which case the epoch counter is accumulated automatically starting from -1.
+
+**Returns**
+
+None.
+
+**Code Example**
+
+Refer to the code examples above.
diff --git a/docs/api_guides/low_level/layers/learning_rate_scheduler.rst b/docs/api_guides/low_level/layers/learning_rate_scheduler.rst
index bf835e5ef9b..b86a58ff97f 100644
--- a/docs/api_guides/low_level/layers/learning_rate_scheduler.rst
+++ b/docs/api_guides/low_level/layers/learning_rate_scheduler.rst
@@ -61,3 +61,6 @@

 * :code:`CyclicLR`: The learning rate cycles between the minimum and maximum learning rate at a fixed frequency according to the specified scaling strategy.
   For the related API Reference please refer to :ref:`_cn_api_paddle_optimizer_lr_CyclicLR`
+
+* :code:`LinearLR`: The learning rate increases linearly with the step count until it reaches the specified learning rate.
+  For the related API Reference please refer to :ref:`_cn_api_paddle_optimizer_lr_LinearLR`
diff --git a/docs/api_guides/low_level/layers/learning_rate_scheduler_en.rst b/docs/api_guides/low_level/layers/learning_rate_scheduler_en.rst
index 11e96ed6d74..56fd05b229b 100755
--- a/docs/api_guides/low_level/layers/learning_rate_scheduler_en.rst
+++ b/docs/api_guides/low_level/layers/learning_rate_scheduler_en.rst
@@ -44,3 +44,5 @@ The following content describes the APIs related to the learning rate scheduler:

 * :code:`OneCycleLR`: One cycle decay. That is, the initial learning rate first increases to the maximum learning rate, and then decreases to a minimum learning rate that is much lower than the initial learning rate. For related API Reference please refer to :ref:`cn_api_paddle_optimizer_lr_OneCycleLR`
 * :code:`CyclicLR`: Cyclic decay. That is, the learning rate cycles between the minimum and maximum learning rate with a constant frequency, using the specified scaling method.
   For related API Reference please refer to :ref:`api_paddle_optimizer_lr_CyclicLR`
+
+* :code:`LinearLR`: Linear decay. That is, the learning rate is first multiplied by `start_factor` and then increases linearly to the end learning rate. For related API Reference please refer to :ref:`api_paddle_optimizer_lr_LinearLR`
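
The new page pulls its runnable samples in through the two COPY-FROM directives, so the actual snippets live in the Python docstrings rather than in this diff. As a quick orientation, here is a minimal dynamic-graph sketch of how the scheduler documented above would typically be wired into an optimizer; the model, data, and hyperparameter values (``total_steps=5``, ``start_factor=0.1``) are illustrative assumptions, not the snippet that COPY-FROM injects.

.. code-block:: python

    import paddle

    # Illustrative values: the lr ramps from 0.5 * 0.1 to 0.5 * 1.0 over 5 scheduler steps.
    linear = paddle.nn.Linear(10, 10)
    scheduler = paddle.optimizer.lr.LinearLR(
        learning_rate=0.5, total_steps=5, start_factor=0.1, end_factor=1.0, verbose=True)
    sgd = paddle.optimizer.SGD(learning_rate=scheduler, parameters=linear.parameters())

    for epoch in range(5):
        x = paddle.uniform([4, 10])
        loss = paddle.mean(linear(x))
        loss.backward()
        sgd.step()
        sgd.clear_grad()
        scheduler.step()  # called after optimizer.step(), as the `step` docs above require

Calling ``scheduler.step()`` once per epoch (as here) or once per batch only changes what one "step" of the linear ramp means; the documentation above only requires that it come after ``optimizer.step()``.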