Merge #1628
1628: Update "Composing Optimisers" docs r=darsnack a=StevenWhitaker

Addresses #1627 (perhaps only partially).

Use `1` instead of `0.001` for the first argument of `ExpDecay` in the example, so that the sentence following the example, i.e.,

> Here we apply exponential decay to the `Descent` optimiser.

makes more sense.

It was also [suggested](#1627 (comment)) in the linked issue that it might be worth changing the default learning rate of `ExpDecay` to `1`. Since this PR doesn't address that, I'm not sure merging this PR should necessarily close the issue.
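
(For context, a minimal sketch of the arithmetic behind this wording change, assuming the `Optimise` API as of this commit, where `ExpDecay(η, decay, decay_step, clip)` scales the gradient by its own `η` before handing it to the next optimiser in the chain:)

```julia
using Flux

# Old example: ExpDecay's η (0.001) multiplies Descent's rate (0.1 by default),
# so the effective initial step is 0.001 * 0.1 = 1e-4 rather than Descent's 0.1.
old = Optimiser(ExpDecay(0.001, 0.1, 1000, 1e-4), Descent())

# New example: with η = 1, ExpDecay contributes only the decay schedule, and the
# initial step size comes entirely from Descent, so "exponential decay applied
# to the Descent optimiser" describes what actually happens.
new = Optimiser(ExpDecay(1, 0.1, 1000, 1e-4), Descent())
```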

Co-authored-by: StevenWhitaker <[email protected]>
bors[bot] and StevenWhitaker authored Jun 24, 2021
2 parents de76e08 + 6235c2a commit e7686b2
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion docs/src/training/optimisers.md
@@ -107,7 +107,7 @@ Flux defines a special kind of optimiser simply called `Optimiser` which takes i
that will be fed into the next, and the resultant update will be applied to the parameter as usual. A classic use case is where adding decays is desirable. Flux defines some basic decays including `ExpDecay`, `InvDecay` etc.

```julia
-opt = Optimiser(ExpDecay(0.001, 0.1, 1000, 1e-4), Descent())
+opt = Optimiser(ExpDecay(1, 0.1, 1000, 1e-4), Descent())
```

Here we apply exponential decay to the `Descent` optimiser. The defaults of `ExpDecay` say that its learning rate will be decayed every 1000 steps.
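
(Not part of the diff: a quick sketch of how the composed optimiser above might be used end to end, assuming the `Flux.train!`/`Flux.params` API from the same era; the model, data, and loss below are invented purely for illustration.)

```julia
using Flux

# Toy model, data, and loss, made up for this example.
model = Dense(10, 1)
x, y = rand(Float32, 10, 100), rand(Float32, 1, 100)
loss(x, y) = Flux.Losses.mse(model(x), y)

# Descent supplies the base step (0.1 by default); ExpDecay shrinks it by a
# factor of 0.1 every 1000 steps, never below the clip value of 1e-4.
opt = Optimiser(ExpDecay(1, 0.1, 1000, 1e-4), Descent())

# One pass over a single batch with the composed optimiser.
Flux.train!(loss, Flux.params(model), [(x, y)], opt)
```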
