
Merge #1062
1062: docstring ensure signature code formatting r=CarloLucibello a=visr

by using a four-space indent instead of two.

Fixes issues seen here:

![image](https://user-images.githubusercontent.com/4471859/75627427-54aa6600-5bd0-11ea-93d3-92901d44db59.png)

In the screenshot, the type signature gets no code formatting, and a stray code block is introduced that throws off the rest of the formatting.
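
For reference, the convention this change enforces, shown as a minimal sketch with a made-up function: the signature line inside the docstring is indented by four spaces, so the Julia docstring renderer treats it as a code block.

```julia
"""
    scalemul(x; scale=1)

Multiply `x` by `scale`. Hypothetical function, shown only to illustrate the
four-space indent on the signature line above.
"""
scalemul(x; scale=1) = scale * x
```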

Co-authored-by: Martijn Visser <[email protected]>
bors[bot] and visr authored Mar 1, 2020
2 parents 069d228 + d67a2e4 commit 3cf131b
Showing 5 changed files with 11 additions and 15 deletions.
2 changes: 1 addition & 1 deletion src/data/dataloader.jl
@@ -11,7 +11,7 @@ struct DataLoader
end

"""
-  DataLoader(data...; batchsize=1, shuffle=false, partial=true)
+    DataLoader(data...; batchsize=1, shuffle=false, partial=true)
An object that iterates over mini-batches of `data`, each mini-batch containing `batchsize` observations
(except possibly the last one).
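
A quick illustration of the documented constructor (a sketch only; it assumes `DataLoader` is reachable via `Flux.Data`, and the toy arrays are made up):

```julia
using Flux.Data: DataLoader

# Toy data: 10 observations with 4 features each, plus one label per observation.
X = rand(Float32, 4, 10)
Y = rand(0:1, 10)

# Mini-batches of 3 observations; partial=true keeps the final, smaller batch.
loader = DataLoader(X, Y; batchsize=3, shuffle=true)
for (x, y) in loader
    @show size(x) size(y)   # e.g. (4, 3) and (3,)
end
```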
2 changes: 0 additions & 2 deletions src/data/iris.jl
@@ -28,7 +28,6 @@ function load()
end

"""
labels()
Get the labels of the iris dataset, a 150 element array of strings listing the
@@ -53,7 +52,6 @@ function labels()
end

"""
features()
Get the features of the iris dataset. This is a 4x150 matrix of Float64
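
A small usage sketch of the two documented accessors (assuming they live under `Flux.Data.Iris`; the first call may download the dataset):

```julia
using Flux

y = Flux.Data.Iris.labels()     # 150-element Vector{String}: species of each example
X = Flux.Data.Iris.features()   # 4×150 Matrix{Float64}: measurements for each example

size(X), length(y)              # ((4, 150), 150)
```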
16 changes: 7 additions & 9 deletions src/optimise/optimisers.jl
@@ -6,7 +6,7 @@ const ϵ = 1e-8
# TODO: should use weak refs

"""
-  Descent(η)
+    Descent(η)
Classic gradient descent optimiser with learning rate `η`.
For each parameter `p` and its gradient `δp`, this runs `p -= η*δp`
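
A minimal sketch of what `Descent` does to a single parameter array (the values and the 0.1 learning rate are made up; `update!` is assumed to dispatch to the optimiser's `apply!` as documented further down):

```julia
using Flux.Optimise: Descent, update!

W  = [1.0 2.0; 3.0 4.0]   # a made-up parameter
gW = ones(2, 2)           # a made-up gradient

opt = Descent(0.1)
update!(opt, W, gW)       # roughly W .-= 0.1 .* gW
W                         # ≈ [0.9 1.9; 2.9 3.9]
```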
@@ -441,17 +441,16 @@ function apply!(o::Optimiser, x, Δ)
end

"""
-  InvDecay(γ)
+    InvDecay(γ)
Applies inverse time decay to an optimiser, i.e., the effective step size at iteration `n` is `eta / (1 + γ * n)` where `eta` is the initial step size. The wrapped optimiser's step size is not modified.
```
## Parameters
- gamma (γ): Defaults to `0.001`
## Example
```julia
-Optimiser(InvDecay(..), Opt(..))
+Optimiser(InvDecay(..), Opt(..))
```
"""
mutable struct InvDecay
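
To make the `Optimiser(InvDecay(..), Opt(..))` placeholder concrete, a hedged sketch (the numbers are illustrative only):

```julia
using Flux.Optimise: Optimiser, InvDecay, Descent

# Inverse time decay wrapped around plain gradient descent:
# the effective step size at iteration n is 0.1 / (1 + 0.001 * n).
opt = Optimiser(InvDecay(0.001), Descent(0.1))
```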
@@ -470,7 +469,7 @@ function apply!(o::InvDecay, x, Δ)
end

"""
-  ExpDecay(eta, decay, decay_step, clip)
+    ExpDecay(eta, decay, decay_step, clip)
Discount the learning rate `eta` by a multiplicative factor `decay` every `decay_step` till a minimum of `clip`.
@@ -483,9 +482,8 @@ Discount the learning rate `eta` by a multiplicative factor `decay` every `decay
## Example
To apply exponential decay to an optimiser:
```julia
-Optimiser(ExpDecay(..), Opt(..))
-opt = Optimiser(ExpDecay(), ADAM())
+Optimiser(ExpDecay(..), Opt(..))
+opt = Optimiser(ExpDecay(), ADAM())
```
"""
mutable struct ExpDecay
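
Filling in the placeholders from the example above with concrete, made-up values:

```julia
using Flux.Optimise: Optimiser, ExpDecay, ADAM

# ExpDecay(eta, decay, decay_step, clip): the decaying factor starts at 0.001
# and is discounted by 0.1 every 1000 steps, never dropping below 1e-4.
opt = Optimiser(ExpDecay(0.001, 0.1, 1000, 1e-4), ADAM())
```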
@@ -509,7 +507,7 @@ function apply!(o::ExpDecay, x, Δ)
end

"""
-  WeightDecay(wd)
+    WeightDecay(wd)
Decays the weight by `wd`
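
A typical (hypothetical) use is composing it with another optimiser to get L2-style regularisation:

```julia
using Flux.Optimise: Optimiser, WeightDecay, ADAM

# Adds wd * p to each gradient before ADAM applies its update,
# i.e. an L2 penalty with coefficient 1e-4.
opt = Optimiser(WeightDecay(1e-4), ADAM())
```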
4 changes: 2 additions & 2 deletions src/optimise/train.jl
@@ -3,8 +3,8 @@ import Zygote: Params, gradient


"""
-  update!(opt, p, g)
-  update!(opt, ps::Params, gs)
+    update!(opt, p, g)
+    update!(opt, ps::Params, gs)
Perform an update step of the parameters `ps` (or the single parameter `p`)
according to optimizer `opt` and the gradients `gs` (the gradient `g`).
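
A sketch of the `Params` form in a tiny training step (model, loss, and data are all made up):

```julia
using Flux
using Flux.Optimise: ADAM, update!

m = Dense(4, 2)
loss(x, y) = Flux.mse(m(x), y)

x, y = rand(Float32, 4, 8), rand(Float32, 2, 8)
ps  = Flux.params(m)
opt = ADAM()

gs = gradient(() -> loss(x, y), ps)   # gradients with respect to ps
update!(opt, ps, gs)                  # one optimisation step
```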
2 changes: 1 addition & 1 deletion src/utils.jl
@@ -60,7 +60,7 @@ head(x::Tuple) = reverse(Base.tail(reverse(x)))
squeezebatch(x) = reshape(x, head(size(x)))

"""
-  batch(xs)
+    batch(xs)
Batch the arrays in `xs` into a single array.
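
A quick illustration of `batch` (a sketch; it assumes the function behaves as documented and stacks the arrays along a trailing dimension):

```julia
using Flux: batch

batch([[1, 2, 3], [4, 5, 6]])
# 3×2 Array:
#  1  4
#  2  5
#  3  6
```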
