Merge pull request #133 from ablaom/mlj-docstrings
Minor MLJ docstring fixes
pasq-cat authored Nov 26, 2024
2 parents d3f0ed5 + c53c2f3 commit aae3ee3
Showing 1 changed file with 14 additions and 8 deletions.
22 changes: 14 additions & 8 deletions src/direct_mlj.jl
@@ -582,9 +582,12 @@ $(MMI.doc_header(LaplaceClassifier))
# Training data
-In MLJ or MLJBase, given a dataset X,y and a Flux Chain adapt to the dataset, pass the chain to the model
+In MLJ or MLJBase, given a dataset X,y and a `Flux_Chain` adapted to the dataset, pass the
+chain to the model
```julia
laplace_model = LaplaceClassifier(model = Flux_Chain,kwargs...)
```
then bind an instance `laplace_model` to data with
@@ -605,7 +608,7 @@ Train the machine using `fit!(mach, rows=...)`.
# Hyperparameters (format: name-type-default value-restrictions)
-- `model::Union{Flux.Chain,Nothing} = nothing`: Either nothing or a Flux model provided by the user and compatible with the dataset. In the former case, LaplaceRedux will use a standard MLP with 2 hidden layer with 20 neurons each.
+- `model::Union{Flux.Chain,Nothing} = nothing`: Either nothing or a Flux model provided by the user and compatible with the dataset. In the former case, LaplaceRedux will use a standard MLP with 2 hidden layers with 20 neurons each.
- `flux_loss = Flux.Losses.logitcrossentropy` : a Flux loss function
@@ -642,8 +645,6 @@ Train the machine using `fit!(mach, rows=...)`.
- `predict_mode(mach, Xnew)`: instead return the mode of each
prediction above.
-- `training_losses(mach)`: return the loss history from report
# Fitted parameters
@@ -675,6 +676,8 @@ The fields of `report(mach)` are:
# Accessor functions
+- `training_losses(mach)`: return the loss history from report
# Examples
@@ -721,9 +724,12 @@ $(MMI.doc_header(LaplaceRegressor))
# Training data
-In MLJ or MLJBase, given a dataset X,y and a Flux Chain adapt to the dataset, pass the chain to the model
+In MLJ or MLJBase, given a dataset X,y and a `Flux_Chain` adapted to the dataset, pass the
+chain to the model
```julia
laplace_model = LaplaceRegressor(model = Flux_Chain,kwargs...)
```
then bind an instance `laplace_model` to data with
@@ -743,7 +749,7 @@ Train the machine using `fit!(mach, rows=...)`.
# Hyperparameters (format: name-type-default value-restrictions)
-- `model::Union{Flux.Chain,Nothing} = nothing`: Either nothing or a Flux model provided by the user and compatible with the dataset. In the former case, LaplaceRedux will use a standard MLP with 2 hidden layer with 20 neurons each.
+- `model::Union{Flux.Chain,Nothing} = nothing`: Either nothing or a Flux model provided by the user and compatible with the dataset. In the former case, LaplaceRedux will use a standard MLP with 2 hidden layers with 20 neurons each.
- `flux_loss = Flux.Losses.logitcrossentropy` : a Flux loss function
- `optimiser = Adam()` a Flux optimiser
@@ -778,8 +784,6 @@ Train the machine using `fit!(mach, rows=...)`.
- `predict_mode(mach, Xnew)`: instead return the mode of each
prediction above.
-- `training_losses(mach)`: return the loss history from report
# Fitted parameters
@@ -813,6 +817,8 @@ The fields of `report(mach)` are:
# Accessor functions
+- `training_losses(mach)`: return the loss history from report
# Examples
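Taken together, the docstrings amended in this diff describe the standard MLJ workflow for both models: construct the model (optionally with a user-supplied Flux chain), bind it to data with `machine`, train with `fit!`, then use `predict` and the `training_losses` accessor. A minimal sketch of that workflow, assuming the LaplaceRedux, MLJ, and Flux packages are installed; the data, input/output dimensions, and chain architecture here are illustrative assumptions, not taken from the diff:

```julia
using MLJ, Flux, LaplaceRedux

# Illustrative data: 100 observations, 2 features, continuous target.
X = MLJ.table(rand(Float32, 100, 2))
y = rand(Float32, 100)

# A Flux chain compatible with the data. Per the amended docstring, passing
# `model = nothing` instead falls back to a standard MLP with 2 hidden
# layers of 20 neurons each.
chain = Chain(Dense(2, 20, relu), Dense(20, 20, relu), Dense(20, 1))

laplace_model = LaplaceRegressor(model = chain)

mach = machine(laplace_model, X, y)  # bind the model instance to data
fit!(mach)                           # train

yhat   = predict(mach, X)            # predictions (distributional)
losses = training_losses(mach)       # loss history, via the accessor moved in this diff
```

The classifier path is analogous: `LaplaceClassifier(model = chain)` with a categorical `y`, plus `predict_mode(mach, Xnew)` to reduce each probabilistic prediction to its mode, as the docstrings note.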
