From 9b1c2d0ca9e12adce6a450e7593a37bf3227b81c Mon Sep 17 00:00:00 2001
From: "Anthony D. Blaom"
Date: Fri, 20 Jan 2023 16:14:40 +1300
Subject: [PATCH] minor doc improvements

---
 README.md         | 10 ++++++++--
 docs/src/index.md |  2 +-
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 53594f1..38a9d52 100644
--- a/README.md
+++ b/README.md
@@ -7,10 +7,16 @@
 This is a package gathering functionalities to solve a number of generalised linear regression/classification problems which, inherently, correspond to an optimisation problem of the form
 
 ```
-L(y, Xθ) + P(θ)
+L(y, X*θ) + P(θ)
 ```
 
-where `L` is a loss function and `P` is a penalty function (both of those can be scaled or composed).
+where:
+
+- `L` is a loss function
+- `X` is the `n` x `p` matrix of training observations
+- `θ` the length `p` vector of weights to be optimized
+- `P` is a penalty function
+
 Additional regression/classification methods which do not directly correspond to this formulation may be added in the future.
 
 The core aims of this package are:
diff --git a/docs/src/index.md b/docs/src/index.md
index 4a58a54..1f4f1ef 100644
--- a/docs/src/index.md
+++ b/docs/src/index.md
@@ -11,7 +11,7 @@ where:
 * ``y`` is the **target** or **response**, a vector of length ``n`` either of real values (_regression_) or integers (_classification_),
 * ``X`` is the **design** or **feature** matrix, a matrix of real values of size ``n \times p`` where ``p`` is the number of _features_ or _dimensions_,
 * ``\theta`` is a vector of ``p`` real valued coefficients to determine,
-* ``L`` is a **loss function**, a pre-determined function of ``\mathbb R^n`` to ``\mathbb R^+`` penalising the amplitude of the _residuals_ in a specific way,
+* ``L`` is a **loss function**, a pre-determined function of ``\mathbb R^n \times \mathbb R^n`` to ``\mathbb R^+`` penalising the amplitude of the _residuals_ in a specific way,
 * ``P`` is a **penalty function**, a pre-determined function of ``\mathbb R^n`` to ``\mathbb R^+`` penalising the amplitude of the _coefficients_ in a specific way.
 
 A well known example is the [Ridge regression](https://en.wikipedia.org/wiki/Tikhonov_regularization) where the objective is to minimise:
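
As a minimal sketch of the objective form `L(y, X*θ) + P(θ)` that both touched files describe, here is the Ridge case mentioned in `docs/src/index.md` written out directly in Julia: a squared-error loss plus a scaled `l2` penalty. The names `ridge_objective` and `λ` are illustrative only and are not part of the MLJLinearModels API.

```julia
using LinearAlgebra  # standard library, for `dot`

# Illustrative only: the generic objective L(y, X*θ) + P(θ) specialised to
# Ridge regression, i.e. L(y, X*θ) = ‖y - X*θ‖² and P(θ) = λ‖θ‖².
function ridge_objective(θ, X, y; λ=1.0)
    r = y .- X * θ                    # residuals, a length-n vector
    return dot(r, r) + λ * dot(θ, θ)  # loss term plus scaled penalty term
end

# toy usage on synthetic data
X = randn(100, 3)
θ = randn(3)
y = X * θ .+ 0.1 .* randn(100)
ridge_objective(θ, X, y; λ=0.5)
```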