docs(Core): improve docstrings, and @ref link more
tecosaur committed Sep 27, 2024
1 parent 0b37bbd commit 1f1aaea
Showing 9 changed files with 87 additions and 47 deletions.
3 changes: 2 additions & 1 deletion Core/src/model/dataplugin.jl
@@ -6,8 +6,9 @@ Register the plugin given by the variable `plugin_variable`, along with its
documentation (fetched by `@doc`). Should `:default` be given as the second
argument the plugin is also added to the list of default plugins.
This effectievly serves as a minor, but appreciable, convenience for the
This effectively serves as a minor, but appreciable, convenience for the
following pattern:
```julia
push!(PLUGINS, myplugin)
PLUGINS_DOCUMENTATION[myplugin.name] = @doc myplugin
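For context, the pattern this macro abbreviates can be sketched out by hand. A minimal sketch, assuming a `Plugin(name, advisors)` constructor and a `DEFAULT_PLUGINS` list of default plugin names (both assumptions, not shown in the hunk above):

```julia
# Hypothetical plugin for illustration; the advisor vector is left empty.
"""
An example plugin that does nothing.
"""
const myplugin = Plugin("myplugin", Function[])

# The convenience the macro provides, spelled out manually:
push!(PLUGINS, myplugin)
PLUGINS_DOCUMENTATION[myplugin.name] = @doc myplugin
push!(DEFAULT_PLUGINS, myplugin.name)  # only when `:default` is given (assumed list name)
```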
31 changes: 18 additions & 13 deletions Core/src/model/globals.jl
@@ -1,5 +1,5 @@
"""
The `DataCollection.version` set on all created `DataCollection`s, and assumed
The `DataCollection.version` set on all created [`DataCollection`](@ref)s, and assumed
when reading any Data.toml files which do not set `data_config_version`.
"""
const LATEST_DATA_CONFIG_VERSION = 0 # while in alpha
@@ -33,7 +33,7 @@ List of `(category::Symbol, named::Symbol) => docs::Any` forms.
const TRANSFORMER_DOCUMENTATION = Pair{Tuple{Symbol, Symbol}, Any}[]

"""
The set of packages loaded by each module via `@addpkg`, for import with `@require`.
The set of packages loaded by each module via [`@addpkg`](@ref), for import with [`@require`](@ref).
More specifically, when a module M invokes `@addpkg pkg id` then
`EXTRA_PACKAGES[M][pkg] = id` is set, and then this information is used
@@ -44,12 +44,12 @@ const EXTRA_PACKAGES = Dict{Module, Dict{Symbol, Base.PkgId}}()
# For use in construction

"""
The default `priority` field value for instances of `DataTransformer`.
The default `priority` field value for instances of [`DataTransformer`](@ref).
"""
const DEFAULT_DATATRANSFORMER_PRIORITY = 1

"""
The default `priority` field value for `Advice`s.
The default `priority` field value for [`Advice`](@ref)s.
"""
const DEFAULT_DATA_ADVISOR_PRIORITY = 1

@@ -62,7 +62,7 @@ const DATASET_REFERENCE_WRAPPER = ("📇DATASET<<", ">>")

"""
A regex which matches dataset references.
This is constructed from `DATASET_REFERENCE_WRAPPER`.
This is constructed from [`DATASET_REFERENCE_WRAPPER`](@ref).
"""
const DATASET_REFERENCE_REGEX =
Regex(string("^", DATASET_REFERENCE_WRAPPER[1],
@@ -72,9 +72,10 @@ const DATASET_REFERENCE_REGEX =
# For plugins / general information

"""
The data specification TOML format constructs a DataCollection, which itself
contains DataSets, comprised of metadata and DataTransformers.
```
The data specification TOML format constructs a [`DataCollection`](@ref), which itself
contains [`DataSet`](@ref)s, comprised of metadata and [`DataTransformer`](@ref)s.
```text
DataCollection
├─ DataSet
│  ├─ DataTransformer
@@ -85,9 +86,9 @@ DataCollection
Within each scope, there are certain reserved attributes. They are listed in
this Dict under the following keys:
- `:collection` for `DataCollection`
- `:dataset` for `DataSet`
- `:transformer` for `DataTransformer`
- `:collection` for [`DataCollection`](@ref)
- `:dataset` for [`DataSet`](@ref)
- `:transformer` for [`DataTransformer`](@ref)
"""
const DATA_CONFIG_RESERVED_ATTRIBUTES =
Dict(:collection => ["data_config_version", "name", "uuid", "plugins", "config"],
@@ -122,7 +123,11 @@ const DATA_CONFIG_KEY_SORT_MAPPING =

"""
A mapping from severity symbols to integers.
This is used to assist with more readable construction of `LintItem`s.
This is used to assist with more readable construction of [`LintItem`](@ref)s.
See also: [`LINT_SEVERITY_MESSAGES`](@ref) for the reverse mapping of integer to
severity title string.
"""
const LINT_SEVERITY_MAPPING =
Dict(:debug => 0x05,
@@ -132,7 +137,7 @@ const LINT_SEVERITY_MAPPING =
:error => 0x01)

"""
A mapping from severity numbers (see `LINT_SEVERITY_MAPPING`) to a tuple
A mapping from severity numbers (see [`LINT_SEVERITY_MAPPING`](@ref)) to a tuple
giving the color the message should be accented with and the severity
title string.
"""
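To make the relationship between the two severity tables concrete, a small sketch (the `:warning` entry and the exact tuple layout of `LINT_SEVERITY_MESSAGES` are assumptions based on the docstrings above, not shown in the hunks):

```julia
# Construct-time: a symbolic severity becomes a compact integer code.
sev = LINT_SEVERITY_MAPPING[:error]         # 0x01, per the hunk above

# Report-time: the integer code maps back to an accent colour and a title.
color, title = LINT_SEVERITY_MESSAGES[sev]  # assumed (colour, title) tuple order
printstyled(stderr, title, '\n'; color = color)
```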
2 changes: 1 addition & 1 deletion Core/src/model/identification.jl
@@ -17,7 +17,7 @@ Identifier(ident::Identifier, ::Nothing; replace::Bool=false) =
Identifier(dataset::DataSet, collection::Union{Symbol, Nothing}=:name,
name::Symbol=something(collection, :name))
Create an `Identifier` referring to `dataset`, specifying the collection
Create an [`Identifier`](@ref) referring to `dataset`, specifying the collection
`dataset` comes from as well (when `collection` is not `nothing`) as all of its
parameters, but without any type information.
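A quick sketch of the constructor forms this docstring describes (`ds` is a hypothetical `DataSet`; the `:uuid` option is an assumption based on the `collection`/`name` arguments shown):

```julia
ident_by_name = Identifier(ds)           # collection and dataset referenced by name
ident_by_uuid = Identifier(ds, :uuid)    # reference the parent collection by UUID (assumed option)
ident_no_coll = Identifier(ds, nothing)  # leave the collection unspecified
```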
10 changes: 5 additions & 5 deletions Core/src/model/parameters.jl
@@ -6,13 +6,13 @@ Obtain a form (depending on `action`) of `value`, a property within `source`.
## Actions
**`:extract`** Look for DataSet references ("$(DATASET_REFERENCE_WRAPPER[1])...$(DATASET_REFERENCE_WRAPPER[2])") within
`value`, and turn them into `Identifier`s (the inverse of `:encode`).
**`:extract`** Look for [`DataSet`](@ref) references ("[`$(DATASET_REFERENCE_WRAPPER[1])$(DATASET_REFERENCE_WRAPPER[2])`](@ref DATASET_REFERENCE_WRAPPER)") within
`value`, and turn them into [`Identifier`](@ref)s (the inverse of `:encode`).
**`:resolve`** Look for `Identifier`s in `value`, and resolve them to the
referenced DataSet/value.
**`:resolve`** Look for [`Identifier`](@ref)s in `value`, and resolve them to the
referenced [`DataSet`](@ref)/value.
**`:encode`** Look for `Identifier`s in `value`, and turn them into DataSet references
**`:encode`** Look for [`Identifier`](@ref)s in `value`, and turn them into [`DataSet`](@ref) references
(the inverse of `:extract`).
"""
function dataset_parameters(collection::DataCollection, action::Val, params::Dict{String,Any})
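The three `action` values read naturally as a round trip. A sketch, assuming a `collection::DataCollection` and a `params` dictionary taken from a data specification:

```julia
# TOML strings containing dataset-reference wrappers become Identifiers…
extracted = dataset_parameters(collection, Val(:extract), params)

# …Identifiers can be resolved to the referenced DataSets/values…
resolved = dataset_parameters(collection, Val(:resolve), extracted)

# …and turned back into reference strings when writing the collection out.
encoded = dataset_parameters(collection, Val(:encode), extracted)
```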
11 changes: 6 additions & 5 deletions Core/src/model/parser.jl
@@ -105,7 +105,8 @@ The list of types is dynamically generated based on the available methods for
the data transformer.
In some cases, it makes sense for this to be explicitly defined for a particular
transformer. """
transformer.
"""
function supportedtypes end # See `interaction/externals.jl` for method definitions.

supportedtypes(DT::Type{<:DataTransformer}, spec::Dict{String, Any}, _::DataSet) =
@@ -123,7 +124,7 @@ supportedtypes(DT::Type{<:DataTransformer}, _::Dict{String, Any}) =
"""
fromspec(DT::Type{<:DataTransformer}, dataset::DataSet, spec::Dict{String, Any})
Create an `DT` of `dataset` according to `spec`.
Create an [`DT`](@ref DataTransformer) of `dataset` according to `spec`.
`DT` can either contain the driver name as a type parameter, or it will be read
from the `"driver"` key in `spec`.
@@ -182,10 +183,10 @@ end
fromspec(::Type{DataCollection}, spec::Dict{String, Any};
path::Union{String, Nothing}=nothing, mod::Module=Base.Main)
Create a `DataCollection` from `spec`.
Create a [`DataCollection`](@ref) from `spec`.
The `path` and `mod` keywords are used as the values for the corresponding
fields in the DataCollection.
fields in the [`DataCollection`](@ref).
"""
function fromspec(::Type{DataCollection}, spec::Dict{String, Any};
path::Union{String, Nothing}=nothing, mod::Module=Base.Main)
@@ -246,7 +247,7 @@ end
"""
fromspec(::Type{DataSet}, collection::DataCollection, name::String, spec::Dict{String, Any})
Create a `DataSet` for `collection` called `name`, according to `spec`.
Create a [`DataSet`](@ref) for `collection` called `name`, according to `spec`.
"""
function fromspec(::Type{DataSet}, collection::DataCollection, name::String, spec::Dict{String, Any})
uuid = UUID(@something get(spec, "uuid", nothing) begin
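As a rough illustration of the `fromspec` entry points described above (the TOML snippet and dataset spec are hypothetical; only keys named in the reserved-attribute list appear in them):

```julia
using TOML, UUIDs

spec = TOML.parse("""
data_config_version = 0
name = "example"
uuid = "11111111-2222-3333-4444-555555555555"
plugins = []
""")

# Build a whole collection from a parsed Data.toml…
collection = fromspec(DataCollection, spec; path = nothing, mod = Main)

# …or a single dataset from its own sub-table.
dataset = fromspec(DataSet, collection, "mydata",
                   Dict{String, Any}("uuid" => string(uuid4())))
```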
4 changes: 2 additions & 2 deletions Core/src/model/stack.jl
@@ -1,7 +1,7 @@
"""
getlayer([::Nothing])
Return the first `DataCollection` on the `STACK`.
Return the first [`DataCollection`](@ref) on the [`STACK`](@ref).
"""
function getlayer(::Nothing = nothing)
length(STACK) == 0 && throw(EmptyStackError())
@@ -12,7 +12,7 @@ end
getlayer(name::AbstractString)
getlayer(uuid::UUID)
Find the `DataCollection` in `STACK` with `name`/`uuid`.
Find the [`DataCollection`](@ref) in [`STACK`](@ref) with `name`/`uuid`.
"""
function getlayer(name::AbstractString)
length(STACK) == 0 && throw(EmptyStackError())
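A sketch of the lookups `getlayer` supports, per the docstrings above (the name and UUID are placeholders):

```julia
using UUIDs

top   = getlayer()            # first DataCollection on STACK (throws EmptyStackError if empty)
named = getlayer("example")   # look a collection up by name
byid  = getlayer(UUID("11111111-2222-3333-4444-555555555555"))  # or by UUID
```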
39 changes: 24 additions & 15 deletions Core/src/model/types.jl
@@ -8,6 +8,8 @@ type name and the module it belongs to as Symbols.
able to express the full gamut of Julia types. In future this will be improved,
but it will likely always be restricted to a certain subset.
See also: [`typeify`](@ref), [`issubset`](@ref).
# Subtyping
While the subtype operator cannot work on QualifiedTypes (`<:` is a built-in),
@@ -58,6 +60,8 @@ Four fields are used to describe the target DataSet:
- `type`, the type that should be loaded from the dataset.
- `parameters`, any extra parameters of the dataset that should match.
See also: [`resolve`](@ref), [`refine`](@ref).
# Constructors
```julia
@@ -86,8 +90,9 @@ struct Identifier
end

"""
The supertype for methods producing or consuming data.
```
The `DataTransformer` type is the supertype for methods producing or consuming data.
```text
╭────loader─────╮
╵ ▼
Storage ◀────▶ Data Information
@@ -96,9 +101,9 @@ Storage ◀────▶ Data Information
```
There are three subtypes:
- `DataStorage`
- `DataLoader`
- `DataWrite`
- [`DataStorage`](@ref)
- [`DataLoader`](@ref)
- [`DataWrite`](@ref)
Each subtype takes a `Symbol` type parameter designating
the driver which should be used to perform the data operation.
@@ -109,7 +114,7 @@ In addition, each subtype has the following fields:
compared to alternatives. Lower values have higher priority.
- `parameters::Dict{String, Any}`, any parameters applied to the method.
"""
struct DataTransformer{ kind, driver}
struct DataTransformer{kind, driver}
dataset
type::Vector{QualifiedType}
priority::Int
@@ -122,6 +127,7 @@ const DataWriter = DataTransformer{:writer}

"""
Advice{func, context} <: Function
Advices allow for composable, highly flexible modifications of data by
encapsulating a function call. They are inspired by elisp's advice system,
namely the most versatile form — `:around` advice, and Clojure's advisors.
@@ -137,7 +143,7 @@ Short-hand return values with `post` or `kargs` omitted are also accepted, in
which case default values (the `identity` function and `(;)` respectively) will
be automatically substituted in.
```
```text
input=(action args kwargs)
┃ ┏╸post=identity
╭─╂────advisor 1────╂─╮
@@ -164,7 +170,7 @@ entry of the advice forms (i.e. at each stage `post = post ∘ extra` is run).
The overall behaviour can be thought of as *shells* of advice.
```
```text
╭╌ advisor 1 ╌╌╌╌╌╌╌╌─╮
┆ ╭╌ advisor 2 ╌╌╌╌╌╮ ┆
┆ ┆ ┆ ┆
@@ -233,17 +239,16 @@ struct DataSet
end

"""
A collection of `Advices` sourced from available Plugins.
An `AdviceAmalgamation` is a collection of [`Advice`](@ref)s sourced from available [`Plugin`](@ref)s.
Like individual `Advices`, a `AdviceAmalgamation` can be called
Like individual `Advice`s, an `AdviceAmalgamation` can be called
as a function. However, it also supports the following convenience syntax:
```
(::AdviceAmalgamation)(f::Function, args...; kargs...) # -> result
```
(::AdviceAmalgamation)(f::Function, args...; kargs...) # -> result
# Constructors
```
```julia
AdviceAmalgamation(advisors::Vector{Advice}, plugins_wanted::Vector{String}, plugins_used::Vector{String})
AdviceAmalgamation(plugins::Vector{String})
AdviceAmalgamation(collection::DataCollection)
@@ -275,7 +280,9 @@ abstract type SystemPath end
Crude stand in for a file path type, which is strangely absent from Base.
This allows for load/write method dispatch, and the distinguishing of
file content (as a String) from file paths.
file content (as a `String`) from file paths.
See also: [`DirPath`](@ref).
# Examples
@@ -297,6 +304,8 @@ Signifies that a given string is in fact a path to a directory.
This allows for load/write method dispatch, and the distinguishing of
file content (as a String) from file paths.
See also: [`FilePath`](@ref).
# Examples
```julia-repl
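To ground the `Advice` description above, a minimal sketch of an advisor in the `(action, args, kwargs)` → `(post, action, args, kwargs)` form it documents (the single-argument `Advice(f)` constructor is an assumption, presumably filling in the default priority mentioned in `globals.jl`):

```julia
# An advice that logs every call it wraps, then lets it run unchanged.
logging_advice = Advice(
    function (action::Function, args...; kwargs...)
        @info "Advised call" action nargs = length(args)
        # post = identity; action, args, and kwargs pass through untouched.
        (identity, action, args, (; kwargs...))
    end)
```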
32 changes: 28 additions & 4 deletions Core/src/model/usepkg.jl
@@ -10,6 +10,8 @@ to lazy-load the package and return its module.
Failure to either locate `name` or require `pkg` will result in an exception
being thrown.
See also: [`@require`](@ref), [`@addpkg`](@ref), [`try_install_pkg`](@ref).
"""
function get_package(pkg::Base.PkgId)
if !Base.root_module_exists(pkg)
@@ -30,6 +32,17 @@ end

const PKG_ID = Base.PkgId(Base.UUID("44cfe95a-1eb2-52ea-b672-e2afdf69b78f"), "Pkg")

"""
try_install_pkg(pkg::Base.PkgId)
Attempt to install the package identified by `pkg` if it is not currently installed.
This function is called automatically by [`get_package`](@ref) if the package is not currently loaded,
and calls `Pkg`'s `try_prompt_pkg_add` method from its `REPLExt` package extension. If the `REPL` has not been
loaded, nothing will be done.
"""
function try_install_pkg end

@static if VERSION > v"1.11-alpha1"
function try_install_pkg(pkg::Base.PkgId)
Pkg = try
@@ -63,9 +76,9 @@ end
@addpkg name::Symbol uuid::String
Register the package identified by `name` with UUID `uuid`.
This package may now be used with `@require \$name`.
This package may now be used with [`@require \$name`](@ref @require).
All @addpkg statements should lie within a module's `__init__` function.
All `@addpkg` statements should lie within a module's `__init__` function.
# Example
@@ -86,8 +99,11 @@ end

"""
invokepkglatest(f, args...; kwargs...)
Call `f(args...; kwargs...)` via `invokelatest`, and re-run if
PkgRequiredRerunNeeded is returned.
Call `f(args...; kwargs...)` via [`invokelatest`](@ref), and re-run if
`PkgRequiredRerunNeeded` is returned.
See also: [`@require`](@ref).
"""
function invokepkglatest(@nospecialize(f), @nospecialize args...; kwargs...)
result = Base.invokelatest(f, args...; kwargs...)
@@ -101,6 +117,14 @@ end
"""
@require Package
@require Package = "UUID"
Require the package `Package`, either previously registered with [`@addpkg`](@ref) or by UUID.
This sets a variable `Package` to the module of the package.
If the package is not currently loaded, DataToolkit will attempt to lazy-load the package
via an early return `PkgRequiredRerunNeeded` singleton. So long as this is seen by a calling
[`invokepkglatest`](@ref) the package will be loaded and the function re-run.
"""
macro require(pkg::Symbol)
quote
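Putting `@addpkg`, `@require`, and `invokepkglatest` together, a sketch of the lazy-loading flow the docstrings above describe (the module, function, and choice of CSV are illustrative; `DataToolkitCore` as the providing package is an assumption):

```julia
module LazyCSVExample

using DataToolkitCore  # assumed home of @addpkg and @require

function __init__()
    # Register the package so @require can find it later.
    @addpkg CSV "336ed68f-0bac-5ca0-87d4-7b16caf5d00b"
end

function read_table(path::String)
    @require CSV   # may return PkgRequiredRerunNeeded if CSV isn't loaded yet
    CSV.File(path)
end

end # module

# invokepkglatest notices a PkgRequiredRerunNeeded result, loads CSV, and re-runs.
using DataToolkitCore: invokepkglatest  # assumed importable from the same package
table = invokepkglatest(LazyCSVExample.read_table, "data.csv")
```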
2 changes: 1 addition & 1 deletion Core/src/model/writer.jl
@@ -1,7 +1,7 @@
"""
iswritable(dc::DataCollection)
Check whether a data collection is backed by a writable file.
Check whether the data collection `dc` is backed by a writable file.
"""
function Base.iswritable(dc::DataCollection)
!isnothing(dc.path) || return false
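A small usage sketch of the check described above (`write(collection)` as the save entry point is an assumption about the rest of this file, which is not shown):

```julia
if iswritable(collection)
    write(collection)  # assumed: persist the collection back to its backing file
else
    @warn "Data collection is not backed by a writable file; changes will not be saved"
end
```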
