Improve docs #38

Merged 9 commits on Mar 2, 2024
6 changes: 6 additions & 0 deletions docs/make.jl
@@ -17,11 +17,17 @@ makedocs(;
pages=[
"Home" => "index.md",
"Why use Chairmarks?" => "why.md",
"How To" => [
"...migrate from BenchmarkTools" => "migration.md",
"...install Charimarks ergonomically" => "autoload.md",
"...perform automated regression testing on a package" => "regressions.md",
],
"Reference" => "reference.md",
],
)

deploydocs(;
repo="github.com/LilithHafner/Chairmarks.jl",
devbranch="main",
push_preview=true,
)
60 changes: 60 additions & 0 deletions docs/src/autoload.md
@@ -0,0 +1,60 @@
```@meta
CurrentModule = Chairmarks
DocTestSetup = quote
using Chairmarks
end
DocTestFilters = [r"\d\d?\d?\.\d{3} [μmn]?s( \(.*\))?"]
```

# [How to integrate Chairmarks into your workflow](@id installation)

There are several ways to use Chairmarks in your interactive sessions. They are listed below
from the simplest to set up (first) to the most streamlined user experience (last).

1. Add Chairmarks to your default environment with `import Pkg; Pkg.activate(); Pkg.add("Chairmarks")`.
Chairmarks has no non-stdlib dependencies and precompiles in less than one second, so
this should not have any adverse impact on your environments, load times, or package
installation times.

2. Add Chairmarks to your default environment and put `isinteractive() && using Chairmarks`
in your startup.jl file (a minimal example appears after this list). This will make
Chairmarks available in all your REPL sessions while still requiring an explicit load in
scripts and packages. This will slow down launching a new Julia session by a few
milliseconds (for comparison, this is about 20x faster than loading `Revise` in your
startup.jl file).

3. [**Recommended**] Add Chairmarks to your default environment and put the following script in your
startup.jl file to automatically load it when you type `@b` or `@be` in the REPL:

```julia
if isinteractive() && (local REPL = get(Base.loaded_modules, Base.PkgId(Base.UUID("3fa0cd96-eef1-5676-8a61-b3b8758bbffb"), "REPL"), nothing); REPL !== nothing)
# https://github.com/fredrikekre/.dotfiles/blob/65b96f492da775702c05dd2fd460055f0706457b/.julia/config/startup.jl
# Automatically load tooling on demand. These packages should be stdlibs or part of the default environment.
# - Chairmarks.jl when encountering @b or @be
# - add more as desired...
local tooling = [
["@b", "@be"] => :Chairmarks,
# add more here...
]

local tooling_dict = Dict(Symbol(k) => v for (ks, v) in tooling for k in ks)
function load_tools(ast)
if ast isa Expr
if ast.head === :macrocall
pkg = get(tooling_dict, ast.args[1], nothing)
if pkg !== nothing && !isdefined(Main, pkg)
@info "Loading $pkg ..."
try
Core.eval(Main, :(using $pkg))
catch err
@info "Failed to automatically load $pkg" exception=err
end
end
end
foreach(load_tools, ast.args)
end
ast
end

pushfirst!(REPL.repl_ast_transforms, load_tools)
end
```
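
For option 2, a minimal sketch of the startup file, assuming the default location
`~/.julia/config/startup.jl`:

```julia
# ~/.julia/config/startup.jl
# Load Chairmarks only in interactive sessions, so scripts and package
# precompilation are unaffected.
isinteractive() && using Chairmarks
```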
203 changes: 14 additions & 189 deletions docs/src/index.md
@@ -8,201 +8,26 @@ DocTestFilters = [r"\d\d?\d?\.\d{3} [μmn]?s( \(.*\))?"]

# Chairmarks

[Chairmarks.jl](https://github.com/LilithHafner/Chairmarks.jl) provides benchmarks with back support. Often hundreds of times faster than BenchmarkTools.jl without compromising on accuracy.
[Chairmarks.jl](https://github.com/LilithHafner/Chairmarks.jl) measures performance hundreds
of times faster than BenchmarkTools.jl without compromising on accuracy.

## [Installation / Integrating Chairmarks into your workflow](@id Installation)
Installation

### For interactive use

There are several ways to use Chairmarks in your interactive sessions. They are listed below
from the simplest to set up (first) to the most streamlined user experience (last).

1. Add Chairmarks to your default environment with `import Pkg; Pkg.activate(); Pkg.add("Chairmarks")`.
Chairmarks has no non-stdlib dependencies and precompiles in less than one second, so
this should not have any adverse impact on your environments, load times, or package
installation times.

2. Add Chairmarks to your default environment and put `isinteractive() && using Chairmarks`
in your startup.jl file. This will make Chairmarks available in all your REPL sessions
while still requiring an explicit load in scripts and packages. This will slow down
launching a new Julia session by a few milliseconds (for comparison, this is about 20x
faster than loading `Revise` in your startup.jl file).

3. [**Recommended**] Add Chairmarks to your default environment and put the following script in your
startup.jl file to automatically load it when you type `@b` or `@be` in the REPL:

```julia
if isinteractive() && (local REPL = get(Base.loaded_modules, Base.PkgId(Base.UUID("3fa0cd96-eef1-5676-8a61-b3b8758bbffb"), "REPL"), nothing); REPL !== nothing)
# https://github.com/fredrikekre/.dotfiles/blob/65b96f492da775702c05dd2fd460055f0706457b/.julia/config/startup.jl
# Automatically load tooling on demand. These packages should be stdlibs or part of the default environment.
# - Chairmarks.jl when encountering @b or @be
# - add more as desired...
local tooling = [
["@b", "@be"] => :Chairmarks,
# add more here...
]

local tooling_dict = Dict(Symbol(k) => v for (ks, v) in tooling for k in ks)
function load_tools(ast)
if ast isa Expr
if ast.head === :macrocall
pkg = get(tooling_dict, ast.args[1], nothing)
if pkg !== nothing && !isdefined(Main, pkg)
@info "Loading $pkg ..."
try
Core.eval(Main, :(using $pkg))
catch err
@info "Failed to automatically load $pkg" exception=err
end
end
end
foreach(load_tools, ast.args)
end
ast
end

pushfirst!(REPL.repl_ast_transforms, load_tools)
end
```

```julia-repl
julia> import Pkg; Pkg.add("Chairmarks")
```

### For regression testing

Use [`RegressionTests.jl`](https://github.com/LilithHafner/RegressionTests.jl)! Make a file
`bench/runbenchmarks.jl` with the following content:

```julia
using Chairmarks, RegressionTests
using MyPackage

@track @be MyPackage.compute_thing(1)
@track @be MyPackage.compute_thing(1000)
```

And add the following to your `test/runtests.jl`:

```julia
using RegressionTests

@testset "Regression tests" begin
RegressionTests.test(skip_unsupported_platforms=true)
end
```

See the [RegressionTests.jl documentation](https://github.com/LilithHafner/RegressionTests.jl)
for more information.


## Migrating from BenchmarkTools.jl

Chairmarks.jl has a similar samples/evals model to BenchmarkTools. It preserves the keyword
arguments `samples`, `evals`, and `seconds`. Unlike BenchmarkTools.jl, the `seconds` argument
is honored even as it drops down to the order of 30μs (`@b @b hash(rand()) seconds=.00003`).
While accuracy does decay as the total number of evaluations and samples decreases, it
remains quite reasonable (e.g. I see about 30% noise when benchmarking
`@b hash(rand()) seconds=.00003`). This makes it much more reasonable to perform
meta-analysis, such as computing the time it takes to hash a thousand arrays of different
lengths with `[@b hash(rand(n)) seconds=.001 for n in 1:1000]`.
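
For instance, a sketch of that kind of meta-analysis (the `.time` field, in seconds, is
described in the Fields table below):

```julia
using Chairmarks

# Benchmark hashing arrays of 1000 different lengths, spending about 1 ms on each,
# and collect the measured runtimes (in seconds) for later analysis.
times = [(@b hash(rand(n)) seconds=.001).time for n in 1:1000]
```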

Both BenchmarkTools.jl and Chairmarks.jl use an evaluation model structured like this:

```julia
# `samples` and `evals` correspond to the keyword arguments of the same names
init()
results = []
for _ in 1:samples
    setup()
    t0 = time()
    for _ in 1:evals
        f()
    end
    t1 = time()
    push!(results, t1 - t0)
    teardown()
end
return results
```

In BenchmarkTools, you specify `f` and `setup` with the invocation
`@benchmark f setup=(setup)`. In Chairmarks, you specify `f` and `setup` with the invocation
`@be setup f`. In BenchmarkTools, `setup` and `f` communicate via shared local variables in
code generated by BenchmarkTools.jl. In Chairmarks, the function `f` is passed the return
value of the function `setup` as an argument. Chairmarks also lets you specify `teardown`,
which is not possible with BenchmarkTools, and an `init`, which can be emulated with
interpolation in BenchmarkTools.

Here are some examples of corresponding invocations in BenchmarkTools.jl and Chairmarks.jl:

| BenchmarkTools.jl | Chairmarks |
|-------------------|-------------|
| `@btime rand();` | `@b rand()` |
| `@btime sort!(x) setup=(x=rand(100)) evals=1;` | `@b rand(100) sort! evals=1` |
| `@btime sort!(x, rev=true) setup=(x=rand(100)) evals=1;` | `@b rand(100) sort!(_, rev=true) evals=1` |
| `@btime issorted(sort!(x)) \|\| error() setup=(x=rand(100)) evals=1` | `@b rand(100) sort! issorted(_) \|\| error() evals=1` |
| `let X = rand(100); @btime issorted(sort!($X)) \|\| error() setup=(rand!($X)) evals=1 end` | `@b rand(100) rand! sort! issorted(_) \|\| error() evals=1` |
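
To unpack how the positional arguments map onto the evaluation model above, here is the last
table row annotated. The call is taken verbatim from the table; the slot labels follow the
pipeline described above:

```julia
using Chairmarks

# init     = rand(100)                 (run once; its result is reused across samples)
# setup    = rand!                     (refills the array before each sample)
# f        = sort!                     (the code being timed)
# teardown = issorted(_) || error()    (checks the result after timing)
@b rand(100) rand! sort! issorted(_) || error() evals=1
```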

For automated regression tests, [RegressionTests.jl](https://github.com/LilithHafner/RegressionTests.jl)
is a work-in-progress replacement for the `BenchmarkGroup` and `@benchmarkable` system.
Because Chairmarks is efficiently and stably autotuned and RegressionTests.jl is inherently
robust to noise, there is no need for parameter caching.

### Toplevel API

Chairmarks always returns the benchmark result, while BenchmarkTools mirrors the more
diverse base API.

| BenchmarkTools | Chairmarks | Base |
|-------------------------|--------------------|----------------|
| `minimum(@benchmark _)` | `@b` | N/A |
| `@benchmark` | `@be` | N/A |
| `@belapsed` | `(@b _).time` | `@elapsed` |
| `@btime` | `display(@b _); _` | `@time` |
| N/A | `(@b _).allocs` | `@allocations` |
| `@ballocated` | `(@b _).bytes` | `@allocated` |

Chairmarks may provide `@belapsed`, `@btime`, `@ballocated`, and `@ballocations` in the
future.
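
For example, a sketch of the `@belapsed`-style usage from the table, which is just field
access on the result of `@b`:

```julia
using Chairmarks

# Equivalent in spirit to BenchmarkTools' @belapsed: run the benchmark and keep
# only the runtime in seconds.
elapsed = (@b sum(rand(1000))).time
```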

### Fields

Benchmark results have the following fields:

| Chairmarks | BenchmarkTools | Description |
|------------------------|---------------------|------------------------|
| `x.time` | `x.time/1e9` | Runtime in seconds |
| `x.time*1e9` | `x.time` | Runtime in nanoseconds |
| `x.allocs` | `x.allocs` | Number of allocations |
| `x.bytes` | `x.memory` | Number of bytes allocated across all allocations |
| `x.gc_fraction` | `x.gctime / x.time` | Fraction of time spent in garbage collection |
| `x.gc_fraction*x.time` | `x.gctime` | Time spent in garbage collection |
| `x.compile_fraction` | N/A | Fraction of time spent compiling |
| `x.recompile_fraction` | N/A | Fraction of compile time which was spent on recompilation |
| `x.warmup` | `true` | Whether or not the sample had a warmup run before it |
| `x.checksum` | N/A | A checksum computed from the return values of the benchmarked code |
| `x.evals` | `x.params.evals` | The number of evaluations in the sample |

Note that these fields are likely to change in Chairmarks 1.0.
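
A short sketch of reading these fields from a benchmark result:

```julia
using Chairmarks

result = @b hash(rand(1000))

result.time    # runtime in seconds
result.allocs  # number of allocations
result.bytes   # bytes allocated across all allocations
result.evals   # number of evaluations in the sample
```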

### Nonconstant globals and interpolation

The arguments to Chairmarks.jl are lowered to functions, not quoted expressions.
Consequently, there is no need to interpolate variables, and interpolation is therefore not
supported. As with BenchmarkTools.jl, benchmarks that access nonconstant globals incur a
performance overhead for that access. Two possible ways to avoid this are to make the
global constant or to include it in the setup or initialization phase. For
example,

```jldoctest
julia> x = 6 # nonconstant global
6

julia> @b rand(x) # slow
39.616 ns (1.02 allocs: 112.630 bytes)

julia> @b x rand # fast
18.939 ns (1 allocs: 112 bytes)

julia> const X = x
6

julia> @b rand(X) # fast
18.860 ns (1 allocs: 112 bytes)
```

Usage

```jldoctest
julia> using Chairmarks

julia> @b rand(1000) # How long does it take to generate a random array of length 1000?
720.214 ns (3 allocs: 7.875 KiB)

julia> @b rand(1000) hash # How long does it take to hash that array?
1.689 μs

julia> @b rand(1000) _.*5 # How long does it take to multiply it by 5 element wise?
172.970 ns (3 allocs: 7.875 KiB)
```