Add docs for folks coming from BenchmarkTools.jl (#25)

```@autodocs
Modules = [Chairmarks]
```

## Migrating from BenchmarkTools.jl

Chairmarks.jl has a similar samples/evals model to BenchmarkTools.jl. It preserves the keyword
arguments `samples`, `evals`, and `seconds`. Unlike BenchmarkTools.jl, the `seconds` argument
is honored even as it drops down to the order of 30μs (`@b @b hash(rand()) seconds=.00003`).
While accuracy does decay as the total number of evaluations and samples decreases, it
remains quite reasonable (e.g. I see noise of about 30% when benchmarking
`@b hash(rand()) seconds=.00003`). This makes it much more practical to perform
meta-analysis, such as computing the time it takes to hash arrays of a thousand different
lengths with `[@b hash(rand(n)) seconds=.001 for n in 1:1000]`.
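
For example, here is a minimal sketch of that meta-analysis, extended to pull out the
measured times (assuming the value returned by `@b` exposes its runtime in seconds as a
`time` field):

```julia
using Chairmarks

# Benchmark hashing arrays of 1000 different lengths, spending ~1 ms on each.
benches = [@b hash(rand(n)) seconds=.001 for n in 1:1000]

# Collect the per-call runtimes; total wall time is on the order of one second.
times = [b.time for b in benches]
```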

Both BenchmarkTools.jl and Chairmarks.jl use an evaluation model structured like this:

```julia
init()
results = []
for _ in 1:samples
    setup()
    t0 = time()
    for _ in 1:evals
        f()
    end
    t1 = time()
    push!(results, t1 - t0)
    teardown()
end
return results
```
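
In both packages, the `samples` and `evals` in this model correspond to the keyword
arguments of the same names, while `seconds` bounds the total time budget. For instance,
a sketch that pins both explicitly (the specific values are arbitrary):

```julia
using Chairmarks

# 200 samples of 5 evaluations each, instead of letting the tuner decide.
@be rand(100) sum samples=200 evals=5
```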

In BenchmarkTools, you specify `f` and `setup` with the invocation
`@benchmark f setup=(setup)`. In Chairmarks, you specify `f` and `setup` with the invocation
`@be setup f`. In BenchmarkTools, `setup` and `f` communicate via shared local variables in
code generated by BenchmarkTools.jl. In Chairmarks, the function `f` is passed the return
value of the function `setup` as an argument. Chairmarks also lets you specify a `teardown`,
which is not possible with BenchmarkTools, and an `init`, which can be emulated with
interpolation in BenchmarkTools.
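
As a concrete illustration, here is the four-slot form from the last row of the table
below, annotated slot by slot (a sketch; the array length is arbitrary):

```julia
using Chairmarks, Random

# init:     rand(1000)              -- runs once, allocating the array
# setup:    rand!                   -- re-randomizes that array before each sample
# f:        sort!                   -- the call being timed
# teardown: issorted(_) || error()  -- validates the result after each sample
@b rand(1000) rand! sort! issorted(_) || error() evals=1
```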

Here are some examples of corresponding invocations in BenchmarkTools.jl and Chairmarks.jl:

| BenchmarkTools.jl | Chairmarks |
|-------------------|-------------|
| `@btime rand();` | `@b rand()` |
| `@btime sort!(x) setup=(x=rand(100)) evals=1;` | `@b rand(100) sort! evals=1` |
| `@btime sort!(x, rev=true) setup=(x=rand(100)) evals=1;` | `@b rand(100) sort!(_, rev=true) evals=1` |
| `@btime issorted(sort!(x)) \|\| error() setup=(x=rand(100)) evals=1` | `@b rand(100) sort! issorted(_) \|\| error() evals=1` |
| `let X = rand(100); @btime issorted(sort!($X)) \|\| error() setup=(rand!($X)) evals=1 end` | `@b rand(100) rand! sort! issorted(_) \|\| error() evals=1` |

For automated regression tests, [RegressionTests.jl](https://github.com/LilithHafner/RegressionTests.jl)
is a work-in-progress replacement for the `BenchmarkGroup` and `judge` system. Because
Chairmarks is efficiently and stably autotuned, and RegressionTests.jl is inherently robust
to noise, there is no need for parameter caching.

### Nonconstant globals and interpolation

The arguments to Chairmarks.jl are lowered to functions, not quoted expressions.
Consequently, there is no need to interpolate variables, and interpolation is therefore not
supported. Like BenchmarkTools.jl, benchmarks that include access to nonconstant globals
incur a performance overhead for that access. Two possible ways to avoid this are to make
the global constant, or to include it in the setup or initialization phase. For
example,

```jldoctest
julia> x = 6 # nonconstant global
6

julia> @b rand(x) # slow
39.616 ns (1.02 allocs: 112.630 bytes)

julia> @b x rand # fast
18.939 ns (1 allocs: 112 bytes)

julia> const X = x
6

julia> @b rand(X) # fast
18.860 ns (1 allocs: 112 bytes)
```
