reorganize docs and make some broken-formatting fixups (#37)
LilithHafner authored Mar 2, 2024
1 parent c3e19b6 commit 175516c
Showing 4 changed files with 154 additions and 134 deletions.
2 changes: 2 additions & 0 deletions docs/make.jl
@@ -16,6 +16,8 @@ makedocs(;
),
pages=[
"Home" => "index.md",
"Why use Chairmarks?" => "why.md",
"Reference" => "reference.md",
],
)

155 changes: 21 additions & 134 deletions docs/src/index.md
@@ -10,112 +10,6 @@ DocTestFilters = [r"\d\d?\d?\.\d{3} [μmn]?s( \(.*\))?"]

[Chairmarks.jl](https://github.com/LilithHafner/Chairmarks.jl) provides benchmarks with back support. It is often hundreds of times faster than BenchmarkTools.jl without compromising on accuracy.

## Precise

Capable of detecting a 1% difference in runtime under ideal conditions:

```jldoctest
julia> f(n) = sum(rand() for _ in 1:n)
f (generic function with 1 method)

julia> @b f(1000)
1.074 μs

julia> @b f(1000)
1.075 μs

julia> @b f(1000)
1.076 μs

julia> @b f(1010)
1.086 μs

julia> @b f(1010)
1.087 μs

julia> @b f(1010)
1.087 μs
```

## Concise

Chairmarks uses a concise pipeline syntax to define benchmarks. When providing a single argument, that argument is automatically wrapped in a function for higher performance and executed:

```jldoctest
julia> @b sort(rand(100))
1.500 μs (3 allocs: 2.625 KiB)
```

When providing two arguments, the first is setup code and only the runtime of the second is measured:

```jldoctest
julia> @b rand(100) sort
1.018 μs (2 allocs: 1.750 KiB)
```

You may use `_` in the later arguments to refer to the output of previous arguments:

```jldoctest
julia> @b rand(100) sort(_, by=x -> exp(-x))
5.521 μs (2 allocs: 1.750 KiB)
```

A third argument can run a "teardown" function to integrate testing into the benchmark and ensure that the benchmarked code is behaving correctly:

```jldoctest
julia> @b rand(100) sort(_, by=x -> exp(-x)) issorted(_) || error()
ERROR:
Stacktrace:
 [1] error()
[...]

julia> @b rand(100) sort(_, by=x -> exp(-x)) issorted(_, rev=true) || error()
5.358 μs (2 allocs: 1.750 KiB)
```

See [`@b`](@ref) for more info.

## Truthful

Chairmarks.jl automatically computes a checksum based on the results of the provided
computations, and returns that checksum to the user along with benchmark results. This makes
it impossible for the compiler to elide any part of the computation that has an impact on
its return value.

While the checksums are fast, one negative side effect of this is that they add a bit of
overhead to the measured runtime, and that overhead can vary depending on the function being
benchmarked. These checksums are performed by computing a map over the returned values and a
reduction over those mapped values. You can disable this by passing the `checksum=false`
keyword argument, possibly in combination with a custom teardown function that verifies
computation results. Be aware that as the compiler improves, it may become better at eliding
benchmarks whose results are not saved.

```jldoctest; filter=r"\d\d?\d?\.\d{3} [μmn]?s( \(.*\))?|0 ns|<0.001 ns"
julia> @b 1
0.713 ns

julia> @b 1.0
1.135 ns

julia> @b 1.0 checksum=false
0 ns
```

You may experiment with custom reductions using the internal `_map` and `_reduction` keyword
arguments. The default maps and reductions (`Chairmarks.default_map` and
`Chairmarks.default_reduction`) are internal and subject to change and/or removal in the future.

## Efficient

| | Chairmarks.jl | BenchmarkTools.jl | Ratio |
|--------------------------------|---------------|-------------------|---------|
| [TTFX](https://github.com/LilithHafner/Chairmarks.jl/blob/main/contrib/ttfx_rm_rf_julia.sh) | 3.4s | 13.4s | 4x |
| Load time | 4.2ms | 131ms | 31x |
| TTFX excluding precompile time | 43ms | 1118ms | 26x |
| Minimum runtime | 34μs | 459ms | 13,500x |
| Width | Narrow | Wide | 2–4x |
| Back support | Almost Always | Sometimes | N/A |

## [Installation / Integrating Chairmarks into your workflow](@id Installation)

### For interactive use
@@ -198,13 +92,6 @@ See the [RegressionTests.jl documentation](https://github.com/LilithHafner/Regre
for more information.


```@index
```

```@autodocs
Modules = [Chairmarks]
```

## Migrating from BenchmarkTools.jl

Chairmarks.jl has a similar samples/evals model to BenchmarkTools. It preserves the keyword
@@ -262,14 +149,14 @@ robust to noise, there is no need for parameter caching.
Chairmarks always returns the benchmark result, while BenchmarkTools mirrors the more
diverse base API.

| BenchmarkTools | Chairmarks | Base |
|-----------------------|------------------|--------------|
| minimum(@benchmark _) | @b | N/A |
| @benchmark | @be | N/A |
| @belapsed | (@b _).time | @elapsed |
| @btime | display(@b _); _ | @time |
| N/A | (@b _).allocs | @allocations |
| @ballocated | (@b _).bytes | @allocated |
| BenchmarkTools | Chairmarks | Base |
|-------------------------|--------------------|----------------|
| `minimum(@benchmark _)` | `@b` | N/A |
| `@benchmark` | `@be` | N/A |
| `@belapsed` | `(@b _).time` | `@elapsed` |
| `@btime` | `display(@b _); _` | `@time` |
| N/A | `(@b _).allocs` | `@allocations` |
| `@ballocated` | `(@b _).bytes` | `@allocated` |
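
For example, a common BenchmarkTools pattern might translate like this (a sketch based on the table above; the BenchmarkTools equivalents appear only as comments for comparison, and timings will vary):

```julia
using Chairmarks

# BenchmarkTools: @belapsed sort(x) setup=(x = rand(100))
t = (@b rand(100) sort).time  # runtime in seconds of the fastest sample

# BenchmarkTools: @benchmark sort(x) setup=(x = rand(100))
b = @be rand(100) sort        # the full benchmark with all samples
```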

Chairmarks may provide `@belapsed`, `@btime`, `@ballocated`, and `@ballocations` in the
future.
@@ -278,19 +165,19 @@ future.

Benchmark results have the following fields:

| Chairmarks | BenchmarkTools | Description |
|----------------------|-------------------|------------------------|
| x.time | x.time/1e9 | Runtime in seconds |
| x.time*1e9 | x.time | Runtime in nanoseconds |
| x.allocs | x.allocs | Number of allocations |
| x.bytes | x.memory | Number of bytes allocated across all allocations |
| x.gc_fraction | x.gctime / x.time | Fraction of time spent in garbage collection |
| x.gc_fraction*x.time | x.gctime | Time spent in garbage collection |
| x.compile_fraction | N/A | Fraction of time spent compiling |
| x.recompile_fraction | N/A | Fraction of time spent compiling which was on recompilation |
| x.warmup | true | Whether or not the sample had a warmup run before it |
| x.checksum | N/A | A checksum computed from the return values of the benchmarked code |
| x.evals | x.params.evals | The number of evaluations in the sample |
| Chairmarks | BenchmarkTools | Description |
|-------------------------|---------------------|------------------------|
| `x.time` | `x.time/1e9` | Runtime in seconds |
| `x.time*1e9` | `x.time` | Runtime in nanoseconds |
| `x.allocs` | `x.allocs` | Number of allocations |
| `x.bytes` | `x.memory` | Number of bytes allocated across all allocations |
| `x.gc_fraction` | `x.gctime / x.time` | Fraction of time spent in garbage collection |
| `x.gc_fraction*x.time` | `x.gctime` | Time spent in garbage collection |
| `x.compile_fraction` | N/A | Fraction of time spent compiling |
| `x.recompile_fraction` | N/A | Fraction of time spent compiling which was on recompilation |
| `x.warmup` | `true` | Whether or not the sample had a warmup run before it |
| `x.checksum` | N/A | A checksum computed from the return values of the benchmarked code |
| `x.evals` | `x.params.evals` | The number of evaluations in the sample |
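
These fields can be read directly off the returned result (a minimal sketch using only fields from the table above; the numbers you see will vary):

```julia
using Chairmarks

r = @b rand(1000) sort
r.time         # runtime in seconds
r.allocs       # number of allocations
r.bytes        # bytes allocated
r.gc_fraction  # fraction of time spent in garbage collection
```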

Note that these fields are likely to change in Chairmarks 1.0.

6 changes: 6 additions & 0 deletions docs/src/reference.md
@@ -0,0 +1,6 @@
```@index
```

```@autodocs
Modules = [Chairmarks]
```
125 changes: 125 additions & 0 deletions docs/src/why.md
@@ -0,0 +1,125 @@
```@meta
CurrentModule = Chairmarks
DocTestSetup = quote
using Chairmarks
end
DocTestFilters = [r"\d\d?\d?\.\d{3} [μmn]?s( \(.*\))?"]
```

## Precise

Capable of detecting a 1% difference in runtime under ideal conditions:

```jldoctest
julia> f(n) = sum(rand() for _ in 1:n)
f (generic function with 1 method)

julia> @b f(1000)
1.074 μs

julia> @b f(1000)
1.075 μs

julia> @b f(1000)
1.076 μs

julia> @b f(1010)
1.086 μs

julia> @b f(1010)
1.087 μs

julia> @b f(1010)
1.087 μs
```

## Concise

Chairmarks uses a concise pipeline syntax to define benchmarks. When providing a single
argument, that argument is automatically wrapped in a function for higher performance and
executed:

```jldoctest
julia> @b sort(rand(100))
1.500 μs (3 allocs: 2.625 KiB)
```

When providing two arguments, the first is setup code and only the runtime of the second is
measured:

```jldoctest
julia> @b rand(100) sort
1.018 μs (2 allocs: 1.750 KiB)
```

You may use `_` in the later arguments to refer to the output of previous arguments:

```jldoctest
julia> @b rand(100) sort(_, by=x -> exp(-x))
5.521 μs (2 allocs: 1.750 KiB)
```

A third argument can run a "teardown" function to integrate testing into the benchmark and
ensure that the benchmarked code is behaving correctly:

```jldoctest
julia> @b rand(100) sort(_, by=x -> exp(-x)) issorted(_) || error()
ERROR:
Stacktrace:
 [1] error()
[...]

julia> @b rand(100) sort(_, by=x -> exp(-x)) issorted(_, rev=true) || error()
5.358 μs (2 allocs: 1.750 KiB)
```

See [`@b`](@ref) for more info.

## Truthful

Chairmarks.jl automatically computes a checksum based on the results of the provided
computations, and returns that checksum to the user along with benchmark results. This makes
it impossible for the compiler to elide any part of the computation that has an impact on
its return value.

While the checksums are fast, one negative side effect of this is that they add a bit of
overhead to the measured runtime, and that overhead can vary depending on the function being
benchmarked. These checksums are performed by computing a map over the returned values and a
reduction over those mapped values. You can disable this by passing the `checksum=false`
keyword argument, possibly in combination with a custom teardown function that verifies
computation results. Be aware that as the compiler improves, it may become better at eliding
benchmarks whose results are not saved.

```jldoctest; filter=r"\d\d?\d?\.\d{3} [μmn]?s( \(.*\))?|0 ns|<0.001 ns"
julia> @b 1
0.713 ns

julia> @b 1.0
1.135 ns

julia> @b 1.0 checksum=false
0 ns
```
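
Conceptually, the elision protection works something like the following simplified sketch. This is an illustration of the idea only, not Chairmarks's actual internals:

```julia
# Toy timing loop: feeding every result into a checksum keeps the
# compiler from discarding the benchmarked computation as dead code.
function measure(f, n)
    checksum = zero(UInt64)
    t0 = time_ns()
    for _ in 1:n
        checksum ⊻= hash(f())  # map = hash, reduction = xor
    end
    elapsed = (time_ns() - t0) / n  # mean nanoseconds per evaluation
    return elapsed, checksum
end

measure(() -> sum(rand(100)), 1000)
```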

You may experiment with custom reductions using the internal `_map` and `_reduction` keyword
arguments. The default maps and reductions (`Chairmarks.default_map` and
`Chairmarks.default_reduction`) are internal and subject to change and/or removal in the
future.
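
For instance, something like the following might compute a cheap xor-of-hashes checksum. This sketch is hypothetical: the exact syntax and semantics of these internal keywords are undocumented and subject to change:

```julia
using Chairmarks

# Hypothetical usage: assumes `_map` is applied to each return value
# and `_reduction` folds the mapped values together.
@b rand(100) sort _map=hash _reduction=xor
```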

## Efficient

| | Chairmarks.jl | BenchmarkTools.jl | Ratio |
|-------------------------------|---------------|-------------------|---------|
| TTFX | 3.4s | 13.4s | 4x |
| TTFX excluding precompilation | 43ms | 1118ms | 26x |
| Load time | 4.2ms | 131ms | 31x |
| Minimum runtime | 34μs | 459ms | 13,500x |

See [https://github.com/LilithHafner/Chairmarks.jl/blob/main/contrib/ttfx_rm_rf_julia.sh](https://github.com/LilithHafner/Chairmarks.jl/blob/main/contrib/ttfx_rm_rf_julia.sh)
for methodology.

## Innate qualities

Chairmarks is inherently narrower than BenchmarkTools by construction. It also has more
reliable back support. Back support is a defining feature of chairs, while benches are known
to sometimes lack back support.
