Document the feature
sabiwara committed Dec 5, 2024
1 parent 3f19858 commit 75283cb
Showing 2 changed files with 12 additions and 2 deletions.
7 changes: 6 additions & 1 deletion README.md
@@ -226,7 +226,12 @@ The available options are the following (also documented in [hexdocs](https://he
* `inputs` - a map or list of two element tuples. If a map, the keys are descriptive input names and values are the actual input values. If a list of tuples, the first element in each tuple is the input name, and the second element in each tuple is the actual input value. Your benchmarking jobs will then be run with each of these inputs. For this to work your benchmarking function gets the current input passed in as an argument into the function. Defaults to `nil`, aka no input specified and functions are called without an argument. See [Inputs](#inputs).
* `formatters` - list of formatters either as a module implementing the formatter behaviour, a tuple of said module and options it should take or formatter functions. They are run when using `Benchee.run/2` or you can invoke them through `Benchee.Formatter.output/1`. Functions need to accept one argument (which is the benchmarking suite with all data) and then use that to produce output. Used for plugins & configuration. Also allows the configuration of the console formatter to print extended statistics. Defaults to the builtin console formatter `Benchee.Formatters.Console`. See [Formatters](#formatters).
* `measure_function_call_overhead` - Measure how long an empty function call takes and deduct this from each measured run time. This overhead should be negligible for all but the most micro benchmarks. Defaults to false.
* `pre_check` - whether or not to run each job with each input - including all given before or after scenario or each hooks - before the benchmarks are measured to ensure that your code executes without error. This can save time while developing your suites. Defaults to `false`. Possible values are:
* `false` - no pre check is run
* `true` - each scenario is run but the return value is ignored
  * `:all_same` - raises unless all scenarios return the same value for each
    input. This is useful when benchmarking alternative implementations of a
    deterministic function.
* `parallel` - the function of each benchmarking job will be executed in `parallel` number processes. If `parallel: 4` then 4 processes will be spawned that all execute the _same_ function for the given time. When `time` seconds have passed, 4 new processes will be spawned for the next scenario (meaning a new input or another function to be benchmarked). This gives you more data in the same time, but also puts load on the system interfering with benchmark results. For more on the pros and cons of parallel benchmarking [check the wiki](https://github.com/bencheeorg/benchee/wiki/Parallel-Benchmarking). Defaults to 1 (no parallel execution).
* `save` - specify a `path` where to store the results of the current benchmarking suite, tagged with the specified `tag`. See [Saving & Loading](#saving-loading-and-comparing-previous-runs).
* `load` - load saved suite or suites to compare your current benchmarks against. Can be a string or a list of strings or patterns. See [Saving & Loading](#saving-loading-and-comparing-previous-runs).
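As a minimal sketch of the new option, the suite below compares two implementations that should be equivalent; with `pre_check: :all_same`, Benchee runs each scenario once up front and raises if they disagree on any input before any time is spent measuring. The `InsertionSort` module and the input sizes are hypothetical, purely for illustration:

```elixir
# Hypothetical comparison of Elixir's built-in sort against a custom
# implementation (`InsertionSort` is a stand-in module, not part of Benchee).
Benchee.run(
  %{
    "Enum.sort" => fn list -> Enum.sort(list) end,
    "insertion sort" => fn list -> InsertionSort.sort(list) end
  },
  inputs: %{"small" => Enum.shuffle(1..100)},
  # Raises before measurement if the two scenarios return different
  # values for the same input.
  pre_check: :all_same
)
```

With `pre_check: true` instead, the scenarios would still be exercised once for each input, but their return values would be ignored.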
7 changes: 6 additions & 1 deletion lib/benchee/configuration.ex
@@ -74,7 +74,12 @@ defmodule Benchee.Configuration do
* `pre_check` - whether or not to run each job with each input - including all
given before or after scenario or each hooks - before the benchmarks are
measured to ensure that your code executes without error. This can save time
while developing your suites. Defaults to `false`. Possible values are:
* `false` - no pre check is run
* `true` - each scenario is run but the return value is ignored
    * `:all_same` - raises unless all scenarios return the same value for
      each input. This is useful when benchmarking alternative implementations
      of a deterministic function.
  * `parallel` - the function of each job will be executed in
`parallel` number processes. If `parallel` is `4` then 4 processes will be
spawned that all execute the _same_ function for the given time. When these
