
Build: Remove shadowing from benchmarks #32475

Merged 5 commits on Jul 31, 2018
Changes from 3 commits
35 changes: 18 additions & 17 deletions benchmarks/README.md
This directory contains the microbenchmark suite of Elasticsearch. It relies on JMH.

## Purpose

We do not want to microbenchmark everything but the kitchen sink and should typically rely on our
[macrobenchmarks](https://elasticsearch-benchmarks.elastic.co/app/kibana#/dashboard/Nightly-Benchmark-Overview) with
[Rally](http://github.com/elastic/rally). Microbenchmarks are intended to spot performance regressions in performance-critical components.
The microbenchmark suite is also handy for ad-hoc microbenchmarks but please remove them again before merging your PR.

## Getting Started

Just run `gradlew -p benchmarks run` from the project root directory. It will build all microbenchmarks, execute them and print the result.

## Running Microbenchmarks

Running via an IDE is not supported as the results are meaningless because we have no control over the JVM running the benchmarks.

If you want to run a specific benchmark class like, say, `MemoryStatsBenchmark`, you can use `--args`:

```
gradlew -p benchmarks run --args 'MemoryStatsBenchmark'
```

Everything in the `'` gets sent on the command line to JMH.
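
For instance, a hypothetical invocation that narrows the run to one benchmark and tweaks JMH's standard iteration flags (`-wi` warmup iterations, `-i` measurement iterations, `-f` forks) might look like this; note that, because of the Gradle issue discussed in the review comments below, the first token inside the quotes should not start with a `-`:

```
gradlew -p benchmarks run --args 'MemoryStatsBenchmark -wi 5 -i 10 -f 1'
```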
Member commented:

Out of curiosity I tried the following: `./gradlew -p benchmarks run --args '-lprof'`, which should list all profilers that JMH supports.

However, I only get:

```
FAILURE: Build failed with an exception.

* What went wrong:
Problem configuring task :benchmarks:run from command line.
> No argument was provided for command-line option '--args'.
```

If I run (on master):

```
./gradlew :benchmarks:jmhJar
java -jar benchmarks/build/distributions/elasticsearch-benchmarks-*.jar -lprof
```

I get the expected output:

```
Supported profilers:
          cl: Classloader profiling via standard MBeans
        comp: JIT compiler profiling via standard MBeans
          gc: GC profiling via standard MBeans
       hs_cl: HotSpot (tm) classloader profiling via implementation-specific MBeans
...
```

Member Author replied:

Yikes! I tried out a bunch of things and they seemed to work ok. Let me have a look.

Member Author:

I've filed gradle/gradle#6140.

Member Author:

And I've updated the READMEs with a workaround.


## Adding Microbenchmarks

Before adding a new microbenchmark, make yourself familiar with the JMH API. You can check our existing microbenchmarks and also the
[JMH samples](http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/).

In contrast to tests, the actual name of the benchmark class is not relevant to JMH. However, stick to the naming convention and
end the class name of a benchmark with `Benchmark`. To have JMH execute a benchmark, annotate the respective methods with `@Benchmark`.
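
As a sketch (not an actual benchmark from the suite; the class name and measured operation are made up for illustration), a minimal JMH benchmark following these conventions could look like:

```
package org.elasticsearch.benchmark;

import org.openjdk.jmh.annotations.*;

import java.util.concurrent.TimeUnit;

@Fork(1)
@Warmup(iterations = 5)
@Measurement(iterations = 10)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class StringJoinBenchmark { // note: the class name ends with "Benchmark"

    @Param({"10", "100", "1000"}) // vary the problem input size
    int size;

    private String[] parts;

    @Setup
    public void setUp() {
        parts = new String[size];
        for (int i = 0; i < size; i++) {
            parts[i] = Integer.toString(i);
        }
    }

    @Benchmark // JMH only executes methods annotated with @Benchmark
    public String join() {
        StringBuilder sb = new StringBuilder();
        for (String part : parts) {
            sb.append(part);
        }
        // return the result so dead-code elimination cannot remove the work
        return sb.toString();
    }
}
```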

## Tips and Best Practices
To get realistic results, you should exercise care when running benchmarks. Here are a few tips:

### Do

* Ensure that the system executing your microbenchmarks has as little load as possible. Shutdown every process that can cause unnecessary
runtime jitter. Watch the `Error` column in the benchmark results to see the run-to-run variance.
* Ensure to run enough warmup iterations to get the benchmark into a stable state. If you are unsure, don't change the defaults.
* Avoid CPU migrations by pinning your benchmarks to specific CPU cores. On Linux you can use `taskset` (see the sketch after this list).
* Fix the CPU frequency to avoid Turbo Boost from kicking in and skewing your results. On Linux you can use `cpufreq-set` and the
`performance` CPU governor.
* Vary the problem input size with `@Param`.
* Use the integrated profilers in JMH to dig deeper if benchmark results do not match your hypotheses:
    * Run the generated uberjar directly and use `-prof gc` to check whether the garbage collector runs during a microbenchmark and skews
    your results. If so, try to force a GC between runs (`-gc true`) but watch out for the caveats.
    * Use `-prof perf` or `-prof perfasm` (both only available on Linux) to see hotspots.
* Have your benchmarks peer-reviewed.
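
Putting a few of these tips together, a Linux session could look like the following sketch (core number, governor, benchmark name, and jar path are illustrative; the uberjar is built on master as shown in the review discussion above):

```
# illustrative values: fix CPU 0 to the performance governor, then pin the JVM to it
sudo cpufreq-set -c 0 -g performance
taskset -c 0 java -jar benchmarks/build/distributions/elasticsearch-benchmarks-*.jar \
    MemoryStatsBenchmark -prof gc
```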
### Don't

* Blindly believe the numbers that your microbenchmark produces but verify them by measuring e.g. with `-prof perfasm`.
* Run more threads than your number of CPU cores (in case you run multi-threaded microbenchmarks).
* Look only at the `Score` column and ignore `Error`. Instead take countermeasures to keep `Error` low / variance explainable.
28 changes: 3 additions & 25 deletions benchmarks/build.gradle

apply plugin: 'elasticsearch.build'

apply plugin: 'application'
mainClassName = 'org.openjdk.jmh.Main'

// Not published so no need to assemble
tasks.remove(assemble)
compileJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-unchecked"
// needs to be added separately otherwise Gradle will quote it and javac will fail
compileJava.options.compilerArgs.addAll(["-processor", "org.openjdk.jmh.generators.BenchmarkProcessor"])

// classes generated by JMH can use all sorts of forbidden APIs but we have no influence at all and cannot exclude these classes
forbiddenApisMain.enabled = false

// No licenses for our benchmark deps (we don't ship benchmarks)
dependencyLicenses.enabled = false
thirdPartyAudit.excludes = [
'org.openjdk.jmh.profile.HotspotRuntimeProfiler',
'org.openjdk.jmh.util.Utils'
]

31 changes: 18 additions & 13 deletions client/benchmark/README.md

1. Build `client-benchmark-noop-api-plugin` with `gradle :client:client-benchmark-noop-api-plugin:assemble`
2. Install it on the target host with `bin/elasticsearch-plugin install file:///full/path/to/client-benchmark-noop-api-plugin.zip`
3. Start Elasticsearch on the target host (ideally *not* on the machine that runs the benchmarks)
4. Run the benchmark with

    ```
    ./gradlew -p client/benchmark run --args 'params go here'
    ```

Repeat all steps above for the other benchmark candidate.
See below for some example invocations.

### Example benchmark

In general, you should define a few GC-related settings `-Xms8192M -Xmx8192M -XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCDetails` and keep an eye on GC activity. You can also define `-XX:+PrintCompilation` to see JIT activity.
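
As a sketch of how those flags could be applied when running the benchmark uberjar directly (the jar path follows from `archivesBaseName` in the build file below and is an assumption, as are the heap sizes and document path):

```
# illustrative: 8 GB fixed heap, CMS collector, verbose GC logging
java -Xms8192M -Xmx8192M -XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCDetails \
    -jar client/benchmark/build/distributions/client-benchmarks-*.jar \
    rest bulk localhost ./documents-2.json geonames type 8647880 5000
```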

#### Bulk indexing

Download benchmark data from http://benchmarks.elasticsearch.org.s3.amazonaws.com/corpora/geonames and decompress them.
Member commented:

Is this correct? I'm not sure we should link to specific s3 buckets, as those can and may change in the future.

Member Author replied:

I borrowed the link from rally-tracks. The old link was broken and I figured this was better. @danielmitterdorfer , do you know of a better link?

Member:

We either have that one or https://s3.amazonaws.com/benchmarks.elasticsearch.org/corpora/geonames/ because there is no reverse proxy in front of those buckets.


Example invocation:

```
wget http://benchmarks.elasticsearch.org.s3.amazonaws.com/corpora/geonames/documents-2.json.bz2
bzip2 -d documents-2.json.bz2
mv documents-2.json client/benchmark/build
gradlew -p client/benchmark run --args 'rest bulk localhost build/documents-2.json geonames type 8647880 5000'
```

The parameters are all in the `'`s and are in order:

* Client type: Use either "rest" or "transport"
* Benchmark type: Use either "bulk" or "search"
* Benchmark target host IP (the host where Elasticsearch is running)
* full path to the file that should be bulk indexed
* name of the index
* name of the (sole) type in the index
* number of documents in the file
* bulk size


#### Search

Example invocation:

```
gradlew -p client/benchmark run --args 'rest search localhost geonames {"query":{"match_phrase":{"name":"Sankt Georgen"}}} 500,1000,1100,1200'
```

The parameters are in order:
* Client type: Use either "rest" or "transport"
* Benchmark type: Use either "bulk" or "search"
* Benchmark target host IP (the host where Elasticsearch is running)
* name of the index
* a search request body (remember to escape double quotes). The `TransportClientBenchmark` uses `QueryBuilders.wrapperQuery()` internally which automatically adds a root key `query`, so it must not be present in the command line parameter.
* A comma-separated list of target throughput rates
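
To illustrate the `wrapperQuery()` note, a hypothetical transport-client invocation would omit the root `query` key (host, index, and throughput rates mirror the rest example above):

```
gradlew -p client/benchmark run --args 'transport search localhost geonames {"match_phrase":{"name":"Sankt Georgen"}} 500,1000,1100,1200'
```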


4 changes: 0 additions & 4 deletions client/benchmark/build.gradle

apply plugin: 'elasticsearch.build'
apply plugin: 'application'

group = 'org.elasticsearch.client'
build.dependsOn.remove('assemble')
archivesBaseName = 'client-benchmarks'
mainClassName = 'org.elasticsearch.client.benchmark.BenchmarkMain'


// never try to invoke tests on the benchmark project - there aren't any
test.enabled = false
