This directory contains the microbenchmark suite of Elasticsearch. It relies on JMH.
We do not want to microbenchmark everything under the sun; we should typically rely on our macrobenchmarks with Rally. Microbenchmarks are intended to spot performance regressions in performance-critical components. The microbenchmark suite is also handy for ad hoc microbenchmarks, but please remove them again before merging your PR.
Just run `gradle :benchmarks:jmh` from the project root directory. It will build all microbenchmarks, execute them and print the results.

Benchmarks are always run via Gradle with `gradle :benchmarks:jmh`. Running them via an IDE is not supported, as the results are meaningless (we have no control over the JVM running the benchmarks).
If you want to run a specific benchmark class, e.g. `org.elasticsearch.benchmark.MySampleBenchmark`, or have special requirements, generate the uberjar with `gradle :benchmarks:jmhJar` and run it directly with:

`java -jar benchmarks/build/distributions/elasticsearch-benchmarks-*.jar`

JMH supports lots of command line parameters. Add `-h` to the command above to see the available command line options.
Before adding a new microbenchmark, make yourself familiar with the JMH API. You can check our existing microbenchmarks and also the JMH samples.
In contrast to tests, the actual name of the benchmark class is not relevant to JMH. However, stick to the naming convention and end the class name of a benchmark with `Benchmark`. To have JMH execute a benchmark, annotate the respective methods with `@Benchmark`.
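For illustration, here is a minimal sketch of such a class; the class name `MySampleBenchmark` and the measured operation are invented for this example and are not part of the suite:

```java
package org.elasticsearch.benchmark;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

// Hypothetical example: the class name ends in "Benchmark" per the naming
// convention, and only the method annotated with @Benchmark is measured.
@State(Scope.Benchmark)
public class MySampleBenchmark {

    // Deliberately non-final: final fields invite constant folding by the JIT.
    private String input;

    @Setup
    public void setUp() {
        input = "some-sample-input";
    }

    @Benchmark
    public int hashString() {
        // Return the result so JMH does not eliminate the computation as dead code.
        return input.hashCode();
    }
}
```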
To get realistic results, you should exercise care when running benchmarks. Here are a few tips:
* Watch the `Error` column in the benchmark results to see the run-to-run variance.
* Avoid CPU migrations by pinning your benchmarks to specific CPU cores. On Linux you can use `taskset`.
* Fix the CPU frequency to avoid Turbo Boost from kicking in and skewing your results. On Linux you can use `cpufreq-set` and the `performance` CPU governor.
* Vary the problem input size with `@Param` (see the sketch after this list).
* Use `-prof gc` to check whether the garbage collector runs during a microbenchmark and skews your results. If so, try to force a GC between runs (`-gc true`) but watch out for the caveats.
* Use `-prof perf` or `-prof perfasm` (both only available on Linux) to see hotspots.
* Don't blindly believe the numbers your microbenchmark produces; verify them, e.g. with `-prof perfasm`.
* Don't look only at the `Score` column and ignore `Error`. Instead take countermeasures to keep `Error` low / variance explainable.
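To illustrate the `@Param` tip, here is a hedged sketch (class name, parameter values, and iteration counts are assumptions for this example, not project conventions) that varies the problem input size and fixes warmup, measurement, and fork counts to keep the `Error` column low:

```java
package org.elasticsearch.benchmark;

import java.util.Arrays;
import java.util.Random;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

// Hypothetical example: sorting int arrays of different sizes. JMH runs the
// benchmark once per @Param value, which makes scaling behavior visible.
@State(Scope.Benchmark)
@Warmup(iterations = 5)       // enough warmup iterations to reach a steady state
@Measurement(iterations = 10) // more measurement iterations lower the Error column
@Fork(3)                      // multiple forks expose run-to-run variance
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class ArraySortBenchmark {

    @Param({ "100", "10000", "1000000" })
    public int size;

    private int[] data;

    @Setup
    public void setUp() {
        data = new Random(42).ints(size).toArray();
    }

    @Benchmark
    public int[] sortCopy() {
        // Clone first so every invocation sorts unsorted data, and return the
        // result so JMH does not eliminate the work as dead code.
        int[] copy = data.clone();
        Arrays.sort(copy);
        return copy;
    }
}
```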