To build a distribution for your local OS and print its output location upon completion, run:
./gradlew localDistro
To create a platform-specific build, use the following depending on your operating system:
./gradlew :distribution:archives:linux-tar:assemble
./gradlew :distribution:archives:darwin(-aarch64)-tar:assemble
./gradlew :distribution:archives:windows-zip:assemble
You can build a Docker image with:
./gradlew build(Aarch64)DockerImage
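For example, reading the parenthesized part of the task name above, the x86_64 image omits it while an aarch64 image includes it:
./gradlew buildDockerImage
./gradlew buildAarch64DockerImage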
Note: you almost certainly don’t want to run ./gradlew assemble, as this will attempt to build every single Elasticsearch distribution.
In order to run Elasticsearch from source without building a package, you can run it using Gradle:
./gradlew run
If you want to run and debug Elasticsearch from your IDE, the ./gradlew run task supports a remote debugging option. Run the following from your terminal:
./gradlew run --debug-jvm
Next, start the "Debug Elasticsearch" run configuration in IntelliJ. This will enable the IDE to connect to the process and allow debug functionality.
The IDE needs to be instructed to listen for connections on the debug port. Since we might run multiple JVMs as part of configuring and starting the cluster, it’s recommended to configure the IDE to initiate multiple listening attempts. In IntelliJ, this option is called "Auto restart" and needs to be checked.
Note: If you have imported the project into IntelliJ according to the instructions in CONTRIBUTING.md then a debug run configuration named "Debug Elasticsearch" will be created for you and configured appropriately.
The Gradle task does not start the Elasticsearch server process directly; as in the Elasticsearch distribution, the job of starting the server process is delegated to a launcher CLI tool. If you need to debug the launcher itself, add the following option to the run task:
./gradlew run --debug-cli-jvm
This option can be specified in isolation or combined with --debug-jvm. Since the CLI launcher lifespan may overlap with the server process lifespan, the CLI launcher process will be started on a different port (5107 for the first node, 5108 and following for additional cluster nodes).
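For example, to attach debuggers to both the CLI launcher and the server process at once:
./gradlew run --debug-jvm --debug-cli-jvm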
As with the --debug-jvm command, the IDE needs to be instructed to listen for connections on the debug port. You need to configure and start an appropriate Remote JVM Debug configuration, e.g. by cloning and editing the "Debug Elasticsearch" run configuration to point to the correct debug port.
When running Elasticsearch with ./gradlew run, assertions are enabled by default. To disable them, add the following command line option:
-Dtests.jvm.argline="-da -dsa"
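For example, combining the run task with the option above:
./gradlew run -Dtests.jvm.argline="-da -dsa"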
By default a node is started with the zip distribution. In order to start with a different distribution, use the -Drun.distribution argument. For example, to start the open source distribution:
./gradlew run -Drun.distribution=oss
By default a node is started with the basic license type. In order to start with a different license type, use the -Drun.license_type argument. For example, to start a node with a trial license:
./gradlew run -Drun.license_type=trial
This enables security and other paid features and adds a superuser with the username elastic-admin and password elastic-password.
- In order to start a node with a different max heap space add: -Dtests.heap.size=4G
- In order to use a custom data directory: --data-dir=/tmp/foo
- In order to preserve data in between executions: --preserve-data
- In order to remotely attach a debugger to the server process: --debug-jvm
- In order to remotely attach a debugger to the CLI launcher process: --debug-cli-jvm
- In order to set a different keystore password: --keystore-password
- In order to set an Elasticsearch setting, provide a setting with the following prefix: -Dtests.es.
- In order to pass a JVM setting, e.g. to disable assertions: -Dtests.jvm.argline="-da"
- In order to use HTTPS: ./gradlew run --https
- In order to start a mock logging APM server on port 9999 and configure the ES cluster to connect to it: ./gradlew run --with-apm-server
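Several of these options can be combined in a single invocation; for example (the heap size, logger setting, and data path here are illustrative values):
./gradlew run \
  -Dtests.heap.size=4G \
  -Dtests.es.logger.org.elasticsearch=DEBUG \
  --data-dir=/tmp/foo \
  --preserve-data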
You may need to customize the cluster configuration for the ./gradlew run task. Some settings can be set via the command line, but other options require updating the task itself. You can find the task in the source code and configure it there (the task is currently defined in build-tools-internal/src/main/groovy/elasticsearch.run.gradle), but this means modifying a source controlled file, which is subject to accidental commits. Alternatively, you can use a Gradle init script, injected with the -I flag, to configure this task locally.
For example, to use a custom certificate for HTTPS with ./gradlew run, create a file (for example ~/custom-run.gradle) with the following contents:
rootProject {
  if (project.name == 'elasticsearch') {
    afterEvaluate {
      testClusters.matching { it.name == "runTask" }.configureEach {
        extraConfigFile 'http.p12', file("<path/to/file>/http.p12")
      }
    }
  }
}
Now tell Gradle to use this init script:
./gradlew run -I ~/custom-run.gradle \
  -Dtests.es.xpack.security.http.ssl.enabled=true \
  -Dtests.es.xpack.security.http.ssl.keystore.path=http.p12
Now the http.p12 file will be placed in the config directory of the running cluster and available for use. Assuming you have the http.ssl.keystore set up correctly, you can now use HTTPS with ./gradlew run without the risk of accidentally committing your local configurations.
Another desired customization for ./gradlew run might be to run multiple nodes with different settings for each node. For example, you may want to debug a coordinating-only node that fans out to one or more data nodes. To do this, increase numberOfNodes and add specific configuration for each of the nodes. For example, the following will instruct the first node (:9200) to be a coordinating-only node, and all other nodes to be master, data_hot, data_content nodes.
testClusters.register("runTask") {
  ...
  numberOfNodes = 2
  def cluster = testClusters.named("runTask").get()
  cluster.getNodes().each { node ->
    node.setting('cluster.initial_master_nodes', cluster.getLastNode().getName())
    node.setting('node.roles', '[master,data_hot,data_content]')
  }
  cluster.getFirstNode().setting('node.roles', '[]')
  ...
}
You can also place this config in a custom init script (see above) to avoid accidental commits. If you are passing in the --debug-jvm flag with multiple nodes, you will need multiple remote debuggers running, one for each node, listening at ports 5007, 5008, 5009, and so on. Ensure that each remote debugger has auto restart enabled.
Use ./gradlew run-ccs to launch two clusters wired together for the purposes of cross cluster search. For example, send a search request for "my_remote_cluster:*/_search" to the querying cluster (:9200) to query data in the fulfilling cluster.
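For instance, assuming default ports and no authentication configured, such a request could look like:
curl 'localhost:9200/my_remote_cluster:*/_search'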
If you are passing in the --debug-jvm flag, you will need two remote debuggers running, one at port 5007 and another at port 5008. Ensure that each remote debugger has auto restart enabled.
You can run a single test, provided that you specify the Gradle project. See the documentation on simple name pattern filtering.
Run a single test case in the server project:
./gradlew :server:test --tests org.elasticsearch.package.ClassName
Run all tests in a package and its sub-packages:
./gradlew :server:test --tests 'org.elasticsearch.package.*'
Run all tests that are waiting for a bugfix (disabled by default).
./gradlew test -Dtests.filter=@awaitsfix
Run with a given seed (seed is a hex-encoded long).
./gradlew test -Dtests.seed=DEADBEEF
Every test repetition will have a different method seed (derived from a single random master seed).
./gradlew :server:test -Dtests.iters=N --tests org.elasticsearch.package.ClassName
Every test repetition will have exactly the same master (0xdead) and method-level (0xbeef) seed.
./gradlew :server:test -Dtests.iters=N -Dtests.seed=DEAD:BEEF --tests org.elasticsearch.package.ClassName
(Note the filters: individual test repetitions are given suffixes, i.e. testFoo[0], testFoo[1], etc., so using testmethod or tests.method ending in a glob is necessary to ensure the iterations are run.)
./gradlew :server:test -Dtests.iters=N --tests org.elasticsearch.package.ClassName.methodName
Repeats N times but skips any tests after the first failure or M initial failures.
./gradlew test -Dtests.iters=N -Dtests.failfast=true ...
./gradlew test -Dtests.iters=N -Dtests.maxfailures=M ...
Test groups can be enabled or disabled (true/false).
Default value provided below in [brackets].
./gradlew test -Dtests.awaitsfix=[false] - known issue (@AwaitsFix)
By default the tests run on multiple processes, using all the available cores on all available CPUs (not including hyper-threading). If you want to explicitly specify the number of JVMs you can do so on the command line:
./gradlew test -Dtests.jvms=8
Or in ~/.gradle/gradle.properties:
systemProp.tests.jvms=8
It’s difficult to pick the "right" number here. Hypercores don’t count for CPU intensive tests and you should leave some slack for JVM-internal threads like the garbage collector. And you have to have enough RAM to handle each JVM.
It is possible to provide a version so that tests can adapt their behaviour to older features or bugs that have been changed or fixed in the meantime.
./gradlew test -Dtests.compatibility=1.0.0
Run all tests without stopping on errors (inspect log files).
./gradlew test -Dtests.haltonfailure=false
Run more verbose output (slave JVM parameters, etc.).
./gradlew test -verbose
Change the default suite timeout to 5 seconds for all tests (note the exclamation mark).
./gradlew test -Dtests.timeoutSuite=5000! ...
Change the logging level of ES (not Gradle).
./gradlew test -Dtests.es.logger.level=DEBUG
Print all the logging output from the test runs to the commandline even if tests are passing.
./gradlew test -Dtests.output=always
Configure the heap size.
./gradlew test -Dtests.heap.size=512m
Pass arbitrary jvm arguments.
# specify heap dump path
./gradlew test -Dtests.jvm.argline="-XX:HeapDumpPath=/path/to/heapdumps"
# enable gc logging
./gradlew test -Dtests.jvm.argline="-verbose:gc"
# enable security debugging
./gradlew test -Dtests.jvm.argline="-Djava.security.debug=access,failure"
Pass build arguments.
# Run tests against a release build. License key must be provided, but usually can be anything.
./gradlew test -Dbuild.snapshot=false -Dlicense.key="x-pack/license-tools/src/test/resources/public.key"
To run all verification tasks, including static checks, unit tests, and integration tests:
./gradlew check
Note that this will also run the unit tests and precommit tasks first. If you want to just run the in-memory cluster integration tests (because you are debugging them):
./gradlew internalClusterTest
If you want to just run the precommit checks:
./gradlew precommit
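If you only care about one project, the precommit checks can usually be scoped to it; this assumes (as is the case for most Java projects in the build) that the task is registered per project:
./gradlew :server:precommit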
Some of these checks require docker-compose to be installed for bringing up test fixtures. If it’s not present, those checks will be skipped automatically.
The host running Docker (or VM if you’re using Docker Desktop) needs 4GB of
memory or some of the containers will fail to start. You can tell that you
are short of memory if containers are exiting quickly after starting with
code 137 (128 + 9, where 9 means SIGKILL).
If you would like to debug your tests themselves, simply pass the --debug-jvm flag to the testing task and connect a debugger on the default port of 5005.
./gradlew :server:test --debug-jvm
For REST tests, if you’d like to debug the Elasticsearch server itself, and not your test code, use the --debug-server-jvm flag and use the "Debug Elasticsearch" run configuration in IntelliJ to listen on the default port of 5007.
./gradlew :rest-api-spec:yamlRestTest --debug-server-jvm
Note: In the case of test clusters using multiple nodes, multiple debuggers will need to be attached on incrementing ports. For example, for a 3-node cluster, ports 5007, 5008, and 5009 will attempt to attach to a listening debugger. You can use the "Debug Elasticsearch (node 2)" and "(node 3)" run configurations should you need to debug a multi-node cluster.
You can also use a combination of both flags to debug both tests and server. This is only applicable to Java REST tests.
./gradlew :modules:kibana:javaRestTest --debug-jvm --debug-server-jvm
The REST layer is tested through specific tests that are executed against a cluster that is configured and initialized via Gradle. The tests themselves can be written in either Java or with a YAML based DSL.
YAML based REST tests should be preferred since these are shared between all the elasticsearch official clients. The YAML based tests describe the operations to be executed and the obtained results that need to be tested.
The YAML tests support various operators defined in the rest-api-spec and adhere to the Elasticsearch REST API JSON specification. In order to run the YAML tests, the relevant API specification needs to be on the test classpath. Any Gradle project that has support for REST tests will get the primary API on its classpath. However, to better support Gradle incremental builds, it is recommended to explicitly declare which parts of the API the tests depend upon.
For example:
restResources {
  restApi {
    includeCore '_common', 'indices', 'index', 'cluster', 'nodes', 'get', 'ingest'
  }
}
YAML REST tests that include x-pack specific APIs need to explicitly declare which APIs are required through a similar includeXpack configuration, as sketched below.
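A minimal sketch mirroring the includeCore example above; the x-pack API names listed here are purely illustrative:
restResources {
  restApi {
    includeXpack 'security', 'ml'
  }
}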
The REST tests are run automatically when executing the "./gradlew check" command. To run only the YAML REST tests use the following command (modules and plugins may also include YAML REST tests):
./gradlew :rest-api-spec:yamlRestTest
A specific test case can be run with the following command:
./gradlew ':rest-api-spec:yamlRestTest' \
  --tests "org.elasticsearch.test.rest.ClientYamlTestSuiteIT" \
  -Dtests.method="test {yaml=cat.segments/10_basic/Help}"
You can run a group of YAML tests by using wildcards:
./gradlew :rest-api-spec:yamlRestTest \
  --tests "org.elasticsearch.test.rest.ClientYamlTestSuiteIT.test {yaml=index/*/*}"
or
./gradlew :rest-api-spec:yamlRestTest \
  --tests org.elasticsearch.test.rest.ClientYamlTestSuiteIT -Dtests.method="test {yaml=cat.segments/10_basic/*}"
The latter method is preferable when the YAML suite name contains . (period).
Note that if the test selected via the --tests filter is not a valid test, i.e. the YAML test runner is not able to parse and load it, you might get an error message indicating that the test was not found. In such cases, running the whole suite without the --tests filter could show more specific error messages about why the test runner is not able to parse or load a certain test.
The YAML REST tests support all the options provided by the randomized runner, plus the following:
- tests.rest.blacklist: comma separated globs that identify tests that are blacklisted and need to be skipped, e.g. -Dtests.rest.blacklist=index/*/Index document,get/10_basic/*
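For example, to skip a misbehaving suite while running the rest (the glob here is illustrative):
./gradlew :rest-api-spec:yamlRestTest -Dtests.rest.blacklist='cat.segments/10_basic/*'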
Java REST tests can be run with the "javaRestTest" task.
For example:
./gradlew :modules:mapper-extras:javaRestTest
A specific test case can be run with the following syntax (fqn.test {params}):
./gradlew ':modules:mapper-extras:javaRestTest' \
  --tests "org.elasticsearch.index.mapper.TokenCountFieldMapperIntegrationIT.testSearchByTokenCount {storeCountedFields=true loadCountedFields=false}"
yamlRestTest and javaRestTest tasks are easy to identify, since their tests are found in a respective source directory. However, there are some more specialized REST tests that use custom task names; these are usually found in "qa" projects and commonly use the "integTest" task.
If in doubt about which command to use, simply run <gradle path>:check
The packaging tests are run on different build vm cloud instances to verify that installing and running Elasticsearch distributions works correctly on supported operating systems. These tests should really only be run on ephemeral systems because they’re destructive; that is, these tests install and remove packages and freely modify system settings, so you will probably regret it if you execute them on your development machine.
To reproduce or debug packaging test failures, we recommend using our provided buildkite tools.
Backwards compatibility tests exist to test upgrading from each supported version to the current version. To run them all use:
./gradlew bwcTest
A specific version can be tested as well. For example, to test bwc with version 5.3.2 run:
./gradlew v5.3.2#bwcTest
Use -Dtests.class and -Dtests.method to run a specific bwcTest test. For example, to run a specific test from the x-pack rolling upgrade from 7.7.0:
./gradlew :x-pack:qa:rolling-upgrade:v7.7.0#bwcTest \
  -Dtests.class=org.elasticsearch.upgrades.UpgradeClusterClientYamlTestSuiteIT \
  -Dtests.method="test {p0=*/40_ml_datafeed_crud/*}"
Tests are also run for versions that are not yet released but with which the current version will be compatible. These are automatically checked out and built from source. See BwcVersions and distribution/bwc/build.gradle for more information.
When running ./gradlew check, minimal bwc checks are also run against compatible versions that are not yet released.
Sometimes a backward compatibility change spans two versions. A common case is a new functionality that needs a BWC bridge in an unreleased version of a release branch (for example, 5.x). Another use case, since the introduction of serverless, is to test BWC against main in addition to the other released branches. To do so, specify the bwc.refspec remote and branch to use for the BWC build as origin/main.
To test against main, you will also need to create a new version in Version.java, increment elasticsearch in version.properties, and hard-code the project.version for ml-cpp in ml/build.gradle.
In general, to test the changes, you can instruct Gradle to build the BWC version from another remote/branch combination instead of pulling the release branch from GitHub. You do so using the bwc.refspec.{VERSION} system property:
./gradlew check -Dtests.bwc.refspec.8.15=origin/main
The branch needs to be available on the remote of the checkout that the BWC build makes of the repository you run the tests from. Using the remote is a handy trick to make sure that a branch is available and is up to date in the case of multiple runs.
Example: Say you need to make a change to main and have a BWC layer in 5.x. You will need to:

1. Create a branch called index_req_change off your remote ${remote}. This will contain your change.
2. Create a branch called index_req_bwc_5.x off 5.x. This will contain your bwc layer.
3. Push both branches to your remote repository.
4. Run the tests with ./gradlew check -Dbwc.remote=${remote} -Dbwc.refspec.5.x=index_req_bwc_5.x.
We have a CI matrix job that periodically runs all our tests with the JVM configured to be FIPS 140-2 compliant with the use of the BouncyCastle FIPS approved Security Provider. FIPS 140-2 imposes certain requirements that affect how our tests should be set up or what can be tested. This section summarizes what one needs to take into consideration so that tests won’t fail when run in fips mode.
If the following limitations cannot be observed, or there is a need to actually test some use case that is not available/allowed in fips mode, the test can be muted. For unit tests or Java rest tests one can use
assumeFalse("Justification why this cannot be run in FIPS mode", inFipsJvm());
For specific YAML rest tests one can use
- skip:
    features: fips_140
    reason: "Justification why this cannot be run in FIPS mode"
To disable entire types of tests in a subproject, one can use, for example:
if (buildParams.inFipsJvm) {
  // This test cluster is using a BASIC license and FIPS 140 mode is not supported in BASIC
  tasks.named("javaRestTest").configure { enabled = false }
}
in build.gradle.
The following should be taken into consideration when writing new tests or adjusting existing ones:
JKS and PKCS#12 keystores cannot be used in FIPS mode. If the test depends on being able to use a keystore, it can be muted when needed (see ESTestCase#inFipsJvm). Alternatively, one can use PEM encoded files for keys and certificates for the tests or for setting up TLS in a test cluster. Also, when in FIPS 140 mode, hostname verification for TLS cannot be turned off, so if you are using *.verification_mode: none, you’d need to mute the test in fips mode.
When using TLS, ensure that private keys used are longer than 2048 bits, or mute the test in fips mode.
Test clusters are configured with xpack.security.fips_mode.enabled set to true. This means that FIPS 140-2 related bootstrap checks are enabled and the test cluster will fail to form if the password hashing algorithm is set to something other than a PBKDF2 based one. You can delegate the choice of algorithm to e.g. SecurityIntegTestCase#getFastStoredHashAlgoForTests if you don’t mind the actual algorithm used, or depend on the default values for the test cluster nodes.
While using pbkdf2 as the password hashing algorithm, FIPS 140-2 imposes a requirement that passwords are longer than 14 characters. You can either ensure that all test user passwords in your test are longer than 14 characters and use e.g. SecurityIntegTestCase#getFastStoredHashAlgoForTests to randomly select a hashing algorithm, or use pbkdf2_stretch, which doesn’t have the same limitation.
In FIPS 140-2 mode, the elasticsearch keystore needs to be password protected with a password of appropriate length. This is handled automatically in fips.gradle and the keystore is unlocked on startup by the test clusters tooling in order to have secure settings available. However, you might need to take into consideration that the keystore is password-protected with keystore-password if you need to interact with it in a test.
There are multiple base classes for tests:
- ESTestCase: The base class of all tests. It is typically extended directly by unit tests.
- ESSingleNodeTestCase: This test case sets up a cluster that has a single node.
- ESIntegTestCase: An integration test case that creates a cluster that might have multiple nodes.
- ESRestTestCase: An integration test that interacts with an external cluster via the REST API. This is used for Java based REST tests.
- ESClientYamlSuiteTestCase: A subclass of ESRestTestCase used to run YAML based REST tests.
Unit tests are the preferred way to test some functionality: most of the time they are simpler to understand, more likely to reproduce, and unlikely to be affected by changes that are unrelated to the piece of functionality that is being tested.
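For illustration, a minimal unit test built directly on ESTestCase might look like the following sketch (the class and assertion are hypothetical; randomIntBetween is one of the randomized-testing helpers ESTestCase provides):
import org.elasticsearch.test.ESTestCase;

public class ExampleUnitTests extends ESTestCase {
    public void testAdditionIsCommutative() {
        // randomness is seeded reproducibly, so failures can be replayed with -Dtests.seed
        int a = randomIntBetween(0, 1000);
        int b = randomIntBetween(0, 1000);
        assertEquals(a + b, b + a);
    }
}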
The reason why ESSingleNodeTestCase exists is that all our components used to be very hard to set up in isolation, which had led us to having a number of integration tests but close to no unit tests. ESSingleNodeTestCase is a workaround for this issue which provides an easy way to spin up a node and get access to components that are hard to instantiate like IndicesService.
Whenever practical, you should prefer unit tests.
Many tests extend ESIntegTestCase, mostly because this is how most tests used to work in the early days of Elasticsearch. However, the complexity of these tests tends to make them hard to debug. Whenever the functionality that is being tested isn’t intimately dependent on how Elasticsearch behaves as a cluster, it is recommended to write unit tests or REST tests instead.
In short, most new functionality should come with unit tests, and optionally REST tests to test integration.
Unfortunately, a large part of our code base is still hard to unit test. Sometimes because some classes have lots of dependencies that make them hard to instantiate. Sometimes because API contracts make tests hard to write. Code refactors that make functionality easier to unit test are encouraged. If this sounds very abstract to you, you can have a look at this pull request for instance, which is a good example. It refactors IndicesRequestCache in such a way that:

- it no longer depends on objects that are hard to instantiate such as IndexShard or SearchContext,
- time-based eviction is applied on top of the cache rather than internally, which makes it easier to assert on what the cache is expected to contain at a given time.
In general, randomization should be used for parameters that are not expected to affect the behavior of the functionality that is being tested. For instance, the number of shards should not impact date_histogram aggregations, and the choice of the store type (niofs vs mmapfs) does not affect the results of a query. Such randomization helps improve confidence that we are not relying on implementation details of one component or specifics of some setup.
However it should not be used for coverage. For instance, if you are testing a piece of functionality that enters different code paths depending on whether the index has 1 shard or 2+ shards, then we shouldn’t just test against an index with a random number of shards: there should be one test for the 1-shard case, and another test for the 2+ shards case.
Generating test coverage reports for Elasticsearch is currently not possible through Gradle. However, it is possible to gain insight in code coverage using IntelliJ’s built-in coverage analysis tool that can measure coverage upon executing specific tests.
Test coverage reporting used to be possible with JaCoCo when Elasticsearch was using Maven as its build system. Since the switch to Gradle though, this is no longer possible, seeing as the code currently used to build Elasticsearch does not allow JaCoCo to recognize its tests. For more information on this, see the discussion in issue #28867.
Additional plugins may be built alongside Elasticsearch, where their dependency on Elasticsearch will be substituted with the local Elasticsearch build. To add your plugin, create a directory called elasticsearch-extra as a sibling of elasticsearch. Check out your plugin underneath elasticsearch-extra and the build will automatically pick it up. You can verify the plugin is included as part of the build by checking the projects of the build:
./gradlew projects
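For instance, the resulting directory layout might look like this (the plugin name is hypothetical):
elasticsearch/            # this repository
elasticsearch-extra/
  my-custom-plugin/       # your plugin checkout, picked up automatically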
There is a known issue with the macOS localhost resolve strategy that can cause some integration tests to fail. This is because integration tests have timings for cluster formation, discovery, etc. that can be exceeded if name resolution takes a long time. To fix this, make sure you have your computer name (as returned by hostname) inside /etc/hosts, e.g.:
127.0.0.1 localhost ElasticMBP.local
255.255.255.255 broadcasthost
::1 localhost ElasticMBP.local
For changes that might affect the performance characteristics of Elasticsearch you should also run macrobenchmarks. We maintain a macrobenchmarking tool called Rally which you can use to measure the performance impact. It comes with a set of default benchmarks that we also run every night. To get started, please see Rally’s documentation.
The Elasticsearch docs are in AsciiDoc format. You can test and build the docs locally using the Elasticsearch documentation build process. See https://github.com/elastic/docs.