diff --git a/docs/painless/painless-debugging.asciidoc b/docs/painless/painless-debugging.asciidoc index 7c4484938b03e..e50090fe71d97 100644 --- a/docs/painless/painless-debugging.asciidoc +++ b/docs/painless/painless-debugging.asciidoc @@ -1,8 +1,6 @@ [[painless-debugging]] === Painless Debugging -experimental[The Painless scripting language is new and is still marked as experimental. The syntax or API may be changed in the future in non-backwards compatible ways if required.] - ==== Debug.Explain Painless doesn't have a diff --git a/docs/painless/painless-getting-started.asciidoc b/docs/painless/painless-getting-started.asciidoc index 7948e90991fe1..98209fc7da9e5 100644 --- a/docs/painless/painless-getting-started.asciidoc +++ b/docs/painless/painless-getting-started.asciidoc @@ -1,8 +1,6 @@ [[painless-getting-started]] == Getting Started with Painless -experimental[The Painless scripting language is new and is still marked as experimental. The syntax or API may be changed in the future in non-backwards compatible ways if required.] - include::painless-description.asciidoc[] [[painless-examples]] diff --git a/docs/painless/painless-syntax.asciidoc b/docs/painless/painless-syntax.asciidoc index 79e830c05d21c..c68ed5168c01b 100644 --- a/docs/painless/painless-syntax.asciidoc +++ b/docs/painless/painless-syntax.asciidoc @@ -1,8 +1,6 @@ [[painless-syntax]] === Painless Syntax -experimental[The Painless scripting language is new and is still marked as experimental. The syntax or API may be changed in the future in non-backwards compatible ways if required.] 
- [float] [[control-flow]] ==== Control flow diff --git a/docs/plugins/analysis-icu.asciidoc b/docs/plugins/analysis-icu.asciidoc index e269c8675c9e4..e9155860bbc5e 100644 --- a/docs/plugins/analysis-icu.asciidoc +++ b/docs/plugins/analysis-icu.asciidoc @@ -113,7 +113,7 @@ PUT icu_sample ===== Rules customization -experimental[] +experimental[This functionality is marked as experimental in Lucene] You can customize the `icu-tokenizer` behavior by specifying per-script rule files, see the http://userguide.icu-project.org/boundaryanalysis#TOC-RBBI-Rules[RBBI rules syntax reference] diff --git a/docs/reference/aggregations/bucket/adjacency-matrix-aggregation.asciidoc b/docs/reference/aggregations/bucket/adjacency-matrix-aggregation.asciidoc index 8e4102683f147..4029b3a2902e0 100644 --- a/docs/reference/aggregations/bucket/adjacency-matrix-aggregation.asciidoc +++ b/docs/reference/aggregations/bucket/adjacency-matrix-aggregation.asciidoc @@ -6,7 +6,7 @@ The request provides a collection of named filter expressions, similar to the `f request. Each bucket in the response represents a non-empty cell in the matrix of intersecting filters. -experimental[The `adjacency_matrix` aggregation is a new feature and we may evolve its design as we get feedback on its use. As a result, the API for this feature may change in non-backwards compatible ways] +beta[The `adjacency_matrix` aggregation is a new feature and we may evolve its design as we get feedback on its use. 
As a result, the API for this feature may change in non-backwards compatible ways] Given filters named `A`, `B` and `C` the response would return buckets with the following names: diff --git a/docs/reference/aggregations/bucket/diversified-sampler-aggregation.asciidoc b/docs/reference/aggregations/bucket/diversified-sampler-aggregation.asciidoc index 8366e79787691..cca87d5e1664c 100644 --- a/docs/reference/aggregations/bucket/diversified-sampler-aggregation.asciidoc +++ b/docs/reference/aggregations/bucket/diversified-sampler-aggregation.asciidoc @@ -1,8 +1,6 @@ [[search-aggregations-bucket-diversified-sampler-aggregation]] === Diversified Sampler Aggregation -experimental[] - Like the `sampler` aggregation this is a filtering aggregation used to limit any sub aggregations' processing to a sample of the top-scoring documents. The `diversified_sampler` aggregation adds the ability to limit the number of matches that share a common value such as an "author". diff --git a/docs/reference/aggregations/bucket/sampler-aggregation.asciidoc b/docs/reference/aggregations/bucket/sampler-aggregation.asciidoc index 4957901920e23..c5ac91e9d3ad8 100644 --- a/docs/reference/aggregations/bucket/sampler-aggregation.asciidoc +++ b/docs/reference/aggregations/bucket/sampler-aggregation.asciidoc @@ -1,8 +1,6 @@ [[search-aggregations-bucket-sampler-aggregation]] === Sampler Aggregation -experimental[] - A filtering aggregation used to limit any sub aggregations' processing to a sample of the top-scoring documents. .Example use cases: diff --git a/docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc b/docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc index ba8bce6736995..6b37eeedb0141 100644 --- a/docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc +++ b/docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc @@ -3,8 +3,6 @@ An aggregation that returns interesting or unusual occurrences of terms in a set. 
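The intersection bucketing that the `adjacency_matrix` aggregation performs can be sketched outside Elasticsearch. The following Python snippet is illustrative only: the documents, account values, and filter predicates are invented, and it computes only single-filter and pairwise-intersection buckets, keeping the non-empty cells as the aggregation does:

```python
from itertools import combinations

# Invented sample documents; each lists the accounts it mentions.
docs = [
    {"accounts": ["hillary", "sidney"]},
    {"accounts": ["hillary", "donald"]},
    {"accounts": ["vladimir", "donald"]},
]

# Named filters, analogous to the aggregation's named filter expressions.
filters = {
    "A": lambda d: "hillary" in d["accounts"],
    "B": lambda d: "donald" in d["accounts"],
    "C": lambda d: "vladimir" in d["accounts"],
}

buckets = {}
# One bucket per filter, keeping only non-empty cells.
for name, pred in filters.items():
    count = sum(1 for d in docs if pred(d))
    if count:
        buckets[name] = count
# One bucket per pair of intersecting filters, named "A&B" and so on.
for (n1, p1), (n2, p2) in combinations(sorted(filters.items()), 2):
    count = sum(1 for d in docs if p1(d) and p2(d))
    if count:
        buckets[f"{n1}&{n2}"] = count
```

Note that the empty `A&C` cell produces no bucket at all, mirroring the "non-empty cell" behaviour described above.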
-experimental[The `significant_terms` aggregation can be very heavy when run on large indices. Work is in progress to provide more lightweight sampling techniques. As a result, the API for this feature may change in non-backwards compatible ways] - .Example use cases: * Suggesting "H5N1" when users search for "bird flu" in text * Identifying the merchant that is the "common point of compromise" from the transaction history of credit card owners reporting loss diff --git a/docs/reference/aggregations/bucket/terms-aggregation.asciidoc b/docs/reference/aggregations/bucket/terms-aggregation.asciidoc index 8b32220255b9e..721b1e9eccaea 100644 --- a/docs/reference/aggregations/bucket/terms-aggregation.asciidoc +++ b/docs/reference/aggregations/bucket/terms-aggregation.asciidoc @@ -197,8 +197,6 @@ could have the 4th highest document count. ==== Per bucket document count error -experimental[] - The second error value can be enabled by setting the `show_term_doc_count_error` parameter to true. This shows an error value for each term returned by the aggregation which represents the 'worst case' error in the document count and can be useful when deciding on a value for the `shard_size` parameter. This is calculated by summing the document counts for the last term returned @@ -728,8 +726,6 @@ collection mode need to replay the query on the second pass but only for the doc [[search-aggregations-bucket-terms-aggregation-execution-hint]] ==== Execution hint -experimental[The automated execution optimization is experimental, so this parameter is provided temporarily as a way to override the default behaviour] - There are different mechanisms by which terms aggregations can be executed: - by using field values directly in order to aggregate data per-bucket (`map`) @@ -767,7 +763,7 @@ in inner aggregations. 
} -------------------------------------------------- -<1> experimental[] the possible values are `map`, `global_ordinals`, `global_ordinals_hash` and `global_ordinals_low_cardinality` +<1> The possible values are `map`, `global_ordinals`, `global_ordinals_hash` and `global_ordinals_low_cardinality` Please note that Elasticsearch will ignore this execution hint if it is not applicable and that there is no backward compatibility guarantee on these hints. diff --git a/docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc b/docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc index 8b5c05900b38f..c861a0ef9b690 100644 --- a/docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc +++ b/docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc @@ -43,8 +43,6 @@ Response: This aggregation also supports the `precision_threshold` option: -experimental[The `precision_threshold` option is specific to the current internal implementation of the `cardinality` agg, which may change in the future] - [source,js] -------------------------------------------------- POST /sales/_search?size=0 diff --git a/docs/reference/aggregations/metrics/percentile-aggregation.asciidoc b/docs/reference/aggregations/metrics/percentile-aggregation.asciidoc index 9c93a4bada72e..dcf65a597ebe7 100644 --- a/docs/reference/aggregations/metrics/percentile-aggregation.asciidoc +++ b/docs/reference/aggregations/metrics/percentile-aggregation.asciidoc @@ -247,8 +247,6 @@ it. It would not be the case on more skewed distributions. [[search-aggregations-metrics-percentile-aggregation-compression]] ==== Compression -experimental[The `compression` parameter is specific to the current internal implementation of percentiles, and may change in the future] - Approximate algorithms must balance memory utilization with estimation accuracy. This balance can be controlled using a `compression` parameter: @@ -287,8 +285,6 @@ the TDigest will use less memory. 
==== HDR Histogram -experimental[] - https://github.com/HdrHistogram/HdrHistogram[HDR Histogram] (High Dynamic Range Histogram) is an alternative implementation that can be useful when calculating percentiles for latency measurements as it can be faster than the t-digest implementation with the trade-off of a larger memory footprint. This implementation maintains a fixed worse-case percentage error (specified diff --git a/docs/reference/aggregations/metrics/percentile-rank-aggregation.asciidoc b/docs/reference/aggregations/metrics/percentile-rank-aggregation.asciidoc index dc5cb8ceeecfe..ac2cfe6d2d8c2 100644 --- a/docs/reference/aggregations/metrics/percentile-rank-aggregation.asciidoc +++ b/docs/reference/aggregations/metrics/percentile-rank-aggregation.asciidoc @@ -159,8 +159,6 @@ This will interpret the `script` parameter as an `inline` script with the `painl ==== HDR Histogram -experimental[] - https://github.com/HdrHistogram/HdrHistogram[HDR Histogram] (High Dynamic Range Histogram) is an alternative implementation that can be useful when calculating percentile ranks for latency measurements as it can be faster than the t-digest implementation with the trade-off of a larger memory footprint. This implementation maintains a fixed worse-case percentage error (specified as a diff --git a/docs/reference/aggregations/pipeline.asciidoc b/docs/reference/aggregations/pipeline.asciidoc index 540a5f0ff290e..aa9d3f00ebe15 100644 --- a/docs/reference/aggregations/pipeline.asciidoc +++ b/docs/reference/aggregations/pipeline.asciidoc @@ -2,8 +2,6 @@ == Pipeline Aggregations -experimental[] - Pipeline aggregations work on the outputs produced from other aggregations rather than from document sets, adding information to the output tree. 
There are many different types of pipeline aggregation, each computing different information from other aggregations, but these types can be broken down into two families: diff --git a/docs/reference/aggregations/pipeline/avg-bucket-aggregation.asciidoc b/docs/reference/aggregations/pipeline/avg-bucket-aggregation.asciidoc index b1b618ee2b7d1..274efcbce62fc 100644 --- a/docs/reference/aggregations/pipeline/avg-bucket-aggregation.asciidoc +++ b/docs/reference/aggregations/pipeline/avg-bucket-aggregation.asciidoc @@ -1,8 +1,6 @@ [[search-aggregations-pipeline-avg-bucket-aggregation]] === Avg Bucket Aggregation -experimental[] - A sibling pipeline aggregation which calculates the (mean) average value of a specified metric in a sibling aggregation. The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation. diff --git a/docs/reference/aggregations/pipeline/bucket-script-aggregation.asciidoc b/docs/reference/aggregations/pipeline/bucket-script-aggregation.asciidoc index aff1f8e6f5425..1825b37f0c734 100644 --- a/docs/reference/aggregations/pipeline/bucket-script-aggregation.asciidoc +++ b/docs/reference/aggregations/pipeline/bucket-script-aggregation.asciidoc @@ -1,8 +1,6 @@ [[search-aggregations-pipeline-bucket-script-aggregation]] === Bucket Script Aggregation -experimental[] - A parent pipeline aggregation which executes a script which can perform per bucket computations on specified metrics in the parent multi-bucket aggregation. The specified metric must be numeric and the script must return a numeric value. 
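The per-bucket computation that a `bucket_script` aggregation performs can be mimicked in plain Python. In this sketch the bucket keys and the metric names (`total_sales`, `tshirt_sales`) are invented purely to show the shape of the computation:

```python
# Invented per-bucket metrics from a parent multi-bucket aggregation.
buckets = [
    {"key": "2017-01", "total_sales": 100.0, "tshirt_sales": 20.0},
    {"key": "2017-02", "total_sales": 50.0, "tshirt_sales": 10.0},
]

# The "script" runs once per bucket and must return a numeric value;
# here it computes the t-shirt share of total sales as a percentage.
for b in buckets:
    b["tshirt_percentage"] = b["tshirt_sales"] / b["total_sales"] * 100
```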
diff --git a/docs/reference/aggregations/pipeline/bucket-selector-aggregation.asciidoc b/docs/reference/aggregations/pipeline/bucket-selector-aggregation.asciidoc index c8b1d1c85978e..1dc44876c5361 100644 --- a/docs/reference/aggregations/pipeline/bucket-selector-aggregation.asciidoc +++ b/docs/reference/aggregations/pipeline/bucket-selector-aggregation.asciidoc @@ -1,8 +1,6 @@ [[search-aggregations-pipeline-bucket-selector-aggregation]] === Bucket Selector Aggregation -experimental[] - A parent pipeline aggregation which executes a script which determines whether the current bucket will be retained in the parent multi-bucket aggregation. The specified metric must be numeric and the script must return a boolean value. If the script language is `expression` then a numeric return value is permitted. In this case 0.0 will be evaluated as `false` diff --git a/docs/reference/aggregations/pipeline/cumulative-sum-aggregation.asciidoc b/docs/reference/aggregations/pipeline/cumulative-sum-aggregation.asciidoc index 816d4551d9d62..748946f8bd671 100644 --- a/docs/reference/aggregations/pipeline/cumulative-sum-aggregation.asciidoc +++ b/docs/reference/aggregations/pipeline/cumulative-sum-aggregation.asciidoc @@ -1,8 +1,6 @@ [[search-aggregations-pipeline-cumulative-sum-aggregation]] === Cumulative Sum Aggregation -experimental[] - A parent pipeline aggregation which calculates the cumulative sum of a specified metric in a parent histogram (or date_histogram) aggregation. The specified metric must be numeric and the enclosing histogram must have `min_doc_count` set to `0` (default for `histogram` aggregations). 
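The running total that a cumulative sum aggregation adds to each histogram bucket can be sketched in a few lines (the monthly values are invented):

```python
# Invented per-bucket metric values from an enclosing histogram aggregation.
monthly_sales = [550.0, 60.0, 375.0]

# Cumulative sum: each bucket reports the running total up to and
# including its own value.
cumulative, total = [], 0.0
for value in monthly_sales:
    total += value
    cumulative.append(total)
# cumulative -> [550.0, 610.0, 985.0]
```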
diff --git a/docs/reference/aggregations/pipeline/derivative-aggregation.asciidoc b/docs/reference/aggregations/pipeline/derivative-aggregation.asciidoc index 0e50465829d04..8479d1f45aea1 100644 --- a/docs/reference/aggregations/pipeline/derivative-aggregation.asciidoc +++ b/docs/reference/aggregations/pipeline/derivative-aggregation.asciidoc @@ -1,8 +1,6 @@ [[search-aggregations-pipeline-derivative-aggregation]] === Derivative Aggregation -experimental[] - A parent pipeline aggregation which calculates the derivative of a specified metric in a parent histogram (or date_histogram) aggregation. The specified metric must be numeric and the enclosing histogram must have `min_doc_count` set to `0` (default for `histogram` aggregations). diff --git a/docs/reference/aggregations/pipeline/extended-stats-bucket-aggregation.asciidoc b/docs/reference/aggregations/pipeline/extended-stats-bucket-aggregation.asciidoc index c6a3bb56765a8..eeef705a6468d 100644 --- a/docs/reference/aggregations/pipeline/extended-stats-bucket-aggregation.asciidoc +++ b/docs/reference/aggregations/pipeline/extended-stats-bucket-aggregation.asciidoc @@ -1,8 +1,6 @@ [[search-aggregations-pipeline-extended-stats-bucket-aggregation]] === Extended Stats Bucket Aggregation -experimental[] - A sibling pipeline aggregation which calculates a variety of stats across all bucket of a specified metric in a sibling aggregation. The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation. 
diff --git a/docs/reference/aggregations/pipeline/max-bucket-aggregation.asciidoc b/docs/reference/aggregations/pipeline/max-bucket-aggregation.asciidoc index 3330cfccb87d7..8881315f50ab4 100644 --- a/docs/reference/aggregations/pipeline/max-bucket-aggregation.asciidoc +++ b/docs/reference/aggregations/pipeline/max-bucket-aggregation.asciidoc @@ -1,8 +1,6 @@ [[search-aggregations-pipeline-max-bucket-aggregation]] === Max Bucket Aggregation -experimental[] - A sibling pipeline aggregation which identifies the bucket(s) with the maximum value of a specified metric in a sibling aggregation and outputs both the value and the key(s) of the bucket(s). The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation. diff --git a/docs/reference/aggregations/pipeline/min-bucket-aggregation.asciidoc b/docs/reference/aggregations/pipeline/min-bucket-aggregation.asciidoc index b170442c0f0e9..ad6aaa28c90dd 100644 --- a/docs/reference/aggregations/pipeline/min-bucket-aggregation.asciidoc +++ b/docs/reference/aggregations/pipeline/min-bucket-aggregation.asciidoc @@ -1,8 +1,6 @@ [[search-aggregations-pipeline-min-bucket-aggregation]] === Min Bucket Aggregation -experimental[] - A sibling pipeline aggregation which identifies the bucket(s) with the minimum value of a specified metric in a sibling aggregation and outputs both the value and the key(s) of the bucket(s). The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation. 
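The max/min bucket behaviour described above — returning both the extreme value and every bucket key that attains it — can be sketched as follows (bucket keys and values are invented):

```python
# Invented buckets from a sibling multi-bucket aggregation.
buckets = {"2017-01": 550.0, "2017-02": 60.0, "2017-03": 375.0}

# max_bucket: the maximum value plus every key that attains it
# (a list, because ties are possible).
max_value = max(buckets.values())
max_keys = [k for k, v in buckets.items() if v == max_value]

# min_bucket works the same way with min().
min_value = min(buckets.values())
min_keys = [k for k, v in buckets.items() if v == min_value]
```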
diff --git a/docs/reference/aggregations/pipeline/movavg-aggregation.asciidoc b/docs/reference/aggregations/pipeline/movavg-aggregation.asciidoc index 58f42d07c2f91..db73510216be0 100644 --- a/docs/reference/aggregations/pipeline/movavg-aggregation.asciidoc +++ b/docs/reference/aggregations/pipeline/movavg-aggregation.asciidoc @@ -1,8 +1,6 @@ [[search-aggregations-pipeline-movavg-aggregation]] === Moving Average Aggregation -experimental[] - Given an ordered series of data, the Moving Average aggregation will slide a window across the data and emit the average value of that window. For example, given the data `[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]`, we can calculate a simple moving average with windows size of `5` as follows: @@ -513,6 +511,8 @@ POST /_search ==== Prediction +experimental[] + All the moving average model support a "prediction" mode, which will attempt to extrapolate into the future given the current smoothed, moving average. Depending on the model and parameter, these predictions may or may not be accurate. diff --git a/docs/reference/aggregations/pipeline/percentiles-bucket-aggregation.asciidoc b/docs/reference/aggregations/pipeline/percentiles-bucket-aggregation.asciidoc index fec2fe41d4f67..6c4329f6f20d0 100644 --- a/docs/reference/aggregations/pipeline/percentiles-bucket-aggregation.asciidoc +++ b/docs/reference/aggregations/pipeline/percentiles-bucket-aggregation.asciidoc @@ -1,8 +1,6 @@ [[search-aggregations-pipeline-percentiles-bucket-aggregation]] === Percentiles Bucket Aggregation -experimental[] - A sibling pipeline aggregation which calculates percentiles across all bucket of a specified metric in a sibling aggregation. The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation. 
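The sliding-window example in the moving average text — the data `[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]` with a window size of `5` — can be reproduced with a short sketch of the simple model:

```python
def simple_moving_average(data, window):
    """Slide a fixed-size window across data and emit each window's mean."""
    return [sum(data[i:i + window]) / window
            for i in range(len(data) - window + 1)]

# The example from the text: window size 5 over [1..10].
values = simple_moving_average([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], window=5)
# First window [1..5] -> 3.0, last window [6..10] -> 8.0
```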
diff --git a/docs/reference/aggregations/pipeline/serial-diff-aggregation.asciidoc b/docs/reference/aggregations/pipeline/serial-diff-aggregation.asciidoc index d23fd4d614778..70aea68f88c34 100644 --- a/docs/reference/aggregations/pipeline/serial-diff-aggregation.asciidoc +++ b/docs/reference/aggregations/pipeline/serial-diff-aggregation.asciidoc @@ -1,8 +1,6 @@ [[search-aggregations-pipeline-serialdiff-aggregation]] === Serial Differencing Aggregation -experimental[] - Serial differencing is a technique where values in a time series are subtracted from itself at different time lags or periods. For example, the datapoint f(x) = f(x~t~) - f(x~t-n~), where n is the period being used. diff --git a/docs/reference/aggregations/pipeline/stats-bucket-aggregation.asciidoc b/docs/reference/aggregations/pipeline/stats-bucket-aggregation.asciidoc index b4131ef494441..b9c52ae981f75 100644 --- a/docs/reference/aggregations/pipeline/stats-bucket-aggregation.asciidoc +++ b/docs/reference/aggregations/pipeline/stats-bucket-aggregation.asciidoc @@ -1,8 +1,6 @@ [[search-aggregations-pipeline-stats-bucket-aggregation]] === Stats Bucket Aggregation -experimental[] - A sibling pipeline aggregation which calculates a variety of stats across all bucket of a specified metric in a sibling aggregation. The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation. diff --git a/docs/reference/aggregations/pipeline/sum-bucket-aggregation.asciidoc b/docs/reference/aggregations/pipeline/sum-bucket-aggregation.asciidoc index b99ff6569eeda..b39cf472323c2 100644 --- a/docs/reference/aggregations/pipeline/sum-bucket-aggregation.asciidoc +++ b/docs/reference/aggregations/pipeline/sum-bucket-aggregation.asciidoc @@ -1,8 +1,6 @@ [[search-aggregations-pipeline-sum-bucket-aggregation]] === Sum Bucket Aggregation -experimental[] - A sibling pipeline aggregation which calculates the sum across all bucket of a specified metric in a sibling aggregation. 
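The lagged subtraction f(x~t~) - f(x~t-n~) that serial differencing performs can be sketched directly (the series values below are invented):

```python
def serial_diff(series, lag):
    """f'(x_t) = f(x_t) - f(x_{t-lag}).

    The first `lag` points have no earlier value to subtract from,
    so they produce no output.
    """
    return [series[i] - series[i - lag] for i in range(lag, len(series))]

# Invented series; a lag of 1 yields the first difference.
deltas = serial_diff([10, 12, 15, 11, 18], lag=1)
# deltas -> [2, 3, -4, 7]
```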
The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation. diff --git a/docs/reference/analysis/normalizers.asciidoc b/docs/reference/analysis/normalizers.asciidoc index 313f26b7c9515..4f2b08e6a6174 100644 --- a/docs/reference/analysis/normalizers.asciidoc +++ b/docs/reference/analysis/normalizers.asciidoc @@ -1,7 +1,7 @@ [[analysis-normalizers]] == Normalizers -experimental[] +beta[] Normalizers are similar to analyzers except that they may only emit a single token. As a consequence, they do not have a tokenizer and only accept a subset diff --git a/docs/reference/analysis/tokenfilters/flatten-graph-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/flatten-graph-tokenfilter.asciidoc index 90b6136898306..1495e8a91b2a7 100644 --- a/docs/reference/analysis/tokenfilters/flatten-graph-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/flatten-graph-tokenfilter.asciidoc @@ -1,7 +1,7 @@ [[analysis-flatten-graph-tokenfilter]] === Flatten Graph Token Filter -experimental[] +experimental[This functionality is marked as experimental in Lucene] The `flatten_graph` token filter accepts an arbitrary graph token stream, such as that produced by diff --git a/docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc index e1f77332fd471..13cf0e46f6860 100644 --- a/docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc @@ -1,7 +1,7 @@ [[analysis-synonym-graph-tokenfilter]] === Synonym Graph Token Filter -experimental[] +experimental[This functionality is marked as experimental in Lucene] The `synonym_graph` token filter allows to easily handle synonyms, including multi-word synonyms correctly during the analysis process. 
diff --git a/docs/reference/analysis/tokenfilters/word-delimiter-graph-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/word-delimiter-graph-tokenfilter.asciidoc index c221075b49f1f..183d587090b96 100644 --- a/docs/reference/analysis/tokenfilters/word-delimiter-graph-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/word-delimiter-graph-tokenfilter.asciidoc @@ -1,7 +1,7 @@ [[analysis-word-delimiter-graph-tokenfilter]] === Word Delimiter Graph Token Filter -experimental[] +experimental[This functionality is marked as experimental in Lucene] Named `word_delimiter_graph`, it splits words into subwords and performs optional transformations on subword groups. Words are split into diff --git a/docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc index 3f235fa635833..adc5fc05deeb9 100644 --- a/docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc +++ b/docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc @@ -1,7 +1,7 @@ [[analysis-simplepattern-tokenizer]] === Simple Pattern Tokenizer -experimental[] +experimental[This functionality is marked as experimental in Lucene] The `simple_pattern` tokenizer uses a regular expression to capture matching text as terms. 
The set of regular expression features it supports is more diff --git a/docs/reference/analysis/tokenizers/simplepatternsplit-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/simplepatternsplit-tokenizer.asciidoc index 59b77936cb956..fc2e186f97267 100644 --- a/docs/reference/analysis/tokenizers/simplepatternsplit-tokenizer.asciidoc +++ b/docs/reference/analysis/tokenizers/simplepatternsplit-tokenizer.asciidoc @@ -1,7 +1,7 @@ [[analysis-simplepatternsplit-tokenizer]] === Simple Pattern Split Tokenizer -experimental[] +experimental[This functionality is marked as experimental in Lucene] The `simple_pattern_split` tokenizer uses a regular expression to split the input into terms at pattern matches. The set of regular expression features it diff --git a/docs/reference/cluster/tasks.asciidoc b/docs/reference/cluster/tasks.asciidoc index f1f9a66931285..f0a5b4f8eb9a8 100644 --- a/docs/reference/cluster/tasks.asciidoc +++ b/docs/reference/cluster/tasks.asciidoc @@ -1,7 +1,7 @@ [[tasks]] == Task Management API -experimental[The Task Management API is new and should still be considered experimental. The API may change in ways that are not backwards compatible] +beta[The Task Management API is new and should still be considered a beta feature. The API may change in ways that are not backwards compatible] [float] === Current Tasks Information diff --git a/docs/reference/index-modules.asciidoc b/docs/reference/index-modules.asciidoc index ed91307e2c95e..0ec0724359a50 100644 --- a/docs/reference/index-modules.asciidoc +++ b/docs/reference/index-modules.asciidoc @@ -47,7 +47,7 @@ specific index module: `index.shard.check_on_startup`:: + -- -experimental[] Whether or not shards should be checked for corruption before opening. When +Whether or not shards should be checked for corruption before opening. When corruption is detected, it will prevent the shard from being opened. 
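The difference between the two tokenizers — `simple_pattern` captures the matches themselves as terms, while `simple_pattern_split` splits the input at the matches — can be illustrated with Python's `re` module. The patterns and input below are invented examples, not the tokenizers' defaults:

```python
import re

text = "fox jumped over the lazy dog"

# simple_pattern-style: the matches themselves become the terms
# (here, runs of four or more lowercase letters).
captured = re.findall(r"[a-z]{4,}", text)
# captured -> ['jumped', 'over', 'lazy']

# simple_pattern_split-style: the input is split wherever the
# pattern matches (here, at runs of whitespace).
split = re.split(r"\s+", text)
# split -> ['fox', 'jumped', 'over', 'the', 'lazy', 'dog']
```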
Accepts: `false`:: @@ -69,7 +69,7 @@ corruption is detected, it will prevent the shard from being opened. Accepts: as corrupted will be automatically removed. This option *may result in data loss*. Use with extreme caution! -Checking shards may take a lot of time on large indices. +WARNING: Expert only. Checking shards may take a lot of time on large indices. -- [[index-codec]] `index.codec`:: diff --git a/docs/reference/index-modules/index-sorting.asciidoc b/docs/reference/index-modules/index-sorting.asciidoc index 018775413fa8c..9dfe3b9eeea29 100644 --- a/docs/reference/index-modules/index-sorting.asciidoc +++ b/docs/reference/index-modules/index-sorting.asciidoc @@ -1,7 +1,7 @@ [[index-modules-index-sorting]] == Index Sorting -experimental[] +beta[] When creating a new index in elasticsearch it is possible to configure how the Segments inside each Shard will be sorted. By default Lucene does not apply any sort. diff --git a/docs/reference/index-modules/store.asciidoc b/docs/reference/index-modules/store.asciidoc index e33378ddd70be..8b7a0c614f5ec 100644 --- a/docs/reference/index-modules/store.asciidoc +++ b/docs/reference/index-modules/store.asciidoc @@ -32,7 +32,7 @@ PUT /my_index } --------------------------------- -experimental[This is an expert-only setting and may be removed in the future] +WARNING: This is an expert-only setting and may be removed in the future. The following sections lists all the different storage types supported. @@ -73,8 +73,6 @@ compatibility. === Pre-loading data into the file system cache -experimental[This is an expert-only setting and may be removed in the future] - By default, elasticsearch completely relies on the operating system file system cache for caching I/O operations. 
It is possible to set `index.store.preload` in order to tell the operating system to load the content of hot index diff --git a/docs/reference/indices/analyze.asciidoc b/docs/reference/indices/analyze.asciidoc index e29a5b2432a54..6ba34bb1dabd3 100644 --- a/docs/reference/indices/analyze.asciidoc +++ b/docs/reference/indices/analyze.asciidoc @@ -144,7 +144,7 @@ GET _analyze If you want to get more advanced details, set `explain` to `true` (defaults to `false`). It will output all token attributes for each token. You can filter token attributes you want to output by setting `attributes` option. -experimental[The format of the additional detail information is experimental and can change at any time] +experimental[The format of the additional detail information is experimental as it depends on the output returned by Lucene] [source,js] -------------------------------------------------- diff --git a/docs/reference/indices/segments.asciidoc b/docs/reference/indices/segments.asciidoc index 7dfcd2ba42a68..7e556a5fe1399 100644 --- a/docs/reference/indices/segments.asciidoc +++ b/docs/reference/indices/segments.asciidoc @@ -79,7 +79,7 @@ compound:: Whether the segment is stored in a compound file. When true, this To add additional information that can be used for debugging, use the `verbose` flag. -experimental[The format of the additional verbose information is experimental and can change at any time] +experimental[The format of the additional detail information is experimental as it depends on the output returned by Lucene] [source,js] -------------------------------------------------- diff --git a/docs/reference/ingest/ingest-node.asciidoc b/docs/reference/ingest/ingest-node.asciidoc index 68edbc431dbe0..c24542a1bc030 100644 --- a/docs/reference/ingest/ingest-node.asciidoc +++ b/docs/reference/ingest/ingest-node.asciidoc @@ -1010,11 +1010,6 @@ to the requester. 
[[foreach-processor]] === Foreach Processor -experimental[This processor may change or be replaced by something else that provides similar functionality. This -processor executes in its own context, which makes it different compared to all other processors and for features like -verbose simulation the subprocessor isn't visible. The reason we still expose this processor, is that it is the only -processor that can operate on an array] - Processes elements in an array of unknown length. All processors can operate on elements inside an array, but if all elements of an array need to diff --git a/docs/reference/mapping/types/keyword.asciidoc b/docs/reference/mapping/types/keyword.asciidoc index 1bb4254a86fef..821fb0557442f 100644 --- a/docs/reference/mapping/types/keyword.asciidoc +++ b/docs/reference/mapping/types/keyword.asciidoc @@ -98,7 +98,6 @@ The following parameters are accepted by `keyword` fields: <>:: - experimental[] How to pre-process the keyword prior to indexing. Defaults to `null`, meaning the keyword is kept as-is. diff --git a/docs/reference/modules/scripting/painless.asciidoc b/docs/reference/modules/scripting/painless.asciidoc index 0993701033b4b..ac48aad73d28f 100644 --- a/docs/reference/modules/scripting/painless.asciidoc +++ b/docs/reference/modules/scripting/painless.asciidoc @@ -1,8 +1,6 @@ [[modules-scripting-painless]] === Painless Scripting Language -experimental[The Painless scripting language is new and is still marked as experimental. The syntax or API may be changed in the future in non-backwards compatible ways if required.] - include::../../../painless/painless-description.asciidoc[] Ready to start scripting with Painless? 
See {painless}/painless-getting-started.html[Getting Started with Painless] in the guide to the diff --git a/docs/reference/search/field-caps.asciidoc b/docs/reference/search/field-caps.asciidoc index 42211342682d4..8329d96131dff 100644 --- a/docs/reference/search/field-caps.asciidoc +++ b/docs/reference/search/field-caps.asciidoc @@ -1,8 +1,6 @@ [[search-field-caps]] == Field Capabilities API -experimental[] - The field capabilities API allows to retrieve the capabilities of fields among multiple indices. The field capabilities api by default executes on all indices: diff --git a/docs/reference/search/profile.asciidoc b/docs/reference/search/profile.asciidoc index 5bef95cf931b5..db72026aa1412 100644 --- a/docs/reference/search/profile.asciidoc +++ b/docs/reference/search/profile.asciidoc @@ -1,7 +1,7 @@ [[search-profile]] == Profile API -experimental[] +WARNING: The Profile API is a debugging tool and adds significant overhead to search execution. The Profile API provides detailed timing information about the execution of individual components in a search request. It gives the user insight into how search requests are executed at a low level so that