Update experimental labels in the docs
Relates #19798

Removed experimental label from:
* Painless
* Diversified Sampler Agg
* Sampler Agg
* Significant Terms Agg
* Terms Agg document count error and execution_hint
* Cardinality Agg precision_threshold
* Percentile Agg compression and HDR Histogram
* Percentile Rank Agg HDR Histogram
* Pipeline Aggregations
* index.shard.check_on_startup
* index.store.type (added warning)
* Preloading data into the file system cache
* foreach ingest processor
* Field caps API
* Profile API

Added experimental label to:
* Moving Average Agg Prediction


Changed experimental to beta for:
* Adjacency matrix agg
* Normalizers
* Tasks API
* Index sorting

Labelled experimental in Lucene:
* ICU plugin custom rules file
* Flatten graph token filter
* Synonym graph token filter
* Word delimiter graph token filter
* Simple pattern tokenizer
* Simple pattern split tokenizer
* Analysis explain output format
* Segments verbose output format
clintongormley committed Jul 14, 2017
1 parent 8f0b357 commit 447de98
Showing 43 changed files with 19 additions and 81 deletions.
2 changes: 0 additions & 2 deletions docs/painless/painless-debugging.asciidoc
@@ -1,8 +1,6 @@
 [[painless-debugging]]
 === Painless Debugging
 
-experimental[The Painless scripting language is new and is still marked as experimental. The syntax or API may be changed in the future in non-backwards compatible ways if required.]
-
 ==== Debug.Explain
 
 Painless doesn't have a
2 changes: 0 additions & 2 deletions docs/painless/painless-getting-started.asciidoc
@@ -1,8 +1,6 @@
 [[painless-getting-started]]
 == Getting Started with Painless
 
-experimental[The Painless scripting language is new and is still marked as experimental. The syntax or API may be changed in the future in non-backwards compatible ways if required.]
-
 include::painless-description.asciidoc[]
 
 [[painless-examples]]
2 changes: 0 additions & 2 deletions docs/painless/painless-syntax.asciidoc
@@ -1,8 +1,6 @@
 [[painless-syntax]]
 === Painless Syntax
 
-experimental[The Painless scripting language is new and is still marked as experimental. The syntax or API may be changed in the future in non-backwards compatible ways if required.]
-
 [float]
 [[control-flow]]
 ==== Control flow
2 changes: 1 addition & 1 deletion docs/plugins/analysis-icu.asciidoc
@@ -113,7 +113,7 @@ PUT icu_sample
 
 ===== Rules customization
 
-experimental[]
+experimental[This functionality is marked as experimental in Lucene]
 
 You can customize the `icu-tokenizer` behavior by specifying per-script rule files, see the
 http://userguide.icu-project.org/boundaryanalysis#TOC-RBBI-Rules[RBBI rules syntax reference]
@@ -6,7 +6,7 @@ The request provides a collection of named filter expressions, similar to the `f
 request.
 Each bucket in the response represents a non-empty cell in the matrix of intersecting filters.
 
-experimental[The `adjacency_matrix` aggregation is a new feature and we may evolve its design as we get feedback on its use. As a result, the API for this feature may change in non-backwards compatible ways]
+beta[The `adjacency_matrix` aggregation is a new feature and we may evolve its design as we get feedback on its use. As a result, the API for this feature may change in non-backwards compatible ways]
 
 
 Given filters named `A`, `B` and `C` the response would return buckets with the following names:
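For context, a minimal `adjacency_matrix` request might look like the following sketch (the `emails` index, the `interactions` aggregation name, and the `accounts` field are hypothetical):

[source,js]
--------------------------------------------------
GET emails/_search
{
  "size": 0,
  "aggs": {
    "interactions": {
      "adjacency_matrix": {
        "filters": {
          "grpA": { "terms": { "accounts": ["hillary", "sidney"] } },
          "grpB": { "terms": { "accounts": ["donald", "mitt"] } },
          "grpC": { "terms": { "accounts": ["vladimir", "boris"] } }
        }
      }
    }
  }
}
--------------------------------------------------

Buckets come back for each named filter and for each non-empty intersection, e.g. `grpA` and `grpA&grpB`.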
@@ -1,8 +1,6 @@
 [[search-aggregations-bucket-diversified-sampler-aggregation]]
 === Diversified Sampler Aggregation
 
-experimental[]
-
 Like the `sampler` aggregation this is a filtering aggregation used to limit any sub aggregations' processing to a sample of the top-scoring documents.
 The `diversified_sampler` aggregation adds the ability to limit the number of matches that share a common value such as an "author".
 
@@ -1,8 +1,6 @@
 [[search-aggregations-bucket-sampler-aggregation]]
 === Sampler Aggregation
 
-experimental[]
-
 A filtering aggregation used to limit any sub aggregations' processing to a sample of the top-scoring documents.
 
 .Example use cases:
@@ -3,8 +3,6 @@
 
 An aggregation that returns interesting or unusual occurrences of terms in a set.
 
-experimental[The `significant_terms` aggregation can be very heavy when run on large indices. Work is in progress to provide more lightweight sampling techniques. As a result, the API for this feature may change in non-backwards compatible ways]
-
 .Example use cases:
 * Suggesting "H5N1" when users search for "bird flu" in text
 * Identifying the merchant that is the "common point of compromise" from the transaction history of credit card owners reporting loss
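A minimal `significant_terms` request, for context (the index and the `force`/`crime_type` fields are hypothetical):

[source,js]
--------------------------------------------------
GET crime_reports/_search
{
  "query": { "terms": { "force": ["British Transport Police"] } },
  "aggregations": {
    "significant_crime_types": {
      "significant_terms": { "field": "crime_type" }
    }
  }
}
--------------------------------------------------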
@@ -197,8 +197,6 @@ could have the 4th highest document count.
 
 ==== Per bucket document count error
 
-experimental[]
-
 The second error value can be enabled by setting the `show_term_doc_count_error` parameter to true. This shows an error value
 for each term returned by the aggregation which represents the 'worst case' error in the document count and can be useful when
 deciding on a value for the `shard_size` parameter. This is calculated by summing the document counts for the last term returned
@@ -728,8 +726,6 @@ collection mode need to replay the query on the second pass but only for the doc
 [[search-aggregations-bucket-terms-aggregation-execution-hint]]
 ==== Execution hint
 
-experimental[The automated execution optimization is experimental, so this parameter is provided temporarily as a way to override the default behaviour]
-
 There are different mechanisms by which terms aggregations can be executed:
 
 - by using field values directly in order to aggregate data per-bucket (`map`)
@@ -767,7 +763,7 @@ in inner aggregations.
 }
 --------------------------------------------------
 
-<1> experimental[] the possible values are `map`, `global_ordinals`, `global_ordinals_hash` and `global_ordinals_low_cardinality`
+<1> The possible values are `map`, `global_ordinals`, `global_ordinals_hash` and `global_ordinals_low_cardinality`
 
 Please note that Elasticsearch will ignore this execution hint if it is not applicable and that there is no backward compatibility guarantee on these hints.
 
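A sketch combining the two `terms` parameters touched in this file (the `tags` field is hypothetical):

[source,js]
--------------------------------------------------
GET /_search
{
  "size": 0,
  "aggs": {
    "tags": {
      "terms": {
        "field": "tags",
        "size": 5,
        "show_term_doc_count_error": true,
        "execution_hint": "map"
      }
    }
  }
}
--------------------------------------------------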
@@ -43,8 +43,6 @@ Response:
 
 This aggregation also supports the `precision_threshold` option:
 
-experimental[The `precision_threshold` option is specific to the current internal implementation of the `cardinality` agg, which may change in the future]
-
 [source,js]
 --------------------------------------------------
 POST /sales/_search?size=0
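The snippet above is cut off by the diff view; a request of this shape presumably continues along these lines (a sketch only, the exact body lives in the docs source):

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
  "aggs": {
    "type_count": {
      "cardinality": {
        "field": "type",
        "precision_threshold": 100
      }
    }
  }
}
--------------------------------------------------

Counts below the threshold are expected to be close to exact; above it, they become approximate.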
@@ -247,8 +247,6 @@ it. It would not be the case on more skewed distributions.
 [[search-aggregations-metrics-percentile-aggregation-compression]]
 ==== Compression
 
-experimental[The `compression` parameter is specific to the current internal implementation of percentiles, and may change in the future]
-
 Approximate algorithms must balance memory utilization with estimation accuracy.
 This balance can be controlled using a `compression` parameter:
 
@@ -287,8 +285,6 @@ the TDigest will use less memory.
 
 ==== HDR Histogram
 
-experimental[]
-
 https://github.com/HdrHistogram/HdrHistogram[HDR Histogram] (High Dynamic Range Histogram) is an alternative implementation
 that can be useful when calculating percentiles for latency measurements as it can be faster than the t-digest implementation
 with the trade-off of a larger memory footprint. This implementation maintains a fixed worse-case percentage error (specified
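A sketch of opting into the HDR implementation (the `latency` index and `load_time` field are hypothetical):

[source,js]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_outlier": {
      "percentiles": {
        "field": "load_time",
        "percents": [95, 99, 99.9],
        "hdr": {
          "number_of_significant_value_digits": 3
        }
      }
    }
  }
}
--------------------------------------------------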
@@ -159,8 +159,6 @@ This will interpret the `script` parameter as an `inline` script with the `painl
 
 ==== HDR Histogram
 
-experimental[]
-
 https://github.com/HdrHistogram/HdrHistogram[HDR Histogram] (High Dynamic Range Histogram) is an alternative implementation
 that can be useful when calculating percentile ranks for latency measurements as it can be faster than the t-digest implementation
 with the trade-off of a larger memory footprint. This implementation maintains a fixed worse-case percentage error (specified as a
2 changes: 0 additions & 2 deletions docs/reference/aggregations/pipeline.asciidoc
@@ -2,8 +2,6 @@
 
 == Pipeline Aggregations
 
-experimental[]
-
 Pipeline aggregations work on the outputs produced from other aggregations rather than from document sets, adding
 information to the output tree. There are many different types of pipeline aggregation, each computing different information from
 other aggregations, but these types can be broken down into two families:
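For context, a typical parent pipeline aggregation, sketched with hypothetical index and field names:

[source,js]
--------------------------------------------------
POST /sales/_search
{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "date_histogram": { "field": "date", "interval": "month" },
      "aggs": {
        "sales": { "sum": { "field": "price" } },
        "sales_deriv": { "derivative": { "buckets_path": "sales" } }
      }
    }
  }
}
--------------------------------------------------

The `derivative` reads the sibling `sales` metric via `buckets_path` rather than touching documents directly, which is what distinguishes pipeline aggregations from the bucket and metric families.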
@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-avg-bucket-aggregation]]
 === Avg Bucket Aggregation
 
-experimental[]
-
 A sibling pipeline aggregation which calculates the (mean) average value of a specified metric in a sibling aggregation.
 The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.
 
@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-bucket-script-aggregation]]
 === Bucket Script Aggregation
 
-experimental[]
-
 A parent pipeline aggregation which executes a script which can perform per bucket computations on specified metrics
 in the parent multi-bucket aggregation. The specified metric must be numeric and the script must return a numeric value.
 
@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-bucket-selector-aggregation]]
 === Bucket Selector Aggregation
 
-experimental[]
-
 A parent pipeline aggregation which executes a script which determines whether the current bucket will be retained
 in the parent multi-bucket aggregation. The specified metric must be numeric and the script must return a boolean value.
 If the script language is `expression` then a numeric return value is permitted. In this case 0.0 will be evaluated as `false`
@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-cumulative-sum-aggregation]]
 === Cumulative Sum Aggregation
 
-experimental[]
-
 A parent pipeline aggregation which calculates the cumulative sum of a specified metric in a parent histogram (or date_histogram)
 aggregation. The specified metric must be numeric and the enclosing histogram must have `min_doc_count` set to `0` (default
 for `histogram` aggregations).
@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-derivative-aggregation]]
 === Derivative Aggregation
 
-experimental[]
-
 A parent pipeline aggregation which calculates the derivative of a specified metric in a parent histogram (or date_histogram)
 aggregation. The specified metric must be numeric and the enclosing histogram must have `min_doc_count` set to `0` (default
 for `histogram` aggregations).
@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-extended-stats-bucket-aggregation]]
 === Extended Stats Bucket Aggregation
 
-experimental[]
-
 A sibling pipeline aggregation which calculates a variety of stats across all bucket of a specified metric in a sibling aggregation.
 The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.
 
@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-max-bucket-aggregation]]
 === Max Bucket Aggregation
 
-experimental[]
-
 A sibling pipeline aggregation which identifies the bucket(s) with the maximum value of a specified metric in a sibling aggregation
 and outputs both the value and the key(s) of the bucket(s). The specified metric must be numeric and the sibling aggregation must
 be a multi-bucket aggregation.
@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-min-bucket-aggregation]]
 === Min Bucket Aggregation
 
-experimental[]
-
 A sibling pipeline aggregation which identifies the bucket(s) with the minimum value of a specified metric in a sibling aggregation
 and outputs both the value and the key(s) of the bucket(s). The specified metric must be numeric and the sibling aggregation must
 be a multi-bucket aggregation.
@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-movavg-aggregation]]
 === Moving Average Aggregation
 
-experimental[]
-
 Given an ordered series of data, the Moving Average aggregation will slide a window across the data and emit the average
 value of that window. For example, given the data `[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]`, we can calculate a simple moving
 average with windows size of `5` as follows:
@@ -513,6 +511,8 @@ POST /_search
 
 ==== Prediction
 
+experimental[]
+
 All the moving average model support a "prediction" mode, which will attempt to extrapolate into the future given the
 current smoothed, moving average. Depending on the model and parameter, these predictions may or may not be accurate.
 
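A sketch of the (now experimental) prediction mode, with hypothetical field and aggregation names — `predict` asks the model to extrapolate the given number of buckets past the end of the series:

[source,js]
--------------------------------------------------
POST /_search
{
  "size": 0,
  "aggs": {
    "my_date_histo": {
      "date_histogram": { "field": "timestamp", "interval": "day" },
      "aggs": {
        "the_sum": { "sum": { "field": "price" } },
        "the_movavg": {
          "moving_avg": {
            "buckets_path": "the_sum",
            "window": 30,
            "model": "simple",
            "predict": 10
          }
        }
      }
    }
  }
}
--------------------------------------------------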
@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-percentiles-bucket-aggregation]]
 === Percentiles Bucket Aggregation
 
-experimental[]
-
 A sibling pipeline aggregation which calculates percentiles across all bucket of a specified metric in a sibling aggregation.
 The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.
 
@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-serialdiff-aggregation]]
 === Serial Differencing Aggregation
 
-experimental[]
-
 Serial differencing is a technique where values in a time series are subtracted from itself at
 different time lags or periods. For example, the datapoint f(x) = f(x~t~) - f(x~t-n~), where n is the period being used.
 
@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-stats-bucket-aggregation]]
 === Stats Bucket Aggregation
 
-experimental[]
-
 A sibling pipeline aggregation which calculates a variety of stats across all bucket of a specified metric in a sibling aggregation.
 The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.
 
@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-sum-bucket-aggregation]]
 === Sum Bucket Aggregation
 
-experimental[]
-
 A sibling pipeline aggregation which calculates the sum across all bucket of a specified metric in a sibling aggregation.
 The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.
 
2 changes: 1 addition & 1 deletion docs/reference/analysis/normalizers.asciidoc
@@ -1,7 +1,7 @@
 [[analysis-normalizers]]
 == Normalizers
 
-experimental[]
+beta[]
 
 Normalizers are similar to analyzers except that they may only emit a single
 token. As a consequence, they do not have a tokenizer and only accept a subset
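A sketch of defining a custom normalizer and attaching it to a `keyword` field (index, type, field, and normalizer names are hypothetical):

[source,js]
--------------------------------------------------
PUT index
{
  "settings": {
    "analysis": {
      "normalizer": {
        "my_normalizer": {
          "type": "custom",
          "filter": ["lowercase", "asciifolding"]
        }
      }
    }
  },
  "mappings": {
    "doc": {
      "properties": {
        "foo": {
          "type": "keyword",
          "normalizer": "my_normalizer"
        }
      }
    }
  }
}
--------------------------------------------------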
@@ -1,7 +1,7 @@
 [[analysis-flatten-graph-tokenfilter]]
 === Flatten Graph Token Filter
 
-experimental[]
+experimental[This functionality is marked as experimental in Lucene]
 
 The `flatten_graph` token filter accepts an arbitrary graph token
 stream, such as that produced by
@@ -1,7 +1,7 @@
 [[analysis-synonym-graph-tokenfilter]]
 === Synonym Graph Token Filter
 
-experimental[]
+experimental[This functionality is marked as experimental in Lucene]
 
 The `synonym_graph` token filter allows to easily handle synonyms,
 including multi-word synonyms correctly during the analysis process.
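For context, a sketch of wiring a `synonym_graph` filter into a custom analyzer (index, filter, and analyzer names hypothetical; this filter is meant for search-time analysis):

[source,js]
--------------------------------------------------
PUT /test_index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_graph_synonyms": {
          "type": "synonym_graph",
          "synonyms": [
            "ny, new york",
            "wtc, world trade center"
          ]
        }
      },
      "analyzer": {
        "my_search_analyzer": {
          "tokenizer": "standard",
          "filter": ["lowercase", "my_graph_synonyms"]
        }
      }
    }
  }
}
--------------------------------------------------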
@@ -1,7 +1,7 @@
 [[analysis-word-delimiter-graph-tokenfilter]]
 === Word Delimiter Graph Token Filter
 
-experimental[]
+experimental[This functionality is marked as experimental in Lucene]
 
 Named `word_delimiter_graph`, it splits words into subwords and performs
 optional transformations on subword groups. Words are split into
@@ -1,7 +1,7 @@
 [[analysis-simplepattern-tokenizer]]
 === Simple Pattern Tokenizer
 
-experimental[]
+experimental[This functionality is marked as experimental in Lucene]
 
 The `simple_pattern` tokenizer uses a regular expression to capture matching
 text as terms. The set of regular expression features it supports is more
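A minimal `simple_pattern` setup, for context (index, analyzer, and tokenizer names hypothetical) — here the tokenizer captures runs of three digits as terms:

[source,js]
--------------------------------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": { "tokenizer": "my_tokenizer" }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "simple_pattern",
          "pattern": "[0123456789]{3}"
        }
      }
    }
  }
}
--------------------------------------------------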
@@ -1,7 +1,7 @@
 [[analysis-simplepatternsplit-tokenizer]]
 === Simple Pattern Split Tokenizer
 
-experimental[]
+experimental[This functionality is marked as experimental in Lucene]
 
 The `simple_pattern_split` tokenizer uses a regular expression to split the
 input into terms at pattern matches. The set of regular expression features it
2 changes: 1 addition & 1 deletion docs/reference/cluster/tasks.asciidoc
@@ -1,7 +1,7 @@
 [[tasks]]
 == Task Management API
 
-experimental[The Task Management API is new and should still be considered experimental. The API may change in ways that are not backwards compatible]
+beta[The Task Management API is new and should still be considered a beta feature. The API may change in ways that are not backwards compatible]
 
 [float]
 === Current Tasks Information
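For context, the basic shape of task listing requests (the node IDs are placeholders):

[source,js]
--------------------------------------------------
GET _tasks
GET _tasks?nodes=nodeId1,nodeId2
GET _tasks?nodes=nodeId1,nodeId2&actions=cluster:*
--------------------------------------------------

The first form lists all currently running tasks on all nodes; the later forms narrow the listing to particular nodes and action patterns.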
4 changes: 2 additions & 2 deletions docs/reference/index-modules.asciidoc
@@ -47,7 +47,7 @@ specific index module:
 `index.shard.check_on_startup`::
 +
 --
-experimental[] Whether or not shards should be checked for corruption before opening. When
+Whether or not shards should be checked for corruption before opening. When
 corruption is detected, it will prevent the shard from being opened. Accepts:
 
 `false`::
@@ -69,7 +69,7 @@ corruption is detected, it will prevent the shard from being opened. Accepts:
 as corrupted will be automatically removed. This option *may result in data loss*.
 Use with extreme caution!
 
-Checking shards may take a lot of time on large indices.
+WARNING: Expert only. Checking shards may take a lot of time on large indices.
 --
 
 [[index-codec]] `index.codec`::
2 changes: 1 addition & 1 deletion docs/reference/index-modules/index-sorting.asciidoc
@@ -1,7 +1,7 @@
 [[index-modules-index-sorting]]
 == Index Sorting
 
-experimental[]
+beta[]
 
 When creating a new index in elasticsearch it is possible to configure how the Segments
 inside each Shard will be sorted. By default Lucene does not apply any sort.
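A sketch of configuring an index sort at creation time (index, type, and field names hypothetical):

[source,js]
--------------------------------------------------
PUT twitter
{
  "settings": {
    "index": {
      "sort.field": "date",
      "sort.order": "desc"
    }
  },
  "mappings": {
    "tweet": {
      "properties": {
        "date": { "type": "date" }
      }
    }
  }
}
--------------------------------------------------

The sort must be defined when the index is created; it cannot be changed on an existing index.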
4 changes: 1 addition & 3 deletions docs/reference/index-modules/store.asciidoc
@@ -32,7 +32,7 @@ PUT /my_index
 }
 ---------------------------------
 
-experimental[This is an expert-only setting and may be removed in the future]
+WARNING: This is an expert-only setting and may be removed in the future.
 
 The following sections lists all the different storage types supported.
 
@@ -73,8 +73,6 @@ compatibility.
 
 === Pre-loading data into the file system cache
 
-experimental[This is an expert-only setting and may be removed in the future]
-
 By default, elasticsearch completely relies on the operating system file system
 cache for caching I/O operations. It is possible to set `index.store.preload`
 in order to tell the operating system to load the content of hot index
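A sketch of the preload setting (the index name is hypothetical) — the listed file extensions ask the OS to eagerly load norms (`nvd`) and doc values (`dvd`) files into the file system cache:

[source,js]
--------------------------------------------------
PUT /my_index
{
  "settings": {
    "index.store.preload": ["nvd", "dvd"]
  }
}
--------------------------------------------------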
2 changes: 1 addition & 1 deletion docs/reference/indices/analyze.asciidoc
@@ -144,7 +144,7 @@ GET _analyze
 If you want to get more advanced details, set `explain` to `true` (defaults to `false`). It will output all token attributes for each token.
 You can filter token attributes you want to output by setting `attributes` option.
 
-experimental[The format of the additional detail information is experimental and can change at any time]
+experimental[The format of the additional detail information is experimental as it depends on the output returned by Lucene]
 
 [source,js]
 --------------------------------------------------
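For context, an `_analyze` request using `explain` and `attributes` might look like this sketch:

[source,js]
--------------------------------------------------
GET _analyze
{
  "tokenizer": "standard",
  "filter": ["snowball"],
  "text": "detailed output",
  "explain": true,
  "attributes": ["keyword"]
}
--------------------------------------------------

Setting `attributes` restricts the explain output to the named token attributes instead of all of them.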
2 changes: 1 addition & 1 deletion docs/reference/indices/segments.asciidoc
@@ -79,7 +79,7 @@ compound:: Whether the segment is stored in a compound file. When true, this
 
 To add additional information that can be used for debugging, use the `verbose` flag.
 
-experimental[The format of the additional verbose information is experimental and can change at any time]
+experimental[The format of the additional detail information is experimental as it depends on the output returned by Lucene]
 
 [source,js]
 --------------------------------------------------
