From 29c5b4950cf8c8639c01a275204f21d3ef2aee51 Mon Sep 17 00:00:00 2001
From: Benjamin Trent
Date: Tue, 25 Jun 2024 13:32:37 -0400
Subject: [PATCH 01/10] Add some docs explaining filter performance and behavior for HNSW (#110108) (#110142)

---
 .../search-your-data/knn-search.asciidoc | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/docs/reference/search/search-your-data/knn-search.asciidoc b/docs/reference/search/search-your-data/knn-search.asciidoc
index 0973e70ff1a1a..22a6678a284f8 100644
--- a/docs/reference/search/search-your-data/knn-search.asciidoc
+++ b/docs/reference/search/search-your-data/knn-search.asciidoc
@@ -278,6 +278,24 @@ post-filtering approach, where the filter is applied **after** the approximate
 kNN search completes. Post-filtering has the downside that it sometimes
 returns fewer than k results, even when there are enough matching documents.

+[discrete]
+[[approximate-knn-search-and-filtering]]
+==== Approximate kNN search and filtering
+
+Unlike conventional query filtering, where more restrictive filters typically lead to faster queries,
+applying filters in an approximate kNN search with an HNSW index can decrease performance.
+This is because searching the HNSW graph requires additional exploration to obtain the `num_candidates`
+that meet the filter criteria.
+
+To avoid significant performance drawbacks, Lucene implements the following strategies per segment:
+
+* If the filtered document count is less than or equal to `num_candidates`, the search bypasses the HNSW graph and
+uses a brute force search on the filtered documents.
+
+* While exploring the HNSW graph, if the number of nodes explored exceeds the number of documents that satisfy the filter,
+the search will stop exploring the graph and switch to a brute force search over the filtered documents.
+
+
 [discrete]
 ==== Combine approximate kNN with other features

From c91fa9a34cf49b532fec8d3df39a0c289dd9b51a Mon Sep 17 00:00:00 2001
From: Martijn van Groningen
Date: Thu, 4 Jul 2024 09:47:37 +0200
Subject: [PATCH 02/10] Add some information about the impact of index.codec setting. (#110413) (#110463)

---
 docs/reference/index-modules.asciidoc | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/docs/reference/index-modules.asciidoc b/docs/reference/index-modules.asciidoc
index 4f15bb1c1d694..8ab287191aaf2 100644
--- a/docs/reference/index-modules.asciidoc
+++ b/docs/reference/index-modules.asciidoc
@@ -79,7 +79,9 @@ breaking change].
     compression ratio, at the expense of slower stored fields performance.
     If you are updating the compression type, the new one will be applied
     after segments are merged. Segment merging can be forced using
-    <>.
+    <>. Experiments with indexing log datasets
+    have shown that `best_compression` gives up to ~18% lower storage usage
+    compared to `default` while only minimally affecting indexing throughput (~2%).
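+
+    As a minimal, illustrative sketch (the index name `my-index-000001` is a
+    placeholder), the setting is applied when the index is created:
+
+    [source,console]
+    --------------------------------------------------
+    PUT /my-index-000001
+    {
+      "settings": {
+        "index": {
+          "codec": "best_compression"
+        }
+      }
+    }
+    --------------------------------------------------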
[[routing-partition-size]]
`index.routing_partition_size`::

From d939f25cd0b831e3ccdfa7fe5d9fb9732316af57 Mon Sep 17 00:00:00 2001
From: Martijn van Groningen
Date: Fri, 5 Jul 2024 12:35:10 +0200
Subject: [PATCH 03/10] Slightly adjust wording around potential savings mentioned in the description of the index.codec setting (#110468) (#110511)

---
 docs/reference/index-modules.asciidoc | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/docs/reference/index-modules.asciidoc b/docs/reference/index-modules.asciidoc
index 8ab287191aaf2..a08c7789643d6 100644
--- a/docs/reference/index-modules.asciidoc
+++ b/docs/reference/index-modules.asciidoc
@@ -80,8 +80,9 @@ breaking change].
     If you are updating the compression type, the new one will be applied
     after segments are merged. Segment merging can be forced using
     <>. Experiments with indexing log datasets
-    have shown that `best_compression` gives up to ~18% lower storage usage
-    compared to `default` while only minimally affecting indexing throughput (~2%).
+    have shown that `best_compression` gives up to ~18% lower storage usage in
+    the best case compared to `default` while only minimally affecting
+    indexing throughput (~2%).

 [[routing-partition-size]]
 `index.routing_partition_size`::

From b0fb9d248a9b57b080bc003c30d39cfa571daa7b Mon Sep 17 00:00:00 2001
From: Benjamin Trent
Date: Wed, 10 Jul 2024 08:00:52 -0400
Subject: [PATCH 04/10] Fix search template examples by removing params on put (#110660) (#110701)

---
 docs/reference/search/multi-search-template-api.asciidoc | 3 ---
 docs/reference/search/render-search-template-api.asciidoc | 3 ---
 docs/reference/search/search-template-api.asciidoc | 3 ---
 .../reference/search/search-your-data/search-template.asciidoc | 3 ---
 4 files changed, 12 deletions(-)

diff --git a/docs/reference/search/multi-search-template-api.asciidoc b/docs/reference/search/multi-search-template-api.asciidoc
index a680795737ff4..47c7205863435 100644
--- a/docs/reference/search/multi-search-template-api.asciidoc
+++ b/docs/reference/search/multi-search-template-api.asciidoc
@@ -22,9 +22,6 @@ PUT _scripts/my-search-template
     },
     "from": "{{from}}",
     "size": "{{size}}"
-    },
-    "params": {
-      "query_string": "My query string"
     }
   }
 }

diff --git a/docs/reference/search/render-search-template-api.asciidoc b/docs/reference/search/render-search-template-api.asciidoc
index 1f259dddf6879..0c782f26068e6 100644
--- a/docs/reference/search/render-search-template-api.asciidoc
+++ b/docs/reference/search/render-search-template-api.asciidoc
@@ -22,9 +22,6 @@ PUT _scripts/my-search-template
     },
     "from": "{{from}}",
     "size": "{{size}}"
-    },
-    "params": {
-      "query_string": "My query string"
     }
   }
 }

diff --git a/docs/reference/search/search-template-api.asciidoc b/docs/reference/search/search-template-api.asciidoc
index 539048a324746..6cac1245ff99c 100644
--- a/docs/reference/search/search-template-api.asciidoc
+++ b/docs/reference/search/search-template-api.asciidoc
@@ -21,9 +21,6 @@ PUT _scripts/my-search-template
     },
     "from": "{{from}}",
     "size": "{{size}}"
-    },
-    "params": {
-      "query_string": "My query string"
     }
   }
 }

diff --git a/docs/reference/search/search-your-data/search-template.asciidoc b/docs/reference/search/search-your-data/search-template.asciidoc
index 9fb6ee85ffcca..35c6e3399ba5c 100644
--- a/docs/reference/search/search-your-data/search-template.asciidoc
+++ b/docs/reference/search/search-your-data/search-template.asciidoc
@@ -42,9 +42,6 @@ PUT _scripts/my-search-template
     },
     "from": "{{from}}",
     "size": "{{size}}"
-
    },
-    "params": {
-      "query_string": "My query string"
     }
   }
 }

From 4c93ecdef3f1becb86e0dcb1c5ce97d3e3443193 Mon Sep 17 00:00:00 2001
From: Ioana Tagirta
Date: Thu, 11 Jul 2024 16:49:01 +0200
Subject: [PATCH 05/10] Document how to query for a specific feature within rank_features (#110749) (#110773)

---
 docs/reference/mapping/types/rank-features.asciidoc | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/docs/reference/mapping/types/rank-features.asciidoc b/docs/reference/mapping/types/rank-features.asciidoc
index b54e99ede3fae..25d5278ca220d 100644
--- a/docs/reference/mapping/types/rank-features.asciidoc
+++ b/docs/reference/mapping/types/rank-features.asciidoc
@@ -70,6 +70,15 @@ GET my-index-000001/_search
   }
  }
 }
+
+GET my-index-000001/_search
+{
+  "query": { <6>
+    "term": {
+      "topics": "economics"
+    }
+  }
+}
 --------------------------------------------------

 <1> Rank features fields must use the `rank_features` field type
 <3> Rank features fields must be a hash with string keys and strictly positive numeric values
 <4> This query ranks documents by how much they are about the "politics" topic.
 <5> This query ranks documents inversely to the number of "1star" reviews they received.
+<6> This query returns documents that store the "economics" feature in the "topics" field.

 NOTE: `rank_features` fields only support single-valued features and strictly

From c0939224316174484520ac9da0ca3086d6a1e1eb Mon Sep 17 00:00:00 2001
From: Carlos Delgado <6339205+carlosdelest@users.noreply.github.com>
Date: Thu, 18 Jul 2024 14:27:33 +0200
Subject: [PATCH 06/10] [8.11] Clarify synonyms docs (#110822) (#111018)

---
 .../synonym-graph-tokenfilter.asciidoc | 135 ++++++++++++-----
 .../tokenfilters/synonym-tokenfilter.asciidoc | 141 ++++++++++++------
 .../tokenfilters/synonyms-format.asciidoc | 2 +-
 .../search-with-synonyms.asciidoc | 21 +++
 .../synonyms/apis/synonyms-apis.asciidoc | 17 +++
 5 files changed, 229 insertions(+), 87 deletions(-)

diff --git a/docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc
index 076d04651f290..751ddceb3a4bd 100644
--- a/docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc
+++ b/docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc
@@ -78,45 +78,45 @@ Additional settings are:
 <> search analyzers to pick up changes to synonym files. Only to be used for search analyzers.
 * `expand` (defaults to `true`).
-* `lenient` (defaults to `false`). If `true` ignores exceptions while parsing the synonym configuration. It is important
-to note that only those synonym rules which cannot get parsed are ignored. For instance consider the following request:
-
-[source,console]
---------------------------------------------------
-PUT /test_index
-{
-  "settings": {
-    "index": {
-      "analysis": {
-        "analyzer": {
-          "synonym": {
-            "tokenizer": "standard",
-            "filter": [ "my_stop", "synonym_graph" ]
-          }
-        },
-        "filter": {
-          "my_stop": {
-            "type": "stop",
-            "stopwords": [ "bar" ]
-          },
-          "synonym_graph": {
-            "type": "synonym_graph",
-            "lenient": true,
-            "synonyms": [ "foo, bar => baz" ]
-          }
-        }
-      }
-    }
-  }
-}
---------------------------------------------------
+Expands definitions for equivalent synonym rules.
+See <>.
+* `lenient` (defaults to `false`).
+If `true`, ignores errors while parsing the synonym configuration.
+Note that only those synonym rules that cannot be parsed are ignored.
+See <> for an example of `lenient` behaviour for invalid synonym rules.
+
+[discrete]
+[[synonym-graph-tokenizer-expand-equivalent-synonyms]]
+===== `expand` equivalent synonym rules
+
+The `expand` parameter controls whether to expand equivalent synonym rules.
+Consider a synonym defined like:
+
+`foo, bar, baz`
+
+Using `expand: true`, the synonym rule would be expanded into:

-With the above request the word `bar` gets skipped but a mapping `foo => baz` is still added. However, if the mapping
-being added was `foo, baz => bar` nothing would get added to the synonym list. This is because the target word for the
-mapping is itself eliminated because it was a stop word. Similarly, if the mapping was "bar, foo, baz" and `expand` was
-set to `false` no mapping would get added as when `expand=false` the target mapping is the first word. However, if
-`expand=true` then the mappings added would be equivalent to `foo, baz => foo, baz` i.e, all mappings other than the
-stop word.
+```
+foo => foo
+foo => bar
+foo => baz
+bar => foo
+bar => bar
+bar => baz
+baz => foo
+baz => bar
+baz => baz
+```
+
+When `expand` is set to `false`, the synonym rule is not expanded and the first synonym is treated as the canonical representation. The synonym would be equivalent to:
+
+```
+foo => foo
+bar => foo
+baz => foo
+```
+
+The `expand` parameter does not affect explicit synonym rules, like `foo, bar => baz`.

 [discrete]
 [[synonym-graph-tokenizer-ignore_case-deprecated]]
@@ -153,12 +153,65 @@ Text will be processed first through filters preceding the synonym filter before
 In the above example, text will be lowercased by the `lowercase` filter before being processed by the `synonyms_filter`.
 This means that all the synonyms defined there needs to be in lowercase, or they won't be found by the synonyms filter.

-The synonym rules should not contain words that are removed by a filter that appears later in the chain (like a `stop` filter).
-Removing a term from a synonym rule means there will be no matching for it at query time.
-
 Because entries in the synonym map cannot have stacked positions, some token filters may cause issues here.
 Token filters that produce multiple versions of a token may choose which version of the token to emit when parsing synonyms.
 For example, `asciifolding` will only produce the folded version of the token.
 Others, like `multiplexer`, `word_delimiter_graph` or `ngram` will throw an error.

 If you need to build analyzers that include both multi-token filters and synonym filters, consider using the <> filter, with the multi-token filters in one branch and the synonym filter in the other.
+
+[discrete]
+[[synonym-graph-tokenizer-stop-token-filter]]
+===== Synonyms and `stop` token filters
+
+Synonyms and <> interact with each other in the following ways:
+
+[discrete]
+====== Stop token filter *before* synonym token filter
+
+Stop words will be removed from the synonym rule definition.
+This can cause errors in the synonym rule.
+
+[WARNING]
+====
+Invalid synonym rules can cause errors when applying analyzer changes.
+For reloadable analyzers, this prevents reloading and applying changes.
+You must correct errors in the synonym rules and reload the analyzer.
+
+An index with invalid synonym rules cannot be reopened, making it inoperable when:
+
+* A node containing the index starts
+* The index is opened from a closed state
+* A node restart occurs (which reopens the node's assigned shards)
+====
+
+For *explicit synonym rules* like `foo, bar => baz` with a stop filter that removes `bar`:
+
+- If `lenient` is set to `false`, an error will be raised as `bar` would be removed from the left hand side of the synonym rule.
+- If `lenient` is set to `true`, the rule `foo => baz` will be added and `bar => baz` will be ignored.
+
+If the stop filter removed `baz` instead:
+
+- If `lenient` is set to `false`, an error will be raised as `baz` would be removed from the right hand side of the synonym rule.
+- If `lenient` is set to `true`, the synonym will have no effect as the target word is removed.
+
+For *equivalent synonym rules* like `foo, bar, baz` and `expand: true`, with a stop filter that removes `bar`:
+
+- If `lenient` is set to `false`, an error will be raised as `bar` would be removed from the synonym rule.
+- If `lenient` is set to `true`, the synonyms added would be equivalent to the following synonym rules, which do not contain the removed word:
+
+```
+foo => foo
+foo => baz
+baz => foo
+baz => baz
+```
+
+[discrete]
+====== Stop token filter *after* synonym token filter
+
+The stop filter will remove the terms from the resulting synonym expansion.
+
+For example, a synonym rule like `foo, bar => baz` and a stop filter that removes `baz` will get no matches for `foo` or `bar`, as both would get expanded to `baz`, which is removed by the stop filter.
+
+If the stop filter removed `foo` instead, then searching for `foo` would get expanded to `baz`, which is not removed by the stop filter, thus potentially providing matches for `baz`.

diff --git a/docs/reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc
index 91c0a49f41066..bcd6a038cc5b5 100644
--- a/docs/reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc
+++ b/docs/reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc
@@ -66,47 +66,45 @@ Additional settings are:
 <> search analyzers to pick up changes to synonym files. Only to be used for search analyzers.
 * `expand` (defaults to `true`).
-* `lenient` (defaults to `false`). If `true` ignores exceptions while parsing the synonym configuration. It is important
-to note that only those synonym rules which cannot get parsed are ignored. For instance consider the following request:
-
-
-[source,console]
---------------------------------------------------
-PUT /test_index
-{
-  "settings": {
-    "index": {
-      "analysis": {
-        "analyzer": {
-          "synonym": {
-            "tokenizer": "standard",
-            "filter": [ "my_stop", "synonym" ]
-          }
-        },
-        "filter": {
-          "my_stop": {
-            "type": "stop",
-            "stopwords": [ "bar" ]
-          },
-          "synonym": {
-            "type": "synonym",
-            "lenient": true,
-            "synonyms": [ "foo, bar => baz" ]
-          }
-        }
-      }
-    }
-  }
-}
---------------------------------------------------
+Expands definitions for equivalent synonym rules.
+See <>.
+* `lenient` (defaults to `false`).
+If `true`, ignores errors while parsing the synonym configuration.
+Note that only those synonym rules that cannot be parsed are ignored.
+See <> for an example of `lenient` behaviour for invalid synonym rules.
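+
+For orientation, the following is a rough sketch of how these settings can be combined in a filter definition (names like `test_index` and `my_synonyms` are placeholders):
+
+[source,console]
+--------------------------------------------------
+PUT /test_index
+{
+  "settings": {
+    "analysis": {
+      "filter": {
+        "my_synonyms": {
+          "type": "synonym",
+          "synonyms": [ "foo, bar, baz" ],
+          "expand": false,
+          "lenient": true
+        }
+      },
+      "analyzer": {
+        "my_analyzer": {
+          "type": "custom",
+          "tokenizer": "standard",
+          "filter": [ "lowercase", "my_synonyms" ]
+        }
+      }
+    }
+  }
+}
+--------------------------------------------------
+
+With `expand` set to `false` here, `bar` and `baz` would be mapped to `foo`, as described in the next section.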
+ +[discrete] +[[synonym-tokenizer-expand-equivalent-synonyms]] +===== `expand` equivalent synonym rules + +The `expand` parameter controls whether to expand equivalent synonym rules. +Consider a synonym defined like: + +`foo, bar, baz` + +Using `expand: true`, the synonym rule would be expanded into: -With the above request the word `bar` gets skipped but a mapping `foo => baz` is still added. However, if the mapping -being added was `foo, baz => bar` nothing would get added to the synonym list. This is because the target word for the -mapping is itself eliminated because it was a stop word. Similarly, if the mapping was "bar, foo, baz" and `expand` was -set to `false` no mapping would get added as when `expand=false` the target mapping is the first word. However, if -`expand=true` then the mappings added would be equivalent to `foo, baz => foo, baz` i.e, all mappings other than the -stop word. +``` +foo => foo +foo => bar +foo => baz +bar => foo +bar => bar +bar => baz +baz => foo +baz => bar +baz => baz +``` +When `expand` is set to `false`, the synonym rule is not expanded and the first synonym is treated as the canonical representation. The synonym would be equivalent to: + +``` +foo => foo +bar => foo +baz => foo +``` + +The `expand` parameter does not affect explicit synonym rules, like `foo, bar => baz`. [discrete] [[synonym-tokenizer-ignore_case-deprecated]] @@ -128,7 +126,7 @@ To apply synonyms, you will need to include a synonym token filters into an anal "my_analyzer": { "type": "custom", "tokenizer": "standard", - "filter": ["lowercase", "synonym"] + "filter": ["stemmer", "synonym"] } } ---- @@ -140,11 +138,8 @@ To apply synonyms, you will need to include a synonym token filters into an anal Order is important for your token filters. Text will be processed first through filters preceding the synonym filter before being processed by the synonym filter. -In the above example, text will be lowercased by the `lowercase` filter before being processed by the `synonyms_filter`. -This means that all the synonyms defined there needs to be in lowercase, or they won't be found by the synonyms filter. - -The synonym rules should not contain words that are removed by a filter that appears later in the chain (like a `stop` filter). -Removing a term from a synonym rule means there will be no matching for it at query time. +{es} will also use the token filters preceding the synonym filter in a tokenizer chain to parse the entries in a synonym file or synonym set. +In the above example, the synonyms token filter is placed after a stemmer. The stemmer will also be applied to the synonym entries. Because entries in the synonym map cannot have stacked positions, some token filters may cause issues here. Token filters that produce multiple versions of a token may choose which version of the token to emit when parsing synonyms. @@ -152,3 +147,59 @@ For example, `asciifolding` will only produce the folded version of the token. Others, like `multiplexer`, `word_delimiter_graph` or `ngram` will throw an error. If you need to build analyzers that include both multi-token filters and synonym filters, consider using the <> filter, with the multi-token filters in one branch and the synonym filter in the other. + +[discrete] +[[synonym-tokenizer-stop-token-filter]] +===== Synonyms and `stop` token filters + +Synonyms and <> interact with each other in the following ways: + +[discrete] +====== Stop token filter *before* synonym token filter + +Stop words will be removed from the synonym rule definition. 
+This can cause errors in the synonym rule.
+
+[WARNING]
+====
+Invalid synonym rules can cause errors when applying analyzer changes.
+For reloadable analyzers, this prevents reloading and applying changes.
+You must correct errors in the synonym rules and reload the analyzer.
+
+An index with invalid synonym rules cannot be reopened, making it inoperable when:
+
+* A node containing the index starts
+* The index is opened from a closed state
+* A node restart occurs (which reopens the node's assigned shards)
+====
+
+For *explicit synonym rules* like `foo, bar => baz` with a stop filter that removes `bar`:
+
+- If `lenient` is set to `false`, an error will be raised as `bar` would be removed from the left hand side of the synonym rule.
+- If `lenient` is set to `true`, the rule `foo => baz` will be added and `bar => baz` will be ignored.
+
+If the stop filter removed `baz` instead:
+
+- If `lenient` is set to `false`, an error will be raised as `baz` would be removed from the right hand side of the synonym rule.
+- If `lenient` is set to `true`, the synonym will have no effect as the target word is removed.
+
+For *equivalent synonym rules* like `foo, bar, baz` and `expand: true`, with a stop filter that removes `bar`:
+
+- If `lenient` is set to `false`, an error will be raised as `bar` would be removed from the synonym rule.
+- If `lenient` is set to `true`, the synonyms added would be equivalent to the following synonym rules, which do not contain the removed word:
+
+```
+foo => foo
+foo => baz
+baz => foo
+baz => baz
+```
+
+[discrete]
+====== Stop token filter *after* synonym token filter
+
+The stop filter will remove the terms from the resulting synonym expansion.
+
+For example, a synonym rule like `foo, bar => baz` and a stop filter that removes `baz` will get no matches for `foo` or `bar`, as both would get expanded to `baz`, which is removed by the stop filter.
+
+If the stop filter removed `foo` instead, then searching for `foo` would get expanded to `baz`, which is not removed by the stop filter, thus potentially providing matches for `baz`.

diff --git a/docs/reference/analysis/tokenfilters/synonyms-format.asciidoc b/docs/reference/analysis/tokenfilters/synonyms-format.asciidoc
index 63dd72dade8d0..e780c24963312 100644
--- a/docs/reference/analysis/tokenfilters/synonyms-format.asciidoc
+++ b/docs/reference/analysis/tokenfilters/synonyms-format.asciidoc
@@ -15,7 +15,7 @@ This format uses two different definitions:
 ipod, i-pod, i pod
 computer, pc, laptop
 ----
-* Explicit mappings: Matches a group of words to other words. Words on the left hand side of the rule definition are expanded into all the possibilities described on the right hand side. Example:
+* Explicit synonyms: Matches a group of words to other words. Words on the left hand side of the rule definition are expanded into all the possibilities described on the right hand side. Example:
 +
 [source,synonyms]
 ----

diff --git a/docs/reference/search/search-your-data/search-with-synonyms.asciidoc b/docs/reference/search/search-your-data/search-with-synonyms.asciidoc
index fb6abd6d36099..c458d945e0af2 100644
--- a/docs/reference/search/search-your-data/search-with-synonyms.asciidoc
+++ b/docs/reference/search/search-your-data/search-with-synonyms.asciidoc
@@ -75,6 +75,27 @@ A large number of inline synonyms increases cluster size unnecessarily and can l
 Once your synonyms sets are created, you can start configuring your token filters and analyzers to use them.
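+
+As a brief, illustrative sketch (the set name `my-synonyms-set` and the rule are placeholders), a synonyms set is created with the synonyms PUT API before any index refers to it:
+
+[source,console]
+--------------------------------------------------
+PUT _synonyms/my-synonyms-set
+{
+  "synonyms_set": [
+    {
+      "id": "synonym-1",
+      "synonyms": "hello, hi, howdy"
+    }
+  ]
+}
+--------------------------------------------------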
+
+[WARNING]
+======
+Synonyms sets must exist before they can be added to indices.
+If an index is created referencing a nonexistent synonyms set, the index will remain in a partially created and inoperable state.
+The only way to recover from this scenario is to ensure the synonyms set exists, then either delete and re-create the index, or close and re-open the index.
+======
+
+[WARNING]
+======
+Invalid synonym rules can cause errors when applying analyzer changes.
+For reloadable analyzers, this prevents reloading and applying changes.
+You must correct errors in the synonym rules and reload the analyzer.
+
+An index with invalid synonym rules cannot be reopened, making it inoperable when:
+
+* A node containing the index starts
+* The index is opened from a closed state
+* A node restart occurs (which reopens the node's assigned shards)
+======
+
 {es} uses synonyms as part of the <>.
 You can use two types of <> to include synonyms:

diff --git a/docs/reference/synonyms/apis/synonyms-apis.asciidoc b/docs/reference/synonyms/apis/synonyms-apis.asciidoc
index 0c13384bf39ef..53f9d67f1c317 100644
--- a/docs/reference/synonyms/apis/synonyms-apis.asciidoc
+++ b/docs/reference/synonyms/apis/synonyms-apis.asciidoc
@@ -24,6 +24,23 @@ NOTE: Synonyms sets are limited to a maximum of 10,000 synonym rules per set.
 Synonym sets with more than 10,000 synonym rules will provide inconsistent search results.
 If you need to manage more synonym rules, you can create multiple synonyms sets.

+WARNING: Synonyms sets must exist before they can be added to indices.
+If an index is created referencing a nonexistent synonyms set, the index will remain in a partially created and inoperable state.
+The only way to recover from this scenario is to ensure the synonyms set exists, then either delete and re-create the index, or close and re-open the index.
+
+[WARNING]
+====
+Invalid synonym rules can cause errors when applying analyzer changes.
+For reloadable analyzers, this prevents reloading and applying changes.
+You must correct errors in the synonym rules and reload the analyzer.
+
+An index with invalid synonym rules cannot be reopened, making it inoperable when:
+
+* A node containing the index starts
+* The index is opened from a closed state
+* A node restart occurs (which reopens the node's assigned shards)
+====
+
 [discrete]
 [[synonyms-sets-apis]]
 === Synonyms sets APIs

From df8f9c0855943077b2268a38c7318571709f7bf3 Mon Sep 17 00:00:00 2001
From: Liam Thompson <32779855+leemthompo@users.noreply.github.com>
Date: Wed, 14 Aug 2024 15:53:01 +0100
Subject: [PATCH 07/10] [DOCS] Fix response value in esql-query-api.asciidoc (#111882) (#111888)

---
 docs/reference/esql/esql-query-api.asciidoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/reference/esql/esql-query-api.asciidoc b/docs/reference/esql/esql-query-api.asciidoc
index afa9ab7254cfa..7c6a979a9d188 100644
--- a/docs/reference/esql/esql-query-api.asciidoc
+++ b/docs/reference/esql/esql-query-api.asciidoc
@@ -87,6 +87,6 @@ Column headings for the search results. Each object is a column.
 (string) Data type for the column.
 ====

-`rows`::
+`values`::
 (array of arrays) Values for the search results.
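+
+For illustration, a response with two columns might look roughly like the following (the column names and values are made up):
+
+[source,console-result]
+--------------------------------------------------
+{
+  "columns": [
+    { "name": "author", "type": "text" },
+    { "name": "year", "type": "integer" }
+  ],
+  "values": [
+    [ "Frank Herbert", 1965 ],
+    [ "Isaac Asimov", 1951 ]
+  ]
+}
+--------------------------------------------------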
From c3d21f0707ad8e2b44b6e6190c18eefa6f54d0a5 Mon Sep 17 00:00:00 2001 From: Ignacio Vera Date: Wed, 4 Sep 2024 11:37:40 +0200 Subject: [PATCH 08/10] [DOC] geo_shape field type supports geo_hex aggregation (#112448) (#112497) --- docs/reference/mapping/types/geo-shape.asciidoc | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/docs/reference/mapping/types/geo-shape.asciidoc b/docs/reference/mapping/types/geo-shape.asciidoc index 37ef340733932..9d644d77494d4 100644 --- a/docs/reference/mapping/types/geo-shape.asciidoc +++ b/docs/reference/mapping/types/geo-shape.asciidoc @@ -18,9 +18,8 @@ Documents using this type can be used: ** a <> (for example, intersecting polygons). * to aggregate documents by geographic grids: ** either <> -** or <>. - -Grid aggregations over `geo_hex` grids are not supported for `geo_shape` fields. +** or <> +** or <> [[geo-shape-mapping-options]] [discrete] From 0756bf73cc4a27c4618c61e7aaf0bc55e4e87b80 Mon Sep 17 00:00:00 2001 From: Liam Thompson <32779855+leemthompo@users.noreply.github.com> Date: Thu, 24 Oct 2024 15:38:42 +0200 Subject: [PATCH 09/10] Add documentation for minimum_should_match (#113043) (#115535) (cherry picked from commit 28715b791a88de6b3f2ccb6b4f097a9881f01007) Co-authored-by: mspielberg <9729801+mspielberg@users.noreply.github.com> --- .../reference/query-dsl/terms-set-query.asciidoc | 16 +++++++++++++--- 1 file changed, 13 insertions(+), 3 deletions(-) diff --git a/docs/reference/query-dsl/terms-set-query.asciidoc b/docs/reference/query-dsl/terms-set-query.asciidoc index 2abfe54d53976..27717af3ac171 100644 --- a/docs/reference/query-dsl/terms-set-query.asciidoc +++ b/docs/reference/query-dsl/terms-set-query.asciidoc @@ -159,12 +159,22 @@ GET /job-candidates/_search `terms`:: + -- -(Required, array of strings) Array of terms you wish to find in the provided +(Required, array) Array of terms you wish to find in the provided ``. To return a document, a required number of terms must exactly match the field values, including whitespace and capitalization. -The required number of matching terms is defined in the -`minimum_should_match_field` or `minimum_should_match_script` parameter. +The required number of matching terms is defined in the `minimum_should_match`, +`minimum_should_match_field` or `minimum_should_match_script` parameters. Exactly +one of these parameters must be provided. +-- + +`minimum_should_match`:: ++ +-- +(Optional) Specification for the number of matching terms required to return +a document. + +For valid values, see <>. -- `minimum_should_match_field`:: From d186cab67ad74734b8236c0ef12762629e9e49bd Mon Sep 17 00:00:00 2001 From: shainaraskas <58563081+shainaraskas@users.noreply.github.com> Date: Tue, 19 Nov 2024 15:46:14 -0500 Subject: [PATCH 10/10] [DOCS] document `Content-Type` breaking change from 8.0 (#116845) (#117061) (cherry picked from commit fc987d81fe27666286e0d28487a841a563212f55) --- .../migrate_8_0/rest-api-changes.asciidoc | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/docs/reference/migration/migrate_8_0/rest-api-changes.asciidoc b/docs/reference/migration/migrate_8_0/rest-api-changes.asciidoc index 87e7041109710..8eb2c842c22ee 100644 --- a/docs/reference/migration/migrate_8_0/rest-api-changes.asciidoc +++ b/docs/reference/migration/migrate_8_0/rest-api-changes.asciidoc @@ -1136,3 +1136,20 @@ for both cases. *Impact* + To detect a server timeout, check the `timed_out` field of the JSON response. 
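+
+For example, a response reporting a server-side timeout might include a fragment roughly like this (the values are illustrative):
+
+[source,js]
+--------------------------------------------------
+{
+  "took": 5001,
+  "timed_out": true,
+  ...
+}
+--------------------------------------------------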
==== + +.The `Content-Type` response header no longer specifies the charset. +[%collapsible] +==== +*Details* + +The `Content-Type` response header no longer specifies the charset. This information is not required when transferring JSON data, because JSON text will always be encoded in Unicode, with UTF-8 being the default encoding. + +*Impact* + +Some applications and utilities, such as PowerShell's https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/invoke-restmethod[Invoke-RestMethod], must receive charset information to display data correctly. If your application or utility relies on charset information in the `Content-Type` response header, UTF-8 encoded characters will be rendered incorrectly in the response body. + +As a workaround, to render non-ASCII characters, include an HTTP `Accept` header in your requests, specifying the charset: + +[source,sh] +---- +Accept: application/json; charset=utf-8 +---- +==== \ No newline at end of file