Update azure-search-documents After Release #22176

Merged 1 commit on Jun 9, 2021.
2 changes: 1 addition & 1 deletion eng/jacoco-test-coverage/pom.xml
@@ -229,7 +229,7 @@
 <dependency>
   <groupId>com.azure</groupId>
   <artifactId>azure-search-documents</artifactId>
-  <version>11.4.0-beta.3</version> <!-- {x-version-update;com.azure:azure-search-documents;current} -->
+  <version>11.5.0-beta.1</version> <!-- {x-version-update;com.azure:azure-search-documents;current} -->
 </dependency>
 <dependency>
   <groupId>com.azure</groupId>
2 changes: 1 addition & 1 deletion eng/versioning/version_client.txt
@@ -101,7 +101,7 @@ com.azure:azure-mixedreality-remoterendering;1.0.0;1.1.0-beta.1
 com.azure:azure-monitor-opentelemetry-exporter;1.0.0-beta.4;1.0.0-beta.5
 com.azure:azure-monitor-query;1.0.0-beta.1;1.0.0-beta.1
 com.azure:azure-quantum-jobs;1.0.0-beta.1;1.0.0-beta.2
-com.azure:azure-search-documents;11.3.2;11.4.0-beta.3
+com.azure:azure-search-documents;11.4.0;11.5.0-beta.1
 com.azure:azure-search-perf;1.0.0-beta.1;1.0.0-beta.1
 com.azure:azure-security-attestation;1.0.0-beta.1;1.0.0-beta.2
 com.azure:azure-security-confidentialledger;1.0.0-beta.2;1.0.0-beta.2
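For context on the two files above: the repository's version tooling (as I understand it; treat the specifics as an assumption) keeps `eng/versioning/version_client.txt` as the source of truth, one entry per line, and a script rewrites every POM `<version>` element tagged with an `{x-version-update;...}` comment to match. A hypothetical entry:

```
# Assumed format: groupId:artifactId;dependency-version;current-version
com.azure:azure-search-documents;11.4.0;11.5.0-beta.1
```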
32 changes: 31 additions & 1 deletion sdk/search/azure-search-documents/CHANGELOG.md
@@ -1,6 +1,36 @@
 # Release History
 
-## 11.4.0-beta.3 (Unreleased)
+## 11.5.0-beta.1 (Unreleased)
 
+## 11.4.0 (2021-06-08)
+
+### Features Added
+
+- Added the ability to configure Knowledge Store in skillsets.
+- Added a factory method to `SynonymMap` to enable creation from a file.
+- Added support for `Edm.GeographyPoint` in `FieldBuilder` when a property has type `GeoPoint`.
+- Added support for geography-based filtering in `SearchFilter` when `GeoPosition`, `GeoPoint`, `GeoLineString`, or
+  `GeoPolygon` are used as formatting arguments.
+- Added new skills `CustomEntityLookupSkill` and `DocumentExtractionSkill`, and new skill versions for
+  `KeyPhraseExtractionSkill` and `LanguageDetectionSkill`.
+- Added support for the ADLS Gen 2 indexer data source type.
+- Added skillset counts to `SearchServiceCounters`.
+- Added additional log messages to `SearchIndexingBufferedSender` and `SearchIndexingBufferedAsyncSender`.
+
+### Breaking Changes
+
+- Removed support for service version `2020-06-30-Preview`. The default version is now `2020-06-30`.
+- Removed the Semantic Search capability from `SearchClient` and `SearchAsyncClient`.
+- Removed support for Normalizers in `SearchField` and `SearchIndex`, along with `CustomNormalizer` and `LexicalNormalizer`.
+
+### Dependency Updates
+
+- Updated `azure-core` from `1.16.0` to `1.17.0`.
+- Updated `azure-core-http-netty` from `1.9.2` to `1.10.0`.
+- Updated `azure-core-serializer-json-jackson` from `1.2.3` to `1.2.4`.
+- Updated Jackson from `2.12.2` to `2.12.3`.
+- Updated Reactor from `3.4.5` to `3.4.6`.
+- Updated Reactor Netty from `1.0.6` to `1.0.7`.
+
 ## 11.3.2 (2021-05-11)
 
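As a quick illustration of two of the 11.4.0 additions above — the `SynonymMap` file factory and geography-based `SearchFilter` formatting — here is a minimal sketch. The synonym file, coordinates, and field name are placeholders, and the package locations and exact signatures are from memory, so verify them against the released API:

```java
import com.azure.core.models.GeoPoint;
import com.azure.search.documents.indexes.models.SynonymMap;
import com.azure.search.documents.util.SearchFilter;

import java.nio.file.Paths;

public class SearchDocuments1140Sketch {
    public static void main(String[] args) {
        // Build a SynonymMap from a local Solr-format synonym file
        // (file path is a placeholder).
        SynonymMap synonymMap = SynonymMap.createFromFile("hotel-synonyms",
                Paths.get("synonyms.txt"));

        // Format a GeoPoint directly into an OData filter; SearchFilter
        // renders it as a geography literal such as geography'POINT(-122.33 47.61)'.
        // GeoPoint takes (longitude, latitude); "Location" is a placeholder field.
        String filter = SearchFilter.create("geo.distance(Location, %s) le 10",
                new GeoPoint(-122.3321, 47.6062));

        System.out.println(synonymMap.getName() + " / " + filter);
    }
}
```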
2 changes: 1 addition & 1 deletion sdk/search/azure-search-documents/README.md
@@ -38,7 +38,7 @@ Use the Azure Cognitive Search client library to:
 <dependency>
     <groupId>com.azure</groupId>
     <artifactId>azure-search-documents</artifactId>
-    <version>11.3.2</version>
+    <version>11.4.0</version>
 </dependency>
 ```
 [//]: # ({x-version-update-end})
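For readers picking up the bumped dependency, a minimal usage sketch (endpoint, API key, and index name are placeholders; the builder pattern shown is the library's usual one, but double-check it against the 11.4.0 docs):

```java
import com.azure.core.credential.AzureKeyCredential;
import com.azure.search.documents.SearchClient;
import com.azure.search.documents.SearchClientBuilder;

public class CreateSearchClient {
    public static void main(String[] args) {
        // Placeholder service endpoint, API key, and index name.
        SearchClient client = new SearchClientBuilder()
                .endpoint("https://<my-service>.search.windows.net")
                .credential(new AzureKeyCredential("<api-key>"))
                .indexName("hotels")
                .buildClient();

        // Simple smoke test: report how many documents the index holds.
        System.out.println(client.getDocumentCount());
    }
}
```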
2 changes: 1 addition & 1 deletion sdk/search/azure-search-documents/pom.xml
@@ -16,7 +16,7 @@

   <groupId>com.azure</groupId>
   <artifactId>azure-search-documents</artifactId>
-  <version>11.4.0-beta.3</version> <!-- {x-version-update;com.azure:azure-search-documents;current} -->
+  <version>11.5.0-beta.1</version> <!-- {x-version-update;com.azure:azure-search-documents;current} -->
   <packaging>jar</packaging>
 
   <properties>
sdk/search/azure-search-documents/.../AsciiFoldingTokenFilter.java
@@ -7,7 +7,6 @@
 package com.azure.search.documents.indexes.implementation.models;
 
 import com.azure.core.annotation.Fluent;
-import com.azure.core.annotation.JsonFlatten;
 import com.fasterxml.jackson.annotation.JsonCreator;
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.fasterxml.jackson.annotation.JsonTypeInfo;
@@ -18,11 +17,10 @@
  * "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. This token filter is
  * implemented using Apache Lucene.
  */
-@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata\\.type")
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata.type")
 @JsonTypeName("#Microsoft.Azure.Search.AsciiFoldingTokenFilter")
-@JsonFlatten
 @Fluent
-public class AsciiFoldingTokenFilter extends TokenFilter {
+public final class AsciiFoldingTokenFilter extends TokenFilter {
     /*
      * A value indicating whether the original token will be kept. Default is
      * false.
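The same three-part change repeats across all of the generated model files below: the `@JsonFlatten` annotation is dropped, the type discriminator no longer escapes the dot (`@odata\\.type` becomes `@odata.type`), and the classes become `final`. A self-contained sketch of how these Jackson annotations drive the polymorphic wire format — plain Jackson with simplified stand-in classes, not the SDK's actual types, and the note about `@JsonFlatten`'s dot-escaping reflects azure-core's flattening behavior as I understand it:

```java
import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import com.fasterxml.jackson.annotation.JsonTypeName;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ODataTypeDemo {
    // Simplified stand-ins for the generated models; not the real SDK classes.
    @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata.type")
    @JsonSubTypes(@JsonSubTypes.Type(value = AsciiFolding.class))
    abstract static class TokenFilter {
        public String name;
    }

    @JsonTypeName("#Microsoft.Azure.Search.AsciiFoldingTokenFilter")
    static final class AsciiFolding extends TokenFilter {
        public Boolean preserveOriginal;
    }

    public static void main(String[] args) throws Exception {
        AsciiFolding filter = new AsciiFolding();
        filter.name = "my_ascii_folding";
        filter.preserveOriginal = true;

        // The discriminator is written literally as "@odata.type". Under the old
        // annotations, @JsonFlatten treated "." in property names as a flattening
        // separator, so the dot had to be escaped as "@odata\\.type"; with
        // @JsonFlatten removed, the plain name works.
        String json = new ObjectMapper().writeValueAsString(filter);
        System.out.println(json);
        // {"@odata.type":"#Microsoft.Azure.Search.AsciiFoldingTokenFilter",...}

        // Round-trips back to the concrete subtype via the discriminator.
        TokenFilter roundTrip = new ObjectMapper().readValue(json, TokenFilter.class);
        System.out.println(roundTrip.getClass().getSimpleName()); // AsciiFolding
    }
}
```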
sdk/search/azure-search-documents/.../CjkBigramTokenFilter.java
@@ -7,7 +7,6 @@
 package com.azure.search.documents.indexes.implementation.models;
 
 import com.azure.core.annotation.Fluent;
-import com.azure.core.annotation.JsonFlatten;
 import com.azure.search.documents.indexes.models.CjkBigramTokenFilterScripts;
 import com.fasterxml.jackson.annotation.JsonCreator;
 import com.fasterxml.jackson.annotation.JsonProperty;
@@ -19,11 +18,10 @@
  * Forms bigrams of CJK terms that are generated from the standard tokenizer. This token filter is implemented using
  * Apache Lucene.
  */
-@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata\\.type")
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata.type")
 @JsonTypeName("#Microsoft.Azure.Search.CjkBigramTokenFilter")
-@JsonFlatten
 @Fluent
-public class CjkBigramTokenFilter extends TokenFilter {
+public final class CjkBigramTokenFilter extends TokenFilter {
     /*
      * The scripts to ignore.
      */
sdk/search/azure-search-documents/.../ClassicTokenizer.java
@@ -7,7 +7,6 @@
 package com.azure.search.documents.indexes.implementation.models;
 
 import com.azure.core.annotation.Fluent;
-import com.azure.core.annotation.JsonFlatten;
 import com.fasterxml.jackson.annotation.JsonCreator;
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.fasterxml.jackson.annotation.JsonTypeInfo;
@@ -17,11 +16,10 @@
  * Grammar-based tokenizer that is suitable for processing most European-language documents. This tokenizer is
  * implemented using Apache Lucene.
  */
-@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata\\.type")
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata.type")
 @JsonTypeName("#Microsoft.Azure.Search.ClassicTokenizer")
-@JsonFlatten
 @Fluent
-public class ClassicTokenizer extends LexicalTokenizer {
+public final class ClassicTokenizer extends LexicalTokenizer {
     /*
      * The maximum token length. Default is 255. Tokens longer than the maximum
      * length are split. The maximum token length that can be used is 300
sdk/search/azure-search-documents/.../CommonGramTokenFilter.java
@@ -7,7 +7,6 @@
 package com.azure.search.documents.indexes.implementation.models;
 
 import com.azure.core.annotation.Fluent;
-import com.azure.core.annotation.JsonFlatten;
 import com.fasterxml.jackson.annotation.JsonCreator;
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.fasterxml.jackson.annotation.JsonTypeInfo;
@@ -18,11 +17,10 @@
  * Construct bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams
  * overlaid. This token filter is implemented using Apache Lucene.
  */
-@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata\\.type")
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata.type")
 @JsonTypeName("#Microsoft.Azure.Search.CommonGramTokenFilter")
-@JsonFlatten
 @Fluent
-public class CommonGramTokenFilter extends TokenFilter {
+public final class CommonGramTokenFilter extends TokenFilter {
     /*
      * The set of common words.
      */
sdk/search/azure-search-documents/.../DictionaryDecompounderTokenFilter.java
@@ -7,19 +7,17 @@
 package com.azure.search.documents.indexes.implementation.models;
 
 import com.azure.core.annotation.Fluent;
-import com.azure.core.annotation.JsonFlatten;
 import com.fasterxml.jackson.annotation.JsonCreator;
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.fasterxml.jackson.annotation.JsonTypeInfo;
 import com.fasterxml.jackson.annotation.JsonTypeName;
 import java.util.List;
 
 /** Decomposes compound words found in many Germanic languages. This token filter is implemented using Apache Lucene. */
-@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata\\.type")
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata.type")
 @JsonTypeName("#Microsoft.Azure.Search.DictionaryDecompounderTokenFilter")
-@JsonFlatten
 @Fluent
-public class DictionaryDecompounderTokenFilter extends TokenFilter {
+public final class DictionaryDecompounderTokenFilter extends TokenFilter {
     /*
      * The list of words to match against.
      */
sdk/search/azure-search-documents/.../EdgeNGramTokenFilter.java
@@ -7,7 +7,6 @@
 package com.azure.search.documents.indexes.implementation.models;
 
 import com.azure.core.annotation.Fluent;
-import com.azure.core.annotation.JsonFlatten;
 import com.azure.search.documents.indexes.models.EdgeNGramTokenFilterSide;
 import com.fasterxml.jackson.annotation.JsonCreator;
 import com.fasterxml.jackson.annotation.JsonProperty;
@@ -18,11 +17,10 @@
  * Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is
  * implemented using Apache Lucene.
  */
-@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata\\.type")
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata.type")
 @JsonTypeName("#Microsoft.Azure.Search.EdgeNGramTokenFilter")
-@JsonFlatten
 @Fluent
-public class EdgeNGramTokenFilter extends TokenFilter {
+public final class EdgeNGramTokenFilter extends TokenFilter {
     /*
      * The minimum n-gram length. Default is 1. Must be less than the value of
     * maxGram.
sdk/search/azure-search-documents/.../EdgeNGramTokenFilterV2.java
@@ -7,7 +7,6 @@
 package com.azure.search.documents.indexes.implementation.models;
 
 import com.azure.core.annotation.Fluent;
-import com.azure.core.annotation.JsonFlatten;
 import com.azure.search.documents.indexes.models.EdgeNGramTokenFilterSide;
 import com.fasterxml.jackson.annotation.JsonCreator;
 import com.fasterxml.jackson.annotation.JsonProperty;
@@ -18,11 +17,10 @@
  * Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is
  * implemented using Apache Lucene.
  */
-@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata\\.type")
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata.type")
 @JsonTypeName("#Microsoft.Azure.Search.EdgeNGramTokenFilterV2")
-@JsonFlatten
 @Fluent
-public class EdgeNGramTokenFilterV2 extends TokenFilter {
+public final class EdgeNGramTokenFilterV2 extends TokenFilter {
     /*
      * The minimum n-gram length. Default is 1. Maximum is 300. Must be less
      * than the value of maxGram.
sdk/search/azure-search-documents/.../EdgeNGramTokenizer.java
@@ -7,7 +7,6 @@
 package com.azure.search.documents.indexes.implementation.models;
 
 import com.azure.core.annotation.Fluent;
-import com.azure.core.annotation.JsonFlatten;
 import com.azure.search.documents.indexes.models.TokenCharacterKind;
 import com.fasterxml.jackson.annotation.JsonCreator;
 import com.fasterxml.jackson.annotation.JsonProperty;
@@ -19,11 +18,10 @@
  * Tokenizes the input from an edge into n-grams of the given size(s). This tokenizer is implemented using Apache
  * Lucene.
  */
-@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata\\.type")
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata.type")
 @JsonTypeName("#Microsoft.Azure.Search.EdgeNGramTokenizer")
-@JsonFlatten
 @Fluent
-public class EdgeNGramTokenizer extends LexicalTokenizer {
+public final class EdgeNGramTokenizer extends LexicalTokenizer {
     /*
      * The minimum n-gram length. Default is 1. Maximum is 300. Must be less
      * than the value of maxGram.
sdk/search/azure-search-documents/.../ElisionTokenFilter.java
@@ -7,7 +7,6 @@
 package com.azure.search.documents.indexes.implementation.models;
 
 import com.azure.core.annotation.Fluent;
-import com.azure.core.annotation.JsonFlatten;
 import com.fasterxml.jackson.annotation.JsonCreator;
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.fasterxml.jackson.annotation.JsonTypeInfo;
@@ -18,11 +17,10 @@
  * Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). This token filter is
  * implemented using Apache Lucene.
  */
-@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata\\.type")
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata.type")
 @JsonTypeName("#Microsoft.Azure.Search.ElisionTokenFilter")
-@JsonFlatten
 @Fluent
-public class ElisionTokenFilter extends TokenFilter {
+public final class ElisionTokenFilter extends TokenFilter {
     /*
      * The set of articles to remove.
      */
sdk/search/azure-search-documents/.../KeepTokenFilter.java
@@ -7,7 +7,6 @@
 package com.azure.search.documents.indexes.implementation.models;
 
 import com.azure.core.annotation.Fluent;
-import com.azure.core.annotation.JsonFlatten;
 import com.fasterxml.jackson.annotation.JsonCreator;
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.fasterxml.jackson.annotation.JsonTypeInfo;
@@ -18,11 +17,10 @@
  * A token filter that only keeps tokens with text contained in a specified list of words. This token filter is
  * implemented using Apache Lucene.
  */
-@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata\\.type")
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata.type")
 @JsonTypeName("#Microsoft.Azure.Search.KeepTokenFilter")
-@JsonFlatten
 @Fluent
-public class KeepTokenFilter extends TokenFilter {
+public final class KeepTokenFilter extends TokenFilter {
     /*
      * The list of words to keep.
      */
sdk/search/azure-search-documents/.../KeywordMarkerTokenFilter.java
@@ -7,19 +7,17 @@
 package com.azure.search.documents.indexes.implementation.models;
 
 import com.azure.core.annotation.Fluent;
-import com.azure.core.annotation.JsonFlatten;
 import com.fasterxml.jackson.annotation.JsonCreator;
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.fasterxml.jackson.annotation.JsonTypeInfo;
 import com.fasterxml.jackson.annotation.JsonTypeName;
 import java.util.List;
 
 /** Marks terms as keywords. This token filter is implemented using Apache Lucene. */
-@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata\\.type")
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata.type")
 @JsonTypeName("#Microsoft.Azure.Search.KeywordMarkerTokenFilter")
-@JsonFlatten
 @Fluent
-public class KeywordMarkerTokenFilter extends TokenFilter {
+public final class KeywordMarkerTokenFilter extends TokenFilter {
     /*
      * A list of words to mark as keywords.
      */
sdk/search/azure-search-documents/.../KeywordTokenizer.java
@@ -7,18 +7,16 @@
 package com.azure.search.documents.indexes.implementation.models;
 
 import com.azure.core.annotation.Fluent;
-import com.azure.core.annotation.JsonFlatten;
 import com.fasterxml.jackson.annotation.JsonCreator;
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.fasterxml.jackson.annotation.JsonTypeInfo;
 import com.fasterxml.jackson.annotation.JsonTypeName;
 
 /** Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene. */
-@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata\\.type")
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata.type")
 @JsonTypeName("#Microsoft.Azure.Search.KeywordTokenizer")
-@JsonFlatten
 @Fluent
-public class KeywordTokenizer extends LexicalTokenizer {
+public final class KeywordTokenizer extends LexicalTokenizer {
     /*
      * The read buffer size in bytes. Default is 256.
      */
sdk/search/azure-search-documents/.../KeywordTokenizerV2.java
@@ -7,18 +7,16 @@
 package com.azure.search.documents.indexes.implementation.models;
 
 import com.azure.core.annotation.Fluent;
-import com.azure.core.annotation.JsonFlatten;
 import com.fasterxml.jackson.annotation.JsonCreator;
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.fasterxml.jackson.annotation.JsonTypeInfo;
 import com.fasterxml.jackson.annotation.JsonTypeName;
 
 /** Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene. */
-@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata\\.type")
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata.type")
 @JsonTypeName("#Microsoft.Azure.Search.KeywordTokenizerV2")
-@JsonFlatten
 @Fluent
-public class KeywordTokenizerV2 extends LexicalTokenizer {
+public final class KeywordTokenizerV2 extends LexicalTokenizer {
     /*
      * The maximum token length. Default is 256. Tokens longer than the maximum
      * length are split. The maximum token length that can be used is 300
sdk/search/azure-search-documents/.../LengthTokenFilter.java
@@ -7,18 +7,16 @@
 package com.azure.search.documents.indexes.implementation.models;
 
 import com.azure.core.annotation.Fluent;
-import com.azure.core.annotation.JsonFlatten;
 import com.fasterxml.jackson.annotation.JsonCreator;
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.fasterxml.jackson.annotation.JsonTypeInfo;
 import com.fasterxml.jackson.annotation.JsonTypeName;
 
 /** Removes words that are too long or too short. This token filter is implemented using Apache Lucene. */
-@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata\\.type")
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@odata.type")
 @JsonTypeName("#Microsoft.Azure.Search.LengthTokenFilter")
-@JsonFlatten
 @Fluent
-public class LengthTokenFilter extends TokenFilter {
+public final class LengthTokenFilter extends TokenFilter {
     /*
      * The minimum length in characters. Default is 0. Maximum is 300. Must be
      * less than the value of max.