Length filter doesn't work with keyword normalizer #57984
Comments
Do we know any workaround for this, or when it is going to be fixed? Thanks!
@SLavrynenko, setting …
Pinging @elastic/es-search (:Search/Analysis)
Normalizers for keywords should typically only modify the input slightly, e.g. lowercasing, character conversions etc. The …
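For context, a normalizer definition that passes this validation uses only the filters Elasticsearch allows in normalizers, such as lowercase or asciifolding. A minimal sketch (index and field names are made up):

```json
PUT /my-index
{
  "settings": {
    "analysis": {
      "normalizer": {
        "my_normalizer": {
          "type": "custom",
          "filter": ["lowercase", "asciifolding"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "Status": { "type": "keyword", "normalizer": "my_normalizer" }
    }
  }
}
```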
I have both text and keyword for some fields: text for search and keyword for sorting. But for some reason we have values in those fields that are too long, and I get an exception on entity saving, so I need the ability to truncate those values in order to save them and sort on those fields. ignore_above just removes the values entirely, and I need to save them (even if truncated).
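For reference, ignore_above is a keyword mapping parameter: strings longer than the limit are simply not indexed or stored for that sub-field (they do remain in _source), which matches the drop-rather-than-truncate behaviour described above. A minimal sketch, borrowing the City field from the index definition below:

```json
PUT /my-index
{
  "mappings": {
    "properties": {
      "City": {
        "type": "text",
        "fields": {
          "raw": { "type": "keyword", "ignore_above": 8191 }
        }
      }
    }
  }
}
```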
I had to use the text type for the fields I'm using for sorting, together with a custom analyzer:
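The analyzer definition did not survive here; judging by the analyzer_keyword entry in the index settings quoted further down, it was presumably along these lines:

```json
"analyzer_keyword": {
  "tokenizer": "keyword",
  "filter": ["lowercase", "trim", "unicode_truncate"]
}
```

(Note that since 5.x, sorting on a text field also requires "fielddata": true in the mapping, which may be the error referred to below.)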
which is basically a keyword, right? It worked in Elasticsearch 2.4, i.e. I could sort on fields that use this analyzer, but not anymore. Could it be a bug? Now it says …
Can you elaborate on that? What's the problem with saving large strings in keyword fields (other than it being a bad idea to do this frequently)?
We have data in a database that we need to sync into Elasticsearch, and I cannot influence that data. The bad part is that when you try to save an entity with a long string, it won't be saved, because Lucene throws an exception saying a term can't be longer than 8191. So the whole entity isn't saved because of the long value in one of its fields.
That is news to me; I was only aware of a 32k limit for a single term. Would you mind sharing that exception and how you run into it?
Yeah, it's exactly this 32k limit (8191 if it's a Unicode string).
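(For context: Lucene caps a single indexed term at 32766 bytes, and a UTF-8 character can occupy up to 4 bytes, so the guaranteed-safe character limit is ⌊32766 / 4⌋ = 8191 characters, which is where the 8191 above comes from.)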
@SLavrynenko sorry for the long radio silence. I just summarized parts of our discussion here in #60329 to get input on the options we might consider. Would you mind if we close this issue, since its original topic ("Length filter doesn't work with keyword normalizer") covers only one of several possible solutions?
@cbuescher sure, I'm fine with that.
Great, let's move further discussion to #60329 then.
Elasticsearch version 7.7.0
Steps to reproduce:
When I try to create an index with the following script:
{ "settings": { "index": { "number_of_shards": 3, "number_of_replicas": 0 }, "analysis": { "analyzer": { "text": { "tokenizer": "my_tokenizer", "filter": [ "lowercase", "trim", "snowball", "unicode_truncate" ], "char_filter": [ "html_strip", "quotes" ] }, "custom_analyzer": { "tokenizer": "my_tokenizer", "filter": [ "lowercase", "trim", "unicode_truncate" ], "char_filter": [ "html_strip", "quotes" ] }, "analyzer_keyword": { "tokenizer": "keyword", "filter": [ "lowercase", "trim", "unicode_truncate" ] } }, "normalizer": { "keyword_normalizer": { "type": "custom", "filter": [ "lowercase", "trim", "unicode_truncate" ] } }, "filter": { "unicode_truncate": { "type": "length", "max": 8191 } }, "char_filter": { "quotes": { "mappings": [ "\\u0091=>'", "\\u0092=>'", "\\u2018=>'", "\\u2019=>'", "\\uf0b7=>\\u0020" ], "type": "mapping" } }, "tokenizer": { "my_tokenizer": { "type": "pattern", "pattern": "[.;:\\s]*[ ,!?;\n\t\r]" } } } }, "mappings": { "dynamic": false, "properties": { "Status": { "type": "text", "analyzer": "text", "fields": { "raw": { "type": "text", "analyzer": "analyzer_keyword" } } }, "Ts": { "type": "long" }, "AddedBy": { "type": "integer" }, "AddedOn": { "type": "date", "format": "strict_date_optional_time||epoch_millis" }, "Address1": { "type": "text", "analyzer": "analyzer_keyword" }, "Address2": { "type": "text", "analyzer": "analyzer_keyword" }, "City": { "type": "text", "analyzer": "custom_analyzer", "fields": { "raw": { "type": "text", "analyzer": "analyzer_keyword" } } } } } }
it gives me an error, saying:
{ "error": { "root_cause": [ { "type": "illegal_argument_exception", "reason": "Custom normalizer [keyword_normalizer] may not use filter [unicode_truncate]" } ], "type": "illegal_argument_exception", "reason": "Custom normalizer [keyword_normalizer] may not use filter [unicode_truncate]" }, "status": 400 }
I believe there is no reason why the length filter shouldn't work in a normalizer.
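One workaround that is not discussed in this thread would be to truncate the value before indexing with an ingest pipeline, so that neither the Lucene term limit nor the normalizer restriction is hit. A sketch (the pipeline name is made up, and the field is assumed to be a plain string):

```json
PUT /_ingest/pipeline/truncate-city
{
  "description": "Truncate City to the 8191-character term limit before indexing",
  "processors": [
    {
      "script": {
        "lang": "painless",
        "source": "if (ctx.City != null && ctx.City.length() > 8191) { ctx.City = ctx.City.substring(0, 8191); }"
      }
    }
  ]
}
```

Documents indexed with ?pipeline=truncate-city would then carry at most 8191 characters in City, so the field could be mapped as keyword and sorted on without hitting the term-length exception.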