
Length filter doesn't work with keyword normalizer #57984

Closed
SLavrynenko opened this issue Jun 11, 2020 · 14 comments
Labels
feedback_needed, :Search Relevance/Analysis, Team:Search Relevance

Comments

@SLavrynenko

Elasticsearch version 7.7.0

Steps to reproduce:

When I try to create an index with the following script:

{ "settings": { "index": { "number_of_shards": 3, "number_of_replicas": 0 }, "analysis": { "analyzer": { "text": { "tokenizer": "my_tokenizer", "filter": [ "lowercase", "trim", "snowball", "unicode_truncate" ], "char_filter": [ "html_strip", "quotes" ] }, "custom_analyzer": { "tokenizer": "my_tokenizer", "filter": [ "lowercase", "trim", "unicode_truncate" ], "char_filter": [ "html_strip", "quotes" ] }, "analyzer_keyword": { "tokenizer": "keyword", "filter": [ "lowercase", "trim", "unicode_truncate" ] } }, "normalizer": { "keyword_normalizer": { "type": "custom", "filter": [ "lowercase", "trim", "unicode_truncate" ] } }, "filter": { "unicode_truncate": { "type": "length", "max": 8191 } }, "char_filter": { "quotes": { "mappings": [ "\\u0091=>'", "\\u0092=>'", "\\u2018=>'", "\\u2019=>'", "\\uf0b7=>\\u0020" ], "type": "mapping" } }, "tokenizer": { "my_tokenizer": { "type": "pattern", "pattern": "[.;:\\s]*[ ,!?;\n\t\r]" } } } }, "mappings": { "dynamic": false, "properties": { "Status": { "type": "text", "analyzer": "text", "fields": { "raw": { "type": "text", "analyzer": "analyzer_keyword" } } }, "Ts": { "type": "long" }, "AddedBy": { "type": "integer" }, "AddedOn": { "type": "date", "format": "strict_date_optional_time||epoch_millis" }, "Address1": { "type": "text", "analyzer": "analyzer_keyword" }, "Address2": { "type": "text", "analyzer": "analyzer_keyword" }, "City": { "type": "text", "analyzer": "custom_analyzer", "fields": { "raw": { "type": "text", "analyzer": "analyzer_keyword" } } } } } }

it gives me an error, saying:

{ "error": { "root_cause": [ { "type": "illegal_argument_exception", "reason": "Custom normalizer [keyword_normalizer] may not use filter [unicode_truncate]" } ], "type": "illegal_argument_exception", "reason": "Custom normalizer [keyword_normalizer] may not use filter [unicode_truncate]" }, "status": 400 }

I believe there is no reason why the length filter shouldn't work with a normalizer.
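For what it's worth, the error seems to come from just the normalizer definition; here is a minimal sketch (index name made up) that should trigger the same 400:

PUT test_index
{
  "settings": {
    "analysis": {
      "filter": {
        "unicode_truncate": {
          "type": "length",
          "max": 8191
        }
      },
      "normalizer": {
        "keyword_normalizer": {
          "type": "custom",
          "filter": [ "lowercase", "trim", "unicode_truncate" ]
        }
      }
    }
  }
}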

SLavrynenko added the >bug and needs:triage labels on Jun 11, 2020
@SLavrynenko (Author)

Do we know of any workaround for this, or when it is going to be fixed?

Thanks!

@gaobinlong (Contributor)

@SLavrynenko, setting ignore_above in the field mapping may be a workaround.
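For example, something like this (a sketch using one of the fields from your mapping; values longer than the limit are simply not indexed for that field, but the document itself is still saved):

PUT my_index
{
  "mappings": {
    "properties": {
      "City": {
        "type": "keyword",
        "ignore_above": 8191
      }
    }
  }
}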

cbuescher added the :Search Relevance/Analysis label on Jun 15, 2020
@elasticmachine (Collaborator)

Pinging @elastic/es-search (:Search/Analysis)

elasticmachine added the Team:Search label on Jun 15, 2020
cbuescher removed the >bug, Team:Search, and needs:triage labels on Jun 15, 2020
@cbuescher (Member)

> no reason why the length filter shouldn't work with a normalizer

Normalizers for keyword fields are expected to only modify the input slightly, e.g. lowercasing, character conversions, etc. The length filter instead removes tokens from a stream entirely, which makes sense on a text field where you might not want to index short tokens. I'm not sure that is what you are looking for here, especially with a max: 8191 setting. As @gaobinlong already mentioned, there is ignore_above for this. Let me know if I'm missing something, otherwise I'd like to close this issue.
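For illustration, the same normalizer without the length filter is accepted (a minimal sketch, index name made up):

PUT test_index
{
  "settings": {
    "analysis": {
      "normalizer": {
        "keyword_normalizer": {
          "type": "custom",
          "filter": [ "lowercase", "trim" ]
        }
      }
    }
  }
}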

@SLavrynenko (Author)

I have both text and keyword variants for some fields: text for search and keyword for sorting. But for some reason we have values in those fields that are too long, and I get an exception when saving the entity, so I need a way to truncate those values in order to save them and sort on those fields. ignore_above just removes the values entirely, and I need them saved (even if truncated).

@SLavrynenko (Author)

I had to use the text type for the fields I'm using for sorting, together with a custom analyzer:

        "analyzer_keyword": {
          "tokenizer": "keyword",
          "filter": [
            "lowercase",
            "trim",
            "unicode_truncate"
          ]
        }

which is basically a keyword, right? It worked in Elasticsearch 2.4: I could sort on fields that use this analyzer, but not anymore. Could it be a bug? Now it says:

> Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [...]

@SLavrynenko (Author)

@cbuescher

@cbuescher (Member)

> but for some reason we have values in those fields that are too long, and I get an exception when saving the entity

Can you elaborate on that? What's the problem with saving large strings in keyword fields (other than that doing this frequently is a bad idea)?
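On the sorting error you quoted: since text fields no longer support sorting without fielddata, the usual replacement is a keyword sub-field. A sketch of what the City field from your mapping could look like, assuming the custom_analyzer from your settings is defined on the index (the ignore_above value here is just an example):

PUT my_index
{
  "mappings": {
    "properties": {
      "City": {
        "type": "text",
        "analyzer": "custom_analyzer",
        "fields": {
          "raw": {
            "type": "keyword",
            "ignore_above": 8191
          }
        }
      }
    }
  }
}

Sorting then goes against the sub-field:

GET my_index/_search
{
  "sort": [
    { "City.raw": "asc" }
  ]
}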

@SLavrynenko (Author)

> > but for some reason we have values in those fields that are too long, and I get an exception when saving the entity
>
> Can you elaborate on that? What's the problem with saving large strings in keyword fields (other than that doing this frequently is a bad idea)?

We have data in a database that we need to sync into Elasticsearch. I cannot influence that, and the bad part is that when you try to save an entity with a long string, it won't be saved because of an exception saying a term can't be longer than 8191 in Lucene. So the whole entity isn't saved because of one long value in one of its fields.

@cbuescher (Member)

> when you try to save an entity with a long string, it won't be saved because of an exception saying a term can't be longer than 8191 in Lucene

That is news to me; I was only aware of a 32k limit for a single term. Would you mind sharing that exception and how you ran into it?
The other suggestion I have is to try an ingest processor (e.g. a script processor) that creates a truncated field depending on the input length. This is a quick sketch that certainly needs extension (edge cases, null handling, etc.), but it could give you an alternative:

PUT _ingest/pipeline/truncate
{
  "description": "truncate first 128 characters",
  "processors": [
    {
      "script": {
        "lang": "painless",
        "source": "ctx.truncated = ctx.status.substring(0, (int) Math.min(128,ctx.status.length()));"
      }
    }
  ]
}

POST /test/_doc/1?pipeline=truncate
{
  "status" : <...your long input here...>
}
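A slightly more defensive variant of the same script, with the null and length checks mentioned above (the 128-character cutoff is arbitrary):

PUT _ingest/pipeline/truncate
{
  "description": "truncate status to at most 128 characters",
  "processors": [
    {
      "script": {
        "lang": "painless",
        "source": "if (ctx.status != null && ctx.status.length() > 128) { ctx.truncated = ctx.status.substring(0, 128); } else { ctx.truncated = ctx.status; }"
      }
    }
  ]
}

The pipeline could also be attached to the index via the index.default_pipeline setting, so it runs on every document without needing the ?pipeline query parameter.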

@SLavrynenko (Author)

Yeah, it's exactly this 32k limit (8191 if it's a Unicode string: a single character can take up to 4 bytes in UTF-8, and 32766 / 4 ≈ 8191).

@cbuescher (Member)

@SLavrynenko sorry for the long radio silence. I just summarized parts of our discussion in #60329 to get input on the options we might consider here. Would you mind if we close this issue, since its original topic ("Length filter doesn't work with keyword normalizer") is only one of several possible solutions?

@SLavrynenko (Author)

@cbuescher sure, I'm fine with that.

@cbuescher (Member)

Great, let's move further discussion to #60329 then.

javanna added the Team:Search Relevance label on Jul 16, 2024