Fields not appearing as searchable, aggregatable in 5.3.1 on upgrade from 2.3.3 and 5.2.2 #11377

Closed
vineet01 opened this issue Apr 22, 2017 · 5 comments

vineet01 commented Apr 22, 2017

Example mapping in ES 2.3.3 for a field:

{
  "type": "string",
  "fields": {
    "raw": {
      "type": "string",
      "index": "not_analyzed"
    }
  }
}

On upgrade to 5.2.2/5.3.1, the mapping becomes:

{
  "type": "string",
  "fields": {
    "raw": {
      "type": "string",
      "index": "not_analyzed",
      "fielddata": false
    }
  }
}

With this mapping, the field is still aggregatable in 5.2.2, but in 5.3.1 it is not. The same is true for fields of type "long".

I am also not sure about the reason for the above mapping change and the addition of fielddata: false, or whether that might be what is causing the trouble (https://discuss.elastic.co/t/mapping-change-on-upgrade-to-5-2-2/82622).
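
For reference, I am reading these mappings with the standard get mapping API; test-index below is just a placeholder for my actual index name:

GET test-index/_mapping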

In field_stats I can see that the field is marked as aggregatable:

"messagetype": {
  "type": "string",
  "max_doc": 71657,
  "doc_count": 3490,
  "density": 4,
  "sum_doc_freq": 3490,
  "sum_total_term_freq": 3490,
  "searchable": true,
  "aggregatable": true,
  "min_value": "received",
  "max_value": "sent"
}

But Kibana does not show it as aggregatable in the index pattern field listing. (I tried refreshing the fields as well, and upgrading both from 2.3.3 -> 5.3.1 and from 5.2.2 -> 5.3.1.)

jmeekr commented Apr 22, 2017

Setting fielddata to true will make the fields show up in the Kibana dropdowns. It seems this change was made to prevent heavy system load. If you use Console, I believe it gives you a warning about this.
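
On a native 5.x text field, the change I mean would look roughly like this (the index, type, and field names are only placeholders, and an upgraded 2.x string field may behave differently):

PUT test-index/_mapping/logs
{
  "properties": {
    "messagetype": {
      "type": "text",
      "fielddata": true
    }
  }
}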

vineet01 (Author) commented:

The fielddata: false is set automatically by Elasticsearch on upgrading from 2.3.3. I believe that isn't even the right syntax to disable fielddata; it would be fielddata: {format: disabled}.
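
For comparison, the 2.x-style fielddata disable syntax I mean would look roughly like this on the raw sub-field from my example above:

{
  "type": "string",
  "index": "not_analyzed",
  "fielddata": {
    "format": "disabled"
  }
}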

Bargs commented Apr 24, 2017

@vineet01 I wonder if you're running into #11379?

If you manually run the following request, what does the response look like?

GET <index-pattern>/_field_stats?fields=*&level=cluster&allow_no_indices=false

vineet01 (Author) commented:

For fields=* it gives an exception:

{
  "_shards": {
    "total": 9,
    "successful": 0,
    "failed": 9,
    "failures": [
      {
        "shard": 0,
        "index": "test-index",
        "status": "INTERNAL_SERVER_ERROR",
        "reason": {
          "type": "exception",
          "reason": "java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException: 5",
          "caused_by": {
            "type": "execution_exception",
            "reason": "java.lang.ArrayIndexOutOfBoundsException: 5",
            "caused_by": {
              "type": "array_index_out_of_bounds_exception",
              "reason": "5"
            }
          }
        }
      }
    ]
  },
  "indices": { }
}

For a single field, i.e. messagetype:

{
  "_shards": {
    "total": 9,
    "successful": 9,
    "failed": 0
  },
  "indices": {
    "_all": {
      "fields": {
        "messagetype": {
          "type": "string",
          "max_doc": 71657,
          "doc_count": 3490,
          "density": 4,
          "sum_doc_freq": 3490,
          "sum_total_term_freq": 3490,
          "searchable": true,
          "aggregatable": true,
          "min_value": "received",
          "max_value": "sent"
        }
      }
    }
  }
}
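
For reference, that single-field response came from a request along these lines (test-index stands in for my actual index pattern):

GET test-index/_field_stats?fields=messagetype&level=cluster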

The field with geo_point type is the only one that appears as aggregatable and searchable; querying field_stats for that field also returns an exception:

{
  "_shards": {
    "total": 9,
    "successful": 0,
    "failed": 9,
    "failures": [
      {
        "shard": 0,
        "index": "test-index",
        "status": "INTERNAL_SERVER_ERROR",
        "reason": {
          "type": "exception",
          "reason": "java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException: 5",
          "caused_by": {
            "type": "execution_exception",
            "reason": "java.lang.ArrayIndexOutOfBoundsException: 5",
            "caused_by": {
              "type": "array_index_out_of_bounds_exception",
              "reason": "5"
            }
          }
        }
      }
    ]
  },
  "indices": { }
}

Stack trace from Elasticsearch for the same request:

[2017-04-25T12:16:03,381][DEBUG][o.e.a.f.TransportFieldStatsAction] [-uemg33] [test-index][3], node[-uemg33tR1qBqXwtPbruEA], [P], s[STARTED], a[id=5GymDE4BRAOnO9dh7DieHg]: failed to execute [org.elasticsearch.action.fieldstats.FieldStatsRequest@497d4c2b]
org.elasticsearch.transport.RemoteTransportException: [-uemg33][127.0.0.1:9300][indices:data/read/field_stats[s]]
Caused by: org.elasticsearch.ElasticsearchException: java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException: 5
        at org.elasticsearch.ExceptionsHelper.convertToElastic(ExceptionsHelper.java:55) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.action.fieldstats.TransportFieldStatsAction.shardOperation(TransportFieldStatsAction.java:202) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.action.fieldstats.TransportFieldStatsAction.shardOperation(TransportFieldStatsAction.java:55) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:300) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:296) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:618) [elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.3.1.jar:5.3.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_72]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_72]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_72]
Caused by: java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException: 5
        at org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:401) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.indices.IndicesRequestCache.getOrCompute(IndicesRequestCache.java:116) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.indices.IndicesService.cacheShardLevelResult(IndicesService.java:1195) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.indices.IndicesService.getFieldStats(IndicesService.java:1152) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.action.fieldstats.TransportFieldStatsAction.shardOperation(TransportFieldStatsAction.java:196) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.action.fieldstats.TransportFieldStatsAction.shardOperation(TransportFieldStatsAction.java:55) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:300) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:296) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:618) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.3.1.jar:5.3.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_72]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_72]
        at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_72]
Caused by: java.lang.ArrayIndexOutOfBoundsException: 5
        at org.apache.lucene.util.NumericUtils.sortableBytesToLong(NumericUtils.java:183) ~[lucene-core-6.4.2.jar:6.4.2 34a975ca3d4bd7fa121340e5bcbf165929e0542f - ishan - 2017-03-01 23:23:13]
        at org.elasticsearch.index.mapper.BaseGeoPointFieldMapper$LegacyGeoPointFieldType.stats(BaseGeoPointFieldMapper.java:440) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.index.mapper.BaseGeoPointFieldMapper$LegacyGeoPointFieldType.stats(BaseGeoPointFieldMapper.java:302) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.indices.IndicesService.lambda$getFieldStats$17(IndicesService.java:1154) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.indices.IndicesService.lambda$cacheShardLevelResult$18(IndicesService.java:1189) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.indices.IndicesRequestCache$Loader.load(IndicesRequestCache.java:160) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.indices.IndicesRequestCache$Loader.load(IndicesRequestCache.java:143) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:398) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.indices.IndicesRequestCache.getOrCompute(IndicesRequestCache.java:116) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.indices.IndicesService.cacheShardLevelResult(IndicesService.java:1195) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.indices.IndicesService.getFieldStats(IndicesService.java:1152) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.action.fieldstats.TransportFieldStatsAction.shardOperation(TransportFieldStatsAction.java:196) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.action.fieldstats.TransportFieldStatsAction.shardOperation(TransportFieldStatsAction.java:55) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:300) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:296) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:618) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-5.3.1.jar:5.3.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.3.1.jar:5.3.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_72]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_72]
        at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_72]

Bargs commented Apr 25, 2017

Thanks for the extra info @vineet01. This is definitely the same bug as #11379, which is ultimately caused by elastic/elasticsearch#24275. I don't have a workaround yet; I'll try to come up with something and post it to #11379. Going to close this ticket as a dupe.

Bargs closed this as completed Apr 25, 2017