saving configuration causes Facet: [0]: (key) field [@timestamp] not found in histogram panel #834

Closed
lokivog opened this issue Jan 15, 2014 · 11 comments

Comments

@lokivog

lokivog commented Jan 15, 2014

I am seeing an error with v3.0.0milestone4 using logstash that I can reproduce every time, and it's driving me crazy. Could someone please let me know what I'm doing wrong?

To reproduce:

  1. I extracted v3.0.0milestone4
  2. Start elasticsearch-0.90.9 for the first time (no previous data)
  3. Use logstash to add a few entries into elasticsearch
  4. Navigate to kibana and select sample dashboard
  5. Add a histogram panel
  6. Save configuration
  7. Error continues to be thrown each time a histogram query is made

The error is only thrown after you save your current configuration.

Here is a screencast video showing the steps and error.
http://screencast.com/t/HevrrG5BTwg

Based on the error, I have confirmed that every entry in elasticsearch does have a @timestamp property and value, so I don't know why it's throwing this.
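
One way to double-check whether a given index actually has a mapping for @timestamp (just a sketch; the host and the index name logstash-2014.01.15 are only examples, adjust them to your setup):

curl 'localhost:9200/logstash-2014.01.15/_mapping?pretty'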

Here is the error.

org.elasticsearch.search.SearchParseException: [kibana-int][3]: from[-1],size[-1]: Parse Failure [Failed to parse source [{"facets":{"0":{"date_histogram":{"field":"@timestamp","interval":"1s"},"facet_filter":{"fquery":{"query":{"filtered":{"query":{"query_string":{"query":"level:"ERROR""}},"filter":{"bool":{"must":[{"match_all":{}},{"bool":{"must":[{"match_all":{}}]}}]}}}}}}},"1":{"date_histogram":{"field":"@timestamp","interval":"1s"},"facet_filter":{"fquery":{"query":{"filtered":{"query":{"query_string":{"query":"level:"INFO""}},"filter":{"bool":{"must":[{"match_all":{}},{"bool":{"must":[{"match_all":{}}]}}]}}}}}}}},"size":0}]]
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:571)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:474)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:459)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:452)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:224)
at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:202)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.sendExecuteFirstPhase(TransportSearchQueryThenFetchAction.java:80)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:216)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:203)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$2.run(TransportSearchTypeAction.java:186)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet [0]: (key) field [@timestamp] not found
at org.elasticsearch.search.facet.datehistogram.DateHistogramFacetParser.parse(DateHistogramFacetParser.java:160)
at org.elasticsearch.search.facet.FacetParseElement.parse(FacetParseElement.java:94)
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:559)
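
Note that the shard named in the trace is [kibana-int][3], i.e. the kibana-int index. For comparison, the facet alone can be sent to a single index with a plain search request (a minimal sketch; localhost:9200 is an assumption), and running it against an index that has no @timestamp mapping reproduces the same "(key) field [@timestamp] not found" error:

curl 'localhost:9200/kibana-int/_search?pretty' -d '{"facets":{"0":{"date_histogram":{"field":"@timestamp","interval":"1s"}}},"size":0}'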

@lokivog

lokivog commented Jan 15, 2014

Selecting the logstash default configuration from the home page fixes the issue. Not sure why this happens since it is the same data, but I'll close the issue.

lokivog closed this as completed on Jan 15, 2014
@jyomaj

jyomaj commented Mar 27, 2014

Can this be re-opened, please? I am experiencing a similar problem with kibana-3.0.0milestone4 and elasticsearch 1.0.1. It happens when there is no data for part of the date range. If you zoom in, the red message "Oops! FacetPhaseExecutionException[Facet [0]: (value) field [xxx] not found]" goes away. However, this makes it somewhat unusable if you just want to draw a graph of min/max/total/mean values over a week or a month without having to specify exact ranges.

To reproduce:

  1. Delete all indices from elasticsearch (development only)
  2. Use logstash to add a few entries into elasticsearch
  3. Navigate to kibana and click 'Logstash Dashboard'
  4. Change the range to 'Last 7d'
  5. Change the histogram interval to '1d'
  6. Configure the histogram panel with Mode: mean (or min//max/total), Time Field: @timestamp, Value Field: xxx
  7. Click Close
  8. Error in red "Oops! FacetPhaseExecutionException[Facet [0]: (value) field [xxx] not found]" is displayed above the histogram
  9. Zoom in to select a subset of the data and the error goes away
  10. Zoom out and the error comes back

Thanks in advance for any replies.
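
For reference, the underlying elasticsearch behaviour can be reproduced with a plain facet request (a minimal sketch; localhost:9200, the index name and the field name xxx are only examples, and dynamic mapping is assumed):

curl -XPUT 'localhost:9200/logstash-2014.03.26/logs/1' -d '{"@timestamp":"2014-03-26T12:00:00Z"}'
curl 'localhost:9200/logstash-2014.03.26/_search?pretty' -d '{"facets":{"0":{"date_histogram":{"key_field":"@timestamp","value_field":"xxx","interval":"1d"}}},"size":0}'

Since no document in that index has an xxx field, there is no mapping for it and the facet reports "(value) field [xxx] not found".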

@pythianali

I am also experiencing this issue with:

Logstash 1.4.1
Elasticsearch 1.2.0
Kibana 3.1

Any time a histogram is used, there is excessive logging of messages like this:

[2014-05-29 15:17:33,172][DEBUG][action.search.type ] [elasticsearch2] [kibana-int][2], node[XOEVBI5vRAS6nZFsxPFp7Q], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@3ef1b5a3]
org.elasticsearch.search.SearchParseException: [kibana-int][2]: from[-1],size[-1]: Parse Failure [Failed to parse source [{"facets":{"0":{"date_histogram":{"field":"@timestamp","interval":"1s"},"global":true,"facet_filter":{"fquery":{"query":{"filtered":{"query":{"query_string":{"query":"*"}},"filter":{"bool":{"must":[{"range":{"@timestamp":{"from":1401390753086,"to":1401391053086}}}]}}}}}}}},"size":0}]]

With no histogram enabled there is no issue. Because we ingest the elasticsearch logs with logstash, the elasticsearch indexes rapidly fill up with these messages. As a temporary workaround I am considering suppressing this particular log so it doesn't get ingested by logstash.

@pythianali

Forgot to mention that the issue occurs regardless of whether the kibana settings have been saved or not. Only the histogram needs to be activated.

@pythianjoseph

Looks like this happens because kibana searches all the indices by default when using panel objects.

curl '10.177.128.74:9200/_all/_search?pretty' -d '{"facets":{"0":{"date_histogram":{"field":"@timestamp","interval":"1s"},"global":true,"facet_filter":{"fquery":{"query":{"filtered":{"query":{"query_string":{"query":"*"}},"filter":{"bool":{"must":[{"range":{"@timestamp":{"from":1401339163286,"to":1401339463286}}}]}}}}}}}},"size":0}'

A fix could be to exclude kibana-int when searching from the histogram, as in the sketch below.
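
For example, the same request with kibana-int excluded via the multi-index exclusion syntax (a sketch only; on a cluster whose other indices are all logstash indices with @timestamp mapped, this no longer triggers the facet error):

curl '10.177.128.74:9200/*,-kibana-int/_search?pretty' -d '{"facets":{"0":{"date_histogram":{"field":"@timestamp","interval":"1s"},"global":true,"facet_filter":{"fquery":{"query":{"filtered":{"query":{"query_string":{"query":"*"}},"filter":{"bool":{"must":[{"range":{"@timestamp":{"from":1401339163286,"to":1401339463286}}}]}}}}}}}},"size":0}'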

@jasongdove

The default logstash dashboard has the correct index settings, while the blank dashboard defaults to _all. If you go into your dashboard index settings and change the Timestamping to "day", the pattern defaults to one that only includes logstash indexes: [logstash-]YYYY.MM.DD.

This seems to have stopped the errors for me (I started with a blank dashboard).
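
For reference, in the saved dashboard JSON this corresponds to the index section, roughly like this (field names recalled from a Kibana 3 dashboard export, so treat it as a sketch and compare with your own dashboard):

"index": {
  "interval": "day",
  "pattern": "[logstash-]YYYY.MM.DD"
}

whereas a blank dashboard ends up querying _all, which is how kibana-int gets included.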

@pythianali

Yes, I see this now: if you start with logstash.json as your base dashboard and build from that, there is no issue. Thanks.

@jayhendren

I have run into this issue as well with kibana 3.1.1 against elasticsearch 1.4.2, but using the default logstash dashboard in kibana. (For some reason, I only have this problem if I shut down an elasticsearch cluster containing logstash data and then restart it. I cannot recreate the problem by starting elasticsearch, then starting logstash, then populating elasticsearch with log data via logstash, then running queries with kibana. I am not sure if this is a separate issue or the same one as discussed in this thread.) I have confirmed that my configuration is set to use the default logstash indexing as described by @jasongdove.

As mentioned above, the problem arises when attempting to draw a histogram against a date range for which there is at least one date that does not correspond to a logstash index in elasticsearch (i.e., there are no data/logs for that date). The workaround, then, is to run queries against date ranges for which indices/data definitely exist when using the histogram widget.

I propose to re-open this issue, though I wonder if this is an elasticsearch issue rather than strictly a kibana issue. Let me know if logs or other information would help clarify the issue.

@lorenooliveira

I'm apparently having the same problem here. My setup is:

ElasticSearch 1.4.0
logstash 1.4.2
kibana 3.1.2

The only difference is that I'm using a custom field instead of @timestamp. Even using kibana's preconfigured logstash dashboard, I'm having problems. The problem is reproducible every time I open the default logstash dashboard, click Configure on the histogram widget, select mean as the chart value, and type the name of my custom numerical field in the Value Field box.

Is there any workaround other than using kibana's preconfigured logstash dashboard?

@ruudgrosmann

I think this issue occurs whenever elasticsearch searches at least one index with 0 documents in it.
Kibana can work around that 'feature' by not including those indexes in the query; it has to be more careful when building the elasticsearch query.
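
That is easy to check directly (a minimal sketch; localhost:9200 and the index name are only examples):

curl -XPUT 'localhost:9200/empty-index-test'
curl 'localhost:9200/empty-index-test/_search?pretty' -d '{"facets":{"0":{"date_histogram":{"field":"@timestamp","interval":"1d"}}},"size":0}'

The empty index has no mapping for @timestamp, so the facet fails on its shards with FacetPhaseExecutionException: (key) field [@timestamp] not found, which is exactly what surfaces in Kibana.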

@keving12

I'm experiencing the same issue. I have the following versions:
Elasticsearch 1.2.2
Kibana 3.1.0
Logstash 1.4.2

I get the same issue after doing the following:

  • Stopping logstash and Elasticsearch
  • Removing the nodes directory and restarting Elasticsearch so as to clear the data
  • Selecting the logstash.json dashboard

After inspecting the element, the curl command behind the histogram appears to query the index for the current day and the previous day, but I only have data for the current day. I am able to reproduce the error by copying the same curl command into a terminal and executing it against my elasticsearch instance; when I remove the index that I know has no data from the curl command and execute it again, there are no exceptions.

It seems to me Kibana needs to better handle the scenario of querying multiple indices where some may have no data.
