Create built in alert type for index threshold #53041
Comments
Pinging @elastic/kibana-alerting-services (Team:Alerting Services)
I'm looking into replacing the watcher APIs, specifically whether there are already some other APIs we can use rather than copying the watcher ones. The watcher server APIs we use are:
For the second, it seems like copying the watcher ones, maybe trimming them down if they have more functionality than we need, will be the cleanest way. Here's an example of the data plugin endpoint though, for fields:
$ curl -v -k https://elastic:changeme@localhost:5601/api/index_patterns/_fields_for_wildcard?pattern=.kibana* | json
{
"fields": [
{
"name": "@timestamp",
"type": "date",
"esTypes": [
"date"
],
"searchable": true,
"aggregatable": true,
"readFromDocValues": true
},
... code is here: https://github.com/elastic/kibana/blob/master/src/plugins/data/server/index_patterns/routes.ts
I was curious what the existing watcher APIs we use do, so here's some results:
$ curl -v -k https://elastic:changeme@localhost:5601/api/watcher/indices \
-H "kbn-xsrf: foo" \
-d '{"pattern": "*"}' | json
{
"indices": [
".kibana"
".kibana_1",
...
]
}
$ curl -v -k https://elastic:changeme@localhost:5601/api/watcher/fields \
-H "content-type: application/json" -H "kbn-xsrf: foo" \
-d '{"indexes": [".kibana"]}' | json
{
"fields": [
{
"name": "action.actionTypeId",
"type": "keyword",
"normalizedType": "keyword",
"aggregatable": true,
"searchable": true
},
...
]
}
I wasn't sure what the inputs for the …
It looks like the data plugin can probably provide two pieces of what we need:
Here's the plugin start interface (kibana/src/plugins/data/public/types.ts, lines 46–56 at 1df0190):
There is a query service as well, but it looks like it perhaps only deals with saved searches, and I don't think we want to make a customer create a saved search just to use it in the alert.

Assuming we can use the data plugin to replace the watcher API usage for indices/fields, it's still going to be limited to what the user has created in terms of Kibana index patterns. Currently the watcher APIs return all the indices available. So it's more limiting than the watcher APIs, but it should also be familiar to existing Kibana users, who are likely already using Kibana index patterns. Seems like they're hard to avoid :-)

If all that's right, then it's a matter of replacing the calls to the watcher API in the UI plugin with calls to the data plugin instead, when getting lists of indices and fields. Guessing we'll want a new HTTP endpoint to run the query though. We'll need the query in the alert type, for it to make the ES calls, and we can then just expose an endpoint to the particular bit that makes the ES call, so the UI can run it to get data to display in the visualization.
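For illustration, here's a minimal sketch of what the UI-side replacement could look like, fetching fields via the `_fields_for_wildcard` endpoint shown earlier in this thread through Kibana's core `http` service. The `getIndexFields` helper and the response typing are assumptions for the sketch, not existing code:

```ts
import { HttpSetup } from 'src/core/public';

// Shape mirrors the _fields_for_wildcard output shown earlier in this issue.
interface FieldDescriptor {
  name: string;
  type: string;
  searchable: boolean;
  aggregatable: boolean;
}

// Sketch only: fetch field names for an index pattern so the alert UI can
// populate its field selector without calling the watcher endpoints.
export async function getIndexFields(
  http: HttpSetup,
  pattern: string
): Promise<FieldDescriptor[]> {
  const response: any = await http.get('/api/index_patterns/_fields_for_wildcard', {
    query: { pattern },
  });
  return response.fields as FieldDescriptor[];
}
```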
Looked a bit deeper, it looks like …
Re-thinking this. I think I'd like to have the chart data be sourced from the alert-type itself, rather than getting the chart data from an independent browser-based query. If for some reason we change the query in the alert-type, we'd have to make a corresponding change to the browser-based one, and ... we'll forget, for sure :-) I'm going to see if this can be designed so the two queries - what the alert-type runs during its interval executions, and what is run to generate chart data - share as much of the same query structure as possible.
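A rough sketch of the kind of sharing described above, where both the alert executor and the chart-data endpoint call the same query function with the same params object. All names here are hypothetical and do not reflect the shape that actually shipped:

```ts
// Hypothetical shared params; the real index threshold alertType defines its
// own schema. This just illustrates one query definition serving both callers.
interface TimeSeriesQueryParams {
  index: string;
  timeField: string;
  aggType: 'count' | 'avg' | 'min' | 'max' | 'sum';
  aggField?: string;
  window: string;      // e.g. '5m'
  interval?: string;   // bucket size when generating chart data
  dateStart?: string;
  dateEnd?: string;
}

interface TimeSeriesResult {
  groups: Array<{ group: string; metrics: Array<[string, number]> }>;
}

// Single query implementation used by both callers.
async function timeSeriesQuery(params: TimeSeriesQueryParams): Promise<TimeSeriesResult> {
  // ... build and run the ES date_histogram / metric aggregation here
  return { groups: [] };
}

// Alert executor: latest window only, compared against the threshold.
async function executor(params: TimeSeriesQueryParams, threshold: number) {
  const result = await timeSeriesQuery({ ...params, interval: undefined });
  // ... schedule actions if the metric crosses the threshold
}

// Chart-data handler: many buckets over a date range, returned to the UI.
async function chartDataHandler(params: TimeSeriesQueryParams) {
  return await timeSeriesQuery(params);
}
```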
This would be awesome and easier for the developer!
Well I'm the developer, and I do always like making things easier for myself. I think you're suggesting that we try to set a good example for other alert-type implementors, since they will presumably have the same issue. I've been wondering if we can build this into the alerting framework itself - we can start by allowing an alert-type to provide an additional function to generate visualization data, and then have a new HTTP endpoint in alerting to request that data, which would call that function. What the inputs and outputs of that HTTP endpoint are - that's where it gets tricky, as they're likely to be very alert-type-specific.
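Sketching what such a framework-level hook might look like. This is purely illustrative; `getVisualizationData`, the registration shape, and the endpoint path are assumptions, not part of the actual alerting framework:

```ts
import { schema } from '@kbn/config-schema';

// Hypothetical extension of an alert type definition: alongside the executor,
// the implementor supplies a function that returns data suitable for charting,
// given alert-type-specific params.
interface AlertTypeWithViz<Params, VizData> {
  id: string;
  name: string;
  executor(options: { params: Params }): Promise<void>;
  getVisualizationData?(params: Params): Promise<VizData>;
}

// Hypothetical generic endpoint in the alerting plugin that dispatches to
// whichever alert type was asked for; the framework treats the inputs and
// outputs as opaque, since they are alert-type-specific.
function registerVizRoute(router: any, registry: Map<string, AlertTypeWithViz<any, any>>) {
  router.post(
    {
      path: '/api/alerting/alert_type/{id}/_visualization_data',
      validate: { params: schema.object({ id: schema.string() }), body: schema.any() },
    },
    async (context: any, req: any, res: any) => {
      const alertType = registry.get(req.params.id);
      if (!alertType?.getVisualizationData) {
        return res.notFound();
      }
      return res.ok({ body: await alertType.getVisualizationData(req.body) });
    }
  );
}
```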
I like that too.
Ya, unless for some reason it becomes clear that it doesn't make sense, I think having the same chart from the create flow available in the details view should be easy and useful. I suspect we may want to show some chart data AFTER the date of the triggered event, so a customer can see if the event got worse, or resolved.
With the built-in index threshold alertType PR about to be merged, I'm going to start in on the following:
Expectation is that this should get us to the point where you can create/edit index threshold alerts, and won't need a gold+ license (watcher APIs don't work in basic).
Adds the first built-in alertType for Kibana alerting, an index threshold alert, and associated HTTP endpoint to generate preview data for it. addresses the server-side requirements for issue #53041
…stic#57030) Adds the first built-in alertType for Kibana alerting, an index threshold alert, and associated HTTP endpoint to generate preview data for it. addresses the server-side requirements for issue elastic#53041
…ew API (elastic#59385) Changes the alerting UI to use the new time series query HTTP endpoint provided by the builtin index threshold alertType; previously it used a watcher HTTP endpoint. This is part of the ongoing index threshold work tracked in elastic#53041
…elastic#59475) Prior to this PR, the alerting UI used two HTTP endpoints provided by the Kibana watcher plugin, to list index and field names. There are now two HTTP endpoints in the alerting_builtins plugin which will be used instead. The code for the new endpoints was largely copied from the existing watcher endpoints, and the HTTP request/response bodies kept pretty much the same. resolves elastic#53041
…#59475) (#59713) Prior to this PR, the alerting UI used two HTTP endpoints provided by the Kibana watcher plugin, to list index and field names. There are now two HTTP endpoints in the alerting_builtins plugin which will be used instead. The code for the new endpoints was largely copied from the existing watcher endpoints, and the HTTP request/response bodies kept pretty much the same. resolves #53041
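For a sense of how the UI might call the replacement endpoints described in the PRs above, here's a hedged sketch. The route paths below are assumptions based on the alerting_builtins plugin name (they are not confirmed by this thread); the request/response shapes mirror the watcher /indices and /fields examples shown earlier in this issue:

```ts
import { HttpSetup } from 'src/core/public';

// Illustrative only: paths are assumed, bodies mirror the watcher examples above.
export async function getMatchingIndices(http: HttpSetup, pattern: string) {
  const res: any = await http.post('/api/alerting_builtins/index_threshold/_indices', {
    body: JSON.stringify({ pattern }),
  });
  return res.indices as string[];
}

export async function getFields(http: HttpSetup, indexes: string[]) {
  const res: any = await http.post('/api/alerting_builtins/index_threshold/_fields', {
    body: JSON.stringify({ indexes }),
  });
  return res.fields as Array<{
    name: string;
    type: string;
    aggregatable: boolean;
    searchable: boolean;
  }>;
}
```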
TODOs from initial PR #48959.